January 14, 2024

Discord have changed the way bots work quite a few times. Recently, though, they built a system that lets you create and register “slash commands” — commands that you can type into the Discord chat and which do things, like /hello — and which are powered by “webhooks”. That is: when someone uses your command, it sends an HTTP request to a URL of your choice, and your URL then responds, and that process powers what your users see in Discord. Importantly, this means that operating a Discord bot does not require a long-running server process. You don’t need to host it somewhere where you worry about the bot process crashing, how you’re going to recover from that, all that stuff. No daemon required. In fact, you can make a complete Discord bot in one single PHP file. You don’t even need any PHP libraries. One file, which you upload to your completely-standard shared hosting webspace, the same way you might upload any other simple PHP thing. Here’s some notes on how I did that.

The Discord documentation is pretty annoying and difficult to follow; all the stuff you need is in there, somewhere, but it’s often hard to find where, and there’s very little that explains why a thing is the way it is. It’s tough to grasp the “Zen” of how Discord wants you to work with their stuff. But in essence, you’ll need to create a Discord app: follow their instructions to do that. Then, we’ll write our small PHP file, and upload it; finally, fill in the URL of that PHP file as the “Interactions Endpoint URL” in your newly-created app’s general information page in the Discord developer admin. You can then add the bot to a server by visiting the URL from the “URL generator” for your app in the Discord dev admin.

The PHP file will get sent blocks of JSON, which describe what a user is doing — a command they’ve typed, parameters to that command, or whatever — and respond with something which is shown to the user — the text of a message which is your bot’s reply, or a command to alter the text of a previous message, or add a clickable button to it, and the like. I won’t go into detail about all the things you can do here (if that would be interesting, let me know and maybe I’ll write a followup or two), but the basic structure of your bot needs to be that it authenticates the incoming request from Discord, it interprets that request, and it responds to that request.

Authentication first. When you create your app, you get a client_public_key value, a big long string of hex digits that will look like c78c32c3c7871369fa67 or whatever. Your PHP file needs to know this value somehow. (How you do that is up to you; think of this like a MySQL username and password, and handle this the same way you do those.) Then, every request that comes in will have two important HTTP headers: X-Signature-ED25519 and X-Signature-Timestamp. You use a combination of these (which provide a signature for the incoming request) and your public key to check whether the request is legitimate. There are PHP libraries to do this, but fortunately we don’t need them; PHP has the relevant signature verification stuff built in, these days. So, to read the content of the incoming post and verify the signature on it:

/* read the incoming request data */
$postData = file_get_contents('php://input');
/* get the signature and timestamp header values */
$signature = isset($_SERVER['HTTP_X_SIGNATURE_ED25519']) ? 
    $_SERVER['HTTP_X_SIGNATURE_ED25519'] : "";
$timestamp = isset($_SERVER['HTTP_X_SIGNATURE_TIMESTAMP']) ? 
    $_SERVER['HTTP_X_SIGNATURE_TIMESTAMP'] : "";
/* check the signature */
$sigok = sodium_crypto_sign_verify_detached(
    hex2bin($signature), 
    $timestamp . $postData,
    hex2bin($client_public_key));
/* If signature is not OK, reject the request */
if (!$sigok) {
    http_response_code(401);
    die();
}

We need to correctly reject invalidly signed requests, because Discord will check that we do — they will occasionally send test requests with bad signatures to confirm that you’re doing the check. (They do this when you first add the URL to the Discord admin screens; if it won’t let you save the settings, then it’s because Discord thinks your URL returned the wrong thing. This is annoying, because you have no idea why Discord didn’t like it; best bet is to add lots of error_log() logging of inputs and outputs to your PHP file and inspect the results carefully.)
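
In practice that logging can be as blunt as a few error_log() calls; here is a minimal sketch (where the output ends up depends on your hosting’s PHP error_log setting), and you would log whatever JSON you are about to echo back in the same way, just before sending it:

/* log the raw incoming request and the signature headers */
error_log("discord request: " . $postData);
error_log("signature: " . $signature . ", timestamp: " . $timestamp);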

Next, interpret the incoming request and do things with it. The only thing we have to respond to here is a ping message; Discord will send them as part of their irregular testing, and expects to get back a correctly-formatted pong message.

$data = json_decode($postData);
if ($data->type == 1) { // this is a ping message
    echo json_encode(array('type' => 1)); // response: pong
    die();
}

The magic numbers there (1 for a ping, 1 for a pong) are both defined in the Discord docs (incoming values being the “Interaction Type” field and outgoing values the “Interaction Callback Type”).

After that, the world’s approximately your oyster. You check the incoming type field for the type of incoming thing this is — a slash command, a button click in a message, whatever — and respond appropriately. This is all stuff for future posts if there’s interest, but the docs (in particular the “receiving and responding” and “message components” sections) have all the detail. For your bot to provide a slash command, you have to register it first, which is a faff; I wrote a little Python script to do that. You only have to do it once. The script looks approximately like this; you’ll need your APP_ID and your BOT_TOKEN from the Discord dashboard.

import requests, json
MY_COMMAND = {
    "name": 'doit',
    "description": 'Do the thing',
    "type": 1
}
discord_endpoint = \
    f"https://discord.com/api/v10/applications/{APP_ID}/commands"
requests.request("PUT", discord_endpoint, 
    json=[MY_COMMAND], headers={
        "Authorization": f"Bot {BOT_TOKEN}",
        "User-Agent": 'mybotname (myboturl, 1.0.0)',
})

Once you’ve done that, you can use /doit in a channel with your bot in, and your PHP bot URL will receive the incoming request for you to process.
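
To give a flavour of that processing, here is a minimal sketch of handling the /doit command in the same PHP file. The magic numbers come from the same “Interaction Type” and “Interaction Callback Type” tables mentioned earlier (2 for an incoming application command, 4 for “respond with a message”), but double-check them against the current docs, and the reply text is of course just an example:

/* handle a slash command (interaction type 2) */
if ($data->type == 2) {
    $commandName = $data->data->name;
    if ($commandName == "doit") {
        header('Content-Type: application/json');
        /* callback type 4: reply with a message that is shown in the channel */
        echo json_encode(array(
            'type' => 4,
            'data' => array('content' => 'I did the thing!')
        ));
        die();
    }
}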

on January 14, 2024 09:57 PM

January 12, 2024

We’ve all heard about cloud-native applications in recent years, but what about cloud-native infrastructure? Is there any reason why the infrastructure couldn’t be cloud-native, too? Or maybe it’s already cloud-native, but you’ve never had a chance to dive deep into the stack to check it out? What does the term “cloud-native infrastructure” actually even mean?

The more you think about it, the more confusing it gets. We all know that the modern way to build infrastructure is to turn it into a cloud. But then, how can a cloud itself be cloud-native or… native to itself? If this sounds tricky, don’t worry – you’re in the right place. Keep reading to see what happens when the future of infrastructure meets the present.

Cloud-native building blocks

Before we start exploring cloud-native infrastructure, let’s make sure we have a common understanding of the concept of cloud-native itself. According to the official definition maintained by the Cloud Native Computing Foundation (CNCF), cloud-native is a set of technologies that “empower organisations to build and run scalable applications in modern, dynamic environments”.

A lot of buzzwords but not many technical details. Fortunately, the CNCF definition also highlights some sample building blocks of cloud-native applications. Those are:

  • Containers – package applications’ code together with their dependencies and run them in isolation inside of their runtime environment
  • Service meshes – control service-to-service communication with a centrally configurable network of proxies
  • Microservices – turn applications into small independent services that communicate with each other through well-defined APIs
  • Immutable infrastructure – shifts the service management paradigm from one where components are changed to one where they are replaced
  • Declarative APIs – enable describing the desired application state

While the definition does not force cloud-native application developers to use all these components, the majority of cloud-native applications usually follow this pattern. But can we apply the same approach to the underlying infrastructure as well?


It all starts with cloud-native

It’s easier when you’re operating in the applications space. The infrastructure is already there, providing all the capabilities you need, like container runtimes, rollback mechanisms and more. But what if you’re operating in an environment where the underlying infrastructure is yet to be built, for instance in the private cloud space?

Underneath every cloud is nothing more than a pool of bare metal resources. A cluster of physical machines equipped with CPUs, RAM and disks. What turns this raw infrastructure into a fully functional cloud is the cloud management software. And there is no single reason why this software couldn’t be cloud-native, too.

When the cloud becomes an app

Greatly simplified, the cloud itself is just another app. It installs directly on metal and provides functions to tenant applications running on top. Put this way, whether it’s cloud-native or not depends on how the cloud management software is implemented underneath. Basing its architecture on the components listed above effectively sets the foundation for cloud-native infrastructure.

First, we can decompose the cloud management software into several microservices. In fact, leading cloud platforms, such as OpenStack, already follow this pattern. Then, we can run each of those microservices inside immutable containers. Both Kubernetes (K8s) and snapd are suitable for this purpose. Finally, declarative APIs enable high-level abstraction. Instead of struggling with the configuration of individual containers, we can just declare the desired state of the cloud.

Cloud-native infrastructure is an ideal answer for organisations looking for a future-proof cloud platform that will run on their premises. While adopting a hybrid multi-cloud architecture with a private cloud enables them to achieve cost optimisation, digital sovereignty and performance goals, using cloud-native principles enables them to operate it effectively. This way, the cloud management software simply becomes yet another application in their modern, containerised ecosystem, flattening the learning curve and increasing DevOps efficiency.

Let’s have a look at how it works in practice.   

Sunbeam – cloud-native infrastructure implementation example

A perfect example of cloud-native infrastructure implementation is Sunbeam.

Sunbeam is an upstream OpenStack project that revolutionises the way users deploy and operate clouds. Its architecture is entirely based on the components that define the cloud-native paradigm. By containerising OpenStack’s control plane and running it on top of K8s, Sunbeam effectively turns OpenStack into an extension to Kubernetes. This way, the K8s cluster gets additional functionalities, such as traditional infrastructure-as-a-service (IaaS) capabilities, which are not natively available in its ecosystem by default.

The architecture of Sunbeam is depicted in the diagram below:

[Diagram: the Sunbeam architecture]

With Sunbeam, all cloud management software components that require hardware access are delivered as snaps. This includes cloud management and governance services, the Kubernetes cluster, and the hypervisor and storage functions provided by data plane services. This approach ensures a high level of security thanks to the isolation and strict confinement provided by snapd. In turn, all services that don’t require hardware access run on top of K8s as OCI images. This mostly includes cloud control plane services.

All the pieces of the stack are wrapped with charmed operators. A declarative API in front of them enables a high level of abstraction. This way, the initial deployment of the cloud gets significantly simplified, while its post-deployment operations, such as the enablement of plugins, become fully automated.

Learn more

Download our e-book to learn more about Sunbeam and how you can turn OpenStack and Kubernetes into cloud-native apps.

Read more blogs about cloud-native and Sunbeam.

Get in touch with Canonical cloud experts.

on January 12, 2024 07:00 AM

January 11, 2024

Migrating to Incus from LXD

Simos Xenitellis

Incus is a manager for virtual machines and system containers.

A virtual machine is an instance of an operating system that runs on a computer, along with the main operating system. A virtual machine uses hardware virtualization features for the separation from the main operating system.

A system container is an instance of an operating system that also runs on a computer, along with the main operating system. A system container uses, instead, security primitives of the Linux kernel for the separation from the main operating system. You can think of system containers as software virtual machines.

All these are managed together by Incus. Depending on your requirements, you would select either a virtual machine or a system container. Between the two, system containers are more lightweight and you can fit many more of them on your computer.

Why Incus?

There are many use-cases for instances of virtual machines and system containers. They all boil down to the simplicity of installing software and services into instances and not on your main operating system. If you install something in an instance and then want to remove it, you can just get rid of the instance. Your main operating system remains nice and clean.

If you keep a server (such as a Virtual Private Server/VPS or a bare-metal server), you can host many websites on the same server. Each will be in its own Incus instance. Your database server will be in yet another instance, and it will communicate with the websites through instance-to-instance communication. Do you want to set up one of those instant messaging bots? Launch another instance and install the messaging bot there. Are you into weather forecasting and want to save monthly weather conditions files because the provider doesn’t keep historical records? Launch another instance and set up a cron job to keep copies of those monthly files.

Historical weather conditions of one location. Oh, I set this up in 2016 and it is still running fine.

Another use-case is when you are a desktop Linux user and you install software in instances. It keeps your desktop Linux nice and clean. It also opens up many possibilities. Are you stuck with a single Viber desktop client? Put’em in instances.

Migrating to Incus

You can either install Incus from scratch or migrate to Incus from LXD. In my case I am migrating an LXD installation with 27 system containers (seven of them running), and I am doing that while writing this blog post.

Setting up the Incus package repository

I am using these installation instructions to install Incus from a package, and these installation instructions for the Incus packages that work on either Debian or Ubuntu. I am repeating the instructions here for my convenience.

Incus is packaged as a deb package, and we need to add the appropriate repository. First, let’s look at the repository GPG key; in the next step we will import it. Compared to the repository instructions, I am using the --list-packets parameter in order to view the keyid. I can then check on a keyserver that this key exists.

curl -fsSL https://pkgs.zabbly.com/key.asc | gpg --list-packets --fingerprint

The repository key for these Incus packages.

Then, download and install the key on your Debian-style Linux distribution. The first command makes sure that the /etc/apt/keyrings/ directory exists. The second downloads the actual key as a text file and uses the tee command with sudo to show it and save it into the /etc/apt/keyrings/ directory. After you run the commands, you can verify that the new file, zabbly.asc, is in there.

sudo mkdir -p /etc/apt/keyrings/
curl -fsSL https://pkgs.zabbly.com/key.asc | sudo tee /etc/apt/keyrings/zabbly.asc
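
If you want to double-check what was just saved (an extra step of mine, not part of the original instructions), you can ask gpg to show the key in that file without importing it anywhere:

gpg --show-keys /etc/apt/keyrings/zabbly.asc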

Up to now we have added the repository key. The next step is to add the repository itself to our Linux system. Create a file with the following content and save it as make-incus-repository-configuration.sh. The benefit of running this script is that the details of your Linux distribution are filled in automatically.

echo "Enabled: yes"
echo "Types: deb"
echo "URIs: https://pkgs.zabbly.com/incus/stable"
echo "Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})"
echo "Components: main"
echo "Architectures: $(dpkg --print-architecture)"
echo "Signed-By: /etc/apt/keyrings/zabbly.asc"

Now run the following command to add the repository to your system. It runs the script to create the appropriate configuration for your distribution and then, again, pipes the output through sudo tee to write it to the appropriate location, while also showing the output on your terminal.

sh make-incus-repository-configuration.sh | sudo tee /etc/apt/sources.list.d/zabbly-incus-stable.sources

Installing the Incus server and client

The repository is in place. We need to perform a sudo apt update so that our system refreshes with the newly enabled repository. Then, we install the incus package that includes both the Incus server (the manager) and the incus command line client.

sudo apt update
sudo apt install incus
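
If you like, you can confirm at this point that the client is installed; this only prints the client version and does not touch the server:

incus --version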

We have installed the Incus server but we did not initialize it. We must not initialize it because we want to perform the migration.

Configuring the access to the incus client

When we use a non-root user account on our Linux system, we need to give it special access so that it can talk to the Incus server/manager. We do this by adding that non-root user account to the special incus-admin Unix group. During the installation of the incus package, the installer added two new Unix groups: incus-admin (full admin access) and incus (simple administrative tasks). Here we are adding our non-root username, myusername, to the incus-admin Unix group.

sudo adduser myusername incus-admin

We have added our account to the incus-admin Unix group. But is it activated straight away? No, due to the way Unix/Linux works. We either need to restart our desktop session (i.e. logout/login or reboot) or use the newgrp Unix command to enable the new Unix group for the current shell. I’ll do the newgrp trick.

newgrp incus-admin

When you run the command, it will appear as if nothing changed. However, if you run groups to check what groups you are in, you will notice that incus-admin now appears first in the list. If you are prompted to type a password, then the earlier adduser command was not run properly.

Now we are ready to perform the migration and run incus commands through this newgrp shell.

Migrating to Incus with lxd-to-incus

The incus package that we installed includes both the Incus server/manager and the incus client utility, and it also brought in the lxd-to-incus utility that migrates from LXD to Incus.

While migrating, you are prompted whether to remove the LXD package. If you plan to do so, look into ~/snap/lxd/common/config/ for any important configuration to back up. Removing the snap package may remove any configuration here (such as aliases and remotes). If unsure, do not remove LXD just yet.
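
A plain recursive copy is enough to keep a backup of that directory, just in case (my own suggestion, not part of the official migration instructions):

cp -a ~/snap/lxd/common/config ~/lxd-config-backup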

There is an LXD installation with 27 system containers (7 of them running) that I will YOLO right now over to Incus, according to the migration instructions. Here is the command. We need sudo because the command performs all sorts of administrative tasks in order to complete the migration. That is, it does way more than the incus tool can do with incus-admin privileges.

sudo lxd-to-incus

Here is the output of the tool. I timed the migration process: it took 45 seconds to complete. The system containers that were running were restarted automatically.

Output of the lxd-to-incus migration tool. A successful migration.

Using Incus

You have just migrated to Incus. Are there any changes to keep in mind?

First, there is by default one network remote, and it’s the images remote.

incus remote list

Output

Default list of remotes in Incus.

Second, is there a list of images? Yes, visit https://images.linuxcontainers.org/

Third, are there default images? No. If you want to launch an instance with Debian or Ubuntu, you need to specify the version as well. For example, the current Debian image is called debian/12. There are no plain debian or ubuntu aliases.

Fourth, to see the list of all available images from the command line, run

incus image list images:

Finally, to create a Debian 12 container called mydebian, then stop it, then remove it, do the following.

incus launch images:debian/12 mydebian
incus stop mydebian
incus delete mydebian
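
While the container exists you can also list your instances and get a shell inside it, for example (an extra illustration; adapt the instance name to your own):

incus list
incus exec mydebian -- bash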

Conclusion

We have successfully migrated to Incus. We now use the incus command to perform Incus tasks.

We finish this post with a note on where the name comes from: cumulonimbus incus, which is a type of cloud. Clouds, cloud computing. See?

Cumulonimbus incus, an impressive cloud. It resembles an anvil. If you see such a cloud, it will rain heavily below.

The name was coined by Aleksa Sarai, who also created the fork. The main maintainer of Incus is Stéphane Graber. If you want to try Incus online in your browser, without installing it, visit the Incus “Try it” page.

Troubleshooting

Error: Required command “zfs” is missing for storage pool

You are trying to migrate from LXD to Incus, you have a ZFS storage pool, but you do not have the zfs utility. This is a common case if you have been using the LXD snap package: the snap package ships its own copy of zfs, so the utility has never been installed system-wide.

Remedy: Install the zfsutils-linux package.

sudo apt install zfsutils-linux

on January 11, 2024 11:16 PM

This post is in part a response to an aspect of Nate’s post “Does Wayland really break everything?“, but also my reflection on discussing Wayland protocol additions, a unique pleasure that I have been involved with for the past months [1].

Some facts

Before I start I want to make a few things clear: The Linux desktop will be moving to Wayland [2] – this is a fact at this point (and has been for a while), sticking to X11 makes no sense for future projects. From reading Wayland protocols and working with it at a much lower level than I ever wanted to, it is also very clear to me that Wayland is an exceptionally well-designed core protocol, and so are the additional extension protocols (xdg-shell & Co.). The modularity of Wayland is great, it gives it incredible flexibility and will for sure turn out to be good for the long-term viability of this project (and also provides a path to correct protocol issues in future, if one is found). In other words: Wayland is an amazing foundation to build on, and a lot of its design decisions make a lot of sense!

The shift towards people seeing “Linux” more as an application developer platform, and taking PipeWire and XDG Portals into account when designing for Wayland is also an amazing development and I love to see this – this holistic approach is something I always wanted!

Furthermore, I think Wayland removes a lot of functionality that shouldn’t exist in a modern compositor – and that’s a good thing too! Some of X11’s features and design decisions had clear drawbacks that we shouldn’t replicate. I highly recommend reading Nate’s blog post; it’s very good and goes into more detail. And due to all of this, I firmly believe that any advancement in the Wayland space must come from within the project.

But!

But! Of course there was a “but” coming 😉 – I think while developing Wayland-as-an-ecosystem we are now entrenched into narrow concepts of how a desktop should work. While discussing Wayland protocol additions, a lot of concepts clash, people from different desktops with different design philosophies debate the merits of those over and over again never reaching any conclusion (just as you will never get an answer out of humans whether sushi or pizza is the clearly superior food, or whether CSD or SSD is better). Some people want to use Wayland as a vehicle to force applications to submit to their desktop’s design philosophies, others prefer the smallest and leanest protocol possible, other developers want the most elegant behavior possible. To be clear, I think those are all very valid approaches.

But this also creates problems: By switching to Wayland compositors, we are already forcing a lot of porting work onto toolkit developers and application developers. This is annoying, but just work that has to be done. It becomes frustrating though if Wayland provides toolkits with absolutely no way to reach their goal in any reasonable way. For Nate’s Photoshop analogy: Of course Linux does not break Photoshop, it is Adobe’s responsibility to port it. But what if Linux was missing a crucial syscall that Photoshop needed for proper functionality and Adobe couldn’t port it without that? In that case it becomes much less clear on who is to blame for Photoshop not being available.

A lot of Wayland protocol work is focused on the environment and design, while applications, and the work to port them, are often considered less. I think this happens because the overlap between application developers and developers of the desktop environments is not necessarily large, and the overlap with people willing to engage with Wayland upstream is even smaller. The combination of Windows developers porting apps to Linux and having involvement with toolkits or Wayland is pretty much nonexistent. So they have less of a voice.

A quick detour through the neuroscience research lab

I have been involved with Freedesktop, GNOME and KDE for an incredibly long time now (more than a decade), but my actual job (besides consulting for Purism) is that of a PhD candidate in a neuroscience research lab (working on the morphology of biological neurons and its relation to behavior). I am mostly involved with three research groups in our institute, which is about 35 people. Most of us do all our data analysis on powerful servers which we connect to using RDP (with KDE Plasma as desktop). Since I joined, I have been pushing the envelope a bit to extend Linux usage to data acquisition and regular clients, and to have our data acquisition hardware interface well with it. Linux brings some unique advantages for use in research, besides the obvious one of having every step of your data management platform introspectable with no black boxes left, a goal I value very highly in research (but this would be its own blogpost).

In terms of operating system usage though, most systems are still Windows-based. Windows is what companies develop for, and what people use by default and are familiar with. The choice of operating system is very strongly driven by application availability, and WSL being really good makes this somewhat worse, as it removes the need for people to switch to a real Linux system entirely if there is the occasional software requiring it. Yet, we have a lot more Linux users than before, and use it in many places where it makes sense. I also developed novel data acquisition software that is even Linux-only and uses the abilities of the platform to its fullest extent. All of this resulted in me asking existing software and hardware vendors for Linux support a lot more often. The vendor-customer relationship in science is usually pretty good, and vendors do usually want to help out. Same for open source projects, especially if you offer to do Linux porting work for them… But overall, the ease of use and availability of required applications and their usability rule supreme. Most people are not technically knowledgeable and just want to get their research done in the best way possible, getting the best results with the least amount of friction.

KDE/Linux usage at a control station for a particle accelerator at Adlershof Technology Park, Germany, for reference (by 25years of KDE) [3]

Back to the point

The point of that story is this: GNOME, KDE, RHEL, Debian or Ubuntu: none of them matter if the necessary applications are not available for them. And as soon as they are, the easiest-to-use solution wins. There are many facets of “easiest”: In many cases this is RHEL due to Red Hat support contracts being available, in many other cases it is Ubuntu due to its mindshare and ease of use. KDE Plasma is also frequently seen, as it is perceived to be a bit easier to onboard Windows users with (among other benefits). Ultimately, it comes down to applications and 3rd-party support though.

Here’s a dirty secret: In many cases, porting an application to Linux is not that difficult. The thing that companies (and FLOSS projects too!) struggle with and will calculate the merits of carefully in advance is whether it is worth the support cost as well as continuous QA/testing. Their staff will have to do all of that work, and they could spend that time on other tasks after all.

So if they learn that “porting to Linux” not only means added testing and support, but also means to choose between the legacy X11 display server that allows for 1:1 porting from Windows or the “new” Wayland compositors that do not support the same features they need, they will quickly consider it not worth the effort at all. I have seen this happen.

Of course many apps use a cross-platform toolkit like Qt, which greatly simplifies porting. But this just moves the issue one layer down, as now the toolkit needs to abstract Windows, macOS and Wayland. And Wayland does not contain features to do certain things or does them very differently from e.g. Windows, so toolkits have no way to actually implement the existing functionality in a way that works on all platforms. So in Qt’s documentation you will often find texts like “works everywhere except for on Wayland compositors or mobile” [4].

Many missing bits or altered behavior are just papercuts, but those add up. And if users will have a worse experience, this will translate to more support work, or people not wanting to use the software on the respective platform.

What’s missing?

Window positioning

SDI applications with multiple windows are very popular in the scientific world. For data acquisition (for example with microscopes) we often have one monitor with control elements and one larger one with the recorded image. There are also other configurations where multiple signal modalities are acquired, and the experimenter aligns windows exactly in the way they want and expects the layout to be stored and to be loaded upon reopening the application. Even in the image from Adlershof Technology Park above you can see this style of UI design, at mega-scale. Being able to pop out elements as windows from a single-window application to move them around freely is another frequently used paradigm, and it is immensely useful with these complex apps.

It is important to note that this is not a legacy design, but in many cases an intentional choice – these kinds of apps work incredibly well on larger screens or many screens and are very flexible (you can have any window configuration you want, and switch between them using the (usually) great window management abilities of your desktop).

Of course, these apps will work terribly on tablets and small form factors, but that is not the purpose they were designed for and nobody would use them that way.

I assumed for sure these features would be implemented at some point, but when it became clear that that would not happen, I created the ext-placement protocol which had some good discussion but was ultimately rejected from the xdg namespace. I then tried another solution based on feedback, which turned out not to work for most apps, and now proposed xdg-placement (v2) in an attempt to maybe still get some protocol done that we can agree on, exploring more options before pushing the existing protocol for inclusion into the ext Wayland protocol namespace. Meanwhile though, we can not port any application that needs this feature, while at the same time we are switching desktops and distributions to Wayland by default.

Window position restoration

Similarly, a protocol to save & restore window positions was already proposed in 2018, 6 years ago now, but it has still not been agreed upon, and may not even help multiwindow apps in its current form. The absence of this protocol means that applications can not restore their former window positions, and the user has to move them to their previous place again and again.

Meanwhile, toolkits can not adopt these protocols and applications can not use them and can not be ported to Wayland without introducing papercuts.

Window icons

Similarly, individual windows can not set their own icons, and not-installed applications can not have an icon at all because there is no desktop-entry file to load the icon from and no icon in the theme for them. You would think this is a niche issue, but for applications that create many windows, providing icons for them so the user can find them is fairly important. Of course it’s not the end of the world if every window has the same icon, but it’s one of those papercuts that make the software slightly less user-friendly. Even applications with fewer windows like LibrePCB are affected, so much so that they would rather run their app through Xwayland for now.

I decided to address this after I was working on data analysis of image data in a Python virtualenv, where my code and the Python libraries used created lots of windows all with the default yellow “W” icon, making it impossible to distinguish them at a glance. This is xdg-toplevel-icon now, but of course it is an uphill battle where the very premise of needing this is questioned. So applications can not use it yet.

Limited window abilities requiring specialized protocols

Firefox has a picture-in-picture feature, allowing it to pop out media from a media player as a separate floating window so the user can watch the media while doing other things. On X11 this is easily realized, but on Wayland the restrictions posed on windows necessitate a different solution. The xdg-pip protocol was proposed for this specialized use case, but it is also not merged yet. So this feature does not work as well on Wayland.

Automated GUI testing / accessibility / automation

Automation of GUI tasks is a powerful feature, so is the ability to auto-test GUIs. This is being worked on, with libei and wlheadless-run (and stuff like ydotool exists too), but we’re not fully there yet.

Wayland is frustrating for (some) application authors

As you see, there are valid applications and valid use cases that can not yet be ported to Wayland with the same feature range they enjoyed on X11, Windows or macOS. So, from an application author’s perspective, Wayland does break things quite significantly, because things that worked before can no longer work and Wayland (the whole stack) does not provide any avenue to achieve the same result.

Wayland does “break” screen sharing, global hotkeys, gaming latency (via “no tearing”) etc., but for all of these there are solutions available that application authors can port to. And most developers will gladly do that work, especially since the newer APIs are usually a lot better and more robust. But if you give application authors no path forward except “use Xwayland and be on emulation as a second-class citizen forever”, it just results in very frustrated application developers.

For some application developers, switching to a Wayland compositor is like buying a canvas from the Linux shop that forces your brush to only draw triangles. But maybe for your avant-garde art, you need to draw a circle. You can approximate one with triangles, but it will never be as good as the artwork of your friends who got their canvases from the Windows or macOS art supply shop and have more freedom to create their art.

Triangles are proven to be the best shape! If you are drawing circles you are creating bad art!

Wayland, via its protocol limitations, forces a certain way to build application UX – often for the better, but also sometimes to the detriment of users and applications. The protocols are often fairly opinionated, a result of the lessons learned from X11. In any case though, it is the odd one out – Windows and macOS do not pose the same limitations (for better or worse!), and the effort to port to Wayland is orders of magnitude bigger, or sometimes in case of the multiwindow UI paradigm impossible to achieve to the same level of polish. Desktop environments of course have a design philosophy that they want to push, and want applications to integrate as much as possible (same as macOS and Windows!). However, there are many applications out there, and pushing a design via protocol limitations will likely just result in fewer apps.

The porting dilemma

I spent probably way too much time looking into how to get applications cross-platform and running on Linux, often talking to vendors (FLOSS and proprietary) as well. Wayland limitations aren’t the biggest issue by far, but they do start to come up now, especially in the scientific space with Ubuntu having switched to Wayland by default. For application authors there is often no way to address these issues. Many scientists do not even understand why their Python script that creates some GUIs suddenly behaves weirdly because Qt is now using the Wayland backend on Ubuntu instead of X11. They do not know the difference and also do not want to deal with these details – even though they may be programmers as well, the real goal is not to fiddle with the display server, but to get to a scientific result somehow.

Another issue is portability layers like Wine which need to run Windows applications as-is on Wayland. Apparently Wine’s Wayland driver has some heuristics to make window positioning work (and I am amazed by the work done on this!), but that can only go so far.

A way out?

So, how would we actually solve this? Fundamentally, this excessively long blog post boils down to just one essential question:

Do we want to force applications to submit to a UX paradigm unconditionally, potentially losing out on application ports or keeping apps on X11 eternally, or do we want to throw them some rope to get as many applications ported over to Wayland, even though we might sacrifice some protocol purity?

I think we really have to answer that to make the discussions on wayland-protocols a lot less grueling. This question can be answered at the wayland-protocols level, but even more so it must be answered by the individual desktops and compositors.

If the answer for your environment turns out to be “Yes, we want the Wayland protocol to be more opinionated and will not make any compromises for application portability”, then your desktop/compositor should just immediately NACK protocols that add something like this and you simply shouldn’t engage in the discussion, as you reject the very premise of the new protocol: That it has any merit to exist and is needed in the first place. In this case contributors to Wayland and application authors also know where you stand, and a lot of debate is skipped. Of course, if application authors want to support your environment, you are basically asking them now to rewrite their UI, which they may or may not do. But at least they know what to expect and how to target your environment.

If the answer turns out to be “We do want some portability”, the next question obviously becomes where the line should be drawn and which changes are acceptable and which aren’t. We can’t blindly copy all X11 behavior, some porting work to Wayland is simply inevitable. Some written rules for that might be nice, but probably more importantly, if you agree fundamentally that there is an issue to be fixed, please engage in the discussions for the respective MRs! We for sure do not want to repeat X11 mistakes, and I am certain that we can implement protocols which provide the required functionality in a way that is a nice compromise in allowing applications a path forward into the Wayland future, while also being as good as possible and improving upon X11. For example, the toplevel-icon proposal is already a lot better than anything X11 ever had. Relaxing ACK requirements for the ext namespace is also a good proposed administrative change, as it allows some compositors to add features they want to support to the shared repository easier, while also not mandating them for others. In my opinion, it would allow for a lot less friction between the two different ideas of how Wayland protocol development should work. Some compositors could move forward and support more protocol extensions, while more restrictive compositors could support less things. Applications can detect supported protocols at launch and change their behavior accordingly (ideally even abstracted by toolkits).

You may now say that a lot of apps are ported, so surely this issue can not be that bad. And yes, what Wayland provides today may be enough for 80-90% of all apps. But what I hope the detour into the research lab has done is convince you that this smaller percentage of apps matters. A lot. And that it may be worthwhile to support them.

To end on a positive note: When it came to porting concrete apps over to Wayland, the only real showstoppers so far [5] were the missing window-positioning and window-position-restore features. I encountered them when porting my own software, and I got the issue as feedback from colleagues and fellow engineers. In second place was UI testing and automation support; the window-icon issue was mentioned twice, but being a cosmetic issue it likely simply hurts people less and they can ignore it more easily.

What this means is that the majority of apps are already fine, and many others are very, very close! A Wayland future for everyone is within our grasp! 😄

I will also bring my two protocol MRs to their conclusion for sure, because as application developers we need clarity on what the platform (either all desktops or even just a few) supports and will or will not support in future. And the only way to get something good done is by contribution and friendly discussion.

Footnotes

  1. Apologies for the clickbait-y title – it comes with the subject 😉
  2. When I talk about “Wayland” I mean the combined set of display server protocols and accepted protocol extensions, unless otherwise clarified.
  3. I would have picked a picture from our lab, but that would have needed permission first
  4. Qt has awesome “platform issues” pages, like for macOS and Linux/X11, which help with porting efforts, but Qt doesn’t even list Linux/Wayland as a supported platform. There is some information though, like window geometry peculiarities, which aren’t particularly helpful when porting (but still essential to know).
  5. Besides issues with Nvidia hardware – CUDA for simulations and machine-learning is pretty much everywhere, so Nvidia cards are common, which causes trouble on Wayland still. It is improving though.
on January 11, 2024 04:24 PM

KTextAddons 1.5.3

Jonathan Riddell

KTextAddons is a library with various text handling addons used by Ruqola and the Kontact apps. It can be compiled for both Qt 5 and 6, and distros are advised to compile builds for both until Ruqola is ported to Qt 6.

URL: https://download.kde.org/stable/ktextaddons/

SHA256: 8a52db8abfa8a9d68d2d291fb0f8be20659fd7899987b4dcafdf2468db0917dc

Changelog

  • Drop unused KXmlGui dependency
  • Adapt to new KConfigGroup API
  • As we exclude emojis we need to remove it from list and not exclude it
  • Use proxymodel when exclude emoticons were updated
  • Allow to exclude some specific emoticons (Need for ruqola)
  • Exclude mock engine => it’s for test
  • Remove generate pri support (removed in kf6)
on January 11, 2024 12:29 PM

E281 Eles Querem É Poleiro No Pacote

Podcast Ubuntu Portugal

This time we welcomed João Jotta, who came to tell us about what he has been doing to break Free operating systems, or the Midas Touch in Reverse. Diogo has started a dazzling political career that will no doubt give rise to countless diplomatic incidents, and Miguel once again drew the ire of a good section of listeners, who will be writing indignant letters. We also discussed military geostrategy in the War of the Packages and signings for the Snaps squad; who loves Jellyfin and why; who can use Ubuntu Pro; and board games without a board. By the way, does anyone know where Wisconsin is…?

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay whatever you want. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo, and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT Licence. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on January 11, 2024 12:00 AM

January 10, 2024

The term High Performance Computing, HPC, evokes a lot of powerful emotions whenever mentioned. Even people who do not necessarily have vocational knowledge of hardware and software will have an inkling of understanding of what it’s all about; HPC solves problems at speed and scale that cannot be achieved with standard, traditional compute resources. But the speed and the scale introduce a range of problems of their own. In this article, we’d like to talk about the “problem journey” – the ripple effect of what happens when you try to solve a large, complex compute issue.

It all starts with data…

Consider a 1TB text data set. Let’s assume we want to analyse the frequency of words in the data set. This is a fairly simple computational problem, and it can be handled by any computer capable of reading and analysing the input file. In fact, with fast storage devices like NVMe-connected SSDs, parsing 1TB of text won’t take too long.
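
As a toy illustration of how simple the single-machine version is (my own sketch, not something from a real HPC pipeline), a word-frequency count is just one streaming pass over the file:

import collections

def count_words(path):
    # stream the file line by line so the whole data set is never held in memory
    counts = collections.Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            counts.update(line.lower().split())
    return counts

# counts = count_words("corpus.txt")   # hypothetical 1 TB input file
# print(counts.most_common(10))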

But what if we had to handle a much larger data set, say 1 PB (or 1 EB for that matter), and solve a much more complex mathematical problem? Furthermore, what if we needed to solve this problem relatively quickly?

From serial to parallel

One way to try to handle the challenge is to simply load data in a serial manner into the computer memory for processing. This is where we encounter the first bottleneck. We need storage that can handle the data set, and which can stream the data into memory at a pace that will not compromise the required performance. Even before it starts, the computation problem becomes one of big data. As it happens, this is a general challenge that affects most areas of the software industry. In the HPC space, it is simply several orders of magnitude bigger and more acute. 

The general limitations of storage speed compared to computer memory and CPU are not a new thing. They can be resolved in various ways, but by and large, especially when taking into consideration the cost of hardware, I/O operations (storage and network) are slower than in-memory operations or computation done by the processor. Effectively, this means that tasks will run only as quickly as the slowest component. Therefore, if one wants an entire system to perform reasonably fast, considering it is composed of multiple components that all work at different individual speeds, the input of data needs to change, from serial to parallel.

An alternative approach to handle the data problem is to break the large data set into smaller chunks, store them on different computers, and then analyse and solve the mathematical problem as a set of discrete mini-problems in parallel. However, this method assumes that data can be arbitrarily broken without losing meaning, or that execution can be done in parallel without altering the end result of the computation. It also requires an introduction of a new component into the equation: a tool that will orchestrate the division of data, as well as organise the work of multiple computers at the same time.
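
To make the idea concrete on a single machine (a deliberately simplified, hypothetical stand-in for what a real cluster scheduler orchestrates across many nodes), the same word count can be split into chunks, counted in parallel worker processes, and merged at the end:

import collections
import multiprocessing

def count_chunk(lines):
    # map step: count word frequencies in one chunk of lines
    counts = collections.Counter()
    for line in lines:
        counts.update(line.lower().split())
    return counts

def parallel_count(path, workers=4, chunk_size=100_000):
    # split the input into blocks of lines, count each block in a separate
    # process, then merge (reduce) the partial counts into a single result
    def chunks():
        with open(path, encoding="utf-8", errors="replace") as f:
            block = []
            for line in f:
                block.append(line)
                if len(block) >= chunk_size:
                    yield block
                    block = []
            if block:
                yield block

    total = collections.Counter()
    with multiprocessing.Pool(workers) as pool:
        for partial in pool.imap_unordered(count_chunk, chunks()):
            total.update(partial)
    return total

# total = parallel_count("corpus.txt")   # hypothetical input file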

Data locality

If a single computer cannot handle the task at hand, using multiple computers means they will need to be inter-connected, both physically and logically, to perform the required workload. These clusters of compute nodes will often require network connectivity between them, so they can share data input and output, as well as the data dispatch logic.

The network component presents a new challenge to the mathematical problem we want to solve. It introduces the issue of reliability into the equation, as well as further performance considerations. While one can guarantee that data will correctly move from local storage into memory, there is no such guarantee for over-the-network operations.

Data may be lost, retransmitted, or delayed. Some of these problems can be controlled at the network transmission protocol level, but there will still be a level of variance that may affect the parallel execution. If the data analysis relies on precise timing and synchronisation of operations among the different nodes in inter-connected clusters of machines, we will need to introduce yet another safeguard mechanism into the system. We will need to ensure data consistency on top of the other constraints.

This means that the cluster’s data dispatcher, or governor, will have to take additional logic into account, including the fine balance among the different components in the environment. We need to invest additional resources to create an intelligent and robust scheduler before we can even start any data analysis.

Who guards the guards?

In many scenarios, large data sets will include structure dependencies that will complicate the possible data topology for parallel computation. Data won’t just be a simple if large, well-organised set. For example, certain parts of the data set may have to be processed before other parts, or they may use the output of computation from some of the subsets as the input for further processing with other subsets.

Such limitations will dictate the data processing, and will require a clever governor that manages and organises data. Otherwise, even if there are sufficient storage and computational resources available, they might not be utilised fully or correctly because the data isn’t being dispatched in the most optimal manner.

The scheduler will have to take into account data locality, network transport reliability, correctly parse the data and distribute it across multiple computers, orchestrate the sequence and timing of parallel execution, and finally validate the results of the execution. All of these problems stem from the fact we may want to run a task across multiple computers. 

Back to a single node and the laws of thermodynamics

Indeed, one may conclude that parallel execution is too complex, and brings more problems than it solves. But in practice, it is the only possible method that can resolve some large-scale mathematical problems, as no single-node computer can handle the massive amounts of data in the required time frame.

Individual computers designed for high performance operations also have their own challenges, dictated by the limitations of physics. Computation speed is usually determined by the processor clock frequency. Typically, the higher the frequency, the higher the heat generation in the CPU. This excess heat quickly builds up and needs to be dissipated so that CPUs can continue working normally. When one considers the fact that the power density of silicon-based processors is similar to that of fission nuclear reactors, cooling is a critical element, and it will usually be the practical bottleneck of what any one processor can do.

Higher clock speeds, both for the processor and the memory, also affect the rate of error generation that can occur during execution, and the system’s ability to handle these errors. At some point, in addition to the heat problem, execution may become unstable, compromising the integrity of the entire process.

There are also practical limits on the size of physical memory modules and local storage systems, which mean that large data sets will have to be fragmented to be processed, at which point we go back to square one – breaking the large set into a number of small ones, and taking into account the distribution of data, the data logic and ordering, and the reliability of the transport between the local system and the rest of the environment.

In that case, perhaps the solution is to use multiple computers, but treat them as a single, unified entity?

Mean Time Between Failures (MTBF)

The clustering of computers brings into focus an important concept, especially in the world of HPC. MTBF is an empirically-established value that determines the average time a system will run or last before it encounters a failure. It is used to estimate the reliability of a system, in order to implement relevant safeguards and redundancy where needed.

An alternative way to look at MTBF can be through failure rates. For instance, if a hard disk has a 3% failure rate within its first year, one can expect, on average, three hard disk failures from a pool of 100 within one year of being put to use. This isn’t necessarily an issue on its own, but if all these 100 disks are used as part of a clustered system, as a single entity, then the failure is pretty much guaranteed.

The MTBF values are a critical factor for HPC workloads due to their parallelised nature. Quite often, the results are counterintuitive. In many scenarios, HPC systems (or clusters) have much lower MTBF values than the individual computers (or nodes) that comprise them.

Hard disks are a good, simple way to look at and analyse MTBF-associated risks. If a hard disk has a 1% failure rate, storing identical data on two different devices significantly reduces the risk of total data loss. In this case, the risk of loss becomes only 0.01%. However, if the two disks are used to store different, non-identical parts of the data, the loss of any one will cause a critical failure in the system. The failure rate will increase and lead to a lower MTBF, effectively half the value of the individual disks.
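
Spelled out, the arithmetic behind those numbers looks like this (a back-of-the-envelope check, assuming the two disks fail independently):

p = 0.01                          # 1% failure rate for a single disk

# mirrored: data is lost only if both disks fail
both_fail = p * p                 # 0.0001, i.e. 0.01%

# data split across both disks: data is lost if either disk fails
either_fails = 1 - (1 - p) ** 2   # 0.0199, i.e. roughly 2%, about double the single-disk rate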

This can be a significant problem with large data sets that need to be analysed in small chunks on dozens, hundreds and sometimes thousands of parallelised compute instances that work as a unified system. While each of these computers is an individual machine, they all form a single, more complex system that will not produce the desired results if a failure occurs on any one node in the entity.

With supercomputing clusters, the MTBF can be as low as several minutes. If the runtime of tasks that need to be solved exceeds the MTBF figure, the setup needs to include mechanisms to ensure a potential loss of individual components will not affect the entire system. However, often, every new safeguard introduces additional cost and potential performance penalty.

Once again, there are critical challenges that need to be addressed before the data set can be safely parsed and analysed. In essence, it becomes a cat-and-mouse chase between performance and reliability. The more time and resources are invested in checks that ensure data safety and integrity, the longer and costlier the execution. A functioning setup will be a tradeoff between raw speed and potential catastrophic loss of the entire data set computation.

Back to data

So far, we haven’t really talked about what we actually want to do with our data. We just know that we have a payload that is too large to load and run on a single machine, and we need to break it into chunks and run on several systems in parallel. The bottleneck of data handling triggered an entire chain of considerations that are tangential to data but critical to our ability to process and analyse the data.

Assuming we can figure out the infrastructure part, we still have the challenge of how to handle our actual workload. A well-designed HPC setup that has as few bottlenecks as possible will still be only as good as our ability to divide the data, process it, and get the results.

To that end, we need a data scheduler. It needs to have the following capabilities:

  • Be fast enough so the HPC setup is used in an optimal manner – and not introduce performance penalties of its own.
  • Be aware of and understand the infrastructure topology on which it runs so it can run necessary data integrity checks when needed – to account for data loss or corruption, unresponsive components in the system, and other possible failures.
  • Be able to scale up and down as needed.

Usually, implementing all of these requirements in a single tool is very difficult, as the end product ends up being highly complex and fragile, as well as not easily adaptable to changes in the environment. Typically, setups will have several schedulers – one or more on the infrastructure level, and one or more on the application and data level. The former will take care of the parallelised compute environment so that the applications that run on top of it will be able to treat it as transparent, i.e., as though working on a single computer. The latter will focus on data management – low overhead, high integrity, and scale. On paper, this sounds like a pretty simple formula.

There are no shortcuts

Realistically, no two HPC or supercomputing setups are identical, and they all require a slightly different approach, for a variety of reasons. The size and complexity of the data that needs to be used for computation will often dictate the topology of the environment, and the combination of individual components used in the system, no matter how much we may want these to be separate and transparent to each other. HPC setups are often built using a specific set of hardware, and sometimes some of the components are ancient while others are new and potentially less tested, which can lead to instability and runtime issues. A lot of effort is required to integrate all these components and make them work together.

At Canonical, we do not presume to have all the answers. But we do understand the colossal challenges that the practitioners of HPC face, and we want to help them. This piece is the beginning of a journey, where we want to solve, or at least simplify, some of the core problems of the HPC world. We want to make HPC software and tooling more accessible, easier to use and maintain. We want to help HPC customers optimise their workloads and reduce their cost, be it the hardware or electricity bills. We want to make HPC better however we can. And we want to do that in a transparent way, using open source applications and utilities, both from the wider HPC community as well as our own home-grown software. If you’re interested, please consider signing up for our newsletter, or contact us, and maybe we can undertake this journey together.

Photo by Aleksandr Popov on Unsplash.

on January 10, 2024 12:50 PM

Going freelance

Colin Watson

I’ve mentioned this in a couple of other places, but I realized I never got round to posting about it on my own blog rather than on other people’s services. How remiss of me.

Anyway: after much soul-searching, I decided a few months ago that it was time for me to move on from Canonical and the Launchpad team there. Nearly 20 years is a long time to spend at any company, and although there are a bunch of people I’ll miss, Launchpad is in a reasonable state where I can let other people have a turn.

I’m now in business for myself as a freelance developer! My new company is Columbiform, and I’m focusing on Debian packaging and custom Python development. My services page has some self-promotion on the sorts of things I can do.

My first gig, and the one that made it viable to make this jump, is at Freexian where I’m helping with an exciting infrastructure project that we hope will start making Debian developers’ lives easier in the near future. This is likely to take up most of my time at least through to the end of 2024, but I may have some spare cycles. Drop me a line if you have something where you think I could be a good fit, and we can have a talk about it.

on January 10, 2024 09:50 AM

January 08, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 821 for the week of December 31, 2023 – January 6, 2024. The full version of this issue is available here.

In this issue we cover:

  • First Noble Numbat test rebuild
  • discontinuing source ISOs?
  • Ubuntu Stats
  • Hot in Support
  • LoCo Events
  • Introducing the temporary Matrix Council
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Blogosphere
  • In Other News
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for 20.04, 22.04, 23.04 and 23.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

on January 08, 2024 08:46 PM

KDiagram 3.0.1

Jonathan Riddell

KDiagram 3.0.1 is an update to our charting libraries which fixes a bug in the cmake path configuration. It also updates translations and removes some unused Qt 5 code.

URL: https://download.kde.org/stable/kdiagram/3.0.1/

sha256: 4659b0c2cd9db18143f5abd9c806091c3aab6abc1a956bbf82815ab3d3189c6d

Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell jr@jriddell.org
https://jriddell.org/esk-riddell.gpg
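If you want to check the download before building, something along these lines should work – the exact tarball and signature file names are assumptions here (adjust them to whatever the download directory actually contains), and the published sha256 is taken to be for the tarball:

wget https://download.kde.org/stable/kdiagram/3.0.1/kdiagram-3.0.1.tar.xz
wget https://download.kde.org/stable/kdiagram/3.0.1/kdiagram-3.0.1.tar.xz.sig
echo "4659b0c2cd9db18143f5abd9c806091c3aab6abc1a956bbf82815ab3d3189c6d  kdiagram-3.0.1.tar.xz" | sha256sum -c
gpg --fetch-keys https://jriddell.org/esk-riddell.gpg
gpg --verify kdiagram-3.0.1.tar.xz.sig kdiagram-3.0.1.tar.xz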

on January 08, 2024 05:50 PM

January 05, 2024

Santiago Zarate

If you’re using Copilot and happen to also use pass to manage your passwords, you will find that the default configuration – or rather, a configuration where Copilot is enabled everywhere – creates a risk for your precious passwords… as Copilot will be enabled by default on text files.

My dataaa!

So here’s the snippet I use:

-- initialize copilot
local copilot = {
	"zbirenbaum/copilot.lua",
	"ofseed/copilot-status.nvim",
	cmd = "Copilot",
	build = ":Copilot auth",
	event = "InsertEnter",
	opts = {
		filetypes = {
			sh = function()
				if string.match(vim.fs.basename(vim.api.nvim_buf_get_name(0)), "^%.env.*") then
					-- disable for .env files
					return false
				end
				return true
			end,
			text = function()
				-- disable when GIT_CEILING_DIRECTORIES or PASS_VERSION is set,
				-- e.g. when pass invokes the editor
				if vim.env.GIT_CEILING_DIRECTORIES ~= nil or vim.env.PASS_VERSION ~= nil then
					return false
				end
				return true
			end,
		},
	},
}

I should eventually add this too to my dotfiles… Once I have the time to do so.

My dotfiles would be here, if I updated them

on January 05, 2024 12:28 AM

January 04, 2024

E280 Os Magos Mal Pagos

Podcast Ubuntu Portugal

Did our wizards and soothsayers get their 2023 predictions right? Is the reliability of their forecasts reasonable – or is it on a par with meteorology, astrology, economics and polling companies? Somebody won the dinner! But who? Can you guess? In this episode we revisited the prophecies of the seers who, last year, set out to divine the trends of 2023… and we got plenty of surprises!

You know the drill: listen, subscribe and share!

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you like. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT Licence. The theme music is: “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used in it are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on January 04, 2024 12:00 AM

January 02, 2024

2024

Stéphane Graber

Happy new year!

2023 was quite the busy year for me with a lot of changes to get used to, the biggest of which being my departure from Canonical and going self-employed.


While I don’t expect 2024 to be quite as exciting (and that’s a good thing), I certainly expect it to be busy! Here are some of what I look forward to in 2024:

Growing the Incus user base

Incus has been quickly picking up new users recently, mostly thanks to the great work of our packagers: we now have proper packages and installation instructions for Arch, Debian, Fedora, Gentoo, NixOS and Ubuntu, with more coming soon!

It’s also easier than ever for folks using MacOS or Windows to interact with remote Incus servers thanks to Homebrew, Chocolatey and now WinGet packages.

We’re also starting recurring Incus Users meetings as a way to gather more valuable feedback for the development team as well as connecting users together!

Also worth noting for anyone still using LXD. We’ve started the process of phasing out access to the Linux Containers image server for LXD users. It’s something we’re doing pretty carefully and spread over a number of months, focusing on those users who have an easy migration path first.

FOSDEM 2024

FOSDEM 2024 is now just a month away and I’m very much looking forward to catching up with everyone! It’s going to be a busy weekend with us running both the Containers (Saturday) and Kernel (Sunday) devrooms, but I’m excited about all the great talks!

Schedule for containers devroom
Schedule for kernel devroom

We’re going to have Aleksandr, Christian, Tycho and myself representing LXC, LXCFS and Incus over there.

Incus LTS

As mentioned, Incus has seen a lot of interest lately and has picked up a pretty sizable user base already. But a lot more users and Linux distributions are waiting for a Long Term Support release to come out so they can standardize on something that’s not quite as fast-paced.

That’s going to happen towards the end of March or early April with the release of Incus 6.0 LTS.

That version number comes from the fact that we always align all the LXC projects for an LTS release, so we’ll be releasing LXC 6.0 LTS, LXCFS 6.0 LTS and Incus 6.0 LTS around the same time, with 5 years of security updates across all of them, the first 2 years of which will also include bugfixes and minor improvements.

LPC 2024

In the second half of the year, we’ll be gathering in Vienna this time for the annual Linux Plumbers Conference where I hope we’ll have another edition of the containers micro-conference.

This is always a great opportunity to catch up in person with other low-level Linux developers and to work together on exciting new kernel and userspace features.

Of particular interest to me is the continued work on improving user namespaces, VFS idmap mounts and new ways to handle resources limits in containers and CGroups.

on January 02, 2024 05:41 PM

January 01, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 820 for the week of December 24 – 30, 2023. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu Stats
  • Hot in Support
  • LoCo LoGo DoJo
  • LoCo Events
  • Other Community News
  • In the Blogosphere
  • In Other News
  • Featured Audio and Video
  • Upcoming Meetings and Events
  • Updates and Security for 20.04, 22.04, 23.04 and 23.10: None
  • And some more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

on January 01, 2024 08:50 PM

December 21, 2023

Announcing Incus 0.4

Stéphane Graber

Just as we’re wrapping up 2023, one last Incus release!

Incus 0.4 is now out, including keepalive support in the command line tool, improved certificate management, new OVN configuration keys and the ability to create CephFS filesystems directly through Incus!

This is going to be the last release of Incus to benefit from any of the work that goes into Canonical LXD as their decision to re-license will be preventing us from taking in any additional fixes or improvements. You’ll find details about that in one of my previous posts.

Related to that change, we’ve made the decision to progressively phase out access to the Linux Containers image server for those users still on Canonical LXD. You’ll find details about the motivations and timeline for this change here.

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

Finally just a quick reminder that my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship.
You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy the holidays and see you all in 2024!

on December 21, 2023 11:38 PM

December 15, 2023

First I want to thank KDE for this wonderful write up on LinkedIn: https://www.linkedin.com/posts/kde_akademy-2023-over-a-million-reasons-why-activity-7139965489153208320-PNem?utm_source=share&utm_medium=member_desktop It made my heart explode with happiness as I prepped for my interview on Monday. I didn’t get the job (just the usual “we were very impressed with your experience, but we went with another candidate”). I think the cosmos is determined for me to hold out for the ‘project’. Even though it is only part-time work, it is work, and if I have learned nothing else this year, I have learned how to live on a very tight budget! It turns out many of the things we think we need, we don’t. So, with the hard work of Kevin Ottens (thank you!!!!), this should be finalized after the first of the year. This plan also allows me time for KDEneon and Debian work. I am happy with it and look forward to the coming year and the exciting things to come.

My holiday plans are to help the Debian KDE team with the KF6 packaging, continue the ongoing KDEneon efforts, and make sure the snaps’ Qt6 transition is as painless as possible. I will also be working on the qt6 kde-neon extension.

In closing, despite my terrible luck with job-hunting, I am in an amazing community and I am truly grateful to each and every one of you. It has been a great year and I have added many new things to my skillset and I look forward to many more next year.

As usual, it is that time of the month where I have not raised enough to pay my internet bill (the phone company taking an extra $200.00 didn’t help). If you can spare any change (any amount helps!), please consider a donation: https://gofund.me/b74e4c6f Thank you!

I hope everyone has a wonderful <insert your holiday here> !!!!!

~ Scarlett

on December 15, 2023 09:33 PM

December 14, 2023

UPDATE: The boot issues have been fixed, however, the installer frontend is not yet working due to a known bug. This is true for Ubuntu Desktop Noble Numbat as well. If you wish to test the installation, it must be tested with subiquity. This can be done with:

sudo snap install subiquity --classic
sudo subiquity

However, the live .iso image is now functioning.


Greetings to our community!

We’re working diligently on getting ready for the next Long-Term Support release of Ubuntu Studio, right now codenamed Noble Numbat, which will become 24.04 LTS in April.

One of the things we are putting our energy into is moving away from the Ubiquity Installer, due to be sunset, and to the modern Subiquity installer, which has a technical caveat: it’s text-only. However, the Ubuntu Desktop team came up with a wonderful solution: Ubuntu Desktop Installer. It’s a flutter-based frontend to Subiquity which was released as the default installer experience for Ubuntu Desktop 23.04. Along with that, Ubuntu Budgie also released their own version, the Ubuntu Budgie Installer, which is done as an add-on to the Ubuntu Desktop Installer, and released that as their default installation experience for Ubuntu Budgie 23.10.

Following that lead, we have decided to change our installation experience to the Ubuntu Studio System Installer, which is built upon the work that the Ubuntu Budgie and Ubuntu Desktop teams have already done. This will make us the first non-GNOME-based flavor to use this installer. However, in doing the switchover on our daily builds, we have run into some breakage.

Right now, our daily .iso images boot to a black screen with a mouse cursor. In other words, they do not work. Additionally, you cannot login to a virtual terminal to diagnose the issue via logs. This is because, upon extracting the .iso image and included squashfs file, we found out that the default live user (normally named “ubuntu-studio”) is not being created upon image build.

We do not expect this issue to be resolved until January. This is because the people that we need help to resolve this issue (Ubuntu Desktop team and Ubuntu Foundations team) are employees at Canonical. Canonical’s developers are mandated to take the final two weeks of every year off, which is why the yy.04 cycles are slightly longer than the yy.10 cycles.

If, for whatever reason, we cannot get this resolved, we will find another path. That said, we request that since this is a known bug and that a bug report exists, please do not file any further bug reports, and please do not attempt any further testing on our .iso images for Noble Numbat (future 24.04) as the .iso images are failing until further notice.

Please watch our accounts on x (formerly known as twitter) or mastodon.art for further updates.

on December 14, 2023 05:56 PM

December 10, 2023

KDE PIM Kaddressbook snap

KDE Snaps:

This week’s big accomplishment is KDE PIM snaps! I have successfully added Akonadi via an akonadi content snap and am running it as a service. Kaddressbook is our first PIM snap with this setup and it works flawlessly! It is available in the snap store. I have a pile of MRs awaiting approval, so keep your eye out for the rest of PIM in the next day.

KDE Applications 23.08.4 has been released and is available in the snap store.

Krita 5.2.2 has been released.

I have created a new kde-qt6 snap as the qt-framework snap has not been updated and the maintainer is unreachable. It is in edge and I will be rebuilding our kf6 snap with this one.

I am debugging an issue with the latest Labplot release.

KDE neon:

This week I helped with frameworks release 5.113 and KDE applications 23.08.4.

I also worked on the ongoing effort of turning red Unstable builds green as the porting to Qt6 continues.

Debian:

Continuing my ongoing effort to learn packaging across programming languages, this time Rust: I started on Rustic https://github.com/rustic-rs/rustic. Unfortunately, it was a bit of wasted time, as it depends on a feature of tracing-subscriber that in turn depends on matchers, which has a grave bug, so it remains disabled.

Personal:

I do have an interview tomorrow! And it looks like the ‘project’ may go through after the new year, so things are looking up. Unfortunately, I must still ask: if you have any spare change, please consider a donation. The phone company decided to take an extra $200.00 I didn’t have to spare, and while I resolved it, they refused a refund and instead gave me a credit towards next month’s bill, which doesn’t help me now. Thank you for your consideration.

https://gofund.me/b74e4c6f

on December 10, 2023 01:33 PM
The Lubuntu Team has been hard at work already this development cycle polishing the Lubuntu desktop in time for our upcoming Long-Term Support release, 24.04 (codenamed Noble Numbat). We have pioneered groundbreaking features and achieved remarkable stability in crucial components. These enhancements are not just technical milestones; they're transformative changes you'll experience when you install […]
on December 10, 2023 03:26 AM

November 30, 2023

Every so often I have to make a new virtual machine for some specific use case. Perhaps I need a newer version of Ubuntu than the one I’m running on my hardware in order to build some software, and containerization just isn’t working. Or maybe I need to test an app that I made modifications to in a fresh environment. In these instances, it can be quite helpful to be able to spin up these virtual machines quickly, and only install the bare minimum software you need for your use case.

One common strategy when making a minimal or specially customized install is to use a server distro (like Ubuntu Server for instance) as the base and then install other things on top of it. This sorta works, but it’s less than ideal for a couple reasons:

  • Server distros are not the same as minimal distros. They may provide or offer software and configurations that are intended for a server use case. For instance, the ubuntu-server metapackage in Ubuntu depends on software intended for RAID array configuration and logical volume management, and it recommends software that enables LXD virtual machine related features. Chances are you don’t need or want these sort of things.

  • They can be time-consuming to set up. You have to go through the whole server install procedure, possibly having to configure or reconfigure things that are pointless for your use case, just to get the distro to install. Then you have to log in and customize it, adding an extra step.

If you’re able to use Debian as your distro, these problems aren’t so bad since Debian is sort of like Arch Linux - there’s a minimal base that you build on to turn it into a desktop or server. But for Ubuntu, there’s desktop images (not usually what you want), server images (not usually what you want), cloud images (might be usable but could be tricky), and Ubuntu Core images (definitely not what you want for most use cases). So how exactly do you make a minimal Ubuntu VM?

As hinted at above, a cloud image might work, but we’re going to use a different solution here. As it turns out, you don’t actually have to use a prebuilt image or installer to install Ubuntu. Similar to the installation procedure Arch Linux provides, you can install Ubuntu manually, giving you very good control over what goes into your VM and how it’s configured.

This guide is going to be focused on doing a manual installation of Ubuntu into a VM, using debootstrap to install the initial minimal system. You can use this same technique to install Ubuntu onto physical hardware by just booting from a live USB and then using this technique on your hardware’s physical disk(s). However we’re going to be primarily focused on using a VM right now. Also, the virtualization software we’re going to be working with is QEMU. If you’re using a different hypervisor like VMware, VirtualBox, or Hyper-V, you can make a new VM and then install Ubuntu manually into it the same way you would install Ubuntu onto physical hardware using this technique. QEMU, however, provides special tools that make this procedure easier, and QEMU is more flexible than other virtualization software in my experience. You can install it by running sudo apt install qemu-system-x86 on your host system.

With that laid out, let us begin.

Open a terminal on your physical machine, and make a directory for your new VM to reside in. I’ll use “~/VMs/Ubuntu” here.

mkdir ~/VMs/Ubuntu
cd ~/VMs/Ubuntu

Next, let’s make a virtual disk image for the VM using the qemu-img utility.

qemu-img create -f qcow2 ubuntu.img 32G

This will make a 32 GiB disk image - feel free to customize the size or filename as you see fit. The -f parameter at the beginning specifies the VM disk image format. QCOW2 is usually a good option since the image will start out small and then get bigger as necessary. However, if you’re already using a copy-on-write filesystem like BTRFS or ZFS, you might want to use -f raw rather than -f qcow2 - this will make a raw disk image file and avoid the overhead of the QCOW2 file format.

Now we need to attach the disk image to the host machine as a device. I usually do this with qemu-nbd, which can attach a QEMU-compatible disk image to your physical system as a network block device. These devices look and work just like physical disks, which makes them extremely handy for modifying the contents of a disk image.

qemu-nbd requires that the nbd kernel module be loaded, and at least on Ubuntu, it’s not loaded by default, so we need to load it before we can attach the disk image to our host machine.

sudo modprobe nbd
sudo qemu-nbd -f qcow2 -c /dev/nbd0 ./ubuntu.img

This will make our ubuntu.img file available through the /dev/nbd0 device. Make sure to specify the format via the -f switch, especially if you’re using a raw disk image. QEMU will keep you from writing a new partition table to the disk image if you give it a raw disk image without telling it directly that the disk image is raw.

Once your disk image is attached, we can partition it and format it just like a real disk. For simplicity’s sake, we’ll give the drive an MBR partition table, create a single partition enclosing all of the disk’s space, then format the partition as ext4.

sudo fdisk /dev/nbd0
n
p
1


w
sudo mkfs.ext4 /dev/nbd0p1

(The two blank lines are intentional - they just accept the default options for the partition’s first and last sector, which makes a partition that encloses all available space on the disk.)

Now we can mount the new partition.

mkdir vdisk
sudo mount /dev/nbd0p1 ./vdisk

Now it’s time to install the minimal Ubuntu system. You’ll need to know the first part of the codename for the Ubuntu version you intend to install. The codenames for Ubuntu releases are an adjective followed by the name of an animal, like “Jammy Jellyfish”. The first word (“Jammy” in this instance) is the one you need. These codenames are easy to look up online. Here’s the codenames for the currently supported LTS versions of Ubuntu, as well as the codename for the current development release:

+-------------------+----------+
| Ubuntu version    | Codename |
+-------------------+----------+
| 20.04             | Focal    |
+-------------------+----------+
| 22.04             | Jammy    |
+-------------------+----------+
| 24.04 Development | Noble    |
+-------------------+----------+

To install the initial minimal Ubuntu system, we’ll use the debootstrap utility. This utility will download and install the bare minimum packages needed to have a functional Ubuntu system. Keep in mind that the Ubuntu installation this tool makes is really minimal - it doesn’t even come with a bootloader or Linux kernel. We’ll need to make quite a few changes to this installation before it’s ready for use in a VM.

Assuming we’re installing Ubuntu 22.04 LTS into our VM, the command to use is:

sudo debootstrap jammy ./vdisk

After a few minutes, our new system should be downloaded and installed. (Note that debootstrap does require root privileges.)

Now we’re ready to customize the VM! To do this, we’ll use a utility called chroot - this utility allows us to “enter” an installed Linux system, so we can work with it without having to boot it. (This is done by changing the root directory (from the perspective of the chroot process) to whatever directory you specify, then launching a shell or program inside the specified directory. The shell or program will see its root directory as being the directory you specified, and voila, it’s as if we’re “inside” the installed system without having to boot it. This is a very weak form of containerization and shouldn’t be relied on for security, but it’s perfect for what we’re doing.)

There’s one thing we have to account for before chrooting into our new Ubuntu installation. Some commands we need to run will assume that certain special directories are mounted properly - in particular, /proc should point to a procfs filesystem, /sys should point to a sysfs filesystem, /dev needs to contain all of the device files of our system, and /dev/pts needs to contain the device files for pseudoterminals (you don’t have to know what any of that means, just know that those four directories are important and have to be set up properly). If these directories are not properly mounted, some tools will behave strangely or not work at all. The easiest way to solve this problem is with bind mounts. These basically tell Linux to make the contents of one directory visible in some other directory too. (These are sort of like symlinks, but they work differently - a symlink says “I’m a link to something, go over here to see what I contain”, whereas a bind mount says “make this directory’s contents visible over here too”. The differences are subtle but important - a symlink can’t make files outside of a chroot visible inside the chroot. A bind mount, however, can.)

So let’s bind mount the needed directories from our system into the chroot:

sudo mount --bind /dev ./vdisk/dev
sudo mount --bind /proc ./vdisk/proc
sudo mount --bind /sys ./vdisk/sys
sudo mount --bind /dev/pts ./vdisk/dev/pts

And now we can chroot in!

sudo chroot ./vdisk

Run ping -c1 8.8.8.8 just to make sure that Internet access is working - if it’s not, you may need to copy the host’s /etc/resolv.conf file into the VM. However, you probably won’t have to do this. Assuming Internet is working, we can now start customizing things.
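If DNS does turn out to be broken, copying the resolver configuration across is usually all it takes – run this from another terminal on the host, not inside the chroot:

sudo cp /etc/resolv.conf ./vdisk/etc/resolv.conf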

By default, debootstrap only enables the “main” repository of Ubuntu. This repository only contains free-and-open-source software that is supported by Canonical. This does *not* include most of the software available in Ubuntu - most of it is in the “universe”, “restricted”, and “multiverse” repositories. If you really know what you’re doing, you can leave some of these repositories out, but I would highly recommend you enable them. Also, only the “release” pocket is enabled by default - this pocket includes all of the software that came with your chosen version of Ubuntu when it was first released, but it doesn’t include bug fixes, security updates, or newer versions of software. All those are in the “updates”, “security”, and “backports” pockets.

To fix this, run the following block of code, adjusted for your release of Ubuntu:

tee /etc/apt/sources.list << ENDSOURCESLIST
deb http://archive.ubuntu.com/ubuntu jammy main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-updates main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-security main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-backports main universe restricted multiverse
ENDSOURCESLIST

Replace “jammy” with the codename corresponding to your chosen release of Ubuntu. Once you’ve run this, run cat /etc/apt/sources.list to make sure the file looks right, then run apt update to refresh your software database with the newly enabled repositories. Once that’s done, run apt full-upgrade to update any software in the base installation that’s out-of-date.

What exactly you install at this point is up to you, but here’s my list of recommendations:

  • linux-generic. Highly recommended. This provides the Linux kernel. Without it, you’re going to have significant trouble booting. You can replace this with a different kernel metapackage if you want to for some reason (like linux-lowlatency).

  • grub-pc. Highly recommended. This is the bootloader. You might be able to replace this with an alternative bootloader like systemd-boot.

  • vim (or some other decent text editor that runs in a terminal). Highly recommended. The minimal install of Ubuntu doesn’t come with a good text editor, and you’ll really want one of those most likely.

  • network-manager. Highly recommended. If you don’t install this or some other network manager, you won’t have Internet access. You can replace this with an alternative network manager if you’d like.

  • tmux. Recommended. Unless you’re going to install a graphical environment, you’ll probably want a terminal multiplexer so you don’t have to juggle TTYs (which is especially painful in QEMU).

  • openssh-server. Optional. This is handy since it lets you use your terminal emulator of choice on your physical machine to interface with the virtual machine. You won’t be stuck using a rather clumsy and slow TTY in a QEMU display.

  • pulseaudio. Very optional. Provides sound support within the VM.

  • icewm + xserver-xorg + xinit + xterm. Very optional. If you need or want a graphical environment, this should provide you with a fairly minimal and fast one. You’ll still log in at a TTY, but you can use startx to start a desktop.

Add whatever software you want to this list, remove whatever you don’t want, and then install it all with this command:

apt install listOfPackages

Replace “listOfPackages” with the actual list of packages you want to install. For instance, if I were to install everything in the above list except openssh-server, I would use:

apt install linux-generic grub-pc vim network-manager tmux icewm xserver-xorg xinit xterm

At this point our software is installed, but the VM still has a few things needed to get it going.

  • We need to install and configure the bootloader.

  • We need an /etc/fstab file, or the system will boot with the drive mounted read-only.

  • We should probably make a non-root user with sudo access.

  • There’s a file in Ubuntu that will prevent Internet access from working. We should delete it now.

The bootloader is pretty easy to install and configure. Just run:

grub-install /dev/nbd0   # no sudo needed - we're already root inside the chroot
update-grub

For /etc/fstab, there are a few options. One particularly good one is to label the partition we installed Ubuntu into using e2label, then use that label as the ID of the drive we want to mount as root. That can be done like this:

e2label /dev/nbd0p1 ubuntu-inst
echo "LABEL=ubuntu-inst / ext4 defaults 0 1" > /etc/fstab

Making a user account is fairly easy:

adduser user # follow the prompts to create the user
adduser user sudo

And lastly, we should remove the Internet blocker file. I don’t understand why exactly this file exists in Ubuntu, but it does, and it causes problems for me when I make a minimal VM in this way. Removing it fixes the problem.

rm /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf

And that’s it! Now we can exit the chroot, unmount everything, and detach the disk image from our host machine.

exit
sudo umount ./vdisk/dev/pts
sudo umount ./vdisk/dev
sudo umount ./vdisk/proc
sudo umount ./vdisk/sys
sudo umount ./vdisk
sudo qemu-nbd -d /dev/nbd0

Now we can try and boot the VM. But before doing that, it’s probably a good idea to make a VM launcher script. Run vim ./startVM.sh (replacing “vim” with your text editor of choice), then type the following contents into the file:

#!/bin/bash
qemu-system-x86_64 -enable-kvm -machine q35 -m 4G -smp 2 -vga qxl -display sdl -monitor stdio -device intel-hda -device hda-duplex -usb -device usb-tablet -drive file=./ubuntu.img,format=qcow2,if=virtio

Refer to the qemu-system-x86_64 manpage or QEMU Invocation documentation page at https://www.qemu.org/docs/master/system/invocation.html for more info on what all these options do. Basically this gives you a VM with 4 GB RAM, 2 CPU cores, decent graphics (not 3d accelerated but not as bad as plain VGA), and audio support. You can tweak the amount of RAM and number of CPU cores by changing the -m and -smp parameters respectively. You’ll have access to the QEMU monitor through whatever terminal you run the launcher script in, allowing you to do things like switch to a different TTY, insert and remove devices and storage media on the fly, and things like that.

Finally, it’s time to see if it works.

chmod +x ./startVM.sh
./startVM.sh

If all goes well, the VM should boot and you should be able to log in! If you installed IceWM and its accompanying software like mentioned earlier, try running startx once you log in. This should pop open a functional IceWM desktop.

Some other things you should test once you’re logged in:

  • Do you have Internet access? ping -c1 8.8.8.8 can be used to test. If you don’t have Internet, run sudo nmtui in a terminal and add a new Ethernet network within the VM, then try activating it. If you get an error about the Ethernet device being strictly unmanaged, you probably forgot to remove the /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf file mentioned earlier.

  • Can you write anything to the drive? Try running touch test to make sure. If you can’t, you probably forgot to create the /etc/fstab file.

If either of these things don’t work, you can power off the VM, then re-attach the VM’s virtual disk to your host machine, mount it, and chroot in like this:

sudo qemu-nbd -f qcow2 -c /dev/nbd0 ./ubuntu.img
sudo mount /dev/nbd0p1 ./vdisk
sudo chroot vdisk

Since all you’ll be doing is writing or removing a file, you don’t need to bind mount all the special directories we had to work with earlier.

Once you’re done fixing whatever is wrong, you can exit the VM, unmount and detach its disk, and then try to boot it again like this:

exit
sudo umount vdisk
sudo qemu-nbd -d /dev/nbd0
./startVM.sh

You now have a fully functional, minimal VM! Some extra tips that you may find handy:

  • If you choose to install an SSH server into your VM, you can use the “hostfwd” setting in QEMU to forward a port on your local machine to port 22 within the VM. This will allow you to SSH into the VM. Add a parameter like -nic user,hostfwd=tcp:127.0.0.1:2222-:22 to your QEMU command in the “startVM.sh” script. This will forward port 2222 of your host machine to port 22 of the VM. Then you can SSH into the VM by running ssh user@127.0.0.1 -p 2222. The “hostfwd” QEMU feature is documented at https://www.qemu.org/docs/master/system/invocation.html - just search the page for “hostfwd” to find it.

  • If you intend to use the VM through SSH only and don’t want a QEMU window at all, remove the following three parameters from the QEMU command in “startVM.sh”:

    • -vga qxl

    • -display sdl

    • -monitor stdio

    Then add the following switch:

    • -nographic

    This will disable the graphical QEMU window entirely and provide no video hardware to the VM.

  • You can disable sound support by removing the following switches from the QEMU command in “startVM.sh”:

    • -device intel-hda

    • -device hda-duplex

There’s lots more you can do with QEMU and manual Ubuntu installations like this, but I think this should give you a good start. Hope you find this useful! God bless.

on November 30, 2023 10:34 PM

November 25, 2023

In 2020 I reviewed LiveCD memory usage.

I was hoping to review either Wayland-only or immutable-only distros (think ostree/flatpak/snaps etc.), but for various reasons, on my setup it would just be a GNOME comparison, and that's just not as interesting. There are just too many distros/variants for me to do a full followup.

Lubuntu has previously always been the winner, so let's just see how Lubuntu 23.10 is doing today.

Previously, in 2020, Lubuntu needed 585 MB to be able to run something from the live CD. With a fresh install today, Lubuntu can still launch Qterminal with just 540 MB of RAM (not apples to apples, but still)! And that's without the zram it had last time.

I decided to try removing some parts of the base system to see the cost of each component (with 10 MB accuracy). I disabled networking to try and make it a fairer comparison.

  • Snapd - 30 MiB
  • Printing - cups foomatic - 10 MiB
  • rsyslog/crons - 10 MiB

Rsyslog impact

Out of the 3 above, it felt like rsyslog (and cron) are the most redundant in modern Linux with systemd. So I tried hammering the log system to see if I could cause a slowdown, by having a service echo lots of gibberish every 0.1 seconds.
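A throwaway service along these lines reproduces the idea (the file name is hypothetical, and don't leave something like this enabled for long, as it fills the disk quickly):

# /etc/systemd/system/gibberish.service  (hypothetical name)
[Unit]
Description=Flood the logs with gibberish every 0.1 seconds (stress test only)

[Service]
ExecStart=/bin/sh -c 'while true; do head -c 256 /dev/urandom | base64 -w0; echo; sleep 0.1; done'
StandardOutput=journal

[Install]
WantedBy=multi-user.target

Start it with sudo systemctl daemon-reload && sudo systemctl start gibberish, then watch journalctl --disk-usage and /var/log/syslog grow.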

After an hour of uptime, this is how much space was used:

  • syslog 575M
  • journal at 1008M

CPU usage on a fresh boot afterwards:

With Rsyslog

  • gibberish service was at 1% CPU usage
  • rsyslog was at 2-3%
  • journal was at ~4%

Without Rsyslog

  • gibberish service was at 1% CPU usage
  • journal was at 1-3%

That's a pretty extreme case, but does show some impact of rsyslog, which in most desktop settings is redundant anyway.
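If you want to try dropping rsyslog on your own install, here is a sketch (make the journal persistent first if you want logs to survive reboots):

sudo mkdir -p /var/log/journal          # makes the systemd journal persistent
sudo systemctl restart systemd-journald
sudo apt remove --purge rsyslog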

Testing notes:

  • 2 CPUs (Copy host config)
  • Lubuntu 23.10 install
  • no swap file
  • ext4, no encryption
  • login automatically
  • Used Virt-manager and the only change from defaults was enabling UEFI
on November 25, 2023 02:42 AM

November 22, 2023

Launchpad has supported building for riscv64 for a while, since it was a requirement to get Ubuntu’s riscv64 port going. We don’t actually have riscv64 hardware in our datacentre, since we’d need server-class hardware with the hypervisor extension and that’s still in its infancy; instead, we do full-system emulation of riscv64 on beefy amd64 hardware using qemu. This has worked well enough for a while, although it isn’t exactly fast.

The biggest problem with our setup wasn’t so much performance, though; it was that we were just using a bunch of manually-provisioned virtual machines, and they weren’t being reset to a clean state between builds. As a result, it would have been possible for a malicious build to compromise future builds on the same builder: it would only need a chroot or container escape. This violated our standard security model for builders, in which each build runs in an isolated ephemeral VM, and each VM is destroyed and restarted from a clean image at the end of every build. As a result, we had to limit the set of people who were allowed to have riscv64 builds on Launchpad, and we had to restrict things like snap recipes to only use very tightly-pinned parts from elsewhere on the internet (pinning is often a good idea anyway, but at an infrastructural level it isn’t something we need to require on other architectures).

We’ve wanted to bring this onto the same footing as our other architectures for some time. In Canonical’s most recent product development cycle, we worked with the OpenStack team to get riscv64 emulation support into nova, and installed a backport of this on our newest internal cloud region. This almost took care of the problem. However, Launchpad builder images start out as standard Ubuntu cloud images, which on riscv64 are only available from Ubuntu 22.04 LTS onwards; in testing 22.04-based VMs on other relatively slow architectures we already knew that we were seeing some mysterious hangs in snap recipe builds. Figuring this out blocked us for some time, and involved some pretty intensive debugging of the “strace absolutely everything in sight and see if anything sensible falls out” variety. We eventually narrowed this down to a LXD bug and were at least able to provide a workaround, at which point bringing up new builders was easy.

As a result, you can now enable riscv64 builds for yourself in your PPAs or snap recipes. Visit the PPA and follow the “Change details” link, or visit the snap recipe and follow the “Edit snap package” link; you’ll see a list of checkboxes under “Processors”, and you can enable or disable any that aren’t greyed out, including riscv64. This now means that all Ubuntu architectures are fully virtualized and unrestricted in Launchpad, making it easier for developers to experiment.

on November 22, 2023 02:00 PM

November 20, 2023

Access tokens can be used to access repositories on behalf of someone. They have scope limitations, optional expiry dates, and can be revoked at any time. They are a stricter and safer alternative to using real user authentication when needing to automate pushing and/or pulling from your git repositories.

This is a concept that has existed in Launchpad for a while now. If you have the right permissions in a git repository, you might have seen a “Manage Access Tokens” button in your repository’s page in the past.

These tokens can be extremely useful. But if you have multiple git repositories within a project, it can be a bit of a nuisance to create and manage access tokens for each repository.

So what’s new? We’ve now introduced project-scoped access tokens. These tokens reduce the trouble for the creation and maintenance of tokens for larger projects. A project access token will work as authentication for any git repository within that project.

Let’s say user A wants to run something in a remote server that requires pulling multiple git repositories from a project. User A can create a project access token, and restrict it to “repository pull” scope only. This token will then be valid authentication to pull from any repository within that project. And user A will be able to revoke that token once it’s no longer needed, keeping their real user authentication safe.
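As a rough sketch of what that might look like on the remote server – the project and repository paths below are made up, and the exact username convention for token authentication should be taken from the Generating Access Tokens documentation mentioned further down rather than from this example:

# clone and pull over HTTPS, using the project access token as the password
git clone https://<launchpad-username>:<project-access-token>@git.launchpad.net/myproject/+git/myrepo
cd myrepo
git pull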

The same token will be invalid for pushing, or for accessing repositories within other projects. Also note that this is used for ‘authentication’, not ‘authorization’ – if the user doesn’t have access to a given git repository, their access token will not grant them permissions.

Anyone with permissions to edit a project will be able to create an access token, either through the UI or the API, using the same method as to create access tokens for git repositories. See Generating Access Tokens section in our documentation for instructions and other information.
This feature was implemented on request by our colleagues from the ROS team. We would love to get some feedback on whether this also covers your use case. Please let us know.

on November 20, 2023 09:31 AM

November 19, 2023

In this article I will show you how to start your current operating system inside a virtual machine. That is: launching the operating system (with all your settings, files, and everything), inside a virtual machine, while you’re using it.

This article was written for Ubuntu, but it can be easily adapted to other distributions, and with appropriate care it can be adapted to non-Linux kernels and operating systems as well.

Motivation

Before we start, why would a sane person want to do this in the first place? Well, here’s why I did it:

  • To test changes that affect Secure Boot without a reboot.

    Recently I was doing some experiments with Secure Boot and the Trusted Platform Module (TPM) on a new laptop, and I got frustrated by how time consuming it was to test changes to the boot chain. Every time I modified a file involved during boot, I would need to reboot, then log in, then re-open my terminal windows and files to make more modifications… Plus, whenever I screwed up, I would need to manually recover my system, which would be even more time consuming.

    I thought that I could speed up my experiments by using a virtual machine instead.

  • To predict the future TPM state (in particular, the values of PCRs 4, 5, 8, and 9) after a change, without a reboot.

    I wanted to predict the values of my TPM PCR banks after making changes to the bootloader, kernel, and initrd. Writing a script to calculate the PCR values automatically is in principle not that hard (and I actually did it before, in a different context), but I wanted a robust, generic solution that would work on most systems and in most situations, and emulation was the natural choice.

  • And, of course, just for the fun of it!

To be honest, I’m not a big fan of Secure Boot. The reason why I’ve been working on it is simply that it’s the standard nowadays and so I have to stick with it. Also, there are no real alternatives out there to achieve the same goals. I’ll write an article about Secure Boot in the future to explain the reasons why I don’t like it, and how to make it work better, but that’s another story…

Procedure

The procedure that I’m going to describe has 3 main steps:

  1. create a copy of your drive
  2. emulate a TPM device using swtpm
  3. emulate the system with QEMU

I’ve tested this procedure on Ubuntu 23.04 (Lunar) and 23.10 (Mantic), but it should work on any Linux distribution with minimal adjustments. The general approach can be used for any operating system, as long as appropriate replacements for QEMU and swtpm exist.

Prerequisites

Before we can start, we need to install:

  • QEMU: a virtual machine emulator
  • swtpm: a TPM emulator
  • OVMF: a UEFI firmware implementation

On a recent version of Ubuntu, these can be installed with:

sudo apt install qemu-system-x86 ovmf swtpm

Note that OVMF only supports the x86_64 architecture, so we can only emulate that. If you run a different architecture, you’ll need to find another UEFI implementation that is not OVMF (but I’m not aware of any freely available ones).

Create a copy of your drive

We can decide to either:

  • Choice #1: run only the components involved early at boot (shim, bootloader, kernel, initrd). This is useful if you, like me, only need to test those components and how they affect Secure Boot and the TPM, and don’t really care about the rest (the init process, login manager, …).

  • Choice #2: run the entire operating system. This can give you a fully usable operating system running inside the virtual machine, but may also result in some instability inside the guest (because we’re giving it a filesystem that is in use), and may also lead to some data loss if we’re not careful and make typos. Use with care!

Choice #1: Early boot components only

If we’re interested in the early boot components only, then we need to make a copy the following from our drive: the GPT partition table, the EFI partition, and the /boot partition (if we have one). Usually all these 3 pieces are at the “start” of the drive, but this is not always the case.

To figure out where the partitions are located, run:

sudo parted -l

On my system, this is the output:

Model: WD_BLACK SN750 2TB (nvme)
Disk /dev/nvme0n1: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  525MB   524MB   fat32              boot, esp
 2      525MB   1599MB  1074MB  ext4
 3      1599MB  2000GB  1999GB                     lvm

In my case, the partition number 1 is the EFI partition, and the partition number 2 is the /boot partition. If you’re not sure what partitions to look for, run mount | grep -e /boot -e /efi. Note that, on some distributions (most notably the ones that use systemd-boot), a /boot partition may not exist, so you can leave that out in that case.

Anyway, in my case, I need to copy the first 1599 MB of my drive, because that’s where the data I’m interested in ends: those first 1599 MB contain the GPT partition table (which is always at the start of the drive), the EFI partition, and the /boot partition.

Now that we have identified how many bytes to copy, we can copy them to a file named drive.img with dd (maybe after running sync to make sure that all changes have been committed):

# replace '/dev/nvme0n1' with your main drive (which may be '/dev/sda' instead),
# and 'count' with the number of MBs to copy
sync && sudo -g disk dd if=/dev/nvme0n1 of=drive.img bs=1M count=1599 conv=sparse

Choice #2: Entire system

If we want to run our entire system in a virtual machine, then I would recommend creating a QEMU copy-on-write (COW) file:

# replace '/dev/nvme0n1' with your main drive (which may be '/dev/sda' instead)
sudo -g disk qemu-img create -f qcow2 -b /dev/nvme0n1 -F raw drive.qcow2

This will create a new copy-on-write image using /dev/nvme0n1 as its “backing storage”. Be very careful when running this command: you don’t want to mess up the order of the arguments, or you might end up writing to your storage device (leading to data loss)!

The advantage of using a copy-on-write file, as opposed to copying the whole drive, is that this is much faster. Also, if we had to copy the entire drive, we might not even have enough space for it (even when using sparse files).

The big drawback of using a copy-on-write file is that, because our main drive likely contains filesystems that are mounted read-write, any modification to the filesystems on the host may be perceived as data corruption on the guest, and that in turn may cause all sort of bad consequences inside the guest, including kernel panics.

Another drawback is that, with this solution, later we will need to give QEMU permission to read our drive, and if we’re not careful enough with the commands we type (e.g. we swap the order of some arguments, or make some typos), we may potentially end up writing to the drive instead.

Emulate a TPM device using swtpm

There are various ways to run the swtpm emulator. Here I will use the “vTPM proxy” way, which is not the easiest, but has the advantage that the emulated device will look like a real TPM device not only to the guest, but also to the host, so that we can inspect its PCR banks (among other things) from the host using familiar tools like tpm2_pcrread.

First, enable the tpm_vtpm_proxy module (which is not enabled by default on Ubuntu):

sudo modprobe tpm_vtpm_proxy

If that worked, we should have a /dev/vtpmx device. We can verify its presence with:

ls /dev/vtpmx

swtpm in “vTPM proxy” mode will interact with /dev/vtpmx, but in order to do so it needs the sys_admin capability. On Ubuntu, swtpm ships with this capability explicitly disabled by AppArmor, but we can enable it with:

sudo sh -c "echo '  capability sys_admin,' > /etc/apparmor.d/local/usr.bin.swtpm"
sudo systemctl reload apparmor

Now that /dev/vtpmx is present, and swtpm can talk to it, we can run swtpm in “vTPM proxy” mode:

sudo mkdir /tmp/swtpm-state
sudo swtpm chardev --tpmstate dir=/tmp/swtpm-state --vtpm-proxy --tpm2

Upon start, swtpm should create a new /dev/tpmN device and print its name on the terminal. On my system, I already have a real TPM on /dev/tpm0, and therefore swtpm allocates /dev/tpm1.

The emulated TPM device will need to be readable and writeable by QEMU, but the emulated TPM device is by default accessible only by root, so either we run QEMU as root (not recommended), or we relax the permissions on the device:

# replace '/dev/tpm1' with the device created by swtpm
sudo chmod a+rw /dev/tpm1

Make sure not to accidentally change the permissions of your real TPM device!

Emulate the system with QEMU

Inside the QEMU emulator, we will run the OVMF UEFI firmware. On Ubuntu, the firmware comes in 2 flavors:

  • with Secure Boot enabled (/usr/share/OVMF/OVMF_CODE_4M.ms.fd), and
  • with Secure Boot disabled (/usr/share/OVMF/OVMF_CODE_4M.fd)

(There are actually even more flavors, see this AskUbuntu question for the details.)

In the commands that follow I’m going to use the Secure Boot flavor, but if you need to disable Secure Boot in your guest, just replace .ms.fd with .fd in all the commands below.

To use OVMF, first we need to copy the EFI variables to a file that can be read & written by QEMU:

cp /usr/share/OVMF/OVMF_VARS_4M.ms.fd /tmp/

This file (/tmp/OVMF_VARS_4M.ms.fd) will be the equivalent of the EFI flash storage, and it’s where OVMF will read and store its configuration, which is why we need to make a copy of it (to avoid modifications to the original file).

Now we’re ready to run QEMU:

  • If you copied only the early boot files (choice #1):

    # replace '/dev/tpm1' with the device created by swtpm
    qemu-system-x86_64 \
      -accel kvm \
      -machine q35,smm=on \
      -cpu host \
      -smp cores=4,threads=1 \
      -m 4096 \
      -vga virtio \
      -bios /usr/share/ovmf/OVMF.fd \
      -drive if=pflash,unit=0,format=raw,file=/usr/share/OVMF/OVMF_CODE_4M.ms.fd,readonly=on \
      -drive if=pflash,unit=1,format=raw,file=/tmp/OVMF_VARS_4M.ms.fd \
      -drive if=virtio,format=raw,file=drive.img \
      -tpmdev passthrough,id=tpm0,path=/dev/tpm1,cancel-path=/dev/null \
      -device tpm-tis,tpmdev=tpm0
    
  • If you have a copy-on-write file for the entire system (choice #2):

    # replace '/dev/tpm1' with the device created by swtpm
    sudo -g disk qemu-system-x86_64 \
      -accel kvm \
      -machine q35,smm=on \
      -cpu host \
      -smp cores=4,threads=1 \
      -m 4096 \
      -vga virtio \
      -bios /usr/share/ovmf/OVMF.fd \
      -drive if=pflash,unit=0,format=raw,file=/usr/share/OVMF/OVMF_CODE_4M.ms.fd,readonly=on \
      -drive if=pflash,unit=1,format=raw,file=/tmp/OVMF_VARS_4M.ms.fd \
      -drive if=virtio,format=qcow2,file=drive.qcow2 \
      -tpmdev passthrough,id=tpm0,path=/dev/tpm1,cancel-path=/dev/null \
      -device tpm-tis,tpmdev=tpm0
    

    Note that this last command makes QEMU run as the disk group: on Ubuntu, this group has the permission to read and write all storage devices, so be careful when running this command, or you risk losing your files forever! If you want to add more safety, you may consider using an ACL to give the user running QEMU read-only permission to your backing storage.
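
    For example, a minimal sketch using an ACL instead of the disk group (the device name matches the examples above; adjust it to your drive):

    # grant your own user read-only access to the backing device...
    sudo setfacl -m u:$USER:r /dev/nvme0n1
    getfacl /dev/nvme0n1

    # ...then drop the 'sudo -g disk' prefix from the QEMU command above,
    # and remove the ACL again once you are done
    sudo setfacl -x u:$USER /dev/nvme0n1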

In either case, after launching QEMU, our operating system should boot… while running inside itself!

In some circumstances though it may happen that the wrong operating system is booted, or that you end up at the EFI setup screen. This can happen if your system is not configured to boot from the “first” EFI entry listed in the EFI partition. Because the boot order is not recorded anywhere on the storage device (it’s recorded in the EFI flash memory), of course OVMF won’t know which operating system you intended to boot, and will just attempt to launch the first one it finds. You can use the EFI setup screen provided by OVMF to change the boot order in the way you like. After that, changes will be saved into the /tmp/OVMF_VARS_4M.ms.fd file on the host: you should keep a copy of that file so that, next time you launch QEMU, you’ll boot directly into your operating system.
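
For example (the backup path is just a suggestion):

# keep the configured EFI variables around, and restore them before the next run
cp /tmp/OVMF_VARS_4M.ms.fd ~/OVMF_VARS_4M.ms.fd.configured
cp ~/OVMF_VARS_4M.ms.fd.configured /tmp/OVMF_VARS_4M.ms.fd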

Reading PCR banks after boot

Once our operating system has launched inside QEMU, and after the boot process is complete, the PCR banks will be filled and recorded by swtpm.

If we choose to copy only the early boot files (choice #1), then of course our operating system won’t be fully booted: it’ll likely hang waiting for the root filesystem to appear, and may eventually drop to the initrd shell. None of that really matters if all we want is to see the PCR values stored by the bootloader.

Before we can extract those PCR values, we first need to stop QEMU (Ctrl-C is fine); then we can read them with tpm2_pcrread:

# replace '/dev/tpm1' with the device created by swtpm
tpm2_pcrread -T device:/dev/tpm1

Using the method described in this article, PCRs 4, 5, 8, and 9 inside the emulated TPM should match the PCRs in our real TPM. And here comes an interesting application of this method: if we upgrade our bootloader or kernel and want to know the PCR values that our system will have after the next reboot, we can simply follow this procedure and obtain those values without shutting down our system! This is especially useful if we use TPM sealing: we can reseal our secrets so that they can be unsealed at the next reboot without trouble.
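
As a rough illustration, here is a minimal predictive-sealing sketch with tpm2-tools (assuming tpm2-tools 5.x; the file names and the PCR selection are only examples, /dev/tpm1 is the emulated TPM and /dev/tpm0 the real one):

# read the predicted PCR values from the emulated TPM into a file
tpm2_pcrread -T device:/dev/tpm1 -o predicted.pcrs sha256:4,5,8,9

# build a PCR policy from those predicted values (not from the current ones)
tpm2_createpolicy -T device:/dev/tpm0 --policy-pcr -l sha256:4,5,8,9 \
    -f predicted.pcrs -L policy.digest

# seal a secret on the real TPM against that policy; it should only unseal
# once the real PCRs match the predicted values, i.e. after the reboot
tpm2_createprimary -T device:/dev/tpm0 -C o -c primary.ctx
tpm2_create -T device:/dev/tpm0 -C primary.ctx -L policy.digest \
    -i secret.txt -u seal.pub -r seal.priv

After the real reboot, the sealed blob can be loaded and unsealed with tpm2_load and tpm2_unseal as usual.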

Restarting the virtual machine

If we want to restart the guest inside the virtual machine and obtain a consistent TPM state every time, we should start from a “clean” state, which means:

  1. restart swtpm
  2. recreate the drive.img or drive.qcow2 file
  3. launch QEMU again

If we don’t restart swtpm, the virtual TPM state (and in particular the PCR banks) won’t be cleared, and new PCR measurements will simply be added on top of the existing state. If we don’t recreate the drive file, it’s possible that some modifications to the filesystems will have an impact on the future PCR measurements.

We don’t necessarily need to recreate the /tmp/OVMF_VARS_4M.ms.fd file every time. In fact, if you need to modify any EFI setting to make your system bootable, you might want to preserve it so that you don’t need to change EFI settings at every boot.
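
Putting it together, a minimal reset sketch for choice #2 could look like this (device and file names as used above; adjust them to your setup):

# stop the previous swtpm instance and start a fresh one (this clears the PCR banks)
sudo pkill swtpm
sudo swtpm chardev --tpmstate dir=/tmp/swtpm-state --vtpm-proxy --tpm2 &

# the new instance may get a different /dev/tpmN; fix up its permissions again
sudo chmod a+rw /dev/tpm1

# recreate the copy-on-write file so the guest sees an unmodified drive
rm -f drive.qcow2
sudo -g disk qemu-img create -f qcow2 -b /dev/nvme0n1 -F raw drive.qcow2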

Automating the entire process

I’m (very slowly) working on turning this entire procedure into a script, so that everything can be automated. Once I find some time I’ll finish the script and publish it, so if you liked this article, stay tuned, and let me know if you have any comment/suggestion/improvement/critique!

on November 19, 2023 04:33 PM

November 16, 2023

When users first download Lubuntu, they are presented with two options: Install the latest Long-Term Support release, providing them with a rock-solid and stable base (we assume most users choose this option). Install the latest interim release, providing the latest base with the latest LXQt release. As we have mentioned in previous announcements, Kubuntu and […]
on November 16, 2023 10:10 PM


Photo by Pixabay

Ubuntu systems typically have up to 3 kernels installed before they are auto-removed by apt on classic installs. Historically, the installation was optimized for metered download size only. However, kernel size growth and usage no longer warrant such optimizations. During the 23.10 Mantic Minotaur cycle, I led a coordinated effort across multiple teams to implement lots of optimizations that together achieved unprecedented install footprint improvements.

Given a typical install of 3 generic kernel ABIs in the default configuration on a regular-sized VM (2 CPU cores, 8 GB of RAM), the following metrics are achieved in Ubuntu 23.10 versus Ubuntu 22.04 LTS:

  • 2x less disk space used (1,417MB vs 2,940MB, including initrd)

  • 3x less peak RAM usage for the initrd boot (68MB vs 204MB)

  • 0.5x increase in download size (949MB vs 600MB)

  • 2.5x faster initrd generation (4.5s vs 11.3s)

  • approximately the same total time (103s vs 98s, hardware dependent)


For minimal cloud images that do not install either linux-firmware or the extra modules package, the numbers are:

  • 1.3x less disk space used (548MB vs 742MB)

  • 2.2x less peak RAM usage for initrd boot (27MB vs 62MB)

  • 0.4x increase in download size (207MB vs 146MB)


Hopefully, the compromise of download size, relative to the disk space & initrd savings is a win for the majority of platforms and use cases. For users on extremely expensive and metered connections, the likely best saving is to receive air-gapped updates or skip updates.


This was achieved by precompressing kernel modules & firmware files with the maximum level of Zstd compression at package build time; making the actual .deb files uncompressed; assembling the initrd using split cpio archives - uncompressed for the pre-compressed files, whilst compressing only the userspace portions of the initrd; enabling in-kernel module decompression support with matching kmod; fixing bugs in all of the above; and landing all of these things in time for the feature freeze - whilst leveraging the experience and some of the design choices and implementations we have already been shipping on Ubuntu Core. Some of these changes are backported to Jammy, but only enough to support smooth upgrades to Mantic and later. The complete gains can only be experienced on Mantic and later.
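
If you want to see the mechanics on a Mantic system, a few quick (purely illustrative) checks:

# kernel modules now ship pre-compressed with zstd
find /lib/modules/$(uname -r) -name '*.ko.zst' | head

# compare the initrd size and peek at its contents
du -h /boot/initrd.img-$(uname -r)
lsinitramfs /boot/initrd.img-$(uname -r) | head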


The discovered bugs in kernel module loading code likely affect systems that use LoadPin LSM with kernel space module uncompression as used on ChromeOS systems. Hopefully, Kees Cook or other ChromeOS developers pick up the kernel fixes from the stable trees. Or you know, just use Ubuntu kernels as they do get fixes and features like these first.


The team that designed and delivered these changes is large: Benjamin Drung, Andrea Righi, Juerg Haefliger, Julian Andres Klode, Steve Langasek, Michael Hudson-Doyle, Robert Kratky, Adrien Nader, Tim Gardner, Roxana Nicolescu - and myself Dimitri John Ledkov ensuring the most optimal solution is implemented, everything lands on time, and even implementing portions of the final solution.


Hi, It's me, I am a Staff Engineer at Canonical and we are hiring https://canonical.com/careers.


Lots of additional technical details and benchmarks on a huge range of diverse hardware and architectures, and bikeshedding all the things below:


For questions and comments please post to Kernel section on Ubuntu Discourse.



on November 16, 2023 10:45 AM

A lot of time has passed since my previous post on my work to make dhcpcd the drop-in replacement for the deprecated ISC dhclient a.k.a. isc-dhcp-client. Current status:

  • Upstream now regularly produces releases and with a smaller delta than before. This makes it easier to track possible breakage.
  • Debian packaging has essentially remained unchanged. A few Recommends were shuffled, but that's about it.
  • The only remaining bug is fixing the build for Hurd. Patches are welcome. Once that is fixed, bumping dhcpcd-base's priority to important is all that's left.
on November 16, 2023 09:38 AM

November 12, 2023

Ubuntu 23.10 “Mantic Minotaur” Desktop, showing network settings

We released Ubuntu 23.10 ‘Mantic Minotaur’ on 12 October 2023, shipping its proven and trusted network stack based on Netplan. Netplan is the default tool to configure Linux networking on Ubuntu since 2016. In the past, it was primarily used to control the Server and Cloud variants of Ubuntu, while on Desktop systems it would hand over control to NetworkManager. In Ubuntu 23.10 this disparity in how to control the network stack on different Ubuntu platforms was closed by integrating NetworkManager with the underlying Netplan stack.

Netplan could already be used to describe network connections on Desktop systems managed by NetworkManager. But network connections created or modified through NetworkManager would not be known to Netplan, so it was a one-way street. Activating the bidirectional NetworkManager-Netplan integration allows for any configuration change made through NetworkManager to be propagated back into Netplan. Changes made in Netplan itself will still be visible in NetworkManager, as before. This way, Netplan can be considered the “single source of truth” for network configuration across all variants of Ubuntu, with the network configuration stored in /etc/netplan/, using Netplan’s common and declarative YAML format.

Netplan Desktop integration

On workstations, the most common scenario is for users to configure networking through NetworkManager’s graphical interface, instead of driving it through Netplan’s declarative YAML files. Netplan ships a “libnetplan” library that provides an API to access Netplan’s parser and validation internals, which is now used by NetworkManager to store any network interface configuration changes in Netplan. For instance, network configuration defined through NetworkManager’s graphical UI or D-Bus API will be exported to Netplan’s native YAML format in the common location at /etc/netplan/. This way, the only thing administrators need to care about when managing a fleet of Desktop installations is Netplan. Furthermore, programmatic access to all network configuration is now easily accessible to other system components integrating with Netplan, such as snapd. This solution has already been used in more confined environments, such as Ubuntu Core and is now enabled by default on Ubuntu 23.10 Desktop.

Migration of existing connection profiles

On installation of the NetworkManager package (network-manager >= 1.44.2-1ubuntu1) in Ubuntu 23.10, all your existing connection profiles from /etc/NetworkManager/system-connections/ will automatically and transparently be migrated to Netplan’s declarative YAML format and stored in its common configuration directory /etc/netplan/.

The same migration will happen in the background whenever you add or modify any connection profile through the NetworkManager user interface, integrated with GNOME Shell. From this point on, Netplan will be aware of your entire network configuration and you can query it using its CLI tools, such as “sudo netplan get” or “sudo netplan status” without interrupting traditional NetworkManager workflows (UI, nmcli, nmtui, D-Bus APIs). You can observe this migration on the apt-get command line, watching out for logs like the following:

Setting up network-manager (1.44.2-1ubuntu1.1) ...
Migrating HomeNet (9d087126-ae71-4992-9e0a-18c5ea92a4ed) to /etc/netplan
Migrating eduroam (37d643bb-d81d-4186-9402-7b47632c59b1) to /etc/netplan
Migrating DebConf (f862be9c-fb06-4c0f-862f-c8e210ca4941) to /etc/netplan
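
As a concrete example of that workflow, creating a profile through NetworkManager and then inspecting it with Netplan might look like this (the connection name and interface are hypothetical):

# create a profile via NetworkManager's CLI
nmcli connection add type ethernet ifname enp3s0 con-name office-lan

# the same profile is now visible to Netplan
ls /etc/netplan/
sudo netplan get
sudo netplan status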

In order to prepare for a smooth transition, NetworkManager tests were integrated into Netplan’s continuous integration pipeline at the upstream GitHub repository. Furthermore, we implemented a passthrough method of handling unknown or new settings that cannot yet be fully covered by Netplan, making Netplan future-proof for any upcoming NetworkManager release.

The future of Netplan

Netplan has established itself as the proven network stack across all variants of Ubuntu – Desktop, Server, Cloud, or Embedded. It has been the default stack across many Ubuntu LTS releases, serving millions of users over the years. With the bidirectional integration between NetworkManager and Netplan the final piece of the puzzle is implemented to consider Netplan the “single source of truth” for network configuration on Ubuntu. With Debian choosing Netplan to be the default network stack for their cloud images, it is also gaining traction outside the Ubuntu ecosystem and growing into the wider open source community.

Within the development cycle for Ubuntu 24.04 LTS, we will polish the Netplan codebase to be ready for a 1.0 release, coming with certain guarantees on API and ABI stability, so that other distributions and 3rd party integrations can rely on Netplan’s interfaces. First steps in that direction have already been taken, as the Netplan team reached out to the Debian community at DebConf 2023 in Kochi/India to evaluate possible synergies.

Conclusion

Netplan can be used transparently to control a workstation’s network configuration and plays hand-in-hand with many desktop environments through its tight integration with NetworkManager. It allows for easy network monitoring, using common graphical interfaces and provides a “single source of truth” to network administrators, allowing for configuration of Ubuntu Desktop fleets in a streamlined and declarative way. You can try this new functionality hands-on by following the “Access Desktop NetworkManager settings through Netplan” tutorial.


If you want to learn more, feel free to follow our activities on Netplan.io, GitHub, Launchpad, IRC or our Netplan Developer Diaries blog on discourse.

on November 12, 2023 03:00 PM

November 11, 2023

AppStream 1.0 released!

Matthias Klumpp

Today, 12 years after the meeting where AppStream was first discussed and 11 years after I released a prototype implementation I am excited to announce AppStream 1.0! 🎉🎉🎊

Check it out on GitHub, or get the release tarball or read the documentation or release notes! 😁

Some nostalgic memories

I was not in the original AppStream meeting, since in 2011 I was extremely busy with finals preparations and ball organization in high school, but I still vividly remember sitting at school in the students’ lounge during a break and trying to catch the really choppy live stream from the meeting on my borrowed laptop (a futile exercise, I watched parts of the blurry recording later).

I was extremely passionate about getting software deployment to work better on Linux and to improve the overall user experience, and spent many hours on the PackageKit IRC channel discussing things with many amazing people like Richard Hughes, Daniel Nicoletti, Sebastian Heinlein and others.

At the time I was writing a software deployment tool called Listaller – this was before Linux containers were a thing, and building it was very tough due to technical and personal limitations (I had just learned C!). Then in university, when I intended to recreate this tool, but for real and better this time as a new project called Limba, I needed a way to provide metadata for it, and AppStream fit right in! Meanwhile, Richard Hughes was tackling the UI side of things while creating GNOME Software and needed a solution as well. So I implemented a prototype and together we pretty much reshaped the early specification from the original meeting into what would become modern AppStream.

Back then I saw AppStream as a necessary side-project for my actual project, and didn’t even consider myself the maintainer of it for quite a while (I hadn’t been at the meeting, after all). All those years ago I had no idea that ultimately I was developing AppStream not for Limba, but for a new thing that would show up later, with an even more modern design, called Flatpak. I also had no idea how incredibly complex AppStream would become, how many features it would have, how much more maintenance work it would be – and also not how ubiquitous it would become.

The modern Linux desktop uses AppStream everywhere now, it is supported by all major distributions, used by Flatpak for metadata, used for firmware metadata via Richard’s fwupd/LVFS, runs on every Steam Deck, can be found in cars and possibly many places I do not know yet.

What is new in 1.0?

API breaks

The most important thing that’s new with the 1.0 release is a bunch of incompatible changes. For the shared libraries, all deprecated API elements have been removed and a bunch of other changes have been made to improve the overall API and especially make it more binding-friendly. That doesn’t mean that the API is completely new and nothing looks like before, though: when possible, the previous API design was kept, and some changes that would have been too disruptive have not been made. Regardless of that, you will have to port your AppStream-using applications. For some larger ones I already submitted patches to build with both AppStream versions, the 0.16.x stable series as well as 1.0+.

For the XML specification, some older compatibility behaviour for XML that had no or very few users has been removed as well. This affects, for example, release elements that reference downloadable data without an artifact block, which has not been supported for a while. For all of these, I checked to remove only things that had close to no users and that were a significant maintenance burden. So as a rule of thumb: if your XML validated with no warnings with the 0.16.x branch of AppStream, it will still be 100% valid with the 1.0 release.
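
If you want to double-check your own metadata, validation is a one-liner with the appstreamcli tool (the file name here is just a hypothetical example):

# validate a MetaInfo file against the AppStream specification
appstreamcli validate org.example.MyApp.metainfo.xml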

Another notable change is that the generated output of AppStream 1.0 will always be 1.0 compliant: you cannot make it generate data for versions below that (this greatly reduced the maintenance cost of the project).

Developer element

For a long time, you could set the developer name using the top-level developer_name tag. With AppStream 1.0, this is changed a bit. There is now a developer tag with a name child (that can be translated unless the translate="no" attribute is set on it). This allows for future extensibility, and also allows setting a machine-readable id attribute in the developer element. This permits software centers to group software by developer more easily, without having to use heuristics. If we decide to extend the developer information per-app in future, this is also now possible. Do not worry though: the developer_name tag is still read, so there is no high pressure to update. The old 0.16.x stable series also has this feature backported, so it can be available everywhere. Check out the developer tag specification for more details.

Scale factor for screenshots

Screenshot images can now have a scale attribute, to indicate an (integer) scaling factor to apply. This feature was a breaking change and therefore we could not have it for the longest time, but it is now available. Please wait a bit for AppStream 1.0 to become more widely deployed though, as using it with older AppStream versions may lead to issues in some cases. Check out the screenshots tag specification for more details.

Screenshot environments

It is now possible to indicate the environment a screenshot was recorded in (GNOME, GNOME Dark, KDE Plasma, Windows, etc.) via an environment attribute on the respective screenshot tag. This was also a breaking change, so use it carefully for now! If projects want to, they can use this feature to supply dedicated screenshots depending on the environment the application page is displayed in. Check out the screenshots tag specification for more details.

References tag

This is a feature more important for the scientific community and scientific applications. Using the references tag, you can associate the AppStream component with a DOI (Digital Object Identifier) or provide a link to a CFF file to provide citation information. It also allows linking to other scientific registries. Check out the references tag specification for more details.

Release tags

Releases can have tags now, just like components. This is generally not a feature that I expect to be used much, but in certain instances it can become useful with a cooperating software center, for example to tag certain releases as long-term supported versions.

Multi-platform support

Thanks to the interest and work of many volunteers, AppStream (mostly) runs on FreeBSD now, a NetBSD port exists, support for macOS was written and a Windows port is on its way! Thank you to everyone working on this 🙂

Better compatibility checks

For a long time I thought that the AppStream library should just be a thin layer above the XML and that software centers should just implement a lot of the actual logic. This has not been the case for a while, but there were still a lot of complex AppStream features that were hard for software centers to implement and where it makes sense to have one implementation that projects can just use.

The validation of component relations is one such thing. This was implemented in 0.16.x as well, but 1.0 vastly improves upon the compatibility checks, so you can now just run as_component_check_relations and retrieve a detailed list of whether the current component will run well on the system. Besides better API for software developers, the appstreamcli utility also has much improved support for relation checks, and I wrote about these changes in a previous post. Check it out!

With these changes, I hope this feature will be used much more, and beyond just drivers and firmware.

So much more!

The changelog for the 1.0 release is huge, and there are many papercuts resolved and changes made that I did not talk about here, like us using gi-docgen (instead of gtkdoc) now for nice API documentation, or the many improvements that went into better binding support, or better search, or just plain bugfixes.

Outlook

I expect the transition to 1.0 to take a bit of time. AppStream has not broken its API for many, many years (since 2016), so a bunch of places need to be touched even if the changes themselves are minor in many cases. In hindsight, I should have also released 1.0 much sooner and it should not have become such a mega-release, but that was mainly due to time constraints.

So, what’s in it for the future? Contrary to what I thought, AppStream does not really seem to be “done” and feature complete at any point; there is always something to improve, and people come up with new use cases all the time. So, expect more of the same in future: bugfixes, validator improvements, documentation improvements, better tools and the occasional new feature.

Onwards to 1.0.1! 😁

on November 11, 2023 07:48 PM

November 07, 2023

Last week, I wrote about my somewhat last-minute plans to attend the 2023 Ubuntu Summit in Riga, Latvia. The event is now over, and I’m back home collating my thoughts about the weekend.

The tl;dr: It was a great, well-organised and run event with interesting speakers.

Here’s my “trip report”.

Logistics

The event was held at the Radisson Blu Latvija. Many of the Canonical staff stayed at the Radisson, while most (perhaps all) of the non-Canonical attendees were a short walk away at the Tallink Hotel.

Everything kicked off with a “Welcome” session at 14:00 on Friday. That may seem like a weird time to start an event, but it’s squashed on the weekend between an internal Canonical Product Sprint and an Engineering Sprint.

The conference rooms were spread across a couple of floors, with decent signage, and plenty of displays showing the schedule. It wasn’t hard to plan your day, and make sure you were in the right place for each talk.

The talks were live streamed, and as I understand it, also recorded. So remote participants could watch the sessions, and for anyone who missed them, they should be online soon.

Coffee, cold drinks, snacks, cakes and fruit were refreshed through the day to keep everyone topped up. A buffet lunch was provided on Saturday and Sunday.

A “Gaming” night was organised for the Saturday evening. There was also a party after the event finished, on the Sunday.

A bridged Telegram/Matrix chat was used during the event to enable everyone to co-ordinate meeting up, alert people of important things, or just invite friends for beer. Post-event it was also used for people to post travel updates, and let everyone know when they got home safely.

An email was sent out early on at the start of each day, to give everyone a heads-up on the main things happening that day, and provide information about social events.

There were two styles of lanyard from which to hang your name badge. One was coloured differently to indicate the individual did not wish to be photographed. I saw similar at Akademy back in 2018, and appreciate this option.

Sessions

There was one main room with a large stage used for plenary and keynote style talks, two smaller rooms for talks and two further workshop rooms. It was sometimes a squeeze in the smaller rooms when a talk was popular, but it was rarely ‘standing room only’.

The presentation equipment that was provided worked well, for the most part. A few minor display issues and microphone glitches occurred, but overall I could hear and see everything I was meant to experience.

There was also a large open area with standing tables, where people could hang out between sessions, and noodle around with things - more on that later. A few sessions which left an impression on me are detailed below, with a conclusion at the end.

Ubuntu Asahi

Tobias Heider (Canonical) was on stage, with a remote Hector Martin (Asahi Linux) via video link. They presented some technical slides about the MacOS boot process, and how Asahi is able to be installed on Apple Silicon devices. I personally found this interesting, understandable, and accessible. Hector speaks fast, but clearly, and covered plenty of ground in the time they had.

Tobias then took over to talk about some of the specifics of the Ubuntu Asahi build, how to install it, and some of the future plans. I was so interested and inspired that I immediately installed Ubuntu Asahi on my M1 Apple MacBook Air. More on that experience in a future blog post.

MoonRay

This was a great talk about the process of open sourcing a component of the video production pipeline. While that sounds potentially dull, it wasn’t. Partly helped by plenty of cute rendered DreamWorks characters in the presentation, along with short video clips. We got a quick primer on rendering scenes, then moved into the production pipeline and finally to MoonRay. Hearing how and why a large movie production house like DreamWorks would open source a core part of the pipeline was fascinating. We even got to see Bilby at the end.

Ubuntu Core Desktop

Oliver Smith and Ken VanDine presented Ubuntu Core Desktop Preview, from a laptop running the Core Desktop. I talked a little about this in Ubuntu Core Snapdeck.

It’s essentially the Ubuntu desktop packaged up as a bunch of snap packages. Very much like Fedora Silverblue, or SteamOS on the steamdeck, Ubuntu Core Desktop is an “immutable” system.

It was interesting to see the current blockers to release. It’s already quite usable, but they’re not quite ready to share images of Ubuntu Core Desktop. Not that they’re hard to find if you’re keen!

Framework

This was one of my favourite talks. Daniel Schaefer talked about the Framework laptops, their design and decisions made during their development. The talk straddled the intersection of hardware, firmware and software, which tickles me. I was also pleased to see Daniel fiddle with parts of the laptop while giving a talk from it. Demonstrating the replaceable magnetically attached display bezel and replacing the keyboard while using the laptop is a great demo and a fun sight.

Security

Mark Esler, from the Ubuntu Security Team gave a great overview of security best practices. They had specific, and in some cases simple, actionable things developers can do to improve their application security. We had a brief discussion afterwards about snap application security, which I’ll cover in a future post.

Discord

Some of the team behind the Ubuntu Discord presented stats about the sizable community that use Discord. They also went through their process for ensuring a friendly environment for support.

Hallway track

At all these kinds of events the so-called ‘Hallway track’ is just as important as the scheduled sessions. There were opportunities to catch-up with old friends, meet new people I’d only seen online before, and play with technology.

Some highlights for me on the hallway track include:

Kind words

Quite a few people approached and introduced themselves to me over the weekend. It was a great opportunity to meet people I’ve not seen before, only met online, or not seen since an Ubuntu Developer Summit long ago.

A few introduced themselves then thanked me as I’d inspired them to get involved in Linux or Ubuntu as part of their career. It was very humbling to think those years had a positive impact on people’s lives, so I greatly appreciated their comments.

UBports

Previously known as Ubuntu Touch, the UBports project had a stand to exhibit the status of the project to bring the converged desktop to devices. I have a great fondness for the UBports project, having worked on the core apps for Ubuntu Touch. It always puts a smile on my face to see the Music, Terminal, Clock, Calendar and other apps I worked on, still in use on UBports today.

I dug out my OnePlus 5 when I got home, and might give UBports another play in a spare moment.

Raspberry Pi 5

Dave Jones from Canonical had a Raspberry Pi 5 which he’d hooked up to a TV, keyboard and mouse, and was running Ubuntu Desktop. I’d not seen a Pi running the Ubuntu desktop so fluidly before, so I had a play with it. We installed a bunch of snaps from the store, to put the device through its paces, and see if any had problems on the new Pi. The collective brains of myself, Dave, Ogra and Martin solved a bug or two and sent the results over the network to my laptop to be pushed to Launchpad.

Gaming Night

A large space was set aside for gaming night on the Saturday evening. Most people left the event, found food, then came back to ‘game’. There were board games, cards, computers and consoles. A fair number of people were not actually gaming, but coding and just chatting. It was quite nice to have a bit of space to just chill out and get on with whatever you like.

One part which amused me greatly was Ken VanDine and Dave Jones attempting to get the aforementioned Ubuntu Core Desktop Preview working on the new Raspberry Pi 5. They had the Pi, cables, keyboard and mouse, but no display. There were, however, projectors around the room. Unfortunately the HDMI sockets were nowhere near the actual projection screen. So we witnessed Dave, Ken and others burning calories walking back and forth to see terminal output, then calling out commands across the loud room to the Pi operator.

This went on for some time until I pointed out to Ken that Martin had a portable display in his bag. I probably should have thought about that beforehand. Then someone else saved the day by walking in with a TV they’d acquired from somewhere. I’ve never seen so many nerds sat around a Raspberry Pi, reading logs from a TV screen. It’s perfectly normal at events like this, of course.

After party

Once the event was over, we all decamped to Digital Art House to relax over a beer or five. There were displays and projectors all around the venue, showing Ubuntu wallpapers, and the artworks of Sylvia Ritter.

Conclusion

I think the organising committee nailed it with this event. The number of rooms and tracks was about right. There was a good mix of talks. Some were technical, and Ubuntu related, others were just generally interesting. The infrastructure worked and felt professionally run.

I had an opportunity to meet a ton of people I’ve never met, but have spoken to online for years. I also got to meet and talk with some of the new people at Canonical, of which, there are many.

I’d certainly go again if I had the opportunity. Perhaps I’ll come up with something to talk about, I’ve got a year to prepare!

on November 07, 2023 11:00 AM

November 05, 2023

Ubuntu Summit 2023

Ross Gammon

UbuntuSummit2023

I am currently attending the Ubuntu Summit 2023 in Riga, Latvia. This is the first time I have deliberately attended an Ubuntu event. Back in 2013, I accidentally walked through what I believe was the last Ubuntu Developer Summit, when I was showing some friends around the Bella Sky in Copenhagen.

This time I was asked by Erich Eickmeyer if I would like to join him as a member of the Ubuntu Studio team. It has been fantastic to meet him and Eylul Dogruel from the Ubuntu Studio team. It was also fantastic to meet or see in person other members of the Linux Audio community, and other Ubuntu and Canonical people that have helped me with my Ubuntu contributions along the way.

Here are the talks I attended and meetings I had related to Ubuntu Studio:

50 things you did not know you could do with Ardour , Dr Robin Gareus (Ardour, Linux Audio)
Making a standalone effects pedal system based on embedded Linux, Filipe Coelho
Live Mixing with PipeWire and Ardour/Harrison Mixbus, Erich Eickmeyer (Ubuntu / Ubuntu Studio)
Art and ownership – the confusing problem of owning a visual idea, Eylul Dogruel (Ubuntu Studio)
Ubuntu Flavour Sync meeting, Aaron Prisk (Canonical), Ana Sereijo (Canonical), Daniel Bungert (Canonical), Mr Mauro Gaspari (Canonical), Michael Hudson-Doyle (Canonical), Oliver Smith (Canonical), Mr Tim Holmes-Mitra (Canonical)
I believe talks will be uploaded onto YouTube at some point, so look out for them!
on November 05, 2023 04:01 PM

November 03, 2023

At the Ubuntu Summit in Latvia, Canonical have just announced their plans for the Ubuntu Core Desktop. I recently played with a preview of it, for fun. Here’s a nearby computer running it right now.

Ubuntu Core Desktop Development Preview on a SteamDeck

Ubuntu Core is “a secure, application-centric IoT OS for embedded devices”. It’s been around a while now, powering IoT devices, kiosks, routers, set-top-boxes and other appliances.

Ubuntu Core Desktop is an immutable, secure and modular desktop operating system. It’s (apparently) coming to a desktop near you next year.

In case you weren’t aware, the SteamDeck is a portable desktop PC running a Linux distribution from Valve called “SteamOS”.

As a tinkerer, I thought “I wonder what Ubuntu Core on the SteamDeck looks like”. So I went grubbing around in GitHub projects to find something to play with.

I’m not about to fully replace SteamOS on my SteamDeck, of course, at least, not yet. This was just a bit of fun, to see if it worked. I’m told by the team that I’m likely the only person who has tried this so far.

Nobody at Canonical asked me to do this, and I didn’t get special access to the image. I just stumbled around until I found it, and started playing. You know, for fun.

Also, obviously I don’t speak for Canonical, these are my own thoughts. This also isn’t a how-to guide, or a recommendation that you should use this. It isn’t ready for prime time yet.

Snaps all the way down

Let’s get this out of the way up front. Ubuntu Core images are all about snaps. The kernel, applications, and even the desktop itself is a snap. Everything is a snap. Snap, snappity, snap snap! 🐊

So it has the features that come with being snap-based. Applications can be automatically updated, reverted, multiple parallel versions installed. Snaps are strictly confined using container primitives, seccomp and AppArmor.
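
For the curious, a few illustrative snapd commands show what that buys you (the snap name and channel are just examples):

# refresh an application, inspect recent changes, and roll back if needed
sudo snap refresh firefox
snap changes
sudo snap revert firefox

# parallel installs of the same snap are gated behind an experimental flag
sudo snap set system experimental.parallel-instances=true
sudo snap install firefox_beta --channel=beta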

This is not too dissimilar to the way many SteamDeck users add applications to the immutable SteamOS install. On SteamOS they use Flatpak, whereas on Ubuntu Core, Snap is used.

They achieve much the same goal though. A secure, easily updated and managed desktop OS.

Not ready yet

The image is currently described as “Ubuntu Core Desktop Development Preview”.

Indeed the wallpaper makes this very clear. Here be dragons. 🐉

Ubuntu Core Desktop Development Preview wallpaper

This is not ready for daily production use as a primary OS, but I’m sure some nerds like me will be running it soon enough. It’s fun to play with stuff like this, and get a glimpse of what the future of Ubuntu desktop might be like.

I was pleasantly surprised that the developer preview exceeded my expectations. Here’s what I discovered.

Installation

I didn’t want to destroy the SteamOS install on my SteamDeck - I quite like playing games on the device. So I put the Ubuntu Core image on a USB stick, and ran it from that. The current image doesn’t have an ‘installer’ as such.

On first boot, you’re greeted with an Ubuntu Core logo while the various snaps are set up and configured. Once that completes, a first-run wizard pops up to walk through the initial setup.

Initial setup

This is the usual configuration steps to setup keyboard, locale, first user and so on.

Pre-installed applications

Once installed, everything was pretty familiar.

There’s a browser - Firefox, and a small set of default GNOME applications such as Eye of GNOME, Evince, GNOME Calculator, Characters, Clocks, Logs, Weather, Font Viewer and Text Editor. There’s also a graphical Ubuntu App Centre (more on that in a moment).

There’s also three terminal applications.

  • GNOME Terminal - which is a little bit useless because it’s strictly confined.

  • Console - also GNOME Terminal, but is unconfined, so can be used for system administration tasks like installing software.

  • Workshops - which provides a Toolbox / Distrobox like experience for launching LXD containers running Ubuntu or another Linux distribution. The neat part about this is there’s full GPU passthrough to the containers.

So on a suitably equipped desktop with an nVidia GPU, it’s possible to run CUDA workloads inside a container on top of Ubuntu Core.

Automatic updates

When I initially played with this a week or two back, I noticed that the core image shipped with a build of GNOME 42.

GNOME 42

One major feature of snaps is their ability to do automatic updates in the background. At some point between October 19th and today, an update brought me GNOME 45!

GNOME 45

I doubt that a final product will jump users unceremoniously from one major desktop release to another, but this is a preview remember, so interesting, exciting and frightening things happen.

Installing apps

The “traditional” (read: deb-based) Ubuntu Desktop recently shipped with a new software store front. This application, built using Flutter, makes it easy to find and install snaps on the desktop.

I tested this process by installing Steam, given this is a SteamDeck!

Installing Steam

This process was uneventful and smooth. Installing additional apps on the Ubuntu core desktop preview works as expected. However, so-called “classic” (unconfined) snaps are not yet installable. So applications like VSCode, Sublime Text and Blender can’t currently be easily installed.

Kernel switcheroo

Did I mention everything is a snap? This includes the Linux kernel. That means it’s possible to quickly switch to a completely different kernel, trivially easily, with one snap refresh command.

Switching kernel

It’s just as simple to snap revert back to the previous kernel, or try kernels specifically optimised for the hardware or use cases, such as gaming, or resource constrained computers.
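
I won’t claim this is the exact procedure on Ubuntu Core Desktop, but assuming the kernel snap carries its usual pc-kernel name, the dance looks roughly like this (the channel is illustrative):

# see which kernel snap is installed and what channels exist
snap list | grep kernel
snap info pc-kernel

# switch to a different kernel, then roll back if it misbehaves
sudo snap refresh pc-kernel --channel=latest/edge
sudo snap revert pc-kernel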

Steam snap

The snap of Steam has been around for a while now, to install on the traditional Linux desktop. As a snap, it’s installable on this core desktop preview too.

The Steam snap also bundles some additional tools you might find on the SteamOS shipped on the SteamDeck, like MangoHUD.

Launching Steam on Ubuntu Core on the SteamDeck works just like it does on a traditional desktop. The SteamDeck is a desktop PC at its heart, after all.

Here’s a few screenshots, but this isn’t super remarkable, but neat nonetheless. The controller works, and the games I tested run fine. I didn’t install anything huge like GTA5, because this was all running off a USB stick. Ain’t nobody got time for that.

Steam

I didn’t try using the new Steam UI as seen on the SteamOS official builds. But I imagine it’s possible to get that working.

Steam

Audio doesn’t work in the Ubuntu Core image on the SteamDeck for me, so the whole game playing experience is a little impacted by that.

Steam

Steam

As you can see, this doesn’t really look any different to running a traditional desktop Linux distribution.

Steam

Steam

Unworking things

Not everything is smooth - this is a developer preview remember! I have fed back these things to the team - over beer, last night. I’m happy to help them debug these issues.

On my SteamDeck, I had no audio, at all. I suspect this is likely due to something missing in the Ubuntu kernel. As shown above, I did try a different, newer kernel, to no avail.

Bluetooth also didn’t work. In GNOME Settings, pressing the bluetooth enable button just toggled it back off again. I didn’t investigate this deeply, but will certainly file a bug and provide logs to the team.

Running snap refresh in the console doesn’t finish when there’s an update to the desktop itself. I suspect this is a byproduct of Ubuntu Core usually being an unattended IoT device, where it would normally do an automatic reboot when these packages are updated. You clearly don’t want a desktop to do random reboots after updates, so that behaviour seems to be suppressed.

I’ve not commented at all on performance, because it’s a little unfair, given this is a preview. That’s not to say it’s slow, but I am running it on a USB stick, not the internal nvme drive. It’s certainly more than usable, but I didn’t measure any performance benchmarks yet.

The future

While the SteamDeck is a desktop “PC”, it’s a little quirky. There’s no keyboard, only one USB port, a weird audio chipset, and the display initially boots rotated by 90 degrees. It’s not really the target for this image.

I would expect this Ubuntu Core Developer Preview to be more usable on a traditional laptop or desktop computer. I haven’t tried that, but I know others have. Over time, more people will need to play with this platform, to find the sharp edges, and resolve the critical bugs before this ships for general use.

I can envisage a future where laptops from well-known vendors ship with Ubuntu Core Desktop by default. These might target developers initially, but I suspect eventually ’normie’ users will use Ubuntu Core Desktop.

It’s pretty far along already though. For some desktop use cases this is perfectly usable today, just probably not on your primary or only computer. In five months, when the next Ubuntu release comes out, I think it could be a very compelling daily driver.

Worth keeping an eye on this!

on November 03, 2023 11:00 AM

October 24, 2023

In consideration of some ongoing issues, we thought we’d give some insight into a few things going on right now.

Upgrades to 23.10

Unfortunately, as of this writing, upgrades to 23.10 have not yet been enabled due to a blocking bug in the release upgrader. The fix is in place, but must be manually verified by the team in charge of the release upgrader only after the new development cycle opens up for 24.04. For more details, please see the Ubuntu Discourse.

Closure of Matrix Rooms

Due to the disabling of Libera’s Matrix to IRC bridge, we had to make the hard decision to close the Matrix rooms since we want to keep our support and communication rooms unified. While we would like to return to Matrix someday, this is something that is on hold for now. There are plans in the larger Ubuntu community, in cooperation with Canonical, to unify all communication platforms between the community and Canonical, so stay tuned for that.

Closure of the Ubuntu Studio Café (offtopic chat)

What started out as the #ubuntustudio-offtopic channel on IRC, the Ubuntu Studio Café was intended to be a place where the community could simply socialize and chat about whatever they wanted so long as the IRC Guidelines and the Ubuntu Code of Conduct were still followed. However, we started to realize that people became confused between this channel and the support channel, #ubuntustudio, and would often use them interchangeably. While support wasn’t allowed in the offtopic channel, people were asking for support in the offtopic channel, and often going offtopic (chatting without support specifically) in the support channel.

Additionally, neither channel sees much traffic, with the support channel understandably seeing most of the traffic.

Therefore, after a discussion on the Ubuntu Studio Users Mailing List, it has been decided to close #ubuntustudio-offtopic and combine much of its function with the main support channel, making the support channel an on-topic support and discussion channel, as long as the discussion is related to Ubuntu Studio and creativity, meaning using its tools and helping each other use the included tools. For offtopic non-Ubuntu Studio discussion, the #ubuntu-offtopic channel exists and anyone is welcome.

This closure will occur this Friday, October 27th, 2023.

More on Backports

As stated in the Ubuntu 23.10 Release Announcement, the Ubuntu Studio Backports PPA is in the process of being sunset in favor of using the official Ubuntu Backports repository. However, the Backports repository only works for LTS releases and for good reason. There are a few requirements for backporting:

  • It must be an application which already exists in the Ubuntu repositories
  • It must be an update which would not otherwise qualify as a simple bugfix (which would instead go through the Stable Release Update process). This means it must include new features.
  • It must not rely on new libraries or new versions of libraries.
  • It must exist within a later supported release or the development release of Ubuntu.

If you have a suggestion for an application to backport that meets those requirements, feel free to join and email the Ubuntu Studio Users Mailing List with your suggestion, with the tag “[BPO]” at the beginning of the subject line. Backports to 22.04 LTS will close with the release of 24.04 LTS, at which time backports to 24.04 LTS will open. Additionally, suggestions must pertain to Ubuntu Studio and preferably must be applications included with Ubuntu Studio. Suggestions can be rejected at the Project Leader’s discretion.

We are also considering sunsetting the Ardour Backports PPA in favor of only backporting Ardour’s point releases. For major upgrades, we recommend subscribing to Ardour’s official releases at ardour.org for as little as $1 USD per month.


on October 24, 2023 08:44 PM

October 18, 2023

We have had many requests to make Plasma 5.27 available in our backports PPA for Jammy Jellyfish 22.04. However, for technical reasons this would have broken upgrades to Kinetic 22.10 while that upgrade path existed. Now that Kinetic is end of life, it is possible to allow opt-in backports of Plasma 5.27 for 22.04.

As with the previous backport of Plasma 5.25, 5.27 is provided in the backports-extra PPA.

This PPA is intended to be used in combination with our standard backports PPA, but should also work standalone.

As usual with our PPAs, there is the caveat that the PPA may receive additional updates and new releases of KDE Plasma, Gear (Apps) and Frameworks, plus other apps and required libraries. Users should always review proposed updates to decide whether they wish to receive them.

While we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a more tested Plasma release on the 22.04 base may find it advisable to stay with Plasma 5.24 as included in the original 22.04 (Jammy) release.

To add the PPA and upgrade, do:

sudo add-apt-repository ppa:kubuntu-ppa/backports-extra && sudo apt full-upgrade -y

We hope keen adopters enjoy using Plasma 5.27.

on October 18, 2023 04:45 PM

October 17, 2023


The Kubuntu Team is happy to announce that Kubuntu 23.10 has been released, featuring the ‘beautiful’ KDE Plasma 5.27: simple by default, powerful when needed.

Codenamed “Mantic Minotaur”, Kubuntu 23.10 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

Under the hood, there have been updates to many core packages, including a new 6.5-based kernel, KDE Frameworks 5.110, KDE Plasma 5.27 and KDE Gear 23.08.

KDE Plasma desktop 5.27.8 on Kubuntu 23.10

Kubuntu has seen many updates for other applications, both in our default install, and installable from the Ubuntu archive.

Haruna, Krita, KDevelop, Yakuake, and many, many more applications are updated.

Applications for core day-to-day usage are included and updated, such as Firefox, and LibreOffice.

For a list of other application updates, and known bugs be sure to read our release notes.

Download Kubuntu 23.10, or learn how to upgrade from 23.04.

Note: For upgrades from 23.04, there may be a delay of a few hours to days between the official release announcements and the Ubuntu Release Team enabling upgrades.

on October 17, 2023 12:50 AM

October 12, 2023

The Xubuntu team is happy to announce the immediate release of Xubuntu 23.10.

Xubuntu 23.10, codenamed Mantic Minotaur, is a regular release and will be supported for 9 months, until July 2024.

Xubuntu 23.10, featuring the latest updates from Xfce 4.18 and GNOME 45.

Xubuntu 23.10 features the latest updates from Xfce 4.18, GNOME 45, and MATE 1.26. With a focus on stability, memory management, and hardware support, Xubuntu 23.10 should perform well on your device. Enjoy frictionless bluetooth headphone connections and out-of-the-box touchpad support. Read Sean’s What’s New in Xubuntu 23.10 post for an in-depth review of the latest updates.

The final release images for Xubuntu Desktop and Xubuntu Minimal are available as torrents and direct downloads from xubuntu.org/download/.

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

We’d like to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues

Highlights

  • Improved hardware support for bluetooth headphones and touchpads
  • Color emoji is now included and supported in Firefox, Thunderbird, and newer Gtk-based apps
  • Significantly improved screensaver integration and stability

Known Issues

  • The shutdown prompt may not be displayed at the end of the installation. Instead you might just see a Xubuntu logo, a black screen with an underscore in the upper left hand corner, or just a black screen. Press Enter and the system will reboot into the installed environment. (LP: #1944519)
  • Xorg crashes and the user is logged out after logging in or switching users on some virtual machines, including GNOME Boxes. (LP: #1861609)
  • You may experience choppy audio or poor system performance while playing audio, but only in some virtual machines (observed in VMware and VirtualBox)

For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

The main Ubuntu Release Notes cover many of the other packages we carry and more generic issues.

Support

For support with the release, navigate to Help & Support for a complete list of methods to get help.

on October 12, 2023 05:18 PM

We are pleased to announce the release of the next version of our distro, the 23.10 release. This is a standard release supported for 9 months, packed full of all sorts of new capabilities. If you want a well-tested base and longer-term support, our 22.04 LTS version is supported for 3 years. The new release has many new core updates as well as v10.8 of budgie itself: We…

Source

on October 12, 2023 02:45 PM

October 10, 2023

APT currently knows about three types of upgrades:

  • upgrade without new packages (apt-get upgrade)
  • upgrade with new packages (apt upgrade)
  • upgrade with new packages and deletions (apt{,-get} {dist,full}-upgrade)

All of these upgrade types are necessary to deal with upgrades within a distribution release. Yes, sometimes even removals may be needed because bug fixes require adding a Conflicts somewhere.

In Ubuntu we have a fourth type of upgrade, handled by a separate tool: release upgrades. ubuntu-release-upgrader changes your sources.list, and applies various quirks to the upgrade.

In this post, I want to look not at the quirk aspects but discuss how dependency solving should differ between intra-release and inter-release upgrades.

Previous solver projects (such as Mancoosi) operated under the assumption that minimizing the number of changes performed should ultimately be the main goal of a solver. This makes sense, as every change carries risk. However, it ignores a different risk, which especially applies when upgrading from one distribution release to a newer one: increasing divergence from the norm.

Consider a person installs foo in Debian 12. foo depends on a | b, so a will be automatically installed to satisfy the dependency. A release later, a has some known issues and b is preferred, so the dependency now reads: b | a.

A classic solver would continue to keep a installed because it was installed before, leading upgraded installs to have foo, a installed whereas new systems have foo, b installed. As systems get upgraded over and over, they continue to diverge further and further from new installs to the point that it adds substantial support effort.

My proposal for the new APT solver is that when we perform release upgrades, we forget which packages were previously automatically installed. We effectively perform a normalization: all systems with the same set of manually installed packages will end up with the same set of automatically installed packages. Consider the solver starting with an empty set and then installing the latest version of each previously manually installed package: it will now see that foo depends on b | a and install b (and a will be removed later on, as it’s not part of the solution).
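
You can already inspect the two sets apt tracks today; a small sketch of the bookkeeping involved (this only illustrates the idea, it is not how the new solver would be invoked):

# the two sets apt tracks: what you asked for vs. what came along as a dependency
apt-mark showmanual > manual.txt
apt-mark showauto > auto.txt

# the proposed normalization would, conceptually, re-resolve manual.txt against
# the new release from an empty state and recompute auto.txt from scratch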

Another case of divergence is Suggests handling. Consider that foo also Suggests s. You now install another package bar that depends on s, hence s gets installed. Upon removing bar, s is not removed automatically, because foo still suggests it (and you may have grown used to foo’s integration of s). This is because apt considers Suggests to be important - they won’t be automatically installed, but they will not be automatically removed either.

In Ubuntu, we unset that policy on release upgrades to normalize the systems. The reasoning for that is simple: while you may have grown to use s as part of foo during the release, an upgrade to the next release is already a big enough change that removing s will have less of an impact; breakage of workflows is expected between release upgrades.
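A similarly simplified sketch of that autoremoval decision (again a toy model with a made-up is_kept helper, not apt's actual code): an automatically installed package survives autoremoval if something installed still depends on it, or, while the "suggests are important" policy is in effect, if something installed merely suggests it:

# Toy sketch of the Suggests policy described above; hypothetical only.
def is_kept(pkg, installed, depends, suggests, suggests_important):
    needed = any(pkg in depends.get(p, []) for p in installed)
    wanted = suggests_important and any(pkg in suggests.get(p, []) for p in installed)
    return needed or wanted

installed = {"foo"}            # bar has just been removed
depends  = {"bar": ["s"]}      # only bar ever depended on s
suggests = {"foo": ["s"]}      # but foo still suggests it

print(is_kept("s", installed, depends, suggests, suggests_important=True))   # True: kept within a release
print(is_kept("s", installed, depends, suggests, suggests_important=False))  # False: removed on release upgrade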

I believe that apt release-upgrade will benefit from both of these design choices, and in the end it boils down to a simple mantra:

  • On upgrades within a release, minimize changes.
  • On upgrades between releases, minimize divergence from fresh installs.
on October 10, 2023 05:22 PM
What's New in Xubuntu 23.10

Xubuntu 23.10, codenamed "Mantic Minotaur", is due to climb out of the development labyrinth on Thursday, October 12, 2023. It features the latest apps from Xfce 4.18, GNOME 45, and MATE 1.26. There aren't many exciting new features this time around. Instead, the overall theme of this release is stability, better memory management, and improved support for UI scaling.

In case you're a Xubuntu regular or somebody with a growing interest, I've documented the purpose and highlights for each updated graphical app below... except Firefox, Thunderbird, and LibreOffice (those apps deserve their own separate changelogs). Enjoy!

Xubuntu Updates

The Xubuntu 23.10 desktop, featuring the latest wallpaper by Xubuntu's own Pasi Lallinaho

Known issues fixed since 23.04 "Lunar Lobster"

The following issues were reported in a previous release and are no longer reproducible in 23.10. Hooray!

  • OEM installation uses the wrong slideshow (LP: #1842047)
  • Screensaver crashes shortly after unlocking (LP: #2012795)
  • Password required twice when switching users (LP: #1874178)

Improved hardware support

  • Bluetooth headphones are now better supported under PipeWire. We caught up with the other flavors and added the missing key package: libspa-0.2-bluetooth
  • The Apple Magic Trackpad 2 and other modern touch input devices are now supported. We removed the conflicting and unsupported xserver-xorg-input-synaptics package to allow libinput to take control.

Appearance updates

Color emoji are now supported in Xubuntu 23.10
  • elementary-xfce 0.18 features numerous refreshed icons in addition to a handful of removed, deprecated icons.
  • Greybird 3.23.3 is a minor update that delivers improved support for Gtk 3 and 4 applications.
  • Color emoji are now supported and used in Firefox, Thunderbird, and Gtk 3/4 applications. To enter emoji on Gtk applications (such as Mousepad), use the Ctrl + . keyboard shortcut to show the emoji picker. Some text areas will also allow you to bring up the emoji picker from the right-click menu.
  • When changing your Gtk (interface) theme, a matching Xfwm (window manager) theme is now automatically selected.
  • Past Xubuntu wallpapers can now be easily installed from the repositories! Additionally, wallpapers from before the 22.10 release have been removed from the default installation.

GNOME Apps

Disk Usage Analyzer (44.0 to 45.0)

Baobab 45.0 features a visual refresh to match the other GNOME 45 apps

Disk Usage Analyzer (baobab) provides a graphical representation of disk usage for local and remote volumes. The 45.0 release features GNOME 45's latest libadwaita widgets and design conventions.

Disks (44.0 to 45.0)

Disks 45.0 received a minimal bug-fixing update

Disks (gnome-disk-utility) is an easy-to-use disk management utility that can inspect, format, partition, image, configure, and benchmark disks. Version 45.0 received a minimal update, silencing some warnings thrown in the benchmark dialog.

Fonts (44.0 to 45.0)

Fonts 45.0 features the latest GNOME 45 design trends

Fonts (gnome-font-viewer) is a font management application that allows you to view installed fonts and install new ones for yourself or for all users on your computer. Version 45.0 features a graphical refresh to the new GNOME 45 styles.

Software (44.0 to 45.0)

Software 45.0 is mostly a bugfix release with some usability enhancements

Software (gnome-software) allows you to find and install apps. It includes plugins for installing Debian, Snap, and Flatpak packages (Flatpak not included in 23.10). The 45.0 release benefits from a number of bug fixes, performance improvements, and usability improvements. Flatpak users are now offered the option to clear app storage when removing an app.

Rhythmbox (3.4.6 to 3.4.7)

Rhythmbox 3.4.7 includes some bug fixes while also removing party mode

Rhythmbox is a music player, online radio, and music management application. The 3.4.7 update drops party mode and includes a handful of improvements. User-facing improvements include:

  • Imported playlists will now retain the playlist file name
  • Subscribing to a podcast will no longer cause the app to crash
  • WMA-format audio files will now play automatically when clicked

Xfce Apps

Dictionary (0.8.4 to 0.8.5)

Dictionary 0.8.5 fixes some bugs while also applying some light UI enhancements

Dictionary (xfce4-dict) allows you to search dictionary services, including dictionary servers like dict.org, web servers in a browser, or local spell check programs. Version 0.8.5 brings some minor updates, including a switch to symbolic icons, proper escaping of markup in server information, and removal of unused code.

Mousepad (0.5.10 to 0.6.1)

Mousepad 0.6.1 adds a new search setting and fixes a variety of bugs

Mousepad is an easy-to-use and fast text editor. The 0.6.1 release includes some useful updates. A new "match whole word" toggle has been added to the search toolbar. File modification state is now tracked more reliably. Multi-window sessions are now properly restored at startup. Improvements to the menu bar and search labels round out this release.

Notifications (0.7.3 to 0.8.2)

Notifications 0.8.2 incorporates numerous bug fixes and improves logging support

The Xfce Notify Daemon (xfce4-notifyd) enables sending application and system notifications in Xfce. Version 0.8.2 received a massive number of bug fixes in addition to some new features:

  • "Mark All Read" button added to the settings and panel plugin
  • Individual log entries can now be deleted or marked read
  • Option to show only unread notifications in the plugin menu
  • Option to ignore app-specified notification timeouts

Power Manager (4.18.1 to 4.18.2)

Power Manager 4.18.2 improves memory management and screensaver integration

Power Manager (xfce4-power-manager) manages the power settings of the computer, monitor, and connected devices. Included in 4.18.2 are syncing the lock-on-sleep setting with the Xfce Screensaver, fixes to a handful of memory management issues, and some stability improvements.

Ristretto (0.12.4 to 0.13.1)

Ristretto 0.13.1 (finally) introduces printing support

Ristretto is a fast and lightweight image viewer for Xfce. The latest 0.13.1 release introduces a long-awaited feature: printing support! It also improves thumbnailing and looks better when scaling the UI beyond 1x.

Screensaver (4.16.0 to 4.18.2)

Screensaver 4.18.2 fixes several stability and usability issues

Screensaver (xfce4-screensaver) is a simple screen saver and locker app for Xfce. Version 4.18.2 fixes some crashes and memory management issues seen in Xubuntu 23.04, correctly integrates with LightDM (no more double password entry when switching users), and correctly inhibits sleep when expected. Screensaver works in conjunction with Power Manager to secure your desktop session.

Screenshooter (1.10.3 to 1.10.4)

Screenshooter 1.10.4 improves usability and adds two new file types

Screenshooter (xfce4-screenshooter) allows you to capture your entire screen, an active window, or a selected region. The 1.10.4 update introduces support for AVIF and JPEG XL files, better handles unwritable directories, and remembers preferences between sessions.

Thunar (4.18.4 to 4.18.7)

Thunar 4.18.7 features some bug fixes and performance improvements

Thunar is a fast and feature-full file manager for Xfce. Version 4.18.7 fixes a number of bugs and performance issues, resulting in a more stable and responsive file manager.

Thunar Media Tags Plugin (0.3.0 to 0.4.0)

The lesser-known Media Tags plugin received minor technical updates

Thunar Media Tags Plugin extends Thunar's support for media files, adding a tag-based bulk rename option, an audio tag editor, and an additional page to the file properties dialog. Version 0.4.0 received only minor updates, updating some backend libraries to newer versions.

Xfburn (0.6.2 to 0.7.0)

Xfburn 0.7.0 continues to receive updates as one of the best-supported burning apps around

Xfburn is an easy-to-use disc burning software for Xfce. Version 0.7.0 includes numerous usability bug fixes (missing icons, missing progress dialogs, multi-item selection) and adds supported MIME types to open blank and audio CDs from other apps.

Xfce Panel (4.18.2 to 4.18.4)

Panel 4.18.4 fixes memory management issues and improves scaling

Xfce Panel (xfce4-panel) is a key component of the Xfce desktop environment, featuring application launchers and various useful plugins. The 4.18.4 release fixes memory management issues, improves icon scaling at different resolutions, and updates icons when their status changes (e.g. for symbolic colored icons).

Panel Profiles (1.0.13 to 1.0.14)

Panel Profiles 1.0.14 significantly improves file handling

Panel Profiles (xfce4-panel-profiles) allows you to manage and share Xfce panel layouts. Version 1.0.14 introduces saving and restoration of RC files, ensures unique and consistent profile and file names, and fixes file handling issues.

Panel Plugins

Clipman Plugin (1.6.2 to 1.6.4)

Clipman 1.6.4 improves memory management and tidies up the UI

Clipman (xfce4-clipman-plugin) is a clipboard manager for Xfce. Once activated (it's disabled by default in Xubuntu), it will automatically store your clipboard history for easy later retrieval. Version 1.6.4 improves memory management, polishes up the UX with the addition of some new icons and better layout, and fixes icon display when the UI is scaled beyond 1x.

CPU Graph Plugin (1.2.7 to 1.2.8)

CPU Graph 1.2.8 makes more information readily available from the panel

CPU Graph (xfce4-cpugraph-plugin) adds a graphical representation of your CPU load to the Xfce panel. Version 1.2.8 now displays detailed CPU load information and features an improved tooltip.

Indicator Plugin (2.4.1 to 2.4.2)

Indicator Plugin 2.4.2 removes the downstream Xubuntu delta, guaranteeing better support

Indicator Plugin (xfce4-indicator-plugin) adds support for Ayatana indicators to the Xfce panel. While many applications have moved to the (also-supported) KStatusNotifierItem format, some older apps still utilize the classic indicator libraries. The 2.4.2 update sees the upstream panel plugin migrate to the Ayatana indicators, a change that Xubuntu and Debian had carried as a downstream patch for a while.

Mailwatch Plugin (1.3.0 to 1.3.1)

Mailwatch 1.3.1 improves logging and UI scaling support

Mailwatch Plugin (xfce4-mailwatch-plugin) is a mailbox watching applet for the Xfce panel. The 1.3.1 release fixes blurry icons when using UI scaling, adds a new "View Log" menu item, and updates the log when an update is manually requested.

Netload Plugin (1.4.0 to 1.4.1)

Netload 1.4.1 shows your network utilization, with correct units, in the panel

Netload Plugin (xfce4-netload-plugin) shows your current network usage in the panel. The 1.4.1 update fixes some memory management issues, uses the correct units, and adds a new option to set the decimal precision ("digits number").

PulseAudio Plugin (xfce4-pulseaudio-plugin 0.4.5 to 0.4.8)

PulseAudio Plugin 0.4.8 greatly improves device and MPRIS (media player) support

PulseAudio Plugin (xfce4-pulseaudio-plugin) shows your current volume levels in the panel and makes it easy to adjust your volume, switch devices, and control media playback. Version 0.4.8 fixes a bug with changing devices, eliminates flickering in the microphone icon, adds scrolling to the microphone icon to adjust recording volume, and includes a bevy of other improvements for MPRIS and device handling. And yes, it works with PipeWire.

Verve Plugin (2.0.1 to 2.0.3)

Verve 2.0.3 plays better with the updates the panel has received in recent years

Verve Plugin (xfce4-verve-plugin) is a command line plugin for the Xfce panel. It allows you to run commands, jump to folder locations, or open URLs in your browser. The 2.0.3 release features a port to PCRE2, better handling for focus-out events, and a fix for a crash when used with the panel's autohide functionality.

Weather Plugin (0.11.0 to 0.11.1)

Weather 0.11.1 makes it easier to configure the plugin and improves UI scaling

Weather Plugin (xfce4-weather-plugin) shows your local weather conditions in the panel. Click the panel icon to reveal your forecast for the next few days. Version 0.11.1 fixes a bug where the temperature would read as -0C, fixes logo and icon display with UI scaling beyond 1x, and makes configuration easier.

Whisker Menu Plugin (2.7.2 to 2.8.0)

Whisker Menu 2.8.0 improves power user support with more menu popup options

Whisker Menu Plugin (xfce4-whiskermenu-plugin) is a modern application launcher for Xfce. While the standard menu is reminiscent of Windows 2000 and earlier, the Whisker Menu has features you'd find in Windows Vista or later. Version 2.8.0 fixes breakage with AccountsService, adds support for showing specific menu instances (when using multiple plugins in multiple panels), and adds support for showing the menu at the center of the screen.

Download Xubuntu 23.10

Ready to take Xubuntu 23.10 for a spin? Well, we're not! However, if you want to test the Release Candidate (RC) image, you can find the download information and where to report your test results on iso.qa.ubuntu.com.

See you Thursday when this release is ready to roll!

on October 10, 2023 11:49 AM