In a local network where you want to keep your devices synchronized with accurate time, running a lightweight and efficient NTP server is essential. Chrony, a modern alternative to ntpd, is a great choice, and in this guide I’ll show you how to set it up inside a Docker container that fetches time from global sources and distributes it across your LAN.
Why Chrony?
Chrony is:
More accurate than ntpd in many conditions (especially with intermittent connectivity)
Lightweight and easy to configure
Ideal for both clients and servers
What You’ll Set Up
A Docker container running Chrony
Configured to sync with global NTP servers
Acting as a time server for your LAN
With optional logging and control access
Step 1: Create a Dockerfile for Chrony
Start by creating a simple Dockerfile to build a minimal Chrony container.
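A minimal sketch of such a Dockerfile (the Alpine base image and file paths here are assumptions; adapt them to your preferred base) could look like this:

# Dockerfile (minimal sketch, assuming an Alpine base image)
FROM alpine:3.20

# Install chrony from the Alpine package repositories
RUN apk add --no-cache chrony

# Copy in the chrony.conf shown below
COPY chrony.conf /etc/chrony/chrony.conf

# Directories referenced by the driftfile and logdir directives
RUN mkdir -p /var/lib/chrony /var/log/chrony

# NTP listens on UDP port 123
EXPOSE 123/udp

# Run chronyd in the foreground so the container keeps running
CMD ["chronyd", "-d", "-f", "/etc/chrony/chrony.conf"]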
Here’s a sample chrony.conf tailored for local server use and syncing with global time sources:
# chrony.conf
# Time sources (use pool.ntp.org or your regional servers)
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
# Allow all clients on your LAN (edit this according to your subnet)
allow 192.168.1.0/24
# Local stratum fallback if Internet is down
local stratum 10
# Drift file to track clock error over time
driftfile /var/lib/chrony/chrony.drift
# Log tracking data
log tracking measurements statistics
# Log files location
logdir /var/log/chrony
# Optional: control access
# Use 0 to disable remote control; use 323 if needed
cmdport 0
Replace 192.168.1.0/24 with your actual LAN subnet.
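With the Dockerfile and chrony.conf in place, a build-and-run sketch might look like this (the image and container names are placeholders; chronyd needs the SYS_TIME capability to be allowed to adjust the system clock):

# Build the image from the Dockerfile above
docker build -t chrony-ntp .

# Run it, publishing NTP (UDP 123) to the LAN and restarting it automatically
docker run -d --name chrony \
  --cap-add SYS_TIME \
  -p 123:123/udp \
  --restart unless-stopped \
  chrony-ntp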
Are you using Kubuntu 25.04 Plucky Puffin, our current stable release? Or are you already running our development builds of the upcoming 25.10 (Questing Quokka)?
However, this is a Beta release, and we should reiterate the disclaimer:
DISCLAIMER: This release contains untested and unstable software. It is highly recommended you do not use this version in a production environment and do not use it as your daily work environment. You risk crashes and loss of data.
6.4 Beta1 packages and required dependencies are available in our Beta PPA. The PPA should work whether you are currently using our backports PPA or not. If you are prepared to test via the PPA, then add the beta PPA and then upgrade:
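Assuming the usual Kubuntu beta PPA name for this cycle (check the announcement links for the exact PPA), the commands would look roughly like:

sudo add-apt-repository ppa:kubuntu-ppa/beta
sudo apt full-upgrade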
In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.
Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu and upstream KDE Plasma software, which is used by many other distributions too.
If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on Matrix [1], or mailing lists [2].
If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.
[Test Case]
General tests:
– Does the Plasma desktop start as normal, with no apparent regressions over 6.3?
– General workflow – testers should carry out their normal tasks, using the Plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.
Specific tests:
– Identify items with front/user-facing changes capable of specific testing.
– Test the ‘fixed’ functionality or ‘new’ feature.
Testing may involve some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.
Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.
We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.
Thanks!
Please stop by the Kubuntu-devel Matrix channel if you need clarification on any of the steps to follow.
Yesterday, exactly twenty years ago, my mobile rang while I was walking the dog.
I had just returned from Sydney about a week earlier (still battling the last remains of my jet lag; I had never left Europe before!), where I had attended the UbuntuDownUnder summit and, on the last day, had a 30-minute interview with Mark Shuttleworth and Matt Zimmerman (back then Canonical's CTO) that honestly felt more like having a coffee with friends after lunch, on a nice hotel terrace directly under a tree with a colony of flying foxes sleeping above our heads.
There was Jane Silber (CEO) on the phone, telling me: “I’m so happy to tell you you are hired! In your new role we want you to create an educational flavor of Ubuntu, there will be a debian-edu/skolelinux gathering in Bergen in Norway from the 10th to 12th of June, are you okay flying there with Mark?”
I rushed back home and told my girlfriend: "I'm hired, and I'll fly Canonical One on my first business trip next month!" (Canonical One was the name of Mark's plane). I learned over the next weeks that Canonical had indeed booked a generic scheduled flight for me and we'd only meet at the venue.
The flight was a disaster: after we had boarded the small 20-seater twin-prop plane that was supposed to get us from Cologne to Amsterdam and the pilot started the engine, my window was all of a sudden soaked in oil. We had to stay in the plane out on the field while the mechanics fixed the engine for some 2-3 hours, so I indeed missed the connection in Amsterdam and had to stay there for the night instead of arriving in Bergen the evening before the event started.
When I arrived at the venue, everyone was already busy hacking on stuff and I jumped right in, finally meeting some users of LTSP (Linux Terminal Server Project), which I was upstream for at that time, and working with them on the problems they faced with it in Debian, tinkering with Moodle as a teaching support system and looking at other edu software; meanwhile Mark was sitting on a bar stool in a corner with his laptop, hacking on Launchpad code.
When we went to our hotel in the evening, it turned out they did not have our booking at all and were completely overbooked due to a jewelry exhibition they were hosting in the house that week. I talked for about 15 minutes to the lady behind the counter, showed her my booking confirmation PDF on the laptop, begged and flirted a lot, and eventually she told us: "We do have an exhibition room that we keep as a spare; it only has one bed, but you can have it and we will add a folding bed". The room was actually a normal hotel room, but completely set up with wallpaper tables all around the walls.
Mark insisted on taking the folding bed, and I can tell you, he does not snore … (well, he didn't back then)
This was only the first of a plethora of adventures that followed over the next 20 years. That phone call clearly changed my life, and the company gave me the opportunity to work with some of the brightest, sharpest and most intelligent people on the planet, both inside and outside of Canonical.
It surely changed a lot over these years (when I started, we were building the distro with 18 people in the distro team, and did that for quite a few years before it actually got split into server, foundations, kernel and desktop teams), but it never lost its special spirit of having these exceptional people with such a high focus on bringing open source to everyone and making it accessible to everyone. Indeed, with growth comes the requirement to make more money to pay the people, and the responsibility to give your employees a certain amount of security and persistence grows, but Canonical and especially Mark have always managed to keep the balance, to not lose that focus and to do the right thing in the end.
Ten years ago I said "onward to the next ten!!". I won't really say "onward to the next 20!" today, not because I ever plan to resign, but simply because I doubt I still want to work full time when I'm 75.
Thank you Mark for dragging me into this adventure and thank you for still having me! I still love the ride!!
One of the most critical gaps in traditional Large Language Models (LLMs) is that they rely on static knowledge already contained within them. Basically, they might be very good at understanding and responding to prompts, but they often fall short in providing current or highly specific information. This is where RAG comes in; RAG addresses these critical gaps in traditional LLMs by incorporating current and new information that serves as a reliable source of truth for these models.
In our previous blog on understanding and deploying RAG, we walked you through the basics of what this technique is and how it enhances generative AI models by utilizing external knowledge sources such as documents and extensive databases. These external knowledge bases enhance machine learning models for enterprise applications by providing verifiable, up-to-date information that reduces errors, simplifies implementation, and lowers the cost of continuous retraining.
Building a robust generative AI infrastructure, such as those for RAG, can be complex and challenging. It requires careful consideration of the technology stack, data, scalability, ethics, and security. For the technology stack, the hardware, operating systems, cloud services, and generative AI services must be resilient and efficient based on the scale that enterprises require.
There are several open source software options available for building generative AI infrastructure and complex AI projects that accelerate development, avoid vendor lock-in, reduce costs, and satisfy enterprise needs.
Objective
In this guide, we will take you through setting up a RAG pipeline. We will utilize open source tools such as Charmed OpenSearch for efficient search retrieval and KServe for machine learning inference, specifically in Azure and Ubuntu environments, while leveraging the underlying silicon.
This guide is intended for data enthusiasts, engineers, scientists, and machine learning professionals who want to start building RAG solutions on public cloud platforms, such as Azure, using enterprise open source tools that are not native to Azure's own service offering. It can be used for various projects, including proofs of concept, development, and production.
Please note that multiple open source tools not highlighted in this guide can be used in place of the ones we outline. In cases where you do use different tools, you should adjust the hardware specifications—such as storage, computing power, and configuration—to meet the specific requirements of your use case.
RAG workflow
When building a generative AI project, such as a RAG system and advanced generative AI reference architectures, it is crucial to include multiple components and services. These components typically encompass databases, knowledge bases, retrieval systems, vector databases, model embeddings, large language models (LLMs), inference engines, prompt processing, and guardrail and fine-tuning services, among others.
RAG allows users to choose the most suitable RAG services and applications for their specific use cases. The reference workflow outlined below mainly utilizes two open source tools: Charmed OpenSearch and KServe. In the RAG workflow depicted below, fine-tuning is not mandatory; however, it can enhance the performance of LLMs as the project scales.
Figure 1: RAG workflow diagram using open source tools
The table below describes all the RAG services highlighted in the workflow diagram above and maps the open source solutions that are used in this guide.
Advanced parsing: Text splitters are advanced parsing techniques applied to documents before they enter the RAG system, so that each document is cleaner, more focused, and provides more informative input. Open source solution: Charmed Kubeflow (text splitters).
Ingest/data processing: A data pipeline layer responsible for data extraction, cleansing, and removal of unnecessary data. Open source solution: Charmed OpenSearch (document processing).
Embedding model: A machine learning model that converts raw data into vector representations. Open source solution: Charmed OpenSearch (sentence transformer).
Retrieval and ranking: Retrieves data from the knowledge base and ranks the fetched information based on relevance scores. Open source solution: Charmed OpenSearch with FAISS (Facebook AI Similarity Search).
Vector database: Stores vector embeddings so data can be easily searched by the retrieval and ranking services. Open source solution: Charmed OpenSearch (k-NN index as a vector database).
Prompt processing: Formats queries and retrieved text so they are structured for the LLM. Open source solution: Charmed OpenSearch (OpenSearch ML agent predict).
LLM: Provides the final response, using one or more GenAI models. Solutions: GPT, Llama, DeepSeek.
LLM inference: Operationalizes machine learning in production by running data through a trained model so that it produces an output. Open source solution: Charmed Kubeflow with KServe.
Guardrail: Ensures ethical content in the GenAI response by applying a guardrail filter to inputs and outputs. Open source solution: Charmed OpenSearch (guardrail validation model).
LLM fine-tuning: Takes a pre-trained machine learning model and further trains it on a smaller, targeted data set. Open source solution: Charmed Kubeflow.
Model repository: Stores and versions trained machine learning models, especially during fine-tuning; the registry can track a model's lifecycle from deployment to retirement. Open source solutions: Charmed Kubeflow, Charmed MLflow.
Framework for building LLM applications: Simplifies LLM workflows, prompts, and services so that building LLM applications is easier. Open source solution: LangChain.
This table provides an overview of the key components involved in building a RAG system and advanced GenAI reference solution, along with associated open source solutions for each service. Each service performs a specific task that can enhance your LLM setup, whether it relates to data management and preparation, embedding a model in your database, or improving the LLM itself.
The deployment guide below will cover most of the services except the following: model repository, LLM fine-tuning, and text splitters.
The rate of innovation in this field, particularly within the open source community, has become exponential. It is crucial to stay updated with the latest developments, including new models and emerging RAG solutions.
RAG component: Charmed OpenSearch
Charmed OpenSearch will be mainly used in this RAG workflow deployment. Charmed OpenSearch is an operator that builds on the OpenSearch upstream by integrating automation to streamline the deployment, management, and orchestration of production clusters. The operator enhances efficiency, consistency, and security. Its rich features include high availability, seamless scaling features for deployments of all sizes, HTTP and data-in-transit encryption, multi-cloud support, safe upgrades without downtime, roles and plugin management, and data visualization through Charmed OpenSearch Dashboards.
With the Charmed OpenSearch operator (also known as a charm), you can deploy and run OpenSearch on physical and virtual machines (VMs) and in other cloud and cloud-like environments, including AWS, Azure, Google Cloud, OpenStack, and VMware. For the deployment guide in the next section, we will be using Azure VM instances:
Figure 2: Charmed OpenSearch architecture
Charmed OpenSearch uses Juju. Juju is an open source orchestration engine for software operators that enables the deployment, integration, and lifecycle management of applications at any scale on any infrastructure. In the deployment process, the Juju controller manages the flow of data and interactions within multiple OpenSearch deployments, including mediating between different parts of the system.
Charmed OpenSearch deployment and use is straightforward. If you’d like to learn how to use and deploy it in a range of cloud environments, you can read more in our in-depth Charmed OpenSearch documentation.
RAG component: KServe
KServe is a cloud-native solution within the Kubeflow ecosystem that serves machine learning models. By leveraging Kubernetes, KServe operates effectively in cloud-native environments. It can be used for various purposes, including model deployment, machine learning model versioning, LLM inference, and model monitoring.
In the RAG use case discussed in this guide, we will use KServe to perform inference on LLMs. Specifically, it will serve an already-trained LLM to make predictions based on new data. This emphasizes the need for a robust LLM inference system that works with both local and public LLMs. The system should be scalable, capable of handling high concurrency, providing low-latency responses, and delivering accurate answers to LLM-related questions.
In the deployment guide, we’ll take you through a comprehensive and hands-on guide to building and deploying a RAG service using Charmed OpenSearch and KServe. Charmed Kubeflow by Canonical natively supports KServe.
Deployment guide to building an end-to-end RAG workflow with Charmed OpenSearch and KServe
Our deployment guide for building an end-to-end RAG workflow with Charmed OpenSearch and KServe covers everything you need to make your own RAG workflow, including:
Prerequisites
Install Juju and configure Azure credentials
Bootstrap Juju controller and create Juju model for Charmed OpenSearch
Deploy Charmed OpenSearch and set up the RAG service
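As a rough sketch of what those steps look like on the command line (the controller and model names are illustrative, and the exact charm channels and integrations should be taken from the Charmed OpenSearch documentation rather than from this example):

# Bootstrap a Juju controller on Azure and create a model for the deployment
juju bootstrap azure azure-controller
juju add-model rag-demo

# Deploy Charmed OpenSearch with a TLS provider and integrate the two
juju deploy opensearch --channel 2/stable
juju deploy self-signed-certificates
juju integrate opensearch self-signed-certificates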
Canonical provides Data and AI workshops, as well as enterprise open source tools and services, and can advise on securing your code, data, and models in production.
Build the right RAG architecture and application with the Canonical RAG workshop
Canonical offers a 5-day workshop designed to help you start building your enterprise RAG systems. By the end of the workshop, you will have a thorough understanding of RAG and LLM theory, architecture, and best practices. Together, we will develop and deploy solutions tailored to your specific needs. Download the datasheet here.
Learn and use best-in-class RAG tooling on any hardware and cloud
Unlock the benefits of RAG with open source tools designed for your entire data and machine learning lifecycle. Run RAG-enabled LLM on any hardware and cloud platform, whether in production or at scale.
Enhance the security of your GenAI projects while mastering best practices for managing your software stack. Discover ways to safeguard your code, data, and machine learning models in production with Confidential AI.
A broadband remote access server (BRAS) is an access gateway oriented to broadband network applications. It bridges broadband access and backbone networks, providing basic access methods and management functions for broadband access networks.
Traditionally, BRAS has suffered from challenges, including low resource utilization, complex management and maintenance, and slow service provisioning. Virtual broadband remote access server (vBRAS) offers a way to address these challenges, accelerating service rollout, improving resource utilization, and simplifying operations and maintenance (O&M).
Huawei and Canonical have worked together to design and verify a private cloud architecture for vBRAS based on Huawei OceanStor storage and Canonical Charmed OpenStack. The solution gives telcos a reliable, performant and cost efficient way to implement network functions virtualization infrastructure (NFVI) for vBRAS.
BRAS challenges and vBRAS
The traditional BRASs are deployed in distributed mode and fully meshed with peripheral systems. As the number of home broadband users surges and emerging services such as 4K high definition (HD) and Internet of Things (IoT) develop rapidly, the traditional distributed BRAS deployment solution faces the following significant challenges:
-Low resource utilization
-Complex management and maintenance
-Slow service provisioning
vBRAS adopts an architecture based on virtualization technologies. It virtualizes traditional hardware BRAS functions and implements software-based network functions, improving network flexibility and scalability.
vBRAS solutions featuring the cloud-based control and user plane separation (CUPS) architecture are implemented based on the centralized management and control capabilities brought by software defined networking (SDN) and the device cloudification capabilities brought by network functions virtualization (NFV). NFV is a network architecture in which traditional network devices are virtualized into software modules running on universal hardware. The vBRAS solution focuses on unified management of vBRAS-vUPs and vBRAS-pUPs, and uses the CUPS vBRAS architecture mentioned earlier. The CUPS architecture is the most cost-effective solution for evolving toward the all-cloud era.
NFVI reference architecture for vBRAS
To achieve reliability, performance, and cost efficiency for vBRAS, Canonical and Huawei field engineering teams have designed and verified a private cloud architecture for vBRAS. It is based on Huawei OceanStor storage and the open source Canonical OpenStack. The main components and key features include:
-Fully disaggregated architecture with dedicated controller nodes, and compute nodes for management and orchestration (MANO) and VNF workloads.
-Dedicated storage from OceanStor, with all virtual machines backed by OceanStor. A small Ceph storage cluster is available for Glance images only.
-Instance HA from OpenStack Masakari and two separate OceanStor storage systems provide an additional layer of reliability. The Cinder-Huawei charm was enhanced to support multiple backends during this verification.
-Separated networks, with pure OVN (Open Virtual Network) for general workloads (MANO), and high-performance OVN+DPDK (Data Plane Development Kit, for network acceleration), CPU pinning and huge pages for VNF workloads.
-Infrastructure management and automation from Canonical MAAS and Juju, Canonical Observability Stack (COS), and Landscape and Kernel Livepatch from Ubuntu Pro.
-BRAS VNFs orchestration provided by Huawei’s MANO.
The latest versions of MANO and vBRAS are tested and verified with a 10-year lifecycle version of Ubuntu and OpenStack (Ubuntu 20.04 Server LTS and Ussuri Charmed OpenStack).
Huawei OceanStor
Huawei OceanStor storage unlocks new levels of intelligence and power. It offers converged and flexible storage solutions that boast the power and reliability needed to meet green, sustainable, and future-facing development goals.
Huawei OceanStor Dorado systems are the next-gen all-flash storage systems. They are designed to meet the major concerns of high availability, utilization, and usability for medium and large enterprises, offering huge storage capacity and quick data access. OceanStor Dorado 5000 and 6000 excel in database, virtualization, and big data analytics scenarios, making them well-suited to industries such as carrier, finance, government, and manufacturing.
Canonical OpenStack
Ubuntu Server powers 48% of OpenStack clouds globally, and leading companies across industries – including telco, finance, hardware manufacturing, retail, automotive, and healthcare – choose Canonical OpenStack as the platform for their private cloud implementations. Canonical OpenStack is an enterprise cloud platform engineered for price-performance, making it the ideal choice for telecommunications companies that demand the highest levels of infrastructure stability, security, and resilience:
Designed to be economical in every way: we use an optimal architecture, including server types and their components, to run more VMs. In our case studies, our approach has reduced costs by as much as 80% for 40 nodes. We use open source software to avoid high licence costs, and automation to cut spending on operations.
Total bottom-up automation: enjoy fully automated OpenStack deployment and operations thanks to Canonical’s tooling and MANO.
Reliability and high availability for vBRAS: our reference architecture includes redundancy for hardware components to eliminate single points of failure. All controller services are deployed as clusters and designed for fault tolerance. Instance high availability from OpenStack Masakari and running vBRAS across 2 regions provide 99.9% availability.
More than 5 years ago, i386 was dropped as an architecture in Ubuntu. Despite this, i386 has remained selected by default as an architecture to build when creating new PPAs, snap recipes, or OCI recipes.
Today, we have disabled building for i386 by default. From now on, only amd64 will be selected by default when creating new PPAs, snap recipes, or OCI recipes. This change only affects newly created PPAs, snap recipes, or OCI recipes. Existing PPAs and recipes remain unchanged.
It’s worth noting that, although we have disabled building for i386 by default, it’s still possible to select i386 as a target architecture when creating new PPAs, snap recipes, or OCI recipes. In future, we may yet decide to disable this altogether but for now, the ability to target i386 remains.
Because targeting i386 is still possible (but requires intervention to enable), we don’t anticipate that this change will affect users, but if you are affected, please log a bug.
And as always, if you have any feedback, please let us know!
We are pleased to announce that the Plasma 6.3.5 bugfix update is now available for Kubuntu 25.04 Plucky Puffin in our backports PPA.
As usual with our PPAs, there is the caveat that the PPA may receive additional updates and new releases of KDE Plasma, Gear (Apps), and Frameworks, plus other apps and required libraries. Users should always review proposed updates to decide whether they wish to receive them.
To upgrade:
Add the following repository to your software sources list:
ppa:kubuntu-ppa/backports
or if it is already added, the updates should become available via your preferred update method.
The PPA can be added manually in the Konsole terminal with the command:
sudo add-apt-repository ppa:kubuntu-ppa/backports
and packages then updated with
sudo apt full-upgrade
We hope you enjoy using Plasma 6.3.5!
Issues with Plasma itself can be reported on the KDE bugtracker [1]. In the case of packaging or other issues, please provide feedback on our mailing list [2], and/or file a bug against our PPA packages [3].
Canonical led Miguel to rediscover the comfort of IKEA's sofa-bed range, and meanwhile Diogo brought a bag full of gifts, including reflections on digital rights and privacy in election season, a hoarder's obsession with archives, papers, invoices, old magazines and newspapers covered in digital dust, plus a heap of Firefox extensions for every use and taste - this being, after all, the Podcast Firefox Portugal, the podcast about Firefox, Mozilla and other stuff.
About 90% of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay.
Request for OpenSSH debugging help
Following the OpenSSH work described below, I have an open report about the sshd server sometimes crashing when clients try to connect to it. I can’t reproduce this myself, and arm’s-length debugging is very difficult, but three different users have reported it. For the time being I can’t pass it upstream, as it’s entirely possible it’s due to a Debian patch.
Is there anyone reading this who can reproduce this bug and is capable of doing some independent debugging work, most likely involving bisecting changes to OpenSSH? I’d suggest first seeing whether a build of the unmodified upstream 10.0p2 release exhibits the same bug. If it does, then bisect between 9.9p2 and 10.0p2; if not, then bisect the list of Debian patches. This would be extremely helpful, since at the moment it’s a bit like trying to look for a needle in a haystack from the next field over by sending instructions to somebody with a magnifying glass.
I enabled the new --with-linux-memlock-onfault configure option to protect sshd against being swapped out, but this turned out to cause test failures on riscv64, so I disabled it again there. Debugging this took some time since I needed to do it under emulation, and in the process of setting up a testbed I added riscv64 support to vmdb2.
In coordination with the wtmpdb maintainer, I enabled the new Y2038-safe native wtmpdb support in OpenSSH, so wtmpdb last now reports the correct tty.
Since we added dput-ng integration to Debusine recently, I wanted to make sure that it was in good condition in trixie, so I fixed “dput-ng: will FTBFS during trixie support period”. Previously a similar bug had been fixed by just using different Ubuntu release names in tests; this time I made the tests independent of the current supported release data returned by distro_info, so this shouldn’t come up again.
We also ran into “dput-ng: --override doesn’t override profile parameters”, which needed somewhat more extensive changes since it turned out that that option had never worked. I fixed this after some discussion with Paul Tagliamonte to make sure I understood the background properly.
man-db
I released man-db 2.13.1. This just included various small fixes and a number of translation updates, but I wanted to get it into trixie in order to include a contribution to increase the MAX_NAME constant, since that was now causing problems for some pathological cases of manual pages in the wild that documented a very large number of terms.
Before container technologies like Docker came into play, applications were typically run directly on the host operating system—either on bare metal hardware or inside virtual machines (VMs). While this method works, it often leads to frustrating issues, especially when trying to reproduce setups across different environments.
This becomes even more relevant in the amateur radio world, where we often experiment with digital tools, servers, logging software, APRS gateways, SDR applications, and more. Having a consistent and lightweight deployment method is key when tinkering with limited hardware like Raspberry Pi, small form factor PCs, or cloud VPS systems.
The Problem with Traditional Software Deployment
Let’s say you’ve set up an APRS iGate, or maybe you’re experimenting with WSJT-X for FT8, and everything runs flawlessly on your laptop. But the moment you try deploying the same setup on a Raspberry Pi or a remote server—suddenly things break.
Why?
Common culprits include:
Different versions of the operating system
Mismatched library versions
Varying configurations
Conflicting dependencies
These issues can be particularly painful in amateur radio projects, where specific software dependencies are critical, and stability matters for long-term operation.
You could solve this by running each setup inside a virtual machine, but VMs are often overkill—especially for ham radio gear with limited resources.
Enter Docker: The Ham’s Best Friend for Lightweight Deployment
Docker is an open-source platform that allows you to package applications along with everything they need—libraries, configurations, runtimes—into one neat, portable unit called a container.
Think of it like packaging up your entire ham radio setup (SDR software, packet tools, logging apps, etc.) into a container, then being able to deploy that same exact setup on:
A Raspberry Pi
A cloud server
A homelab NUC
Another ham’s machine
Why It’s Great for Hams:
Lightweight – great for Raspberry Pi or low-power servers
Fast startup – ideal for services that need to restart quickly
Reproducible environments – makes sharing setups with fellow hams easier
Isolation – keeps different radio tools from interfering with each other
Many amateur radio tools like Direwolf, Xastir, Pat (Winlink), and even JS8Call can be containerized, making experimentation safer and more efficient.
Virtual Machines: Still Relevant in the Shack
Virtual Machines (VMs) have been around much longer and still play a crucial role. Each VM acts like a complete computer, with its own OS and kernel, running on a hypervisor like:
VirtualBox
VMware
KVM
Hyper-V
With VMs, you can spin up an entire Windows or Linux machine, perfect for:
Running legacy ham radio software (e.g., old Windows-only apps)
Simulating different operating systems for testing
Isolating potentially unstable setups from your main system
However, VMs require more horsepower. They’re heavy, boot slowly, and take up more disk space—often not ideal for small ham radio PCs or low-powered nodes deployed in the field.
Quick Comparison: Docker vs Virtual Machines for Hams
OS: Docker shares the host kernel; a VM runs a full OS of its own.
Boot time: seconds for Docker; minutes for a VM.
Resource use: low for Docker; high for a VM.
Size: Docker containers are lightweight; VMs are heavy (GBs).
Ideal for: Docker suits modern ham tools, APRS bots, and SDR apps; VMs suit legacy systems and OS testing.
Portability: high for Docker; moderate for a VM.
Ham Radio Use Cases for Docker
Here’s how Docker fits into amateur radio workflows:
Run an APRS iGate with Direwolf and YAAC in isolated containers.
Deploy SDR receivers like rtl_433, OpenWebRX, or CubicSDR as containerized services.
Set up a Winlink gateway using Pat + ax25 tools, all in one container.
Automate and scale your APRS bot, or APRS gateway using Docker + cron + scripts.
Docker makes it easier to test and share these setups with other hams—just export your Docker Compose file or image.
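As a minimal illustration, a containerized Direwolf TNC could be started roughly like this (the image name my-direwolf-image and the mounted paths are placeholders; in practice you would build or choose an image yourself and map your actual sound card and configuration):

# Hypothetical sketch: run Direwolf in a container
# "my-direwolf-image" and the paths below are placeholders for your own setup
docker run -d --name direwolf \
  --device /dev/snd \
  -v "$HOME/direwolf.conf:/etc/direwolf.conf" \
  -p 8000:8000 \
  -p 8001:8001 \
  my-direwolf-image
# 8000/8001 are the AGW and KISS TCP ports Direwolf typically exposes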
When to Use Docker, When to Use a VM
Use Docker if:
You’re building or experimenting with modern ham radio apps
You want to deploy quickly and repeatably
You’re using Raspberry Pi, VPS, or low-power hardware
You’re setting up CI/CD pipelines for your scripts or bots
Use VMs if:
You need to run legacy apps (e.g., old Windows logging software)
You want to simulate full system environments
You’re working on something that could crash your main system
Final Thoughts
Both Docker and VMs are powerful tools that have a place in the modern ham shack. Docker offers speed, portability, and resource-efficiency—making it ideal for deploying SDR setups, APRS bots, or automation scripts. VMs, on the other hand, still shine when you need full system emulation or deeper isolation.
At the end of the day, being a ham means being an experimenter. And tools like Docker just give us more ways to explore, automate, and share our radio projects with the world.
Incus is a manager for virtual machines and system containers.
A virtual machine (VM) is an instance of an operating system that runs on a computer, along with the main operating system. A virtual machine uses hardware virtualization features to keep it separate from the main operating system, and the full operating system boots up inside it. While in most cases you would run Linux in a VM without a desktop environment, you can also run Linux with a desktop environment (as in VirtualBox and VMware).
A system with support for hardware virtualization so that it can run virtual machines.
A virtual machine image of your preferred Linux desktop distribution.
Cheat sheet
You should specify how much RAM you are giving to the VM. The default is only 1GiB of RAM, which is not enough for desktop VMs. The --console=vga option launches the Remote Viewer GUI application for you, so that you can use the desktop in a window.
$ incus image list images:desktop # List all available desktop images
$ incus launch --vm images:ubuntu/jammy/desktop mydesktop -c limits.memory=3GiB --console=vga
$ incus console mydesktop --type=vga # Reconnect to already running instance
$ incus start mydesktop --console=vga # Start an existing desktop VM
Availability of images
Currently, Incus provides you with the following VM images of Linux desktop distributions. The architecture is x86_64.
Run the following command to list all available Linux desktop images. incus image is the section of Incus that deals with the management of images. The list command lists the available images of a remote/repository, the default being images: (run incus remote list for the full list of remotes). After the colon (:), you type filter keywords, and in this case we typed desktop to show images that have the word desktop in them (to show only Desktop images). We are interested in a few columns only, therefore -c ldt only shows the columns for the Alias, the Description and the Type.
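For example (the output will vary with the images currently published):

$ incus image list images: desktop -c ldt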
These images have been generated with the distrobuilder utility (https://github.com/lxc/distrobuilder). The purpose of the utility is to prepare the images so that when we launch them, we immediately get the desktop environment and do not need to perform any manual configuration. The configuration files for distrobuilder to create these images can be found at https://github.com/lxc/lxc-ci/tree/main/images. For example, the archlinux.yaml configuration file has a section to create the desktop image, along with the container and other virtual machine images.
The full list of Incus images is also available on the Web, through the website https://images.linuxcontainers.org/. It is possible to generate more such desktop images by following the steps of the existing configuration files; perhaps a Kali Linux desktop image would be very useful. On the https://images.linuxcontainers.org/ website you can also view the build logs that were generated while building the images, and figure out what parameters are needed for distrobuilder to build them (along with the actual configuration file). For example, here are the logs for the ArchLinux desktop image: https://images.linuxcontainers.org/images/archlinux/current/amd64/desktop-gnome/
Up to this point we got a list of the available virtual machine images that are provided by Incus. We are ready to boot them.
Booting a desktop Linux VM on Incus
When launching a VM, Incus provides by default 1GiB RAM and 10GiB of disk space. The disk space is generally OK, but the RAM is too little for a desktop image (it’s OK for non-desktop images). For example, an Ubuntu desktop image requires about 1.2GB of memory for the instance to start up, and obviously more to run other programs. Therefore, if we do not specify more RAM, the VM would struggle to make do with the mere 1GiB of RAM.
Booting the Ubuntu desktop image on Incus
Here is the command to launch a desktop image. We use incus launch to launch the image. It’s a VM, hence --vm. We are using the image from the images: remote, the one called ubuntu/plucky/desktop (it’s the last from the list of the previous section). We configure a new limit for the memory usage, -c limits.memory=3GiB, so that the instance will be able to run successfully. Finally, the console is not textual but graphical. We specify that with --console=vga which means that Incus will launch the remote desktop utility for us.
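Putting those pieces together (this mirrors the cheat sheet entry above, substituting the plucky image):

$ incus launch --vm images:ubuntu/plucky/desktop mydesktop -c limits.memory=3GiB --console=vga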
I closed the desktop window but the VM is running. How do I get it back up?
If you closed the Remote Viewer window, you can get Incus to start it again with the following command. By doing so, you are actually reconnecting back to the VM and continue working from where you left off.
We are using the incus console action to connect to the running mydesktop instance and request access through the Remote Viewer (rather than a text console).
$ incus console mydesktop --type=vga
Error: This console is already connected. Force is required to take it over.
You are already connected to the desktop VM with the Remote Viewer and you are trying to connect again. Either go to the existing Remote Viewer window, or add the parameter --force to close the existing Remote Viewer window and open a new one.
Error: Instance is not running
You are trying to connect to a desktop VM with the Remote Viewer but the instance (which already exists) is not running. Use the action incus start to start the virtual machine, along with the --type=vga parameter to get Incus to launch the Remote Viewer for you.
$ incus start mydesktop --console=vga
I get no audio from the desktop VM! How do I get sound in the desktop VM?
This requires extra steps which I do not show yet. There are three options. The first is to use the QEMU device emulation to emulate a sound device in the VM. The second is to somehow push an audio device into the VM so that this audio device is used exclusively in the VM (have not tried this but I think it’s possible). The third and perhaps best option is to use network audio with PulseAudio/Pipewire. You enable network audio on your desktop and then configure the VM instance to connect to that network audio server. I have tried that and it worked well for me. The downside is that the Firefox snap package in the VM could not figure out that there is network audio there and I could not get audio in that application.
How do I shutdown the desktop VM?
Use the desktop UI to perform the shutdown. The VM will shut down cleanly.
Error: Failed instance creation: The image used by this instance is incompatible with secureboot. Please set security.secureboot=false on the instance
You tried to launch a virtual machine with SecureBoot enabled but the image does not support SecureBoot. You need to disable SecureBoot when you launch this image. The instance has been created but is unable to run unless you disable SecureBoot. You can either disable SecureBoot through an Incus configuration for this image, or just delete the instance, and try again with the parameter -c security.secureboot=false.
Here is how to disable SecureBoot, then try to incus start that instance.
$ incus config set mydesktop security.secureboot=false
Here is how you would set that flag when you launch such a VM.
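For example, using the ArchLinux desktop image mentioned earlier as a non-Ubuntu image that needs this (the other parameters are as in the launch command above):

$ incus launch --vm images:archlinux/desktop-gnome mydesktop -c limits.memory=3GiB -c security.secureboot=false --console=vga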
Note that official Ubuntu images can work with SecureBoot enabled, most others don’t. It has to do with the Linux kernel being digitally signed by some certification authority.
Error: Failed instance creation: Failed creating instance record: Add instance info to the database: Failed to create “instances” entry: UNIQUE constraint failed: instances.project_id, instances.name
This error message is a bit cryptic. It just means that you are trying to create or launch an instance while the instance already exists. Read as Error: The instance name already exists.
Launchpad and the Open Documentation Academy Live in Málaga
Launchpad is a web-based platform to support collaborative software development for open source projects. It offers a comprehensive suite of tools, including bug tracking, code hosting, translation management, and package building.
Launchpad is tightly integrated with the Ubuntu ecosystem, serving as a central hub for Ubuntu development and community contributions. Its features are designed to streamline the process of managing, developing, and distributing software in a collaborative environment.
Launchpad aims to foster strong community engagement by providing features that support collaboration, community management, and user participation, positioning itself as a central hub for open source communities.
Canonical’s Open Documentation Academy is a collaboration between Canonical’s documentation team and open source newcomers, experts, and those in-between, to help us all improve documentation, become better writers, and better open source contributors.
A key aim of the project is to set the standard for inclusive and welcoming collaboration while providing real value for both the contributors and the projects involved in the programme.
Join us at OpenSouthCode in Málaga
Launchpad and the Open Documentation Academy will join forces at OpenSouthCode 2025 in the wonderful city of Málaga, Spain, on June 20 – 21 2025.
The Open Documentation Academy will run a hands-on documentation workshop at the conference, where participants will learn how to make meaningful open source contributions with the help of the Diátaxis documentation framework.
Launchpad’s Jürgen Gmach will be on-site to help you land your first open source contribution.
I was just released from the hospital after a 3-day stay for my (hopefully) last surgery. There was concern about massive blood loss and a low heart rate. I have stabilized and have come home. Unfortunately, they had to prescribe many medications this round, and they are extremely expensive and used up all my funds. I need gas money to get to my post-op doctor's appointments, and food would be cool. I would appreciate any help, even just a dollar!
I am already back to work, and have continued work on the crashy KDE snaps in a non-KDE environment. (This also affects anyone using kde-neon extensions, such as FreeCAD.) I hope to have a fix in the next day or so.
The Incus team is pleased to announce the release of Incus 6.12!
This release comes with some very long awaited improvements such as online growth of virtual machine memory, network address sets for easier network ACLs, revamped logging support and more!
On top of the new features, this release also features quite a few welcome performance improvements, especially for systems with a lot of snapshots and with extra performance enhancements for those using ZFS.
The highlights for this release are:
Network address sets
Memory hotplug support in VMs
Reworked logging handling & remote syslog
SNAT support on complex network forwards
Authentication through access_token parameter
Improved server-side filtering in the CLI
More generated documentation
The full announcement and changelog can be found here. And for those who prefer videos, here’s the release overview video:
And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus
Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.
Ubuntu MATE 25.04 is ready to soar! 🪽 Celebrating our 10th anniversary as an official Ubuntu flavour with the reliable MATE Desktop experience you love, built on the latest Ubuntu foundations. Read on to learn more 👓️
A Decade of MATE
This release marks the 10th anniversary of Ubuntu MATE becoming an official Ubuntu flavour. From our humble beginnings, we’ve developed a loyal following of users who value a traditional desktop experience with modern capabilities. Thanks to our amazing community, contributors, and users who have been with us throughout this journey. Here’s to many more years of Ubuntu MATE! 🥂
What changed in Ubuntu MATE 25.04?
Here are the highlights of what’s new in the Plucky Puffin release:
Celebrating 10 years as an official Ubuntu flavour! 🎂
Optional full disk encryption in the installer 🔐
Enhanced advanced partitioning options
Better interaction with existing BitLocker-enabled Windows installations
Improved experience when installing alongside other operating systems
Major Applications
Accompanying MATE Desktop 🧉 and Linux 6.14 🐧 are Firefox 137 🔥🦊, Evolution 3.56 📧, and LibreOffice 25.2.2 📚. See the Ubuntu 25.04 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.
There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.
The Xubuntu team is happy to announce the immediate release of Xubuntu 25.04.
Xubuntu 25.04, codenamed Plucky Puffin, is a regular release and will be supported for 9 months, until January 2026.
Xubuntu 25.04 features the latest Xfce 4.20, GNOME 48, and MATE 1.26 updates. Xfce 4.20 features many bug fixes and minor improvements, modernizing the Xubuntu desktop while maintaining a familiar look and feel. GNOME 48 apps are tightly integrated and have full support for dark mode. Users of QEMU and KVM will be delighted to find new stability with the desktop session—the long-running X server crash has been resolved in Xubuntu 25.04 and backported to all supported Xubuntu releases.
The final release images for Xubuntu Desktop and Xubuntu Minimal are available as torrents and direct downloads from xubuntu.org/download/.
As the main server might be busy the first few days after the release, we recommend using the torrents if possible.
We want to thank everybody who contributed to this release of Xubuntu!
Highlights and Known Issues
Highlights
Xfce 4.20, released in December 2024, is included and contains many new features. Early Wayland support has been added, but is not available in Xubuntu.
GNOME 48 apps, including Font Viewer (gnome-font-viewer) and Mines (gnome-mines), include a refreshed appearance and usability improvements.
Known Issues
The shutdown prompt may not be displayed at the end of the installation. Instead, you might just see a Xubuntu logo, a black screen with an underscore in the upper left-hand corner, or a black screen. Press Enter, and the system will reboot into the installed environment. (LP: #1944519)
You may experience choppy audio or poor system performance while playing audio, but only in some virtual machines (observed in VMware and VirtualBox).
OEM installation options are not currently supported or available.
Please refer to the Xubuntu Release Notes for more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions.
The main Ubuntu Release Notes cover many other packages we carry and more generic issues.
Support
For support with the release, navigate to Help & Support for a complete list of methods to get help.
In addition to all the regular testing, I am testing our snaps in a non-KDE environment, and so far it is not looking good in Xubuntu. We have kernel/glibc crashes on startup for some and on file open for others. I am working on a hopeful fix.
Next week I will have (I hope) my final surgery. If you can spare any change to help bring me over the finish line, I will be forever grateful.
The Lubuntu Team is proud to announce Lubuntu 25.04, codenamed Plucky Puffin. Lubuntu 25.04 is the 28th release of Lubuntu, the 14th release of Lubuntu with LXQt as the default desktop environment. With 25.04 being an interim release, it will be supported until January of 2026. If you're a 24.10 user, please upgrade to 25.04 […]
The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 25.04 code-named “Plucky Puffin”. This marks Ubuntu Studio’s 36th release. This release is a Regular release and as such, it is supported for 9 months, until January 2026.
Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a more complete list of changes and known issues. Listed here are some of the major highlights.
This release is dedicated to the memory of Steve Langasek. Without Steve, Ubuntu Studio would not be where it is today. He provided invaluable guidance, insight, and instruction to our leader, Erich Eickmeyer, who not only learned how to package applications but learned how to do it properly. We owe him an eternal debt of gratitude.
You can download Ubuntu Studio 25.04 from our download page.
Special Notes
The Ubuntu Studio 25.04 disk image (ISO) exceeds 4 GB and cannot be downloaded to some file systems such as FAT32 and may not be readable when burned to a standard DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick with the ISO image or burning to a Dual-Layer DVD.
Minimum installation media requirements: Dual-Layer DVD or 8GB USB drive.
Full updated information, including Upgrade Instructions, is available in the Release Notes.
Upgrades from 24.10 should be enabled within a month after release, so we appreciate your patience. Upgrades from 24.04 LTS will be enabled after 24.10 reaches End-Of-Life in July 2025.
New This Release
GIMP 3.0!
The long-awaited GIMP 3.0 is included by default. GIMP is now capable of non-destructive editing with filters, better Photoshop PSD export, and so very much more! Check out the GIMP 3.0 release announcement for more information.
Pencil2D
Ubuntu Studio now includes Pencil2D! This is a 2D animation and drawing application that is sure to be helpful to animators. You can use basic clipart to make animations!
The basic features of Pencil2D are:
layer support (separate layers for bitmap, vector, and sound)
bitmap drawing
vector drawing
sound support
LibreOffice No Longer in Minimal Install
The LibreOffice suite is now part of the full desktop install. This will save space for those wishing for a minimalistic setup for their needs.
Invada Studio Plugins
Beginning this release we are including the Invada Studio Plugins first created by Invada Records Australia. This includes distortion, delay, dynamics, filter, phaser, reverb, and utility audio plugins.
PipeWire 1.2.7
This release contains PipeWire 1.2.7. One major feature this has over 1.2.4 is that v4l2loopback support is available via the pipewire-v4l2 package which is not installed by default.
PipeWire’s JACK compatibility is configured to work out-of-the-box and is zero-latency internally. System latency is configurable via Ubuntu Studio Audio Configuration.
However, if you would rather use straight JACK 2 instead, that’s also possible. Ubuntu Studio Audio Configuration can disable and enable PipeWire’s JACK compatibility on-the-fly. From there, you can simply use JACK via QJackCtl.
Ardour 8.12
This is, as of this writing, the latest release of Ardour, packed with the latest bugfixes.
To help support Ardour’s funding, you may obtain later versions directly from ardour.org. To do so, please make a one-time purchase or subscribe to Ardour on their website. If you wish to get later versions of Ardour from us, you will have to wait until the next regular release of Ubuntu Studio, due in October 2025.
Deprecation of Mailing Lists
Our mailing lists are getting inundated with spam and there is no proper way to fix the filtering, as they use an outdated version of MailMan, so this release announcement will be the last one we send out via email. For support, we encourage using Ubuntu Discourse, and for community news, click the notification bell in the Ubuntu Studio category there.
Frequently Asked Questions
Q: Does Ubuntu Studio contain snaps? A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.
Thunderbird also became a snap so that the maintainers can get security patches delivered faster.
Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be packaged as a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.
We have additional snaps that are Ubuntu-specific, such as the Firmware Updater and the Security Center. Contrary to popular myth, Ubuntu does not have any plans to switch all packages to snaps, nor do we.
Q: Will you make an ISO with {my favorite desktop environment}? A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.
Q: What if I don’t want all these packages installed on my machine? A: Simply use the Ubuntu Studio Installer to remove the features of Ubuntu Studio you don’t want or need!
Get Involved!
A wonderful way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!
Our project leader, Erich Eickmeyer, is now working on Ubuntu Studio at least part-time, and is hoping that the users of Ubuntu Studio can give enough to generate a monthly part-time income. We’re not there, but if every Ubuntu Studio user donated monthly, we’d be there! Your donations are appreciated! If other distributions can do it, surely we can! See the sidebar for ways to give!
Special Thanks
Huge special thanks for this release go to:
Eylul Dogruel: Artwork, Graphics Design
Ross Gammon: Upstream Debian Developer, Testing, Email Support
Sebastien Ramacher: Upstream Debian Developer
Dennis Braun: Upstream Debian Developer
Rik Mills: Kubuntu Council Member, help with Plasma desktop
Scarlett Moore: Kubuntu Project Lead, help with Plasma desktop
Len Ovens: Testing, insight
Mauro Gaspari: Tutorials, Promotion, and Documentation, Testing, keeping Erich sane
Simon Quigley: Qt6 Megastuff
Erich Eickmeyer: Project Leader, Packaging, Development, Direction, Treasurer
Recently, I was involved in an event where a video was shown, and the event was filmed. It would be nice to put the video of the event up somewhere so other people who weren't there could watch it. Obvious answer: upload it to YouTube. However, the video that was shown at the event is Copyrighted Media Content and therefore is disallowed by YouTube and the copyright holder; it's not demonetised (which wouldn't be a problem), it's flat-out blocked. So YouTube is out.
I'd like the video I'm posting to stick around for a long time; this is a sort of archival, reference thing where not many people will ever want to watch it but those that do might want to do so in ten years. So I'm loath to find some other random video hosting site, which will probably go bust, or pivot to selling online AI shoes or something. And the best way to ensure that something keeps going long-term is to put it on your own website, and use decent HTML, because that means that even in ten or twenty years it'll still work where the latest flavour-of-the-month thing will go the way of other old technologies and fade away and stop working over time. HTML won't do that.
But... it's an hour long and in full HD. 2.6GB of video. And one of the benefits of YouTube is that they'll make the video adaptive: it'll fit the screen, and the bandwidth, of whatever device someone's watching it on. If someone wants to look at this from their phone and its slightly-shaky two bars of 4G connection, they probably don't want to watch the loading spinner for an hour while it buffers a full HD video; they can ideally get a cut down, lower-quality but quicker to serve, version. But... how is this possible?
There are two aspects to doing this. One is that you serve up different resolutions of video, based on the viewer's screen size. This is exactly the same problem as is solved for images by the <picture> element to provide responsive images (where if you're on a 400px-wide screen you get a 400px version of the image, not the 2000px full-res version), and indeed the magic words to search for here are responsive video. And the person you will find explaining all this is Scott Jehl, who has written a good description of how to do responsive video, covering it all in detail. You make versions of the video at different resolutions, and serve whichever one best matches the screen you're on, just like responsive images. Nice work; just what the doctor ordered.
But there's also a second aspect to this: responsive video adapts to screen size, but it doesn't adapt to bandwidth. What we want, in addition to the responsive stuff, is that on poor connections the viewer gets a lower-bandwidth version as well as a lower-resolution version, and that the viewer's browser can dynamically switch from moment to moment between different versions of the video to match their current network speed. This task is the job of HTTP Live Streaming, or HLS. To do this, you essentially encode the video in a bunch of different qualities and screen sizes, so you've got a bunch of separate videos (which you've probably already done above for the responsive part) and then (and this is the key) you chop up each video into a load of small segments. That way, instead of the browser downloading the whole one hour of video at a particular resolution, it only downloads the next segment at its current choice of resolution, and then if you suddenly get more (or less) bandwidth, it can switch to getting segment 2 from a different version of the video which better matches where you currently are.
Doing this sounds hard. Fortunately, all hard things to do with video are handled by ffmpeg. There's a nice writeup by Mux on how to convert an mp4 video to HLS with ffmpeg, and it works great. I put together a little Python script for myself to construct the ffmpeg command line, but you can do it by hand; the script just does some of the boilerplate for you. Very useful.
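For a rough idea of what that boilerplate amounts to, here is a sketch of a single-rendition conversion (the filenames, resolution and bitrates are mine, not from my script; the Mux article covers the full multi-rendition setup):
$ ffmpeg -i input.mp4 \
    -vf "scale=-2:720" -c:v libx264 -b:v 2500k -c:a aac \
    -hls_time 6 -hls_playlist_type vod \
    -hls_segment_filename "720p_%03d.ts" \
    720p.m3u8
You then repeat that for each resolution you made for the responsive step and list the resulting playlists in a master .m3u8, which is what lets the browser switch between them as bandwidth changes.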
So now I can serve up a video which adapts to the viewer's viewing conditions, and that's just what I wanted. I have to pay for the bandwidth now (which is the other benefit of having YouTube do it, and one I now don't get) but that's worth it for this, I think. Cheers to Scott and Mux for explaining all this stuff.
Ubuntu Budgie 25.04 (Plucky Puffin) is a Standard Release with 9 months of support by your distro maintainers and Canonical, from April 2025 to Jan 2026. These release notes showcase the key takeaways for 24.10 upgraders to 25.04. Please note – there is no direct upgrade path from 24.04.2 to 25.04; you must uplift to 24.10 first or perform a fresh install. In these release notes the areas…
Watch a conversation reflecting on the 20th anniversary of Git, the version control system created by Linus Torvalds. He discusses his initial motivations for developing Git as a response to the limitations of existing systems like CVS and BitKeeper, and his desire to establish a better tool for the open-source community. Torvalds explains the processes […]
I’m pleased to announce the uCareSystem 25.04.09, the latest version of the all-in-one system maintenance tool for Ubuntu, Linux Mint, Debian and its derivatives, used by thousands ! This release brings some major changes internal changes, fixes and improvements under the hood. A new version of uCareSystem is out, and this time the focus is […]
Ubuntu MATE 24.10 is more of what you like, stable MATE Desktop on top of current Ubuntu. Read on to learn more 👓️
Ubuntu MATE 24.10
Thank you! 🙇
My sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏
I’d like to acknowledge the close collaboration with the Ubuntu Foundations team and the Ubuntu flavour teams, in particular Erich Eickmeyer who pushed critical fixes while I was travelling.
Thank you! 💚
Ships stable MATE Desktop 1.26.2 with a handful of bug fixes 🐛
Switched back to Slick Greeter (replacing Arctica Greeter) due to a race condition in the boot process which resulted in the display manager failing to initialise.
Returning to Slick Greeter reintroduces the ability to easily configure the login screen via a graphical application, something users have been requesting be re-instated 👍
Ubuntu MATE 24.10 .iso 📀 is now 3.3GB 🤏 Down from 4.1GB in the 24.04 LTS release.
This is thanks to some fixes in the installer that no longer require as many packages in the live-seed.
Login Window
What didn’t change since the Ubuntu MATE 24.04 LTS?
If you follow upstream MATE Desktop development, then you’ll have noticed that Ubuntu MATE 24.10 doesn’t ship with the recently released MATE Desktop 1.28 🧉
I have prepared packaging for MATE Desktop 1.28, along with the associated components, but encountered some bugs and regressions 🐞 I wasn’t able to get things to a standard I’m happy to ship by default, so it is tried and true MATE 1.26.2 one last time 🪨
Major Applications
Accompanying MATE Desktop 1.26.2 🧉 and Linux 6.11 🐧 are Firefox 131 🔥🦊,
Celluloid 0.27 🎥, Evolution 3.54 📧, LibreOffice 24.8.2 📚
See the Ubuntu 24.10 Release Notes
for details of all the changes and improvements that Ubuntu MATE benefits from.
There are no offline upgrade options for Ubuntu MATE. Please ensure you have
network connectivity to one of the official mirrors or to a locally accessible
mirror and follow the instructions above.
The Linux Containers project maintains Long Term Support (LTS) releases for its core projects. Those come with 5 years of support from upstream, with the first two years including bugfixes, minor improvements and security fixes, and the remaining three years getting only security fixes.
This is now the fourth round of bugfix releases for LXC, LXCFS and Incus 6.0 LTS.
LXC
LXC is the oldest Linux Containers project and the basis for almost every other one of our projects. This low-level container runtime and library was first released in August 2008, led to the creation of projects like Docker and today is still actively used directly or indirectly on millions of systems.
New LXC_IPV6_ENABLE lxc-net configuration key to turn IPv6 on/off
Fixed ability to attach to application containers with non-root entry point
LXCFS
LXCFS is a FUSE filesystem used to work around some shortcomings of the Linux kernel when it comes to reporting available system resources to processes running in containers. The project started in late 2014 and is still actively used by Incus today, as well as by some Docker and Kubernetes users.
Properly handle SLAB reclaimable memory in meminfo
Handle empty cpuset strings
Fix potential sleep interval overflows
Incus
Incus is our most actively developed project. This virtualization platform is just over a year old but has already seen over 3500 commits by over 120 individual contributors. Its first LTS release made it usable in production environments and significantly boosted its user base.
Distrobuilder
Due to the nature of this tool, it doesn’t get LTS releases as its feature set is extremely stable, but it still needs to receive very frequent updates to handle changes in the various Linux distributions that it builds. Distrobuilder 3.2 was released at the same time as the LTS releases, providing an up-to-date snapshot of that project.
systemd generator handles newer Linux distributions
Support for Alpaquita
What’s next?
We’re expecting another LTS bugfix release for the 6.0 branches in the third quarter of 2025. In the meantime, Incus will keep going with its usual monthly feature release cadence.
Thanks
This LTS release update was made possible thanks to funding provided by the Sovereign Tech Fund (now part of the Sovereign Tech Agency).
The Sovereign Tech Fund supports the development, improvement, and maintenance of open digital infrastructure. Its goal is to sustainably strengthen the open source ecosystem, focusing on security, resilience, technological diversity, and the people behind the code.
A couple weeks ago I was playing around with a multiple architecture CI setup with another team, and that led me to pull out my StarFive VisionFive 2 SBC again to see where I could make it this time with an install.
I left off about a year ago when I succeeded in getting an older version of Debian on it, but attempts to get the tooling to install a more broadly supported version of U-Boot to the SPI flash were unsuccessful. Then I got pulled away to other things, effectively just bringing my VF2 around to events as a prop for my multiarch talks – which it did beautifully! I even had one conference attendee buy one to play with while sitting in the audience of my talk. Cool.
I was delighted to learn how much progress had been made since I last looked. Canonical has published more formalized documentation: Install Ubuntu on the StarFive VisionFive 2 in the place of what had been a rather cluttered wiki page. So I got all hooked up and began my latest attempt.
My first step was to grab the pre-installed server image. I got that installed, but struggled a little with persistence once I unplugged the USB UART adapter and rebooted. I then decided just to move forward with the Install U-Boot to the SPI flash instructions. I struggled a bit here for two reasons:
The documentation today leads off with having you download the livecd, but you actually want the pre-installed server image to flash U-Boot; the livecd step doesn’t come until later. Admittedly, the instructions do say this, but I wasn’t reading carefully enough and was more focused on the steps.
I couldn’t get the 24.10 pre-installed image to work for flashing U-Boot, but once I went back to the 24.04 pre-installed image it worked.
And then I had to fly across the country. We’re spending a couple weeks around spring break here at our vacation house in Philadelphia, but the good thing about SBCs is that they’re incredibly portable and I just tossed my gear into my backpack and brought it along.
Thanks to Emil Renner Berthing (esmil) on the Ubuntu Matrix server for providing me with enough guidance to figure out where I had gone wrong above, and got me on my way just a few days after we arrived in Philly.
With the newer U-Boot installed, I was able to use the Ubuntu 24.04 livecd image on a micro SD Card to install Ubuntu 24.04 on an NVMe drive! That’s another new change since I last looked at installation, using my little NVMe drive as a target was a lot simpler than it would have been a year ago. In fact, it was rather anticlimactic, hah!
And with that, I was fully logged in to my new system.
It has 4 cores, so here’s the full output: vf2-cpus.txt
What will I do with this little single board computer? I don’t know yet. I joked with my husband that I’d “install Debian on it and forget about it like everything else” but I really would like to get past that. I have my little multiarch demo CI project in the wings, and I’ll probably loop it into that.
Since we were in Philly, I had a look over at my long-neglected Raspberry Pi 1B that I have here. When we first moved in, I used it as an ssh tunnel to get to this network from California. It was great for that! But now we have a more sophisticated network setup between the houses with a VLAN that connects them, so the ssh tunnel is unnecessary. In fact, my poor Raspberry Pi fell off the WiFi network when we switched to 802.1X just over a year ago and I never got around to getting it back on the network. I connected it to a keyboard and monitor and started some investigation. Honestly, I’m surprised the little guy was still running, but it’s doing fine!
And it had been chugging along running Raspbian based on Debian 9. Well, that’s worth an upgrade. But not just an upgrade: I didn’t want to stress the device and SD card, so I figured flashing it with the latest version of Raspberry Pi OS was the right way to go. It turns out, it’s been a long time since I’ve done a Raspberry Pi install.
I grabbed the Raspberry Pi Imager and went on my way. It’s really nice. I went with the Raspberry Pi OS Lite install since it’s the Pi 1B and I didn’t want a GUI. The imager asked the usual installation questions, loaded up my SSH key, and the card was ready to go into my Pi.
The only thing I need to finish sorting out is networking. The old USB WiFi adapter I have in it doesn’t initialize until after the system has booted, so wpa_supplicant can’t negotiate with the access point at boot time. I’ll have to play around with it. And what will I use this for once I do, now that it’s not an SSH tunnel? I’m not sure yet.
I realize this blog post isn’t very deep or technical, but I guess that’s the point. We’ve come a long way in recent years in support for non-x86 architectures, so installation has gotten a lot easier across several of them. If you’re new to playing around with architectures, I’d say it’s a really good time to start. You can hit the ground running with some wins, and then play around as you go with various things you want to help get working. It’s a lot of fun, and the years I spent playing around with Debian on Sparc back in the day definitely laid the groundwork for the job I have at IBM working on mainframes. You never know where a bit of technical curiosity will get you.
python-mastodon (in the course of which I found #1101140 in blurhash-python and proposed a small cleanup to slidge)
python-model-bakery
python-multidict
python-pip
python-rsyncmanager
python-service-identity
python-setproctitle
python-telethon
python-trio
python-typing-extensions
responses
setuptools-scm
trove-classifiers
zope.testrunner
In bookworm-backports, I updated python-django to 3:4.2.19-1.
Although Debian’s upgrade to python-click 8.2.0 was reverted for the time being, I fixed a number of related problems anyway since we’re going to have to deal with it eventually:
dh-python dropped its dependency on python3-setuptools in 6.20250306, which was long overdue, but it had quite a bit of fallout; in most cases this was simply a question of adding build-dependencies on python3-setuptools, but in a few cases there was a missing build-dependency on python3-typing-extensions which had previously been pulled in as a dependency of python3-setuptools. I fixed these bugs resulting from this:
There was a dnspython autopkgtest regression on s390x. I independently tracked that down to a pylsqpack bug and came up with a reduced test case before realizing that Pranav P had already been working on it; we then worked together on it and I uploaded their patch to Debian.
I finally gave in and joined the Debian Science Team this month, since it often has a lot of overlap with the Python team, and Freexian maintains several packages under it.
I fixed a uscan error in hdf5-blosc (maintained by Freexian), and upgraded it to a new upstream version.
I fixed a build failure with GCC 15 in yubihsm-shell (maintained by Freexian).
Prompted by a CI failure in debusine, I submitted a large batch of spelling fixes and some improved static analysis to incus (#1777, #1778) and distrobuilder.
Thanks to the hard work of our contributors, we are happy to announce the release of Lubuntu's Plucky Beta, which will become Lubuntu 25.04. This is a snapshot of the daily images. Approximately two months ago, we posted an Alpha-level update. While some information is duplicated below, that contains an accurate, concise technical summary of […]
The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 25.04, codenamed “Plucky Puffin”.
While this beta is reasonably free of any showstopper installer bugs, you will find some bugs within. This image is, however, mostly representative of what you will find when Ubuntu Studio 25.04 is released on April 17, 2025.
We encourage everyone to try this image and report bugs to improve our final release.
Special Notes
The Ubuntu Studio 25.04 image (ISO) exceeds 4 GB, so it cannot be downloaded to some file systems such as FAT32 and may not be usable when burned to a standard DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick from the ISO image or burning it to a Dual-Layer DVD.
Full updated information, including Upgrade Instructions, is available in the Release Notes.
New Features This Release
This release is more evolutionary than revolutionary. While we work hard to bring new features, this was not a cycle with anything major to report. Here are a few highlights:
Plasma 6.3 is now the default desktop environment, an upgrade from Plasma 6.1.
PipeWire continues to improve with every release; this release ships version 1.2.7.
The Default Panel Icons are now back. The default panel now populates depending on which applications are available, so that there are never empty icons if you choose the minimal install and then install one or more of our featured applications. This refresh of the default is done on every reboot, so it’s not a live update; to pick up changes immediately, it must be refreshed manually from the user side, either by selecting the Global Theme or by removing the panel and adding “Ubuntu Studio Default Panel”.
While not included in this Beta, Darktable will be upgraded to 5.0.0 before final release.
Major Package Upgrades
Ardour version 8.12.0
Qtractor version 1.5.3
Audacity version 3.7.3
digiKam version 8.5.0
Kdenlive version 24.12.3
Krita version 5.2.9
GIMP version 3.0.0
There are many other improvements, too numerous to list here. We encourage you to look around the freely-downloadable ISO image.
Known Issues
The installer was supposed to be able to keep the screen from locking, but the screen will still lock after 15 minutes. Please keep the screen active during installation. As a workaround, if you know you will be leaving your machine unattended during installation, press Alt-Space to invoke KRunner (this works even from Install Ubuntu Studio, not just the Try Ubuntu Studio live environment) and type “System Settings”. From there, search for “Screen Locking” and deactivate “Lock automatically after…”.
Another possible workaround is to click on “Switch User” and then re-login as “Live User” without a password if this happens.
The installer background and slideshow still show the Oracular Oriole mascot. This is a work in progress, to be fixed in a daily image sometime between now and the final release.
Additionally, we need financial contributions. Our project lead, Erich Eickmeyer, is working long hours on this project and trying to generate a part-time income. Go here to see how you can contribute financially (options are also in the sidebar).
Frequently Asked Questions
Q: Does Ubuntu Studio contain snaps? A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.
Thunderbird is also a snap this cycle in order for the maintainers to get security patches delivered faster.
Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be packaged as a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.
Also, to keep theming consistent, all included themes are snapped in addition to the included .deb versions, so that snaps stay consistent with our themes.
We are working with Canonical to make sure that the quality of snaps improves with each release, so please give snaps a chance instead of writing them off completely.
Q: If I install this Beta release, will I have to reinstall when the final release comes out? A: No. If you keep it updated, your installation will automatically become the final release. However, if Audacity returns to the Ubuntu repositories before the final release, you might end up with a double installation of Audacity. Removal instructions for one or the other will be made available in a future post.
Q: Will you make an ISO with {my favorite desktop environment}? A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.
Q: What if I don’t want all these packages installed on my machine? A: We now include a minimal install option. Install using the minimal install option, then use Ubuntu Studio Installer to install what you need for your very own content creation studio.
The Open Source Initiative has two classes of board seats: Affiliate seats, and Individual Member seats.
In the upcoming election, each affiliate can nominate a candidate, and each affiliate can cast a vote for the Affiliate candidates, but there's only 1 Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a similar platform to me, especially with regards to the OSAID, I decided to run as part of an aligned "ticket" as an Individual Member to avoid contention for the 1 Affiliate seat.
Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February.
I was dismayed when I received the following mail from Nick Vidal:
Dear Luke,
Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.
We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.
The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.
I was not able to participate in the "potential board director" info sessions accordingly, but people who attended heard that the importance of accommodating differing TZ's was discussed during the info session, and that OSI representatives mentioned they try to accommodate TZ's of everyone. This seems in sharp contrast with the above policy.
I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle.
Upd, N.B.: to people writing about this, I use they/them pronouns
I can’t remember exactly the joke I was making at the time in my
work’s slack instance (I’m sure it wasn’t particularly
funny, though; and not even worth re-reading the thread to work out), but it
wound up with me writing a UEFI binary for the punchline. Not to spoil the
ending but it worked - no pesky kernel, no messing around with “userland”. I
guess the only part of this you really need to know for the setup here is that
it was a Severance joke,
which is some fantastic TV. If you haven’t seen it, this post will seem perhaps
weirder than it actually is. I promise I haven’t joined any new cults. For
those who have seen it, the payoff to my joke is that I wanted my machine to
boot directly to an image of
Kier Eagan.
As for how to do it – I figured I’d give the uefi
crate a shot, and see how it is to use,
since this is a low stakes way of trying it out. In general, this isn’t the
sort of thing I’d usually post about – except this wound up being easier and
way cleaner than I thought it would be. That alone is worth sharing, in the
hopes someone comes across this in the future and feels like they, too, can
write something fun targeting the UEFI.
First things first – gotta create a Rust project (I’ll leave that part to you
depending on your life choices), and add the uefi crate to your
Cargo.toml. You can either use cargo add or add a line like this by hand:
uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }
We also need to teach cargo about how to go about building for the UEFI target,
so we need to create a rust-toolchain.toml with one (or both) of the UEFI
targets we’re interested in:
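Something like this is all it needs (a sketch; the channel is up to you, and I’m assuming the standard x86_64 and aarch64 UEFI target triples):
[toolchain]
channel = "stable"
targets = ["x86_64-unknown-uefi", "aarch64-unknown-uefi"]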
Unfortunately, I wasn’t able to use the
image crate,
since it won’t build against the uefi target. This looks like it’s
because rustc had no way to compile the required floating point operations
within the image crate without hardware floating point instructions
specifically. Rust tends to punt a lot of that to libm usually, so this isn't
entirely shocking given we're no_std for a non-hardfloat target.
So-called “softening” requires a software floating point implementation that
the compiler can use to “polyfill” (feels weird to use the term polyfill here,
but I guess it’s spiritually right?) the lack of hardware floating point
operations, which rust hasn’t implemented for this target yet. As a result, I
changed tactics, and figured I’d use ImageMagick to pre-compute the pixels
from a jpg, rather than doing it at runtime. A bit of a bummer, since I need
to do more out of band pre-processing and hardcoding, and updating the image
kinda sucks as a result – but it’s entirely manageable.
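A sketch of that pre-processing with ImageMagick (the filenames and the target resolution here are mine; tweak to taste):
$ convert kier.jpg -resize 1280x800 kier.full.jpg
$ convert kier.full.jpg -depth 8 kier.full.rgba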
This will take our input file (kier.jpg), resize it to get as close to the
desired resolution as possible while maintaining aspect ratio, then convert it
from a jpg to a flat array of 4 byte RGBA pixels. Critically, it’s also
important to remember that the size of the kier.full.jpg file may not actually
be the requested size – it will not change the aspect ratio, so be sure to
make a careful note of the resulting size of the kier.full.jpg file.
Last step with the image is to compile it into our Rust binary, since we
don’t want to struggle with trying to read this off disk, which is thankfully
real easy to do.
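A sketch of what that looks like with include_bytes! (the constant names match what the code below expects; the dimensions are placeholders for whatever your conversion actually produced):
// Raw RGBA bytes from the ImageMagick step, baked in at compile time.
const KIER: &[u8] = include_bytes!("../kier.full.rgba");
// Placeholders: use the real dimensions of your kier.full.jpg here.
const KIER_WIDTH: usize = 1280;
const KIER_HEIGHT: usize = 641;
// 4 bytes per pixel, since we converted to RGBA.
const KIER_PIXEL_SIZE: usize = 4;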
Remember to use the width and height from the final kier.full.jpg file as the
values for KIER_WIDTH and KIER_HEIGHT. KIER_PIXEL_SIZE is 4, since we
have 4 byte wide values for each pixel as a result of our conversion step into
RGBA. We’ll only use RGB, and if we ever drop the alpha channel, we can drop
that down to 3. I don’t entirely know why I kept alpha around, but I figured it
was fine. My kier.full.jpg image winds up shorter than the requested height
(which is also qemu’s default resolution for me) – which means we’ll get a
semi-annoying black band under the image when we go to run it – but it’ll
work.
Anyway, now that we have our image as bytes, we can get down to work, and
write the rest of the code to handle moving bytes around from in-memory
as a flat block of pixels, and request that they be displayed using the
UEFI GOP. We’ll just need to hack up a container
for the image pixels and teach it how to blit to the display.
/// RGB Image to move around. This isn't the same as an
/// `image::RgbImage`, but we can associate the size of
/// the image along with the flat buffer of pixels.
struct RgbImage {
    /// Size of the image as a tuple, as the
    /// (width, height)
    size: (usize, usize),
    /// raw pixels we'll send to the display.
    inner: Vec<BltPixel>,
}

impl RgbImage {
    /// Create a new `RgbImage`.
    fn new(width: usize, height: usize) -> Self {
        RgbImage {
            size: (width, height),
            inner: vec![BltPixel::new(0, 0, 0); width * height],
        }
    }

    /// Take our pixels and request that the UEFI GOP
    /// display them for us.
    fn write(&self, gop: &mut GraphicsOutput) -> Result {
        gop.blt(BltOp::BufferToVideo {
            buffer: &self.inner,
            src: BltRegion::Full,
            dest: (0, 0),
            dims: self.size,
        })
    }
}

impl Index<(usize, usize)> for RgbImage {
    type Output = BltPixel;

    fn index(&self, idx: (usize, usize)) -> &BltPixel {
        let (x, y) = idx;
        &self.inner[y * self.size.0 + x]
    }
}

impl IndexMut<(usize, usize)> for RgbImage {
    fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
        let (x, y) = idx;
        &mut self.inner[y * self.size.0 + x]
    }
}
We also need to do some basic setup to get a handle to the UEFI
GOP via the UEFI crate (using
uefi::boot::get_handle_for_protocol
and
uefi::boot::open_protocol_exclusive
for the GraphicsOutput
protocol), so that we have the object we need to pass to RgbImage in order
for it to write the pixels to the display. The only trick here is that the
display on the booted system can really be any resolution – so we need to do
some capping to ensure that we don’t write more pixels than the display can
handle. Writing fewer than the display’s maximum seems fine, though.
fn praise() -> Result {
    let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
    let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;

    // Get the (width, height) that is the minimum of
    // our image and the display we're using.
    let (width, height) = gop.current_mode_info().resolution();
    let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));

    let mut buffer = RgbImage::new(width, height);
    for y in 0..height {
        for x in 0..width {
            let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
            let pixel = &mut buffer[(x, y)];
            pixel.red = KIER[idx_r];
            pixel.green = KIER[idx_r + 1];
            pixel.blue = KIER[idx_r + 2];
        }
    }
    buffer.write(&mut gop)?;
    Ok(())
}
Not so bad! A bit tedious – we could solve some of this by turning
KIER into an RgbImage at compile-time using some clever Cow and
const tricks and implement blitting a sub-image of the image – but this
will do for now. This is a joke, after all, let’s not go nuts. All that’s
left with our code is for us to write our main function and try and boot
the thing!
#[entry]
fn main() -> Status {
    uefi::helpers::init().unwrap();
    praise().unwrap();
    boot::stall(100_000_000);
    Status::SUCCESS
}
If you’re following along at home and so interested, the final source is over at
gist.github.com.
We can go ahead and build it using cargo (as is our tradition) by targeting
the UEFI platform.
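Assuming the x86_64 UEFI target from the rust-toolchain.toml above, that's just:
$ cargo build --release --target x86_64-unknown-uefi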
While I can definitely get my machine to boot these blobs to test, I figured
I’d save myself some time by using QEMU to test without a full boot.
If you’ve not done this sort of thing before, we’ll need two packages,
qemu and ovmf. It’s a bit different than most invocations of qemu you
may see out there – so I figured it’d be worth writing this down, too.
$ doas apt install qemu-system-x86 ovmf
qemu has a nice feature where it’ll create us an EFI partition as a drive and
attach it to the VM off a local directory – so let’s construct an EFI
partition file structure, and drop our binary into the conventional location.
If you haven’t done this before, and are only interested in running this in a
VM, don’t worry too much about it, a lot of it is convention and this layout
should work for you.
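As a sketch, with esp as the directory name and assuming the built binary is called boot2kier.efi (cargo names it after your crate), the conventional x86_64 fallback boot path looks like this:
$ mkdir -p esp/efi/boot
$ cp target/x86_64-unknown-uefi/release/boot2kier.efi esp/efi/boot/bootx64.efi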
With all this in place, we can kick off qemu, booting it in UEFI mode using
the ovmf firmware, attaching our EFI partition directory as a drive to
our VM to boot off of.
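An invocation along these lines does the trick (the OVMF firmware path is distribution-specific, so treat it as an assumption):
$ qemu-system-x86_64 -enable-kvm -m 1G \
    -bios /usr/share/ovmf/OVMF.fd \
    -drive format=raw,file=fat:rw:esp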
If all goes well, soon you’ll be met with the all knowing gaze of
Chosen One, Kier Eagan. The thing that really impressed me about all
this is this program worked first try – it all went so boringly
normal. Truly, kudos to the uefi crate maintainers, it’s incredibly
well done.
Booting a live system
Sure, we could stop here, but anyone can open up an app window and see a
picture of Kier Eagan, so I knew I needed to finish the job and boot a real
machine up with this. In order to do that, we need to format a USB stick.
BE SURE /dev/sda IS CORRECT IF YOU’RE COPY AND PASTING. All my drives
are NVMe, so BE CAREFUL – if you use SATA, it may very well be your
hard drive! Please do not destroy your computer over this.
$ doas fdisk /dev/sda
Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4014079, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4014079, default 4014079):
Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): ef
Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Once that looks good (depending on your flavor of udev you may or
may not need to unplug and replug your USB stick), we can go ahead
and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR
USB STICK) and write our EFI directory to it.
$ doas mkfs.fat /dev/sda1
$ doas mount /dev/sda1 /mnt
$ cp -r esp/efi /mnt
$ find /mnt
/mnt
/mnt/efi
/mnt/efi/boot
/mnt/efi/boot/bootx64.efi
Of course, naturally, devotion to Kier shouldn’t mean backdooring your system.
Disabling Secure Boot runs counter to the Core Principles, such as Probity, and
not doing this would surely run counter to Verve, Wit and Vision. This bit does
require that you've taken the step to enroll a
MOK and know how
to use it; right about now is when we can use sbsign to sign the UEFI binary
we want to boot from, so we can keep enforcing Secure Boot. The details of how
this command should be run specifically are likely something you'll need to work
out depending on how you've decided to manage your MOK.
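For reference, a typical sbsign invocation looks something like the following; the key and certificate paths are placeholders for wherever your MOK material lives, and the binary paths mirror the sketch above:
$ sbsign --key MOK.priv --cert MOK.pem \
    --output esp/efi/boot/bootx64.efi \
    target/x86_64-unknown-uefi/release/boot2kier.efi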
I figured I’d leave a signed copy of boot2kier at
/boot/efi/EFI/BOOT/KIER.efi on my Dell XPS 13, with Secure Boot enabled
and enforcing; it was just a matter of going into my BIOS to add the right
boot option, which was no sweat. I’m sure there is a way to do it using
efibootmgr, but I wasn’t smart enough to do that quickly. I let ‘er rip,
and it booted up and worked great!
It was a bit hard to get a video of my laptop, though – but lucky for me, I
have a Minisforum Z83-F sitting around (which, until a few weeks ago was running
the annual http server to control my christmas tree
) – so I grabbed it out of the christmas bin, wired it up to a video capture
card I have sitting around, and figured I’d grab a video of me booting a
physical device off the boot2kier USB stick.
Attentive readers will notice the image of Kier is smaller than the qemu-booted
system – which just means our real machine has a larger GOP display
resolution than qemu, which makes sense! We could write some fancy resize code
(sounds annoying), center the image (can’t be assed but should be the easy way
out here) or resize the original image (pretty hardware specific workaround).
Additionally, you can make out the image being written to the display before us
(the Minisforum logo) behind Kier, which is really cool stuff. If we were real
fancy we could write blank pixels to the display before blitting Kier, but,
again, I don’t think I care to do that much work.
But now I must away
If I wanted to keep this joke going, I’d likely try and find a copy of the
original
video when Helly 100%s her file
and boot into that – or maybe play a terrible midi PC speaker rendition of
Kier, Chosen One, Kier after
rendering the image. I, unfortunately, don’t have any friends involved with
production (yet?), so I reckon all that’s out for now. I’ll likely stop playing
with this – the joke was done and I’m only writing this post because of how
great everything was along the way.
All in all, this reminds me so much of building a homebrew kernel to boot a
system into – but like, good, though, and it’s a nice reminder of both how
fun this stuff can be, and how far we’ve come. UEFI protocols are light-years
better than how we did it in the dark ages, and the tooling for this is SO
much more mature. Booting a custom UEFI binary is miles ahead of trying to
boot your own kernel, and I can’t believe how good the uefi crate is
specifically.
Praise Kier! Kudos, to everyone involved in making this so delightful ❤️.
Wireshark is an essential tool for network analysis, and staying up to date with the latest releases ensures access to new features, security updates, and bug fixes. While Ubuntu’s official repositories provide stable versions, they are often not the most recent.
Wearing both Wireshark Core Developer and Debian/Ubuntu package maintainer hats, I’m happy to help the Wireshark team in providing updated packages for all supported Ubuntu versions through dedicated PPAs. This post outlines how you can install the latest stable and nightly Wireshark builds on Ubuntu.
Latest Stable Releases
For users who want the most up-to-date stable Wireshark version, we maintain a PPA with backports of the latest releases:
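Adding the PPA and installing is the usual two-step; I'm assuming the Wireshark developers' long-standing stable PPA name here, so double-check it against the PPA page:
$ sudo add-apt-repository ppa:wireshark-dev/stable
$ sudo apt update
$ sudo apt install wireshark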
For those who want to test new features before they are officially released, nightly builds are also available. These builds track the latest development code and you can watch them cooking on their Launchpad recipe page.
Note: Nightly builds may contain experimental features and are not guaranteed to be as stable as the official releases. Also, they target only Ubuntu 24.04 and later, including the current development release.
If you need to revert to the stable version later, remove the nightly PPA and reinstall Wireshark:
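Assuming the nightly PPA follows the usual naming (ppa:wireshark-dev/nightly is my assumption here), that would look like:
$ sudo ppa-purge ppa:wireshark-dev/nightly
$ sudo apt install wireshark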
Throughout my career, I’ve had the privilege of working with organizations that create widely-used open source tools. The popularity of these tools is evident through their impressive download statistics, strong community presence, and engagement both online and at events.
At Influxdata, I was part of the Telegraf team, where we witnessed substantial adoption through downloads and active usage, reflected in our vibrant bug tracker.
What makes Syft and Grype particularly exciting, beyond their permissive licensing, consistent release cycle, dedicated developer team, and distinctive mascots, is how they serve as building blocks for other tools and services.
Syft isn’t just a standalone SBOM generator - it’s a library that developers can integrate into their own tools. Some organizations even build their own SBOM generators and vulnerability tools directly from our open source foundation!
(I find it delightfully meta to discover syft inside other tools using syft itself)
This collaborative building upon existing tools mirrors how Linux distributions often build upon other Linux distributions. Like Ubuntu and Telegraf, we see countless individuals and organizations creating innovative solutions that extend beyond the core capabilities of Syft and Grype. It’s the essence of open source - a multiplier effect that comes from creating accessible, powerful tools.
While we may not always know exactly how and where these tools are being used (and sometimes, rightfully so, it’s not our business), there are many cases where developers and companies want to share their innovative implementations.
I’m particularly interested in these stories because they deserve to be shared. I’ve been exploring public repositories like the GitHub network dependents for syft, grype, sbom-action, and scan-action to discover where our tools are making an impact.
The adoption has been remarkable!
I reached out to several open source projects to learn about their implementations, and Nicolas Vuilamy from MegaLinter was the first to respond - which brings us full circle.
Tired of waiting for apt to finish installing packages? Wish there were a way to make your installations blazingly fast without caring about minor things like, oh, data integrity? Well, today is your lucky day!
I’m thrilled to introduce apt-eatmydata, now available for Debian and all supported Ubuntu releases!
What Is apt-eatmydata?
If you’ve ever used libeatmydata, you know it’s a nifty little hack that disables fsync() and friends, making package installations way faster by skipping unnecessary disk writes. Normally, you’d have to remember to wrap apt commands manually, like this:
eatmydata apt install texlive-full
But who has time for that? apt-eatmydata takes care of this automagically by integrating eatmydata seamlessly into apt itself! That means every package install is now turbocharged—no extra typing required.
How to Get It
Debian
If you’re on Debian unstable/testing (or possibly soon in stable-backports), you can install it directly with:
sudo apt install apt-eatmydata
Ubuntu
Ubuntu users already enjoy faster package installation thanks to zstd-compressed packages, and to switch to an even higher gear I’ve backported apt-eatmydata to all supported Ubuntu releases. Just add this PPA and install:
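The pattern is the usual one; the PPA name below is a placeholder, so use the one linked from this post:
$ sudo add-apt-repository ppa:<apt-eatmydata-ppa>
$ sudo apt update
$ sudo apt install apt-eatmydata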
And boom! Your apt install times are getting a serious upgrade. Let’s run some tests…
# pre-download package to measure only the installation
$ sudo apt install -d linux-headers-6.8.0-53-lowlatency
...
# installation time is 9.35s without apt-eatmydata:
$ sudo time apt install linux-headers-6.8.0-53-lowlatency
...
2.30user 2.12system 0:09.35elapsed 47%CPU (0avgtext+0avgdata 174680maxresident)k
32inputs+1495216outputs (0major+196945minor)pagefaults 0swaps
$ sudo apt install apt-eatmydata
...
$ sudo apt purge linux-headers-6.8.0-53-lowlatency
# installation time is 3.17s with apt-eatmydata:
$ sudo time eatmydata apt install linux-headers-6.8.0-53-lowlatency
2.30user 0.88system 0:03.17elapsed 100%CPU (0avgtext+0avgdata 174692maxresident)k
0inputs+205664outputs (0major+198099minor)pagefaults 0swaps
apt-eatmydata just made installing Linux headers 3x faster!
But Wait, There’s More!
If you’re automating CI builds, there’s even a GitHub Action to make your workflows faster; it essentially does what apt-eatmydata does, and it sets itself up in less than a second! Check it out here: GitHub Marketplace: apt-eatmydata
Should You Use It?
Warning: apt-eatmydata is not for all production environments. If your system crashes mid-install, you might end up with a broken package database. But for throwaway VMs, containers, and CI pipelines? It’s an absolute game-changer. I use it on my laptop, too.
So go forth and install recklessly fast!
If you run into any issues, feel free to file a bug or drop a comment. Happy hacking!
Everyone's got a newsletter these days (like everyone's got a podcast). In general, I think this is OK: instead of going through a middleman publisher, have a direct connection from you to the people who want to read what you say, so that that audience can't be taken away from you.
On the other hand, I don't actually like newsletters. I don't really like giving my email address to random people1, and frankly an email app is not a great way to read long-form text! There are many apps which are a lot better at this.
There is a solution to this and the solution is called RSS. Andy Bell explains RSS and this is exactly how I read newsletters. If I want to read someone's newsletter and it's on Substack, or ghost.io, or buttondown.email, what I actually do is subscribe to their newsletter but what I'm actually subscribing to is their RSS feed. This sections off newsletter stuff into a completely separate app that I can catch up on when I've got the time, it means that the newsletter owner (or the site they're using) can't decide to "upsell" me on other stuff they do that I'm not interested in, and it's a better, nicer reading experience than my mail app.2
I use NetNewsWire on my iOS phone, but there are a bunch of other newsreader apps for every platform and you should choose whichever one you want. Andy lists a bunch, above.
The question, of course, then becomes: how do you find the RSS feed for a thing you want to read?3 Well, it turns out... you don't have to.
When you want to subscribe to a newsletter, you literally just put the web address of the newsletter itself into your RSS reader, and that reader will take care of finding the feed and subscribing to it, for you. It's magic. Hooray! I've tested this with substack, with ghost.io, with buttondown.email, and it works with all of them. You don't need to do anything.
If that doesn't work, then there is one neat alternative you can try, though. Kill The Newsletter will give you an email address for any site you name, and provide the incoming emails to that as an RSS feed. So, if you've found a newsletter which doesn't exist on the web (boo hiss!) and doesn't provide an RSS feed, then you go to KTN, it gives you some randomly-generated email address, you subscribe to the intransigent newsletter with that email address, and then you can subscribe to the resultant feed in your RSS reader. It's dead handy.
If you run a newsletter and it doesn't have an RSS feed and you want it to have, then have a look at whatever newsletter software you use; it will almost certainly provide a way to create one, and you might have to tick a box. (You might also want to complain to the software creators that that box wasn't ticked by default.) If you've got an RSS feed for the newsletter that you write, but putting your site's address into an RSS reader doesn't find that RSS feed, then what you need is RSS autodiscovery, which is the "magic" alluded to above; you add a line to your site's HTML in the <head> section which reads <link rel="alternate" type="application/rss+xml" title="RSS" href="https://URL/of/your/feed"> and then it'll work.
I like this. Read newsletters at my pace, in my choice of app, on my terms. More of that sort of thing.
despite how it's my business to do so and it's right there on the front page of the website, I know, I know ↩
Is all of this doable in my mail client? Sure. I could set up filters, put newsletters into their own folders/labels, etc. But that's working around a problem rather than solving it ↩
I suggested to Andy that he ought to write this post explaining how to do this and then realised that I should do it myself and stop being such a lazy snipe, so here it is ↩
For several years, DigitalOcean has been an important sponsor of Ubuntu Budgie. They provide the infrastructure we need to host our website at https://ubuntubudgie.org and our Discourse community forum at https://discourse.ubuntubudgie.org. Maybe you are familiar with them. Maybe you use them in your personal or professional life. Or maybe, like me, you didn’t really see how they would benefit you.
Whenever something touches the red cap, the system wakes up from suspend/s2idle.
I’ve used a ThinkPad T14 Gen 3 AMD for 2 years, and I recently purchased a T14 Gen 5 AMD. The previous system, the Gen 3, annoyed me so much because the laptop randomly woke up from suspend on its own, even inside a backpack, heated up the confined air in it, and drained the battery pretty fast as a consequence. Basically, it was too sensitive to any events. For example, the system would wake up from suspend whenever a USB Type-C cable was plugged in as a power source, or whenever something touched the TrackPoint, even if the display of the closed lid slightly made contact with the red cap. It was uncontrollable.
I was hoping that Gen 5 would make a difference, and it did when it comes to the power source event. However, frequent wakeups due to the TrackPoint event remained the same so I started to dig in.
Disabling touchpad as a wakeup source on T14 Gen 5 AMD
Disabling touchpad events as a wakeup source is straightforward. The touchpad device, ELAN0676:00 04F3:3195 Touchpad, can be found in the udev device tree as follows.
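One way to locate it is to dump the udev database and search for the device (a sketch; any udev query that shows the device path works):
$ udevadm info --export-db | grep -B 10 'ELAN0676'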
And you can get all attributes including parent devices like the following.
$ udevadm info --attribute-walk -p /devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12
...
  looking at device '/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12':
    KERNEL=="input12"
    SUBSYSTEM=="input"
    DRIVER==""
    ...
    ATTR{name}=="ELAN0676:00 04F3:3195 Touchpad"
    ATTR{phys}=="i2c-ELAN0676:00"
    ...
  looking at parent device '/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00':
    KERNELS=="i2c-ELAN0676:00"
    SUBSYSTEMS=="i2c"
    DRIVERS=="i2c_hid_acpi"
    ATTRS{name}=="ELAN0676:00"
    ...
    ATTRS{power/wakeup}=="enabled"
The line I’m looking for is ATTRS{power/wakeup}=="enabled". By using the identifiers of the parent device that has ATTRS{power/wakeup}, I can make sure that /sys/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/power/wakeup is always disabled with the custom udev rule as follows.
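A minimal sketch of such a rule, matching the i2c parent device by its name attribute (the rules file name is arbitrary), e.g. /etc/udev/rules.d/99-disable-touchpad-wakeup.rules:
ACTION=="add", SUBSYSTEM=="i2c", ATTR{name}=="ELAN0676:00", ATTR{power/wakeup}="disabled"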
Disabling TrackPoint as a wakeup source on T14 Gen 5 AMD
I’ve seen a pattern already as above so I should be able to apply the same method. The TrackPoint device, TPPS/2 Elan TrackPoint, can be found in the udev device tree.
$ udevadm info --attribute-walk -p /devices/platform/i8042/serio1/input/input5
...
  looking at device '/devices/platform/i8042/serio1/input/input5':
    KERNEL=="input5"
    SUBSYSTEM=="input"
    DRIVER==""
    ...
    ATTR{name}=="TPPS/2 Elan TrackPoint"
    ATTR{phys}=="isa0060/serio1/input0"
    ...
  looking at parent device '/devices/platform/i8042/serio1':
    KERNELS=="serio1"
    SUBSYSTEMS=="serio"
    DRIVERS=="psmouse"
    ATTRS{bind_mode}=="auto"
    ATTRS{description}=="i8042 AUX port"
    ATTRS{drvctl}=="(not readable)"
    ATTRS{firmware_id}=="PNP: LEN0321 PNP0f13"
    ...
    ATTRS{power/wakeup}=="disabled"
I hit a wall here. ATTRS{power/wakeup}=="disabled" for the i8042 AUX port is already there, but the TrackPoint still wakes up the system from suspend. I had to bisect all remaining wakeup sources.
Wakeup sources:
│ [/sys/devices/platform/USBC000:00/power_supply/ucsi-source-psy-USBC000:001/wakeup66]: enabled
│ [/sys/devices/platform/USBC000:00/power_supply/ucsi-source-psy-USBC000:002/wakeup67]: enabled
│ ACPI Battery [PNP0C0A:00]: enabled
│ ACPI Lid Switch [PNP0C0D:00]: enabled
│ ACPI Power Button [PNP0C0C:00]: enabled
│ ACPI Sleep Button [PNP0C0E:00]: enabled
│ AT Translated Set 2 keyboard [serio0]: enabled
│ Advanced Micro Devices, Inc. [AMD] ISA bridge [0000:00:14.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] Multimedia controller [0000:c4:00.5]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.2]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:03.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:04.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c4:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c4:00.4]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.4]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.5]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.6]: enabled
│ Mobile Broadband host interface [mhi0]: enabled
│ Plug-n-play Real Time Clock [00:01]: enabled
│ Real Time Clock alarm timer [rtc0]: enabled
│ Thunderbolt domain [domain0]: enabled
│ Thunderbolt domain [domain1]: enabled
│ USB4 host controller [0-0]: enabled
└─USB4 host controller [1-0]: enabled
Somehow, disabling SLPB “ACPI Sleep Button” stopped undesired wakeups by the TrackPoint.
  looking at parent device '/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00':
    KERNELS=="PNP0C0E:00"
    SUBSYSTEMS=="acpi"
    DRIVERS=="button"
    ATTRS{hid}=="PNP0C0E"
    ATTRS{path}=="\_SB_.SLPB"
    ...
    ATTRS{power/wakeup}=="enabled"
The final udev rule is the following. It also disables wakeup events from the keyboard as a side effect, but opening the lid or pressing the power button can still wake up the system so it works for me.
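For reference, a sketch of a rule along those lines, matching the SLPB device found above (again, the file name is arbitrary, and this only shows the sleep-button part):
ACTION=="add", SUBSYSTEM=="acpi", KERNEL=="PNP0C0E:00", ATTR{power/wakeup}="disabled"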
After solving the headache of frequent wakeups on the T14 Gen 5 AMD, I was curious whether I could apply the same fix to the Gen 3 AMD retrospectively. The Gen 3 has the following wakeup sources active out of the box.
Wakeup sources:
│ ACPI Battery [PNP0C0A:00]: enabled
│ ACPI Lid Switch [PNP0C0D:00]: enabled
│ ACPI Power Button [LNXPWRBN:00]: enabled
│ ACPI Power Button [PNP0C0C:00]: enabled
│ ACPI Sleep Button [PNP0C0E:00]: enabled
│ AT Translated Set 2 keyboard [serio0]: enabled
│ Advanced Micro Devices, Inc. [AMD] ISA bridge [0000:00:14.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.2]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:04:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:04:00.4]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.0]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.4]: enabled
│ ELAN0678:00 04F3:3195 Mouse [i2c-ELAN0678:00]: enabled
│ Mobile Broadband host interface [mhi0]: enabled
│ Plug-n-play Real Time Clock [00:01]: enabled
└─Real Time Clock alarm timer [rtc0]: enabled
Disabling the touchpad event was straightforward. The only difference from Gen 5 was the ID of the device.
When it comes to the TrackPoint or power source events, nothing was able to stop them from waking up the system, even after disabling all wakeup sources. Then I came across a hidden gem named amd_s2idle.py. This “S0i3/s2idle analysis script for AMD systems” is full of domain knowledge about s2idle, such as where to look in /proc or /sys, how to enable debugging, and which parts of the logs are important.
By running the script, I got the following output around the unexpected wakeup.
$ sudo python3 ./amd_s2idle.py --debug-ec --duration 30
Debugging script for s2idle on AMD systems
💻 LENOVO 21CF21CFT1 (ThinkPad T14 Gen 3) running BIOS 1.56 (R23ET80W (1.56 )) released 10/28/2024 and EC 1.32
🐧 Ubuntu 24.04.1 LTS
🐧 Kernel 6.11.0-12-generic
🔋 Battery BAT0 (Sunwoda ) is operating at 90.91% of design
Checking prerequisites for s2idle
✅ Logs are provided via systemd
✅ AMD Ryzen 7 PRO 6850U with Radeon Graphics (family 19 model 44)
...
Suspending system in 0:00:02
Suspending system in 0:00:01
Started at 2025-01-04 00:46:53.063495 (cycle finish expected @ 2025-01-04 00:47:27.063532)
Collecting data in 0:00:02
Collecting data in 0:00:01
Results from last s2idle cycle
💤 Suspend count: 1
💤 Hardware sleep cycle count: 1
○ GPIOs active: ['0']
🥱 Wakeup triggered from IRQ 9: ACPI SCI
🥱 Wakeup triggered from IRQ 7: GPIO Controller
🥱 Woke up from IRQ 7: GPIO Controller
❌ Userspace suspended for 0:00:14.031448 (< minimum expected 0:00:27)
💤 In a hardware sleep state for 0:00:10.566894 (75.31%)
🔋 Battery BAT0 lost 10000 µWh (0.02%) [Average rate 2.57W]
Explanations for your system
🚦 Userspace wasn't asleep at least 0:00:30
The system was programmed to sleep for 0:00:30, but woke up prematurely.
This typically happens when the system was woken up from a non-timer based source.
If you didn't intentionally wake it up, then there may be a kernel or firmware bug
I compared all the logs generated for the power button, power source, TrackPoint, and touchpad events. Except for the touchpad event, everything was coming from GPIO pin #0, and there was no further information to distinguish those wakeup triggers. I ended up with the drastic approach of ignoring wakeup triggers from GPIO pin #0 completely, using the following kernel option.
gpiolib_acpi.ignore_wake=AMDI0030:00@0
And I get the line on each boot.
kernel: amd_gpio AMDI0030:00: Ignoring wakeup on pin 0
That comes with obvious downsides. The system no longer wakes up spuriously, which is good. However, nothing can wake it up once it gets into suspend. Opening the lid, pressing the power button or any key is simply ignored, since all of those go to GPIO pin #0. In the end, I had to explicitly re-enable the touchpad as a wakeup source so the system can be woken up by tapping the touchpad. It’s far from ideal, but the touchpad is less sensitive than the TrackPoint, so I will keep it that way.