Throughout my career, I’ve had the privilege of working with organizations that create widely-used open source tools. The popularity of these tools is evident through their impressive download statistics, strong community presence, and engagement both online and at events.
At InfluxData, I was part of the Telegraf team, where we witnessed substantial adoption through downloads and active usage, reflected in our vibrant bug tracker.
What makes Syft and Grype particularly exciting, beyond their permissive licensing, consistent release cycle, dedicated developer team, and distinctive mascots, is how they serve as building blocks for other tools and services.
Syft isn’t just a standalone SBOM generator - it’s a library that developers can integrate into their own tools. Some organizations even build their own SBOM generators and vulnerability tools directly from our open source foundation!
(I find it delightfully meta to discover syft inside other tools using syft itself)
This collaborative building upon existing tools mirrors how Linux distributions often build upon other Linux distributions. As with Ubuntu and Telegraf, we see countless individuals and organizations creating innovative solutions that extend beyond the core capabilities of Syft and Grype. It’s the essence of open source - a multiplier effect that comes from creating accessible, powerful tools.
While we may not always know exactly how and where these tools are being used (and sometimes, rightfully so, it’s not our business), there are many cases where developers and companies want to share their innovative implementations.
I’m particularly interested in these stories because they deserve to be shared. I’ve been exploring public repositories like the GitHub network dependents for syft, grype, sbom-action, and scan-action to discover where our tools are making an impact.
The adoption has been remarkable!
I reached out to several open source projects to learn about their implementations, and Nicolas Vuilamy from MegaLinter was the first to respond - which brings us full circle.
Ubuntu Pro is Canonical’s subscription for open source software security, support and compliance. Users of Ubuntu Pro benefit from Expanded Security Maintenance, which extends patching from the standard 5 years to 10 years, for both the Main and Universe repositories.
However, Ubuntu Pro goes beyond patching and maintenance, and includes access to a host of tools that can further enhance both security and stability for our users. We’re dedicated to ensuring that every user and organization gets the most out of their subscription, so in this blog, we’ll introduce how you can benefit from some of the other features of Pro, such as automated compliance hardening, device management, and live kernel patching. We’ll also provide a run-through of the different types of Ubuntu Pro subscription available to users via the Ubuntu shop. Let’s dive in.
As an enterprise-grade subscription, Ubuntu Pro caters to various environments and use-cases. Ubuntu Pro is always free for individuals on up to 5 machines, whilst for organizations there are a range of flexible plans to accommodate varying needs.
Essentially, where Ubuntu goes, Pro can follow, meaning that if you’re using Ubuntu on a desktop, public cloud, server or device, your Pro subscription will cover all your use cases.
So how might you benefit from the tools that Ubuntu Pro offers? We’re sharing some instances from our users to help illustrate how you can use your subscription to its full potential.
PCI-DSS compliance for VMs
Ubuntu Pro includes tools for compliance automation, to help simplify compliance for frameworks such as NIST, FedRAMP, ISO27001 and CIS. That means that Pro can help organizations streamline their compliance process across a whole range of industries.
For instance, PCI-DSS is a payment industry standard that must be followed by any company that stores, processes or transmits payment card information. It requires adherence to established frameworks such as NIST, DISA-STIG and CIS, depending on the location and use case, which can be daunting to try to configure manually.
Financial institutions often run private cloud environments for their workloads, which may include VMs to host customer-facing applications. Ubuntu Security Guide (USG), included with Ubuntu Pro, simplifies this process by providing prebuilt hardening profiles for compliance frameworks like CIS benchmarks (for desktops and servers) and DISA-STIG (aligned with NIST 800-171 for US Federal use cases). This allows administrators to efficiently configure their servers and audit their systems to ensure adherence to security best practices, reducing the risk of misconfigurations and non-compliance.
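As a quick sketch of what that workflow looks like on a Pro-attached machine (assuming the CIS Level 1 Server profile; pick the profile that matches your framework):

# Enable the USG service and install the tool
sudo pro enable usg
sudo apt install usg

# Audit the system against the CIS Level 1 Server benchmark
sudo usg audit cis_level1_server

# Apply the recommended hardening for that profile
sudo usg fix cis_level1_server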
To get started, you can visit the Ubuntu Pro shop to select how many VMs you need to cover and choose the plan that best suits your organization’s security and compliance needs.
Live patching for public cloud instances – Suitable for scalable workloads on AWS, Azure, or Google Cloud
Many organizations choose to innovate on the public cloud, which is why Canonical works directly with vendors such as AWS, Azure and Google Cloud to bring the benefits of Ubuntu Pro to their platforms.
In addition to the expanded patching and maintenance that Pro users receive wherever they use Ubuntu, many organizations also opt to make use of kernel livepatching. This is a feature that patches the Linux kernel between security maintenance windows, whilst the system runs, enabling you to minimize downtime.
Taking the example of a SaaS enterprise hosting its infrastructure on public clouds, live kernel patching helps avoid downtime for customers, whilst maintaining full compliance with regulatory requirements and ensuring that production workloads remain secure and audit-ready.
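As a minimal sketch, enabling Livepatch on an Ubuntu machine attached to a Pro subscription looks like this (the token placeholder is yours to fill in from ubuntu.com/pro):

# Attach the machine to your subscription, then enable Livepatch
sudo pro attach <your-token>
sudo pro enable livepatch

# Confirm patches are being applied
canonical-livepatch status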
Secure workstations for developers and professionals
We know that many users depend on Ubuntu for critical use cases, where downtime is costly and disruptive. Additionally, managing multiple workstations can be complex and time-consuming, especially in environments where security and compliance are critical. With Landscape, included as part of Ubuntu Pro, organizations can efficiently manage their workstation fleet at scale.
Landscape enables administrators to audit systems for updates, monitor their security status, and ensure that all machines are running the latest patches. This centralized management approach provides a unified view of all workstations and allows organizations to maintain secure workstation environments with minimal operational overhead.
Visit the Ubuntu Pro shop to select the machines or environments you want to cover and unlock the benefits of Landscape.
Streamline device management and ensure uninterrupted service delivery with Landscape
Let’s say an energy company needs to manage hundreds of IoT-enabled EV charging stations, requiring centralized maintenance and monitoring. Landscape, Canonical’s systems management tool included with Ubuntu Pro, streamlines device management by automating security patching, software updates, and device monitoring, ensuring uninterrupted service delivery and compliance with industrial standards.
Ubuntu Pro supports all the use-cases outlined above and many more. Once you decide on what your requirements are, with just a few clicks, you can choose a subscription that aligns with those requirements.
Machine Count – Scale from one machine to hundreds, with Pro growing alongside your needs.
Ubuntu LTS Version – Compatible with Ubuntu 16.04 LTS and higher.
Security Coverage – Whether you need full-stack security or infrastructure-focused coverage, Ubuntu Pro has flexible options to fit your needs.
Pro – Comprehensive coverage for packages in main and universe repositories.
Infra-only – Focused coverage for packages in the main repository.
Support add-ons
If you’re looking for hands-on support, you can also tailor your subscription to include direct access to Canonical’s global expert support team, with phone and ticket support. Canonical support engineers will provide your organisation with troubleshooting, break-fix, bug-fix and guidance services promptly and effectively.
Visit the shop and select the coverage and response time that works best for your organisation:
Select Infra Support for break-fix and bug-fix support covering the Ubuntu base OS, Kubernetes, LXD and Charms, OpenStack and MAAS, and Ceph and Swift storage.
Or opt for full stack support for 25,000+ packages and all of Canonical’s applications.
For both cases, you can choose between weekday support and 24/7/365 support.
Start your Ubuntu Pro journey today
With an easy-to-use online shop, getting Ubuntu Pro has never been simpler.
Alternatively, you can try before you buy. Activate a 30-day free trial via the online shop and experience Ubuntu Pro with no upfront commitment. Your card will only be billed at the end of the trial period.
Managing and renewing your subscription
Once your subscription is active, you simply have to navigate to your Pro dashboard to view your current plan or make any changes to your plan.
You can also manage the renewal of your subscription via the Pro dashboard.
Once again we covered the biggest Free and Open Source Software community event held in Europe every year: FOSDEM (Free and Open source Software Developers’ European Meeting), in Brussels, Belgium, on February 1 and 2. Diogo didn’t make the trip, as he usually would, but our guests were there - so we decided to hear from André Alves, Gerardo Lisboa, Tiago Maurício and Tiago Carreira.
You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
And you can get all of that for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.
If you’re interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you’ll be supporting us too.
2024 was the GenAI year. With new and more performant LLMs and a higher number of projects rolled out to production, adoption of GenAI doubled compared to the previous year (source: Gartner). In the same report, organizations answered that they are using AI in more than one part of their business, with 65% of respondents mentioning they use GenAI in one function.
Yet, 2024 wasn’t just a year of incredible GenAI results, it was also a year of challenges – security, ease of use and simplified operations remain core obstacles that organizations still need to address. So what do we foresee in the AI space this year? Where is the community going to channel their energy? Let’s take a quick look at 2024 and then dive into the expectations for AI in 2025.
AI in 2024 at a glance
At the beginning of last year, we said 2024 was the year of AI, and it’s safe to say we were right – admittedly, it was a pretty safe prediction. In the last 12 months, at Canonical we focused on enabling AI/ML workflows in production, to help organizations scale their efforts and productize their projects. We worked on an array of AI projects with our partners, including NVIDIA, Microsoft, AWS, Intel, and Dell, for instance integrating NVIDIA NIMs with Charmed Kubeflow to ease the deployment of GenAI applications.
We also made good on our commitment to always work closely with the community. We had 2 releases of Charmed Kubeflow almost at the same time as the upstream project, running beta programs so the innovators could get early access. As the difficulty of operations is still a challenge for most Kubeflow users, hindering adoption in enterprises, we’ve been working on a deployment model that only takes a few clicks. You can sign up for it here, if you haven’t already.
Retrieval Augmented Generation (RAG) is one of the most common GenAI use cases and has been prioritized by a large number of organizations. Open source tools such as OpenSearch, KServe and Kubeflow are crucial for these use cases. Canonical’s enablement of Charmed OpenSearch with Intel AVX is just one example of how our OpenSearch distribution can run on a variety of silicon from different vendors, accelerating adoption of RAG in production. In the case of highly sensitive data, confidential computing unblocks enterprises and helps them move forward with their efforts. During our webinar, together with Ijlal and Michelle, we covered the topic, including some of the key considerations, benefits and most common use cases.
Is 2025 the year of AI agents?
As for 2025, one of the hottest topics so far is AI agents. These are systems that can independently perform self-determined tasks, interacting with their environment to reach pre-determined goals. NVIDIA’s CEO, Jensen Huang, declared that “AI agents are going to be deployed” (source), signaling a higher interest in the topic and a shift from generic GenAI applications to the specific use cases that organizations would like to prioritize.
Enterprises will be able to quickly adopt AI agents within their business function, but that will not solve or address all the expectations that AI/ML has created. AI agents will still face many of the same challenges that the industry has been trying to overcome for some time:
Security: whether we talk about the models, infrastructure or devices where AI Agents run, ensuring security will be critical to enabling organizations to roll them out to production and satisfy audits.
Integrations: the AI/ML landscape is overall scattered and the agentic space is no exception. Building an end-to-end stack that enables not only the use of different wrappers, but also provides fine-tuning or optimized use of the available resources is still a challenge.
Guardrails: the risk with AI agents lies mostly in the misleading actions they can take or influence. Organizations therefore need to build guardrails that keep production-grade environments from being put at risk.
Operations: whereas experimentation is low-hanging fruit, running any AI project in production comes with an operational overhead, which enterprises need to simplify in order to scale their innovations.
Security: at the heart of AI projects
Let’s drill down into that security challenge. According to Orca, 62% of organizations deployed AI packages that had at least one vulnerability. As AI adoption grows in production, security of the tooling, data and models is equally important.
Whether we talk about the containers that organizations use to build their AI infrastructure, or the end-to-end solution, security maintenance of the packages used remains a priority in 2025. Reducing the number of vulnerabilities is turning into a mandatory task for anyone who would like to roll out their projects in production. For enterprises which consider open source solutions, subscriptions such as Ubuntu Pro are suitable, since they secure a large variety of packages that are helpful for AI/ML, including Python, NumPy and MLflow.
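As a brief sketch, turning on that coverage on a Pro-attached machine and checking which packages it protects looks like this:

# Expanded Security Maintenance for universe packages (where python3-numpy and friends live)
sudo pro enable esm-apps

# Show per-package security coverage on this system
pro security-status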
As the industry evolves, confidential computing will also grow in adoption, both on the public clouds and on-prem. Running ML workloads that use highly sensitive data is expected to become a more common use case, which will challenge AI providers to enable their MLOps platforms to run on confidential machines.
AI at the edge
Interest in Edge AI continues to rise. With a growing number of models running in production and being deployed to edge devices, this part of the ML lifecycle is expected to grow. The benefits of AI at the edge are clear to almost everyone, yet in 2025 organizations will need to address some of the common challenges in order to move forward, including network connectivity, device size and security.
Deploying AI at the edge also introduces challenges around model maintenance that go beyond model packaging. Organizations are looking for solutions that support delta updates, auto-rollback in case of failure, as well as versioning management. 2025 will see accelerated adoption of edge AI and an increase in the footprint of models running on a wider variety of silicon hardware.
Canonical in the AI space: what is 2025 going to look like?
Canonical’s promise to provide securely designed open source software continues in 2025. Beyond the different artifacts that we already have in our offering, such as Opensearch, Kubeflow and MLflow, we’ve significantly expanded our ability to help our customers and partners in a bespoke way. Everything LTS will help organizations secure their open source container images for different applications, including edge AI and GenAI.
If you are curious where to start or want to accelerate your ML journey, optimize resource usage and elevate your in-house expertise, our MLOps workshop can help you design AI infrastructure for any use case. Spend 5 days on site with Canonical experts who will help upskill your team and solve the most pressing problems related to MLOps and AI/ML. Learn more here or get in touch with our team: https://ubuntu.com/ai/mlops-workshop
Tired of waiting for apt to finish installing packages? Wish there were a way to make your installations blazingly fast without caring about minor things like, oh, data integrity? Well, today is your lucky day!
I’m thrilled to introduce apt-eatmydata, now available for Debian and all supported Ubuntu releases!
What Is apt-eatmydata?
If you’ve ever used libeatmydata, you know it’s a nifty little hack that disables fsync() and friends, making package installations way faster by skipping unnecessary disk writes. Normally, you’d have to remember to wrap apt commands manually, like this:
eatmydata apt install texlive-full
But who has time for that? apt-eatmydata takes care of this automagically by integrating eatmydata seamlessly into apt itself! That means every package install is now turbocharged—no extra typing required.
How to Get It
Debian
If you’re on Debian unstable/testing (or possibly soon in stable-backports), you can install it directly with:
sudo apt install apt-eatmydata
Ubuntu
Ubuntu users already enjoy faster package installation thanks to zstd-compressed packages and to switch to even higher gear I’ve backported apt-eatmydata to all supported Ubuntu releases. Just add this PPA and install:
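The post links the exact PPA; as a sketch (the PPA name below is an assumption - double-check it on Launchpad):

# PPA name is an assumption; verify on Launchpad before adding
sudo add-apt-repository ppa:firebuild/apt-eatmydata
sudo apt update
sudo apt install apt-eatmydata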
And boom! Your apt install times are getting a serious upgrade. Let’s run some tests…
# pre-download package to measure only the installation
$ sudo apt install -d linux-headers-6.8.0-53-lowlatency
...
# installation time is 9.35s without apt-eatmydata:
$ sudo time apt install linux-headers-6.8.0-53-lowlatency
...
2.30user 2.12system 0:09.35elapsed 47%CPU (0avgtext+0avgdata 174680maxresident)k
32inputs+1495216outputs (0major+196945minor)pagefaults 0swaps
$ sudo apt install apt-eatmydata
...
$ sudo apt purge linux-headers-6.8.0-53-lowlatency
# installation time is 3.17s with apt-eatmydata:
$ sudo time eatmydata apt install linux-headers-6.8.0-53-lowlatency
2.30user 0.88system 0:03.17elapsed 100%CPU (0avgtext+0avgdata 174692maxresident)k
0inputs+205664outputs (0major+198099minor)pagefaults 0swaps
apt-eatmydata just made installing Linux headers 3x faster!
But Wait, There’s More!
If you’re automating CI builds, there’s even a GitHub Action to make your workflows faster, essentially doing what apt-eatmydata does, and it sets itself up in less than a second! Check it out here: GitHub Marketplace: apt-eatmydata
Should You Use It?
Warning: apt-eatmydata is not for all production environments. If your system crashes mid-install, you might end up with a broken package database. But for throwaway VMs, containers, and CI pipelines? It’s an absolute game-changer. I use it on my laptop, too.
So go forth and install recklessly fast!
If you run into any issues, feel free to file a bug or drop a comment. Happy hacking!
If you’re using Xfce4 on Debian or any other Linux distribution and find that Snap applications are not appearing in your application menu, you’re not alone. This is a common issue caused by missing symbolic links for Snap desktop entries.
The Problem
After installing a Snap package, you might expect it to show up in the application menu under Whisker Menu (or the standard Xfce menu). However, sometimes Snap apps are missing because Xfce4 doesn’t automatically detect their desktop entry files.
The Solution
To fix this, create a symbolic link between the Snap desktop applications directory and the system-wide application directory:
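Given the two directories described below, the link looks like this:

# Expose snapd's .desktop files where Xfce4 looks for them
sudo ln -s /var/lib/snapd/desktop/applications /usr/share/applications/snapd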
After running this command, refresh your desktop environment by one of the following:
Logging out and back in
Running xfce4-panel -r to restart the panel
Manually forcing a menu update with xdg-desktop-menu forceupdate
Why This Works
Snap applications place their .desktop files in /var/lib/snapd/desktop/applications/, but some desktop environments (including Xfce4) may not scan this directory properly. By linking it to /usr/share/applications/snapd, we ensure that Xfce4 correctly picks up installed Snap applications.
Final Thoughts
This quick fix ensures your Snap applications appear in the application menu as expected. If you’re a Snap user on Xfce4, applying this tweak can improve your desktop experience.
Have you faced this issue? Let me know in the comments!
Everyone's got a newsletter these days (like everyone's got a podcast). In general, I think this is OK: instead of going through a middleman publisher, have a direct connection from you to the people who want to read what you say, so that that audience can't be taken away from you.
On the other hand, I don't actually like newsletters. I don't really like giving my email address to random people1, and frankly an email app is not a great way to read long-form text! There are many apps which are a lot better at this.
There is a solution to this and the solution is called RSS. Andy Bell explains RSS and this is exactly how I read newsletters. If I want to read someone's newsletter and it's on Substack, or ghost.io, or buttondown.email, what I actually do is subscribe to their newsletter but what I'm actually subscribing to is their RSS feed. This sections off newsletter stuff into a completely separate app that I can catch up on when I've got the time, it means that the newsletter owner (or the site they're using) can't decide to "upsell" me on other stuff they do that I'm not interested in, and it's a better, nicer reading experience than my mail app.2
I use NetNewsWire on my iOS phone, but there are a bunch of other newsreader apps for every platform and you should choose whichever one you want. Andy lists a bunch, above.
The question, of course, then becomes: how do you find the RSS feed for a thing you want to read?3 Well, it turns out... you don't have to.
When you want to subscribe to a newsletter, you literally just put the web address of the newsletter itself into your RSS reader, and that reader will take care of finding the feed and subscribing to it, for you. It's magic. Hooray! I've tested this with substack, with ghost.io, with buttondown.email, and it works with all of them. You don't need to do anything.
If that doesn't work, then there is one neat alternative you can try, though. Kill The Newsletter will give you an email address for any site you name, and provide the incoming emails to that as an RSS feed. So, if you've found a newsletter which doesn't exist on the web (boo hiss!) and doesn't provide an RSS feed, then you go to KTN, it gives you some randomly-generated email address, you subscribe to the intransigent newsletter with that email address, and then you can subscribe to the resultant feed in your RSS reader. It's dead handy.
If you run a newsletter and it doesn't have an RSS feed and you want it to have, then have a look at whatever newsletter software you use; it will almost certainly provide a way to create one, and you might have to tick a box. (You might also want to complain to the software creators that that box wasn't ticked by default.) If you've got an RSS feed for the newsletter that you write, but putting your site's address into an RSS reader doesn't find that RSS feed, then what you need is RSS autodiscovery, which is the "magic" alluded to above; you add a line to your site's HTML in the <head> section which reads <link rel="alternate" type="application/rss+xml" title="RSS" href="https://URL/of/your/feed"> and then it'll work.
I like this. Read newsletters at my pace, in my choice of app, on my terms. More of that sort of thing.
despite how it's my business to do so and it's right there on the front page of the website, I know, I know ↩
Is all of this doable in my mail client? Sure. I could set up filters, put newsletters into their own folders/labels, etc. But that's working around a problem rather than solving it ↩
I suggested to Andy that he ought to write this post explaining how to do this and then realised that I should do it myself and stop being such a lazy snipe, so here it is ↩
The rtl8852cu Linux driver (version 1.19.2.1, updated as of May 10, 2024) supports USB WiFi adapters based on the RTL8832CU and RTL8852CU chipsets. While Realtek continues to develop this out-of-kernel driver, it is important to note that it is not fully compliant with Linux Wireless Standards. This makes it more suitable for specialized use cases, such as embedded systems, rather than general desktop or server environments.
For most users, adapters with in-kernel drivers are recommended due to their stability and ease of use. However, if you’re working with an adapter supported by this driver, here’s everything you need to know.
Key Features of the rtl8852cu Driver
WiFi Standards: IEEE 802.11 b/g/n/ac/ax (WiFi 6)
Security Protocols:
WEP, WPA TKIP, WPA2 AES/Mixed mode (PSK and TLS)
WPA3-SAE R2
WPS (PIN and PBC methods)
Modes Supported:
Client mode
AP mode (with DFS channel support)
P2P-client and P2P-GO
IBSS (not tested)
Advanced Features:
Miracast
WiFi-Direct
Wake on WLAN
VHT and HE control (supports 160 MHz channel width in AP mode)
Note: Monitor mode is not supported. If you require monitor mode, consider adapters based on the mt7610u, mt7612u, or mt7921au chipsets.
Compatible Devices and Chipsets
This driver supports a variety of USB WiFi adapters, including:
Edup AX5400 EP-AX1671 (single-state, no onboard Windows driver)
Brostrend AX8
TP-Link Archer TX50UH V1
TP-Link Archer TXE70UH(EU) V1
MSI AXE5400
Warning: Multi-state adapters (those with internal Windows drivers) may cause issues on Linux. For better compatibility, opt for single-state and single-function adapters. Avoid multi-function adapters (e.g., those combining WiFi and Bluetooth).
Supported CPU Architectures and Kernels
CPU Architectures:
x86, i386, i686
x86-64, amd64
armv6l, armv7l (arm)
aarch64 (arm64)
Kernel Versions:
Officially tested: 5.4 to 6.6 (Realtek)
Community-supported: 6.7 to 6.12
Tested Compilers: gcc 12, 13, and 14.
Installation Guide
Prerequisites
Before installing the driver, ensure your system is up-to-date and has the necessary development tools installed. You’ll also need internet access during installation.
For Secure Boot: openssl, sign-file, mokutil
Example for Ubuntu:
sudo apt install -y build-essential dkms git iw
Download and Install the Driver:
git clone https://github.com/morrownr/rtl8852cu-20240510.git
cd rtl8852cu-20240510
sudo ./install-driver.sh
Reboot Your System: After installation, reboot to ensure the driver loads correctly:
sudo reboot
Troubleshooting Tips
Conflicting Drivers: Installing multiple out-of-kernel drivers for the same hardware can cause issues. Use sudo dkms status to check for conflicts.
Secure Boot: If Secure Boot is enabled, follow the instructions in the FAQ to enroll the signing key.
Manual Installation: If DKMS is unavailable, you can manually compile and install the driver using:
make clean
make -j$(nproc)
sudo make install
sudo reboot
Recommended Router/AP Settings
To optimize your WiFi performance:
Security: Use WPA2-AES or WPA3. Avoid mixed modes like WPA/WPA2.
Channel Width:
2.4 GHz: Set to 20 MHz fixed width.
5 GHz: Use channels 36–48 or 149–165 for compatibility.
Network Names: Avoid naming all bands (2.4 GHz, 5 GHz, 6 GHz) the same.
Router Placement: Position the router centrally, elevated, and away from walls.
Final Notes
While this driver provides robust support for RTL8832CU and RTL8852CU adapters, it is not without limitations. Users should weigh the trade-offs between stability, compatibility, and advanced features when choosing a WiFi adapter. For most desktop and server users, in-kernel drivers remain the best choice.
If you encounter issues or have questions, consult the FAQ or open an issue on the GitHub repository.
This week we lent a hand to PODES 2024, the first podcast festival in Portugal, and even brought home a prize in… Horticulture!? We also braved the most turbulent storms and ventured beyond Taprobana… we went… we went to São Mamede de Infesta - to LCD Porto, for the inauguration of the first ECTL Porto, alongside other free-technology communities. We spoke with several guests (António Aragão, Joana Simões, André Barbosa, André Alves and Benjamim) and even some Super Tux Kart cheaters.
You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
And you can get all of that for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.
If you’re interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you’ll be supporting us too.
Lubuntu Plucky Puffin is the current development branch of Lubuntu, which will become 25.04. Since the release of 24.10, we have been hard at work polishing the experience and fixing bugs in the upcoming release. Below, we detail some of the changes you can look forward to in 25.04.
Two Minute Minimal Install
When installing […]
Following a bug in ubuntu-release-upgrader which was causing Ubuntu Studio 22.04 LTS to fail to upgrade to 24.04 LTS, we are pleased to announce that this bug has been fixed, and upgrades now work.
As of this writing, this update is being propagated to the various Ubuntu mirrors throughout the world. The version of ubuntu-release-upgrader needed is 24.04.26 or higher, and is automatically pulled from the 24.04 repositories upon upgrade.
Unfortunately, while testing this fix, we noticed that, due to the time_t64 transition (which prevents the year 2038 problem), some packages get removed. We have noticed that, when upgrading from 22.04 LTS to 24.04 LTS, the following applications get removed (this list is not exhaustive):
Blender
Kdenlive
digiKam
GIMP
Krita (doesn’t get upgraded)
To fix this, immediately after upgrade, open a Konsole terminal (ctrl-alt-t) and enter the following:
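The exact command appears in the original announcement; a reasonable reconstruction, assuming it simply reinstalls the applications listed above, is:

# Reinstall applications removed by the time_t64 transition (list may not be exhaustive)
sudo apt update
sudo apt install blender kdenlive digikam gimp krita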
If you do intend to upgrade, remember to purge any PPAs you may have enabled via ppa-purge so that your upgrade goes as smoothly as possible.
We apologize for the inconvenience that may have been caused by this bug, and we hope your upgrade process goes as smoothly as possible. There may be edge cases where this goes badly, as we cannot account for every installation and whatever third-party repositories may be enabled, in which case the best method is to back up your /home directory and do a clean installation.
Remember to upgrade soon, as Ubuntu Studio 22.04 goes End Of Life (EOL) in April!
The latest thing circulating around people still blogging is the Blog Questions Challenge; Jon did it (and asked if I was) and so have Jeremy and Ethan and a bunch of others, so clearly it is time I should get on board, fractionally late as ever.1
Why did you start blogging in the first place?
Some other people I admired were doing it. I think the person I was most influenced by to start doing it was Simon Willison, who is also still at it2, but a whole bunch of people got on board at around that same time, back in the early days when you could be a medium-sized fish in a small pool just by participating. Mark Pilgrim springs to mind as well -- that's a good example of having influence, when the "standard format" of permalinks got sort of hashed out collectively to be /2025/02/03/blog-questions-challenge, which a lot of places still adhere to (although it feels faintly quaint, these days).
Interestingly, a lot of the early posts on this site are short two-sentence half-paragraph things, throwaway thoughts, and that all got sucked up by social media... but social media hadn't been invented, back in 2002.
What platform are you using to manage your blog and why did you choose it? Have you blogged on other platforms before?
Cor. When it started, this site was being run by Castalian, which was basically "classic ASP but Python instead of VBScript", a thing I built. This is because I was using ASP at work on Windows machines, so that was the model for "dynamic web pages" that I understood, but I wasn't on Windows5 and so I built it myself. No idea if it still works and I very much doubt it since it's old enough to buy all the drinks these days.
After that it was Movable Type for a bit and then, because I'd discovered the idea of funky caching6 it was Vellum, that model (a) in Python and (b) written by me. Then for a while it was "Thort", which was based on CouchDB7, and then it was WordPress, and then in 2014 I switched from WP to a static build based on Pelican, which it still is to this day. Crikey, that was over ten years ago!8 I like static site generators: I even wrote 10 Popular Static Site Generators a few years ago for WebsiteSetup which I think is still pretty good.
How do you write your posts? For example, in a local editing tool, or in a panel/dashboard that’s part of your blog?
In my text editor, which is Sublime Text. The static setup is here on my machine; I write a post, I type make kryogenix, and it runs a whole little series of scripts which invoke Pelican to build the static HTML for the blog, do a few things that I've added (such as add footnote handling9, make og:image links and images10, and sort of handle webmentions but that's broken at the moment) and then copy it up to my actual website (via git) to be published.
It's all a bit lashed together, to be honest, but this whole website is like that. It is something like an ancient city, such as London or Rome; what this site is mostly built on is the ruins of the previous history of the city. Sometimes the older bits poke through because they're still actually OK, or they never got updated; sometimes they've been replaced with the new shiny. You should see the .htaccess file, which operates a bewildering set of redirects through about six different generations of URLs so all the old links still work.11
When do you feel most inspired to write?
When the muse seizes me. Sometimes that's a lot; sometimes not. I do quite a lot of paid writing as part of my various day jobs for others, and quite a lot of creative writing as part of running a play-by-post D&D campaign, and that sucks up a reasonable amount of the writing energy, but there are things which just demand going on the website. Normally these days it's things where I want them to be a reference of some kind -- maybe of a useful tech thing, or some important thought, or something interesting -- for myself or for others.
Alternatively you might think the answer is "while in the pub, which leads to making random notes in an email to myself from my phone and then writing a blog post when I get home" and while this is not true, it's not not true either. I do not want to do a histogram of posting times from this site because I am worried that I will find that the majority are at, like, 11.15pm.
Do you publish immediately after writing, or do you let it simmer a bit as a draft?
Always post immediately. I have discovered about myself that, for semi-ephemeral stuff like posts here or projects that I do for fun, that I need to get them done as part of that initial burst of inspiration and energy. If I don't get it done, then my enthusiasm will fade and they will linger half-finished for ever and never get completed. I don't necessarily like this, but I've learned to live with it. If I think of an idea for a post and write a note about it and then don't do it, when I rediscover the note a week later it will not seem anything like as compelling. So posts are mostly written as one long stream-of-consciousness to capitalise on the burning of the creative fire before it gets doused by time or work or everything going on in the world. Carpe diem, I guess.12
Any future plans for your blog? Maybe a redesign, a move to another platform, or adding a new feature?
Not really at the moment, but, as above, these things tend to arrive in a blizzard of excitement and implementation and then linger forever once done. But right now... it all seems to work OK. Ask me when I get back from the pub.
Next?
Well, I should probably point back at some of the people who inspired me to do this or other things and keep doing so to this day. So Simon, Remy, and Bruce, perhaps!
although no longer at simon.incutio.com -- what even was Incutio? ↩
I resisted the word "blog" for a long time, calling it a "weblog", and the activity being "weblogging", because "blog" is such an ugly word. Like most of the fights I was picking in the mid 2000s, this also seems faintly antiquated and passé now. Sic transit gloria mundi and all that. ↩
or "nihil sub sole novum", since we're doing Latin quotes today ↩
and Windows's relationship with Python has always been a bit unsteady, although it's better these days now that Microsoft are prepared to acknowledge that other people can have ideas ↩
you write the pages in an online form, but then a server process builds a static HTML version of them; the advanced version of this where pages were only built on request was called "funky caching" back then ↩
if a disinterested observer were to consider this progression, they might unfairly but accurately conclude that whatever this site runs on is basically a half-arsed system I built based on the latest thing I'm interested in, mightn't they? ↩
Most of my Debian contributions this month were sponsored by Freexian. If you appreciate this sort of work and are at a company that uses Debian, have a look to see whether you can pay for any of Freexian‘s services; as well as the direct benefits, that revenue stream helps to keep Debian development sustainable for me and several other lovely people.

You can also support my work directly via Liberapay.
Python team
We finally made Python 3.13 the default version in testing! I fixed various bugs that got in the way of this:
I helped with some testing of a debian-installer-utils patch as part of the /usr move. I need to get around to uploading this, since it looks OK now.
Other small things
Helmut Grohne reached out for help debugging a multi-arch coinstallability problem (you know it’s going to be complicated when even Helmut can’t figure it out on his own …) in binutils, and we had a call about that.
Some of the Incus maintainers will be present at FOSDEM 2025, helping run both the containers and kernel devrooms. For those arriving in town early, there will be a “Friends of Incus” gathering sponsored by FuturFusion on Thursday evening (January 30th), you can find the details of that here.
And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus
Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.
For several years, DigitalOcean has been an important sponsor of Ubuntu Budgie. They provide the infrastructure we need to host our website at https://ubuntubudgie.org and our Discourse community forum at https://discourse.ubuntubudgie.org. Maybe you are familiar with them. Maybe you use them in your personal or professional life. Or maybe, like me, you didn’t really see how they would benefit you.
We have always strived to give our users the best support options. After asking the community via a thread on Ubuntu Discourse and being given positive feedback, we have decided to move our primary support channel from Ask Ubuntu to Ubuntu Discourse.
Ask Ubuntu, which is run outside of Ubuntu governance, was a great idea in its time, but as time has gone on, it has become difficult for the moderators to moderate, as its host, StackExchange, has made questionable decisions, including shutting down OpenSSO, which effectively disabled, without recovery, many accounts that were exclusively linked to Launchpad. StackExchange has been uncooperative with re-enabling this link to Launchpad, leaving many users, who had earned higher privileges through their participation, having to start over.
Additionally, as stated long before, the Ubuntu Forums section for Ubuntu Studio has long been dead. The Ubuntu Forums themselves, which are officially under Ubuntu governance, found themselves in a position where the forum software could not be upgraded any further. As a result, on Thursday, January 9, 2025, they officially shut down. Over the two months prior, support had already transitioned to Ubuntu Discourse with much success.
As such, with the community feedback, Ubuntu Studio’s primary support will be changing to Ubuntu Discourse. The support links will be changing over in the menu for all supported versions of Ubuntu Studio (as of this writing, 22.04 LTS, 24.04 LTS, and 24.10), and the Ask Ubuntu section on the website will change to Ubuntu Discourse.
Special Non-Support/Help Community Section
A new icon appearing in the Ubuntu Studio Information menu is “Connect with Community”. This will take you to the special Ubuntu Studio section of the Ubuntu Discourse where, while support and help questions aren’t allowed, other discussions are. This is also where you will find future release notes along with the newest LTS Backports Megathread for any application backport requests you may have.
Overall, this will be a great place to connect with other members of the community and interact with developers.
Small update on 22.04 LTS to 24.04 LTS upgrades
It has been confirmed that a “quirk” needs to be added to ubuntu-release-upgrader that forces an installation of pipewire-audio during the upgrade calculation. A member of the team that works on this has taken this on and is working on a fix. Please stay tuned for further updates.
Whenever something touches the red cap, the system wakes up from suspend/s2idle.
I used a ThinkPad T14 Gen 3 AMD for 2 years, and I recently purchased a T14 Gen 5 AMD. The previous system, the Gen 3, annoyed me so much because the laptop randomly woke up from suspend on its own, even inside a backpack, heated up the confined air in it, and drained the battery pretty fast as a consequence. Basically, it was too sensitive to any event. For example, the system would wake from suspend whenever a USB Type-C cable was plugged in as a power source, or whenever something touched the TrackPoint, even if a display on a closed lid made slight contact with the red cap. It was uncontrollable.
I was hoping that Gen 5 would make a difference, and it did when it comes to the power source event. However, frequent wakeups due to the TrackPoint event remained the same so I started to dig in.
Disabling touchpad as a wakeup source on T14 Gen 5 AMD
Disabling touchpad events as a wakeup source is straightforward. The touchpad device, ELAN0676:00 04F3:3195 Touchpad, can be found in the udev device tree as follows.
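One quick way to locate it, as a sketch (any input-device listing will do):

# Find the touchpad's name and sysfs path among the input devices
grep -A4 'Touchpad' /proc/bus/input/devices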
And you can get all attributes including parent devices like the following.
$ udevadm info --attribute-walk -p /devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12
...
looking at device '/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12':
KERNEL=="input12"SUBSYSTEM=="input"DRIVER=="" ...
ATTR{name}=="ELAN0676:00 04F3:3195 Touchpad" ATTR{phys}=="i2c-ELAN0676:00"...
looking at parent device '/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00':
KERNELS=="i2c-ELAN0676:00"SUBSYSTEMS=="i2c"DRIVERS=="i2c_hid_acpi" ATTRS{name}=="ELAN0676:00" ...
ATTRS{power/wakeup}=="enabled"
The line I’m looking for is ATTRS{power/wakeup}=="enabled". By using the identifiers of the parent device that has ATTRS{power/wakeup}, I can make sure that /sys/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/power/wakeup is always disabled with the custom udev rule as follows.
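A minimal sketch of such a rule (the filename is my choice; the original post carries the exact rule):

# /etc/udev/rules.d/99-disable-touchpad-wakeup.rules (hypothetical filename)
# Match the touchpad's i2c parent device and force its wakeup attribute off
ACTION=="add", SUBSYSTEM=="i2c", KERNEL=="i2c-ELAN0676:00", ATTR{power/wakeup}="disabled"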
Disabling TrackPoint as a wakeup source on T14 Gen 5 AMD
I’ve seen a pattern already as above so I should be able to apply the same method. The TrackPoint device, TPPS/2 Elan TrackPoint, can be found in the udev device tree.
$ udevadm info --attribute-walk -p /devices/platform/i8042/serio1/input/input5
...
looking at device '/devices/platform/i8042/serio1/input/input5':
KERNEL=="input5"SUBSYSTEM=="input"DRIVER=="" ...
ATTR{name}=="TPPS/2 Elan TrackPoint" ATTR{phys}=="isa0060/serio1/input0"...
looking at parent device '/devices/platform/i8042/serio1':
KERNELS=="serio1"SUBSYSTEMS=="serio"DRIVERS=="psmouse" ATTRS{bind_mode}=="auto" ATTRS{description}=="i8042 AUX port" ATTRS{drvctl}=="(not readable)" ATTRS{firmware_id}=="PNP: LEN0321 PNP0f13" ...
ATTRS{power/wakeup}=="disabled"
I hit a wall here. ATTRS{power/wakeup}=="disabled" for the i8042 AUX port was already there, but the TrackPoint still woke the system from suspend. I had to bisect the remaining wakeup sources.
Wakeup sources:
│ [/sys/devices/platform/USBC000:00/power_supply/ucsi-source-psy-USBC000:001/wakeup66]: enabled
│ [/sys/devices/platform/USBC000:00/power_supply/ucsi-source-psy-USBC000:002/wakeup67]: enabled
│ ACPI Battery [PNP0C0A:00]: enabled
│ ACPI Lid Switch [PNP0C0D:00]: enabled
│ ACPI Power Button [PNP0C0C:00]: enabled
│ ACPI Sleep Button [PNP0C0E:00]: enabled
│ AT Translated Set 2 keyboard [serio0]: enabled
│ Advanced Micro Devices, Inc. [AMD] ISA bridge [0000:00:14.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] Multimedia controller [0000:c4:00.5]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.2]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:03.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:04.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c4:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c4:00.4]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.4]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.5]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.6]: enabled
│ Mobile Broadband host interface [mhi0]: enabled
│ Plug-n-play Real Time Clock [00:01]: enabled
│ Real Time Clock alarm timer [rtc0]: enabled
│ Thunderbolt domain [domain0]: enabled
│ Thunderbolt domain [domain1]: enabled
│ USB4 host controller [0-0]: enabled
└─USB4 host controller [1-0]: enabled
Somehow, disabling SLPB “ACPI Sleep Button” stopped undesired wakeups by the TrackPoint.
looking at parent device '/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00':
KERNELS=="PNP0C0E:00"SUBSYSTEMS=="acpi"DRIVERS=="button" ATTRS{hid}=="PNP0C0E" ATTRS{path}=="\_SB_.SLPB" ...
ATTRS{power/wakeup}=="enabled"
The final udev rule is the following. It also disables wakeup events from the keyboard as a side effect, but opening the lid or pressing the power button can still wake up the system so it works for me.
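As a sketch of that final rule (filename mine; the match keys are taken from the devices shown above):

# /etc/udev/rules.d/99-disable-wakeup-triggers.rules (hypothetical filename)
# Touchpad: disable wakeup on its i2c parent device
ACTION=="add", SUBSYSTEM=="i2c", KERNEL=="i2c-ELAN0676:00", ATTR{power/wakeup}="disabled"
# ACPI Sleep Button (SLPB): disabling this also stops the TrackPoint wakeups
ACTION=="add", SUBSYSTEM=="acpi", KERNEL=="PNP0C0E:00", ATTR{power/wakeup}="disabled"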
After solving the headache of frequent wakeups on the T14 Gen 5 AMD, I was curious whether I could apply the same fix to the Gen 3 AMD retrospectively. Gen 3 has the following wakeup sources active out of the box.
Wakeup sources:
│ ACPI Battery [PNP0C0A:00]: enabled
│ ACPI Lid Switch [PNP0C0D:00]: enabled
│ ACPI Power Button [LNXPWRBN:00]: enabled
│ ACPI Power Button [PNP0C0C:00]: enabled
│ ACPI Sleep Button [PNP0C0E:00]: enabled
│ AT Translated Set 2 keyboard [serio0]: enabled
│ Advanced Micro Devices, Inc. [AMD] ISA bridge [0000:00:14.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.2]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:04:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:04:00.4]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.0]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.4]: enabled
│ ELAN0678:00 04F3:3195 Mouse [i2c-ELAN0678:00]: enabled
│ Mobile Broadband host interface [mhi0]: enabled
│ Plug-n-play Real Time Clock [00:01]: enabled
└─Real Time Clock alarm timer [rtc0]: enabled
Disabling the touchpad event was straightforward. The only difference from Gen 5 was the ID of the device.
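The equivalent rule, as a sketch, just swaps in the Gen 3 identifier:

# Same approach as on Gen 5, with ELAN0678 in place of ELAN0676
ACTION=="add", SUBSYSTEM=="i2c", KERNEL=="i2c-ELAN0678:00", ATTR{power/wakeup}="disabled"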
When it came to the TrackPoint or power source events, nothing was able to stop them from waking up the system, even after disabling all wakeup sources. Then I came across a hidden gem named amd_s2idle.py. This “S0i3/s2idle analysis script for AMD systems” is full of s2idle domain knowledge: where to look in /proc or /sys, how to enable debugging, and which parts of the logs matter.
By running the script, I got the following output around the unexpected wakeup.
$ sudo python3 ./amd_s2idle.py --debug-ec --duration 30
Debugging script for s2idle on AMD systems
💻 LENOVO 21CF21CFT1 (ThinkPad T14 Gen 3) running BIOS 1.56 (R23ET80W (1.56 )) released 10/28/2024 and EC 1.32
🐧 Ubuntu 24.04.1 LTS
🐧 Kernel 6.11.0-12-generic
🔋 Battery BAT0 (Sunwoda ) is operating at 90.91% of design
Checking prerequisites for s2idle
✅ Logs are provided via systemd
✅ AMD Ryzen 7 PRO 6850U with Radeon Graphics (family 19 model 44)
...
Suspending system in 0:00:02
Suspending system in 0:00:01
Started at 2025-01-04 00:46:53.063495 (cycle finish expected @ 2025-01-04 00:47:27.063532)
Collecting data in 0:00:02
Collecting data in 0:00:01
Results from last s2idle cycle
💤 Suspend count: 1
💤 Hardware sleep cycle count: 1
○ GPIOs active: ['0']
🥱 Wakeup triggered from IRQ 9: ACPI SCI
🥱 Wakeup triggered from IRQ 7: GPIO Controller
🥱 Woke up from IRQ 7: GPIO Controller
❌ Userspace suspended for 0:00:14.031448 (< minimum expected 0:00:27)
💤 In a hardware sleep state for 0:00:10.566894 (75.31%)
🔋 Battery BAT0 lost 10000 µWh (0.02%) [Average rate 2.57W]
Explanations for your system
🚦 Userspace wasn't asleep at least 0:00:30
The system was programmed to sleep for 0:00:30, but woke up prematurely.
This typically happens when the system was woken up from a non-timer based source.
If you didn't intentionally wake it up, then there may be a kernel or firmware bug
I compared all the logs generated by the power button, power source, TrackPoint, and touchpad events. Except for the touchpad event, everything was coming from GPIO pin #0, and there was no further information to distinguish those wakeup triggers. I ended up with the drastic approach of ignoring wakeup triggers from GPIO pin #0 completely, using the following kernel option.
gpiolib_acpi.ignore_wake=AMDI0030:00@0
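To make that option persistent, one way (assuming a GRUB-based setup such as Ubuntu’s) is:

# /etc/default/grub: append the option to the kernel command line, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash gpiolib_acpi.ignore_wake=AMDI0030:00@0"
sudo update-grub
sudo reboot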
And I get the line on each boot.
kernel: amd_gpio AMDI0030:00: Ignoring wakeup on pin 0
That comes with obvious downsides. The system no longer wakes up frequently, which is good. However, nothing can wake it up after it goes into suspend: opening the lid, pressing the power button or hitting any key is simply ignored, since they all go through GPIO pin #0. In the end, I had to explicitly re-enable the touchpad as a wakeup source so the system can be woken by tapping the touchpad. It’s far from ideal, but the touchpad is less sensitive than the TrackPoint, so I will keep it that way.
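For reference, re-enabling the touchpad is the same kind of rule with the attribute flipped (a sketch, using the Gen 3 device ID):

# Allow the touchpad (and only the touchpad) to wake the system again
ACTION=="add", SUBSYSTEM=="i2c", KERNEL=="i2c-ELAN0678:00", ATTR{power/wakeup}="enabled"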
I have released more core24 snaps to --edge for your testing pleasure. If you find any bugs please report them at bugs.kde.org and assign them to me. Thanks!
I hate asking but I am unemployable with this broken arm fiasco and 6 hours a day hospital runs for treatment. If you could spare anything it would be appreciated! https://gofund.me/573cc38e
A lot has happened in 2024 for the Incus project, so I thought it’d be interesting to see where we started, what we did and where we ended up after that very busy year, then look forward to what’s next in 2025!
Where we started
We began 2024 right on the heels of the Incus 0.4 release at the end of December 2023.
With LXD having been relicensed at the end of 2023, effectively everything that made it into Incus in 2024 originated directly from the Incus community. There is one small exception to that: LXD 5.0 LTS still saw some activity and, as that branch remains under Apache 2.0, we were able to import a few commits (83 to be exact) from it.
Incus 6.0 LTS was released at the beginning of April, alongside LXC and LXCFS 6.0 LTS. All of which get 5 years of security support.
That was a huge milestone for Incus as it now allowed production users who don’t feel like going through an update cycle every month to switch over to Incus 6.0 LTS and have a stable production release for the years to come.
It also provides a much easier packaging target for Linux distributions as the monthly releases can be tricky to follow, especially when they introduce new dependencies.
Today, Incus 6.0 LTS represents around 50% of the Incus user base.
Notable feature additions
It’s difficult to come up with a list of the most notable new features because so much happened all over the place and deciding what’s notable ends up being very personal and subjective, depending on one’s usage of Incus, but here are a few!
Application container support (OCI), gives us the ability to natively run Docker containers on Incus
Clustered LVM storage backend, adds support for iSCSI/NVMEoTCP/FC storage in clusters
Network integrations (OVN inter-connect), allows for cross-cluster overlay networking
Automatic cluster re-balancing, simplifies operation of large clusters
Performance improvements
As more and more users run very large Incus systems, a number of performance issues were noticed and have been fixed.
An early one was related to how Incus handled OVN. The old implementation relied on the OVN command line tools to drive OVN database changes. This is incredibly inefficient as each call to those tools would require new TLS handshakes with all database servers, tracking down the leader, fetching a new copy of the database, performing a trivial operation and exiting. The new implementation uses a native database client directly in Incus which maintains a constant connection with the database, gets notified of changes and can instantly perform any needed configuration changes.
Then there were 2-3 different cases of database performance issues. Two of them were caused by our auto-generated database helpers which weren’t very smart about handling of profiles, effectively causing a situation where performance would get exponentially worse as more profiles would be present in the database. Addressing this issue resulted in dramatic performance improvement for users operating with hundreds or even thousands of profiles.
Another was related to loading instances on Incus startup, specifically loading the device definitions to check whether anything needed to be done at startup. This logic was always hitting configuration validation, which can be costly; in this case, so costly that Incus would fail to start within the time allotted by the init system (10 minutes). After some fixes to that logic, the affected system, which was running over 2000 virtual machines (on a single server) at the time, is now able to process all running VMs in just 10-15s.
On top of those issues, special attention was also put in optimizing resource usage on large systems, especially systems with multiple NUMA nodes, supporting basic NUMA balancing of virtual machines as well as selecting the best GPU devices based on NUMA cost.
Distribution integration
Back at the beginning of 2024, Incus was only available through my own packages for Debian or Ubuntu, or through native packages on Gentoo and NixOS.
This has changed considerably through 2024 with Incus now being readily available on:
Alpine Linux
Arch Linux
Chimera Linux
Debian
Fedora
Gentoo
NixOS
openSUSE
Rocky Linux
Ubuntu
Void Linux
Additionally, it’s also available as a Docker container to run on most any other platforms as well as available on MacOS through Colima. The client tool itself is available everywhere that Go supports.
Deployment tooling
Terraform/OpenTofu provider
The Incus Terraform/OpenTofu provider has seen quite a lot of activity this year.
We’re slowly headed towards a 1.0 release for it, basically ensuring that it can drive every single Incus feature and that its resources are defined in a clear and consistent way.
There is only one issue left in the 1.0 release milestone and there is an open pull request for it, so we are very close to where we want to be as far as feature coverage goes. With a few more bugfixes here and there, we should have that 1.0 release out in the coming weeks to a month!
incus-deploy
incus-deploy was introduced in February and is basically a collection of Ansible playbooks and Terraform configurations that allows for easy deployment of Incus, whether standalone or clustered, and whether for testing/development or production.
This is commonly used by the Incus team to quickly deploy test clusters, complete with Ceph, OVN, clustered LVM, … all in a very reproducible way.
Incus OS
While incus-deploy provides an automated way to deploy Incus on top of traditional Linux servers, Incus OS is working on providing a solution for those who don’t want to have to deal with maintaining traditional Linux servers.
This is a fully immutable OS image, kept as minimal as possible and solely focused on running Incus.
It heavily relies on systemd tooling to provide a secure environment, starting from SecureBoot signing, to having every step of the boot be TPM measured, to having storage encrypted using that TPM state and the entire read-only disk image being verified through dm-verity.
The end result is an extremely secure and locked down environment which is designed for just one thing, running Incus!
We’re getting close to having something ready for early adopters with automated builds and update logic now working, but it will be a few more weeks before it’s safe/useful to install on a server.
Where we ended up
Over that year, Incus really turned into a full-fledged Open Source project and community.
We have kept on with our release cadence, pushing out a new feature release every month while very actively backporting bugfixes and smaller improvements to our LTS release.
Distributions have done a great job at getting Incus packaged, making it natively available just about everywhere (we’re still waiting on solid EPEL packaging).
Our supporting projects like terraform-provider-incus, incus-deploy and incus-os are making it easier than ever to deploy and operate large scale Incus clusters as well as providing a simpler, more repeatable way of running Incus.
2024 was a very very good year for Incus!
What’s coming in 2025
Looking ahead, 2025 has the potential to be an even better year for us!
On the Incus front, there is no single huge feature to look forward to, just continual improvement, whether for containers, VMs, networking or clustering. We have a lot of small new features and polishing in mind which will help fill in some of the current gaps and provide a nice and consistent experience.
But it’s on the supporting projects that a lot of the potential now rests.
This will hopefully be the year of Incus OS, making installing Incus as easy as writing a file to a USB stick, booting a machine from it and accessing it over the network. Want to make a cluster? No problem, just boot a few more machines into Incus OS and join them together as a cluster!
But we’re also going to be expanding incus-deploy. It’s currently doing a good job at deploying Incus on Ubuntu servers with Ansible but we want to expand that to also cover Debian and some of the RHEL derivatives so we can cover the majority of our current production users with it. On top of that, we want to also have incus-deploy handle setting up the common support services used by Incus clusters, typically OpenFGA, Keycloak, Grafana, Prometheus and Loki.
We also want to improve our testing and development lab, add more systems, add the ability to test on more architectures and easily test more complex features, whether it’s 100Gb/s+ networking with full hardware offload or confidential computing features like AMD SEV.
Sovereign Tech Fund
Thankfully a lot of that is going to be made a whole lot easier thanks to funding from the Sovereign Tech Fund, which is going to be supporting a variety of Incus-related projects, especially focusing on the kind of work that’s not particularly exciting but is very much critical to the proper running of a project like ours.
This includes a big refresh of our testing and development lab, work on our LTS releases, new security features through the stack, improved support for other Linux distributions and OSes across our projects and more!
Most of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via Liberapay (thanks!).
OpenSSH
I issued a bookworm update with a number of fixes that had accumulated over the last year, especially fixing GSS-API key exchange, which was quite broken in bookworm.
base-passwd
A few months ago, the adduser maintainer started a discussion with me (as the base-passwd maintainer) and the shadow maintainer about bringing all three source packages under one team, since they often need to cooperate on things like user and group names. I agreed, but hadn’t got round to doing anything about it until recently. I’ve now officially moved it under team maintenance.
debconf
Gioele Barabucci has been working on eliminating duplicated code between debconf and cdebconf, ultimately with the goal of migrating to cdebconf (which I’m not sure I’m convinced of as a goal, but if we can make improvements to both packages as part of working towards it then there’s no harm in that). I finally got round to reviewing and merging confmodule changes in each of debconf and cdebconf. This caused an installer regression due to a weirdness in cdebconf-udeb’s packaging, which I fixed - sorry about that!
I’ve also been dealing with a few patch submissions that had been in my queue for a long time, but more on that next month if all goes well.
Last month, I mentioned some progress on sorting out the multipart vs. python-multipart name conflict in Debian (#1085728), and said that I thought we’d be able to finish it soon. I was right! We got it all done this month:
The Python 3.13 transition continues, and last month we were able to add it to the supported Python versions in testing. (The next step will be to make it the default.) I fixed lots of problems in aid of this, including:
Sphinx 8.0 removed some old intersphinx_mapping syntax which turned out to still be in use by many packages in Debian. The fixes for this were individually trivial, but there were a lot of them:
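For context, the change looks roughly like this in a project’s conf.py (a sketch; the python entry and URL are just the usual example):

    # Old style, removed in Sphinx 8.0: a bare target URL with no name
    intersphinx_mapping = {
        'https://docs.python.org/3/': None,
    }

    # Current style: a name mapped to a (target, inventory) tuple
    intersphinx_mapping = {
        'python': ('https://docs.python.org/3/', None),
    }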
I updated the team’s library style guide to remove material related to Python 2 and early versions of Python 3, which is no longer relevant to any current Python packaging work.
Other Python upstream work
I happened to notice a Twisted upstream issue requesting the removal of the deprecated twisted.internet.defer.returnValue, realized it was still used in many places in Debian, and went on a PR-filing spree informed by codesearch to try to reduce the future impact of such a change on Debian:
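For those who haven’t met this API, the migration is mechanical; a sketch (the fetch_user helper is hypothetical):

    from twisted.internet import defer

    # Deprecated style: returnValue() raises an exception to smuggle
    # the result out of the generator.
    @defer.inlineCallbacks
    def get_user_old(db, user_id):
        row = yield db.fetch_user(user_id)
        defer.returnValue(row)

    # Modern style: on Python 3, a plain return from the generator
    # does the same job.
    @defer.inlineCallbacks
    def get_user_new(db, user_id):
        row = yield db.fetch_user(user_id)
        return row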
I removed groff’s Recommends: libpaper1 (#1091375, #1091376), since it isn’t currently all that useful and was getting in the way of a transition to libpaper2. I filed an upstream bug suggesting better integration in this area.
I often come across the need to keep my system from going to sleep, either indefinitely or until a process finishes. I can’t recall how I came across systemd-inhibit, but here’s my approach and a bit of motivation.
After some fiddling (not much really), it starts directly once I log in, and I will be using it instead of a fully fledged Plex or the like; I just want to stream some videos from time to time from my home PC to my iPad using VLC. :D
The Hack
systemd-inhibit --who=foursixnine --why="maybe there be dragons" --mode block \
bash -c 'while systemctl --user is-active -q rygel.service; do sleep 1h; done'
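In plain terms: systemd-inhibit holds a block-mode inhibitor lock for as long as the wrapped command keeps running. Here that command is a loop which checks once an hour whether the user’s rygel.service is still active, so the machine is kept awake exactly as long as the media server runs.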
Last week I was bitten by an interesting C feature. The following terminate function was expected to exit if okay was zero (false); however, it instead exited when a non-zero value was passed to it. The reason is the missing semicolon after the return statement.
The interesting part is that it compiles fine, because the void function terminate is allowed to return a void value, in this case the void return value of exit().
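Reconstructed from that description, the function would have looked something like this (the exact argument to exit() is a guess):

    #include <stdbool.h>
    #include <stdlib.h>

    /* Intended: return when okay is true, exit otherwise. */
    void terminate(bool okay)
    {
        if (okay)
            return          /* <- the missing semicolon */
        exit(EXIT_FAILURE);
    }

With the semicolon missing, the body parses as "return exit(EXIT_FAILURE);", so the process exits precisely when okay is true and returns normally when it is zero - the opposite of the intent.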
OCI (open container initiative) images are the standard format based on the original docker format. Each container image is represented as an array of ‘layers’, each of which is a .tar.gz. To unpack the container image, untar the first, then untar the second on top of the first, etc.
Several years ago, while we were working on a product which ships its root filesystem (and of course containers) as OCI layers, Tycho Andersen (https://tycho.pizza/) came up with the idea of ‘atomfs’ as a way to avoid some of the deficiencies of tar (https://www.cyphar.com/blog/post/20190121-ociv2-images-i-tar). In ‘atomfs’, the .tar.gz layers are replaced by squashfs (now optionally erofs) filesystems with dm-verity root hashes specified. Mounting an image now consists of mounting each squashfs, then merging them with overlay. Since we have the dm-verity root hash, we can ensure that the filesystem has not been corrupted without having to checksum the files before mounting, and there is no tar unpacking step.
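A rough sketch of the mount flow (not stacker’s actual implementation; device names, file names and hashes are placeholders):

    # Open each squashfs layer against its dm-verity root hash
    veritysetup open layer1.squashfs layer1 layer1.verity "$ROOT_HASH_1"
    veritysetup open layer2.squashfs layer2 layer2.verity "$ROOT_HASH_2"

    # Mount the verified, read-only layers
    mount -t squashfs /dev/mapper/layer1 /mnt/layers/1
    mount -t squashfs /dev/mapper/layer2 /mnt/layers/2

    # Merge them into a single root filesystem with overlayfs
    mount -t overlay overlay -o lowerdir=/mnt/layers/2:/mnt/layers/1 /mnt/rootfs

Any tampering with a layer then shows up as an I/O error at read time, because dm-verity checks each block against the Merkle tree rooted in the supplied hash.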
This past week, Ram Chinchani presented atomfs at the OCI weekly discussion, which you can see here https://www.youtube.com/watch?v=CUyH319O9hM starting at about 28 minutes. He showed a full use cycle, starting with a Dockerfile, building atomfs images using stacker, mounting them using atomfs, and then executing a container with lxc. Ram mentioned his goal is to have a containerd snapshotter for atomfs soon. I’m excited to hear that, as it will make it far easier to integrate into e.g. kubernetes.
I’m pleased to introduce uCareSystem 24.12.11, the latest version of the all-in-one system maintenance tool for Ubuntu, Linux Mint, Debian and its derivatives. This release brings some major changes in UI, fixes and improvements under the hood. Continuing on the path of the earlier release, in this release after many many … many … did […]
The new Firebuild release contains plenty of small fixes and a few notable improvements.
Experimental macOS support
The most frequently asked question from people getting to know Firebuild was if it worked on their Mac and the answer sadly used to be that well, it did, but only in a Linux VM. This was far from what they were looking for. 🙁
Linux and macOS have common UNIX roots, but porting Firebuild to macOS involved bigger challenges, like ensuring that dyld(1), macOS’s dynamic loader, initializes the preloaded interceptor library early enough to catch all interesting calls, and avoiding anything that uses malloc() or thread-local variables, which are not yet set up at that point.
Preloading libraries on Linux is really easy: running LD_PRELOAD=my_lib.so ls just works if the library exports the symbols to be interposed, while macOS employs multiple lines of defense to prevent applications from using unknown libraries. Firebuild’s guide for making DYLD_INSERT_LIBRARIES honored on Macs can be helpful for other projects that rely on injecting libraries, too.
Firebuild on macOS can already accelerate simple projects and rebuild itself with Xcode. Since Xcode introduces a lot of nondeterminism to the build, Firebuild can’t shine in acceleration with Xcode yet, but can provide nice reports to show which part of the build is the most time consuming and how each sub-command is called.
If you would like to try Firebuild on macOS please compile it from the GitHub repository for now. Precompiled binaries will be distributed on the Mac App Store and via CI providers. Contact us to get notified when those channels become available.
Dealing with the ‘Epochalypse’
Glibc’s API provides many functions with time parameters, and some of those functions are intercepted by Firebuild. Time parameters used to be passed as 32-bit values on 32-bit systems, preventing them from accurately representing timestamps after the year 2038, which is known as the Y2038 problem or the Epochalypse.
To deal with the problem, glibc 2.34 started providing new function symbol variants with 64-bit time parameters, e.g. clock_gettime64() in addition to clock_gettime(). The new 64-bit variants are used when compiling consumers of the API with _TIME_BITS=64 defined.
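To illustrate, a minimal sketch for a 32-bit system with glibc >= 2.34 (on 64-bit systems time_t has always been 64-bit and nothing changes):

    /* Compile with: gcc -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 demo.c */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec ts;
        /* With _TIME_BITS=64, glibc redirects this call to the
           64-bit symbol variant (__clock_gettime64). */
        clock_gettime(CLOCK_REALTIME, &ts);
        /* 4 bytes without _TIME_BITS=64 on 32-bit systems, 8 with it. */
        printf("sizeof(time_t) = %zu\n", sizeof(time_t));
        return 0;
    }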
Processes intercepted by Firebuild may have been compiled with or without _TIME_BITS=64, thus libfirebuild now provides both variants on affected systems running glibc >= 2.34, to work safely with binaries using 64-bit and 32-bit time representations.
Many Linux distributions have already stopped supporting 32-bit architectures, but Debian and Ubuntu still support armhf, for example, where the Y2038 problem still applies. Both Debian and Ubuntu performed a transition rebuilding every library (and their reverse dependencies) with -D_TIME_BITS=64 (which also requires -D_FILE_OFFSET_BITS=64) set where the libraries exported symbols that changed when switching to 64-bit time representation (thanks to Steve Langasek for driving this!). Thanks to the transition most programs are ready for 2038, but interposer libraries are trickier to fix, and if you maintain one it might be a good idea to check if it works well with both 32-bit and 64-bit libraries. Faketime, for example, is not fixed yet; see #1064555.
Select passed through environment variables with regular expressions
Firebuild filters out most of the environment variables set when starting a build, to make the build more reproducible and achieve a higher cache hit rate. Extra environment variables to pass through can be specified on the command line one by one, but with many similarly named variables this may become hard to maintain. With regular expressions, this just became easier:
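Something along these lines; treat the exact option syntax as an assumption based on Firebuild’s -o config-override mechanism, and check firebuild(1) for the authoritative form:

    # Pass through every MYPROJECT_* variable with a single regular
    # expression instead of listing each variable individually
    firebuild -o 'env_vars.pass_through += "MYPROJECT_.*"' make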
The new bug templates feature in Launchpad aims to streamline the bug reporting process, making it more efficient for both users and project maintainers.
In the past, Launchpad provided only a basic description field for filing bug reports. This often led to incomplete or vague submissions, as users might not include essential details or steps to reproduce an issue. This could slow down the debugging process when fixing bugs.
To improve this, we are introducing bug templates. These allow project maintainers to guide users when reporting bugs. By offering a structured template, users are prompted to provide all the necessary information, which helps to speed up the development process.
To start using bug templates in your project, simply follow these steps:
Access your project’s bug page view.
Select ‘Configure bugs’.
A field showing the bug template will prompt you to fill in your desired template.
Save the changes. The template will now be available to users when they report a new bug for your project.
For now, only a default bug template can be set per project. Looking ahead, the idea is to expand this by introducing multiple bug templates per project, as well as templates for other content types such as merge proposals or answers. This will allow project maintainers to define various templates for different purposes, making the open-source collaboration process even more efficient.
Additionally, we will introduce Markdown support, allowing maintainers to create structured and visually clear templates using features such as headings, lists, or code blocks.
I’m pleased to introduce uCareSystem 24.11.17, the latest version of the all-in-one system maintenance tool. This release brings some minor fixes and improvements with visual changes that you will love. I’m excited to share the details of the latest update to uCareSystem! With this release, the focus is on refining the user experience and modernizing […]
In basically every engineering organization I’ve ever regarded as particularly high functioning, I’ve sat through one specific recurring conversation – a conversation about “complexity”. Things are good or bad because they are or aren’t complex, architectures need to be redone because they’re too complex – some refactor of whatever it is won’t work because it’s too complex.
You may have even been a part of some of these conversations – or even been
the one advocating for simple light-weight solutions. I’ve done it. Many times.
Rarely, if ever, do we talk about complexity within its rightful context –
complexity for whom. Is a solution complex because it’s complex for the
end user? Is it complex if it’s complex for an API consumer? Is it complex if
it’s complex for the person maintaining the API service? Is it complex if it’s
complex for someone outside the team maintaining it to understand?
Complexity within a problem domain, I’ve come to believe, is fairly zero-sum – there’s a fixed amount of complexity in the problem to be solved, and you can choose either to solve it yourself or to leave it for those downstream of you to solve on their own.
That being said, while I believe there is a lower bound in complexity to
contend with for a problem, I do not believe there is an upper bound to the
complexity of solutions possible. It is always possible, and in fact, very
likely that teams create problems for themselves while trying to solve a
problem. The rest of this post is talking to the lower bound. When getting
feedback on an early draft of this blog post, I’ve been informed that Fred
Brooks coined a term for what I call “lower bound complexity” – “Essential
Complexity”, in the paper
“No Silver Bullet—Essence and Accident in Software Engineering”,
which is a better term and can be used interchangeably.
Complexity Culture
In a large enough organization, where the team is high functioning enough to
have and maintain trust amongst peers, members of the team will specialize.
People will begin to engage with subsets of the work to be done, and begin to
have their efficacy measured against that part of the organization’s problems.
Incentives shift, and over time it becomes increasingly likely that two
engineers may have two very different priorities when working on the same
system together. Someone accountable for uptime and tasked with responding to
outages will begin to resist changes. Someone accountable for rapidly
delivering features will resist gates between them and their users. Companies
(either wittingly or unwittingly) will deal with this by tasking engineers with
both production (feature development) and operational tasks (maintenance), so
the difference in incentives isn’t usually as bad as it could be.
When we get a bunch of folks from far-flung corners of an organization in a
room, fire up a slide deck and throw up some aspirational to-be architecture
diagram in order to get a sign-off to solve some problem (be it someone needs a
credible promotion packet, new feature needs to get delivered, or the system
has begun to fail and needs fixing), the initial reaction will, more often than
I’d like, start to devolve into a discussion of how this is going to introduce
a bunch of complexity, going to be hard to maintain, why can’t you make it
less complex?
Right around here is when I start to try and contextualize the conversation happening around me – understand what complexity it is that’s being discussed, and who is taking on that burden. Think about who should be owning that problem, and work through the tradeoffs involved. Is it best solved here, or left to consumers (be they other systems, developers, or users)? Should something become an API call’s optional param, taking on all the edge-cases and maintenance, or should users have to implement the logic using the data you return (leaving everyone else to take on all the edge-cases and maintenance)? Should you process the data, or require the user to preprocess it for you?
Frequently it’s right to make an active and explicit decision to simplify and
leave problems to be solved downstream, since they may not actually need to be
solved – or perhaps you expect consumers will want to own the specifics of
how the problem is solved, in which case you leave lots of documentation and
examples. Many other times, especially when it’s something downstream consumers
are likely to hit, it’s best solved internal to the system, since the only
things that can come of leaving it unsolved are bugs, frustration and
half-correct solutions. This is a grey-space of tradeoffs, not a clear decision
tree. No one wants the software manifestation of a katamari ball or a junk
drawer, nor does anyone want a half-baked service unable to handle the simplest
use-case.
Head-in-sand as a Service
Popoffs about how complex something is, are, to a first approximation, best
understood as meaning “complicated for the person making comments”. A lot of
the #thoughtleadership believe that an AWS hosted EKS k8s cluster running
images built by CI talking to an AWS hosted PostgreSQL RDS is not complex.
They’re right. Mostly right. This is less complex – less complex for them.
It’s not, however, without complexity and its own tradeoffs – it’s just
complexity that they do not have to deal with. Now they don’t have to
maintain machines that have pesky operating systems or hard drive failures.
They don’t have to deal with updating the version of k8s, nor ensuring the
backups work. No one has to push some artifact to prod manually. Deployments
happen unattended. You click a button and get a cluster.
On the other hand, developers outside the ops function need to deal with
troubleshooting CI, debugging access control rules encoded in turing complete
YAML, permissions issues inside the cluster due to whatever the fuck a service
mesh is, everyone needs to learn how to use some k8s tools they only actually
use during a bad day, likely while doing some x.509 troubleshooting to
connect to the cluster (an internal only endpoint; just port forward it) – not
to mention all sorts of rules to route packets to their project (a single
repo’s binary being run in 3 containers on a single vm host).
Beyond that, there’s the invisible complexity – complexity on the interior of
a service you depend on. I think about the dozens of teams maintaining the EKS
service (which is either run on EC2 instances, or alternately, EC2 instances in
a trench coat, moustache and even more shell scripts), the RDS service (also
EC2 and shell scripts, but this time accounting for redundancy, backups,
availability zones), scores of hypervisors pulled off the shelf (xen, kvm)
smashed together with the ones built in-house (firecracker, nitro, etc)
running on hardware that has to be refreshed and maintained continuously. Every
request processed by network ACL rules, AWS IAM rules, security group rules,
using IP space announced to the internet wired through IXPs directly into ISPs.
I don’t even want to begin to think about the complexity inherent in how those
switches are designed. Shitloads of complexity to solve problems you may or
may not have, or even know you had.
What’s more complex? An app running in an in-house 4u server racked in the
office’s telco closet in the back running off the office Verizon line, or an
app running four hypervisors deep in an AWS datacenter? Which is more complex
to you? What about to your organization? In total? Which is more prone to
failure? Which is more secure? Is the complexity good or bad? What type of
Complexity can you manage effectively? Which threaten the system? Which
threaten your users?
COMPLEXIVIBES
This extends beyond Engineering. Decisions regarding “what tools are we able to
use” – be them existing contracts with cloud providers, CIO mandated SaaS
products, a list of the only permissible open source projects – will incur
costs in terms of expressed “complexity”. Pinning open source projects to a
fixed set makes SBOM production “less complex”. Using only one SaaS provider’s
product suite (even if it’s terrible, because it has all the types of tools you need) makes accreditation “less complex”. If all you have is a contract with Pauly T’s lowest price technically acceptable artisanal cloudary and
haberdashery, the way you pay for your compute is “less complex” for the CIO
shop, though you will find yourself building your own hosted database template,
mechanism to spin up a k8s cluster, and all the operational and technical
burden that comes with it. Or you won’t and make it everyone else’s problem in
the organization. Nothing you can do will solve for the fact that you must
now deal with this problem somewhere because it was less complicated for the
business to put the workloads on the existing contract with a cut-rate vendor.
Suddenly, the decision to “reduce complexity” because of an existing contract
vehicle has resulted in a huge amount of technical risk and maintenance burden
being onboarded. Complexity you would otherwise externalize has now been taken
on internally. With large enough organizations (specifically, in this case,
I’m talking about you, bureaucracies), this is largely ignored or accepted as
normal since the personnel cost is understood to be free to everyone involved.
Doing it this way is more expensive, more work, less reliable and less
maintainable, and yet, somehow, is, in a lot of ways, “less complex” to the
organization. It’s particularly bad with bureaucracies, since screwing up a
contract will get you into much more trouble than delivering a broken product,
leaving basically no reason for anyone to care to fix this.
I can’t shake the feeling that for every story of technical mandates gone
awry, somewhere just
out of sight there’s a decisionmaker optimizing for what they believe to be the
least amount of complexity – least hassle, fewest unique cases, most
consistency – as they can. They freely offload complexity from their
accreditation and risk acceptance functions through mandates. They will never
have to deal with it. That does not change the fact that someone does.
TC;DR (TOO COMPLEX; DIDN’T REVIEW)
We wish to rid ourselves of systemic Complexity – after all, complexity is
bad, simplicity is good. Removing upper-bound own-goal complexity (“accidental
complexity” in Brooks’s terms) is important, but once you hit the lower bound
complexity, the tradeoffs become zero-sum. Removing complexity from one part of
the system means that somewhere else - maybe outside your organization or in a
non-engineering function - must grow it back. Sometimes, the opposite is the
case, such as when a previously manual business process is automated. Maybe that’s a
good idea. Maybe it’s not. All I know is that what doesn’t help the situation
is conflating complexity with everything we don’t like – legacy code,
maintenance burden or toil, cost, delivery velocity.
Complexity is not the same as proclivity to failure. The most reliable
systems I’ve interacted with are unimaginably complex, with layers of internal
protection to prevent complete failure. This has its own set of costs which
other people have written about extensively.
Complexity is not cost. Sometimes the cost of taking all the complexity
in-house is less, for whatever value of cost you choose to use.
Complexity is not absolute. Something simple from one perspective may
be wildly complex from another. The impulse to burn down complex sections of
code is helpful to have generally, but
sometimes things are complicated for a reason,
even if that reason exists outside your codebase or organization.
Complexity is not something you can remove without introducing complexity
elsewhere. Just as not making a decision is a decision itself; choosing to
require someone else to deal with a problem rather than dealing with it
internally is a choice that needs to be considered in its full context.
Next time you’re sitting through a discussion and someone starts to talk about
all the complexity about to be introduced, I want to pop up in the back of your
head, politely asking what does complex mean in this context? Is it lower
bound complexity? Is this complexity desirable? Is what they’re saying mean
something along the lines of I don’t understand the problems being solved, or
does it mean something along the lines of this problem should be solved
elsewhere? Do they believe this will result in more work for them in a way that
you don’t see? Should this not solved at all by changing the bounds of what we
should accept or redefine the understood limits of this system? Is the perceived
complexity a result of a decision elsewhere? Who’s taking this complexity on,
or more to the point, is failing to address complexity required by the problem
leaving it to others? Does it impact others? How specifically? What are you
not seeing?
I decided to be more selective and remove those that did very poorly at 1.5G, which was most.
Ubuntu - booted but desktop not stable, took 1.5 minutes to load Firefox
Xubuntu-minimal - does not include a web browser so I can't test it further. Snap is preinstalled even though no snap apps are - installing a web browser worked, but it couldn't start.
Manjaro KDE - Desktop loads, but browser doesn't
Xubuntu - laggy when Firefox is opened, can't load sites
Ubuntu MATE - laggy when Firefox is opened, can't load sites
Kubuntu - laggy when Firefox is opened, can't load sites
Linux Mint 22 - desktop loads, browser isn't responsive
Fedora - video is a bit laggy, but watchable. EndlessOS with Chromium is the smoothest and most responsive watching YouTube.
For fun, let's look at startup time with 2GB (with me hitting buttons as needed to open a folder).
Conclusion
Lubuntu lowered its memory usage for loading a desktop from 585M in 2020 to 450M! Kudos to the Lubuntu team!
Both Fedora and Endless desktops also worked in less memory than in 2020!
Lubuntu, Fedora and Endless all used zram.
Chromium has definitely improved its memory usage; last time, Endless got dinged for using it. Now it appears to work better than Firefox.
Networking is a complex topic, and there is lots of confusion around the definition of an “online” system. Sometimes the boot process gets delayed up to two minutes, because the system still waits for one or more network interfaces to be ready. Systemd provides the network-online.target that other service units can rely on, if they are deemed to require network connectivity. But what does “online” actually mean in this context, is a link-local IP address enough, do we need a routable gateway and how about DNS name resolution?
The requirements for an “online” network interface depend very much on the services using an interface. For some services it might be good enough to reach their local network segment (e.g. to announce Zeroconf services), while others need to reach domain names (e.g. to mount a NFS share) or reach the global internet to run a web server. On the other hand, the implementation of network-online.target varies, depending on which networking daemon is in use, e.g. systemd-networkd-wait-online.service or NetworkManager-wait-online.service. For Ubuntu, we created a specification that describes what we as a distro expect an “online” system to be. Having a definition in place, we are able to tackle the network-online-ordering issues that got reported over the years and can work out solutions to avoid delayed boot times on Ubuntu systems.
In essence, we want systems to reach the following networking state to be considered online:
Do not wait for “optional” interfaces to receive network configuration
Have IPv6 and/or IPv4 “link-local” addresses on every network interface
Have at least one interface with a globally routable connection
Have functional domain name resolution on any routable interface
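For service authors, depending on that state is the usual systemd pattern; a minimal sketch for a hypothetical my-webapp.service (though see the note of caution further down):

    [Unit]
    Description=Web application that needs a routable connection
    Wants=network-online.target
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/my-webapp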
A common implementation
NetworkManager and systemd-networkd are two very common networking daemons used on modern Linux systems. But they originate from different contexts and therefore show different behaviours in certain scenarios, such as wait-online. Luckily, on Ubuntu we already have Netplan as a unification layer on top of those networking daemons, that allows for common network configuration, and can also be used to tweak the wait-online logic.
With the recent release of Netplan v1.1 we introduced initial functionality to tweak the behaviour of the systemd-networkd-wait-online.service, as used on Ubuntu Server systems. When Netplan is used to drive the systemd-networkd backend, it will emit an override configuration file in /run/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf, listing the specific non-optional interfaces that should receive link-local IP configuration. In parallel to that, it defines a list of network interfaces that Netplan detected to be potential global connections, and waits for any of those interfaces to reach a globally routable state.
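On the configuration side, this builds on Netplan’s existing optional flag; a minimal sketch (interface names are examples):

    network:
      version: 2
      ethernets:
        eth0:
          dhcp4: true      # a candidate global connection, waited for
        eth1:
          dhcp4: true
          optional: true   # wait-online will not block on this one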
In addition to the new features implemented in Netplan, we reached out to upstream systemd, proposing an enhancement to the systemd-networkd-wait-online service, integrating it with systemd-resolved to check for the availability of DNS name resolution. Once this is implemented upstream, we’re able to fully control the systemd-networkd backend on Ubuntu Server systems, to behave consistently and according to the definition of an “online” system that was lined out above.
Future work
The story doesn’t end there, because Ubuntu Desktop systems are using NetworkManager as their networking backend. This daemon provides its very own nm-online utility, utilized by the NetworkManager-wait-online systemd service. It implements a much higher-level approach, looking at the networking daemon in general instead of the individual network interfaces. By default, it considers a system to be online once every “autoconnect” profile got activated (or failed to activate), meaning that either an IPv4 or an IPv6 address got assigned.
There are considerable enhancements to be implemented to this tool, for it to be controllable in a fine-granular way similar to systemd-networkd-wait-online, so that it can be instructed to wait for specific networking states on selected interfaces.
A note of caution
Making a service depend on network-online.target is considered an antipattern in most cases. This is because networking on Linux systems is very dynamic and the systemd target can only ever reflect the networking state at a single point in time. It cannot guarantee that this state will be maintained for the rest of the system’s uptime, and it has the potential to delay the boot process considerably. Cables can be unplugged, wireless connectivity can drop, or remote routers can go down at any time, affecting the connectivity state of your local system. Therefore, “instead of wondering what to do about network.target, please just fix your program to be friendly to dynamically changing network configuration.” [source].
The Xubuntu team is happy to announce the immediate release of Xubuntu 24.10.
Xubuntu 24.10, codenamed Oracular Oriole, is a regular release and will be supported for 9 months, until July 2025.
Xubuntu 24.10, featuring the latest updates from Xfce 4.19 and GNOME 47.
Xubuntu 24.10 features the latest updates from Xfce 4.19, GNOME 47, and MATE 1.26. For Xfce enthusiasts, you’ll appreciate the new features and improved hardware support found in Xfce 4.19. Xfce 4.19 is the development series for the next release, Xfce 4.20, due later this year. As pre-release software, you may encounter more bugs than usual. Users seeking a stable, well-supported environment should opt for Xubuntu 24.04 “Noble Numbat” instead.
The final release images for Xubuntu Desktop and Xubuntu Minimal are available as torrents and direct downloads from xubuntu.org/download/.
As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.
We’d like to thank everybody who contributed to this release of Xubuntu!
Highlights and Known Issues
Highlights
Xfce 4.19 is included as a development preview of the upcoming Xfce 4.20. Among several new features, it features early Wayland support and improved scaling.
GNOME 47 apps, including Disk Usage Analyzer (baobab) and Sudoku (gnome-sudoku), include a refreshed appearance and usability improvements
Known Issues
The shutdown prompt may not be displayed at the end of the installation. Instead you might just see a Xubuntu logo, a black screen with an underscore in the upper left hand corner, or just a black screen. Press Enter and the system will reboot into the installed environment. (LP: #1944519)
Xorg crashes and the user is logged out after logging in or switching users on some virtual machines, including GNOME Boxes. (LP: #1861609)
You may experience choppy audio or poor system performance while playing audio, but only in some virtual machines (observed in VMware and VirtualBox)
OEM installation options are not currently supported or available, but will be included for Xubuntu 24.04.1
For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.
The main Ubuntu Release Notes cover many of the other packages we carry and more generic issues.
Support
For support with the release, navigate to Help & Support for a complete list of methods to get help.
The Kubuntu Team is happy to announce that Kubuntu 24.10 has been released, featuring the new and beautiful KDE Plasma 6.1: simple by default, powerful when needed.
Codenamed “Oracular Oriole”, Kubuntu 24.10 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.
Under the hood, there have been updates to many core packages, including a new 6.11 based kernel, KDE Frameworks 5.116 and 6.6.0, KDE Plasma 6.1 and many updated KDE gear applications.
Kubuntu 24.10 with Plasma 6.1
Kubuntu has seen many updates for other applications, both in our default install, and installable from the Ubuntu archive.
Applications for core day-to-day usage are included and updated, such as Firefox, and LibreOffice.
For a list of other application updates, and known bugs be sure to read our release notes.
Wayland as default Plasma session.
The Plasma Wayland session is now the default option in SDDM (the display manager login screen). An X11 session can be selected instead if desired. The last used session type will be remembered, so you do not have to switch type on each login.
Note: For upgrades from 24.04, there may be a delay of a few hours to days between the official release announcement and the Ubuntu Release Team enabling upgrades.
Wake up and hear the birds sing! Thanks to the hard work from our contributors, Lubuntu 24.10 has been released. With the codename Oracular Oriole, Lubuntu 24.10 is the 27th release of Lubuntu, the 13th release of Lubuntu with LXQt as the default desktop environment. Download and Support Lifespan With Lubuntu 24.10 being an interim […]
Ubuntu MATE 24.10 is more of what you like, stable MATE Desktop on top of current Ubuntu. Read on to learn more 👓️
Ubuntu MATE 24.10
Thank you! 🙇
My sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏
I’d like to acknowledge the close collaboration with the Ubuntu Foundations team and the Ubuntu flavour teams, in particular Erich Eickmeyer who pushed critical fixes while I was travelling.
Thank you! 💚
Ships stable MATE Desktop 1.26.2 with a handful of bug fixes 🐛
Switched back to Slick Greeter (replacing Arctica Greeter) due to a race condition in the boot process which resulted in the display manager failing to initialise.
Returning to Slick Greeter reintroduces the ability to easily configure the login screen via a graphical application, something users have been requesting be re-instated 👍
Ubuntu MATE 24.10 .iso 📀 is now 3.3GB 🤏, down from 4.1GB in the 24.04 LTS release.
This is thanks to some fixes in the installer that no longer require as many packages in the live-seed.
Login Window
What didn’t change since the Ubuntu MATE 24.04 LTS?
If you follow upstream MATE Desktop development, then you’ll have noticed that Ubuntu MATE 24.10 doesn’t ship with the recently released MATE Desktop 1.28 🧉
I have prepared packaging for MATE Desktop 1.28, along with the associated components, but encountered some bugs and regressions 🐞 I wasn’t able to get things to a standard I’m happy to ship by default, so it is tried and true MATE 1.26.2 one last time 🪨
Major Applications
Accompanying MATE Desktop 1.26.2 🧉 and Linux 6.11 🐧 are Firefox 131 🔥🦊,
Celluloid 0.27 🎥, Evolution 3.54 📧, LibreOffice 24.8.2 📚
See the Ubuntu 24.10 Release Notes
for details of all the changes and improvements that Ubuntu MATE benefits from.
Download Ubuntu MATE 24.10
Ubuntu MATE 24.10 (Oracular Oriole) is available for PC/Mac users.
There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.