June 27, 2025

Canonical Livepatch is a security patching automation tool that delivers reboot-less security updates for the Linux kernel, architected to balance security with operational convenience. Livepatch remediates high and critical Common Vulnerabilities and Exposures (CVEs) with in-memory patches until the next package upgrade and reboot window. System administrators rely on Livepatch to secure mission-critical Ubuntu servers where uptime and security are both paramount.

Since the Linux kernel is an integral component of a running system, a fault in it would bring the entire machine to a halt. Two complementary security mechanisms safeguard against malicious code being inserted via Canonical’s live kernel patching functionality:

  1. Secure Boot ensures you’re running a trusted kernel
  2. Module signature verification ensures only trusted code is loaded into the kernel at runtime

Secure Boot ensures the trustworthiness of binaries by validating their signatures: they must be signed by a trusted source. It protects the Ubuntu machine by preventing user-space programs from installing untrusted bootloaders and binaries. Secure Boot validation in turn makes module signature verification a hard requirement for inserting code into the kernel at runtime.
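On an Ubuntu machine, you can check both safeguards from a shell. A minimal sketch (output varies by kernel build, and mokutil ships in the mokutil package):

# 1. Confirm Secure Boot is enabled
mokutil --sb-state

# 2. Confirm the running kernel enforces module signatures (if the file is present)
cat /sys/module/module/parameters/sig_enforce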

Livepatching the Linux kernel securely

There are multiple layers of protection ensuring Livepatch runs safely:

Firstly, the Livepatch Client is packaged and distributed as a self-updating snap application. Snap packages are tamper-proof, GPG-signed, compressed, read-only filesystems. The self-update mechanism is clever enough to roll back to the previous version if an upgrade fails. Snaps run in a sandboxed environment, and system access is denied by default. The Livepatch snap is strictly confined, and has granular access, through pre-defined snap interfaces, only to the areas of the system that are essential for its function.
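You can observe this confinement with snap’s own tooling; a quick sketch (interface names vary by release):

# Show how the Livepatch client is packaged and confined
snap info canonical-livepatch

# List the interfaces the strictly confined snap is allowed to use
snap connections canonical-livepatch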

Secondly, Canonical has implemented a certificate-based trust model to ensure Livepatch updates have been published by a trusted source, and not a third party with nefarious intent.

Certificate-based trust model for runtime code insertion

Livepatch implements a certificate-based trust chain wherein all patches must be cryptographically signed by Canonical. Certificates are embedded in all Linux kernels built by Canonical, and Livepatch updates are verified against these embedded certificates before being applied at runtime. Additionally, CA certificates are stored in bootloader packages to validate kernel signatures during the Secure Boot process, but this is a separate validation system from Livepatch module verification.

For this system to work over time, two certificates require periodic renewal. Client authentication certificates must be updated to successfully access content from Canonical’s servers, and the certificate in the Livepatch Client must match the module signing certificates embedded in the kernels. Launchpad plays a crucial role in the development, packaging, and maintenance of Ubuntu: its build farm compiles source code into .deb packages, and it hosts the CI/CD processes around maintaining a valid certificate for Livepatch.

The Livepatch and kernel engineering teams collaborate to ensure the kernels and the Livepatch Client use the appropriate certificate, and work with the Launchpad team to ensure the builds have been signed appropriately. Canonical’s kernel engineers package the updates distributed by the Livepatch Client, and the same machinery used for testing and validating official kernel builds is repurposed for testing and validating every Livepatch update. Every Livepatch update is distributed as a signed kernel module, and the kernel validates module signatures against its embedded certificates before applying the patch.

The public and private halves of the signing key pair must match for the kernel to continue receiving Livepatch updates. Canonical signs every kernel with a private key, and the corresponding public certificate is embedded in the kernel at build time. All kernel modules, including the patches distributed by Livepatch, are signed with the appropriate private key. When Livepatch applies updates, both the Livepatch Client and the kernel validate signatures using the embedded public certificate. A mismatch between the signing key and the certificate embedded in the kernel for module signature validation will prevent Livepatch modules from being applied: invalid Livepatch updates are simply rejected by the kernel during runtime signature verification.
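You can inspect this metadata yourself: every signed module carries signer fields that modinfo can print. A minimal sketch (the grep pattern is illustrative):

# Pick a currently loaded module and show its signature fields
mod=$(lsmod | awk 'NR==2 {print $1}')
modinfo "$mod" | grep -Ei 'signer|sig_key|sig_hashalgo'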

Conclusion

The chain of trust established through Secure Boot, which ultimately requires signed kernel modules, ensures bad actors cannot use Livepatch as an attack vector. Certificate expiry and renewal maintain the integrity of the trust chain and ensure continued authorization to receive patches. For critical and high kernel vulnerabilities, organizations of all sizes and personal users alike are turning to Livepatch to shrink the exploit window of their Ubuntu instances after kernel vulnerabilities are reported.

Ready to patch the Linux kernel securely, without downtime?

Zero-downtime patching is even better with zero surprises. Chat with the experts at Canonical to determine how Livepatch can improve your security posture.

Contact Us

on June 27, 2025 12:01 AM

June 26, 2025

Apache Spark is well known for distributing computation across multiple nodes by splitting data into partitions, with each CPU core processing a single partition at a time.

What’s less widely known is that it is possible to accelerate Spark with GPUs. Harnessing this power in the right situation brings immense advantages: it reduces infrastructure costs and the number of servers needed, speeds up query completion by up to 7x compared to traditional CPU computing to deliver results sooner, and does it all in the background without requiring changes to any existing Spark application code. We’re excited to share that our team at Canonical has enabled GPU support for Spark jobs using the NVIDIA RAPIDS Accelerator – a feature we’ve been developing to address real performance bottlenecks in large-scale data processing.

This blog will explain what advantages Spark can deliver on GPUs, how it delivers them, and when GPUs might not be the right option, and then guide you through launching Spark jobs with GPUs.

Why data scientists should care about Spark and GPUs

Running Apache Spark on GPUs is a notable opportunity to accelerate big data analytics and processing workloads by taking advantage of the specific strengths of GPUs. 

Unlike traditional CPUs, which typically have a small number of cores designed for sequential processing, GPUs are made up of thousands of smaller, power-efficient cores designed to execute threads in parallel at massive scale. This architectural difference makes GPUs well suited to the highly parallel operations common in Spark workloads. By offloading such operations to GPUs, Spark can improve performance significantly, typically accelerating computation by 2x to 7x compared to CPU-only environments. This markedly reduces time to insight for organizations.

In this regard, GPU acceleration in Apache Spark is a big advantage for data scientists as they transition from traditional analytics to AI applications. Standard Spark workloads are CPU-intensive, and while Spark’s distributed nature makes that computation extremely powerful, it may still not be enough to manage AI-powered analytics workloads.

With GPUs, on the other hand, data scientists can work at higher speed, greater data scale, and improved efficiency. This means data scientists can iterate faster, explore data more interactively, and provide actionable insights in near real-time, which is critical in today’s fast-paced decision-making environments.

Beyond raw speed, GPU acceleration also simplifies the data science workflow by combining data engineering and machine learning workloads on a single platform. Through Spark with GPU acceleration, users can efficiently perform data preparation, feature engineering, model training, and inference in one environment, without separate infrastructure or complicated data movement between systems. Consolidating workflows reduces operational complexity and speeds up end-to-end data science projects.

A third major advantage of running Spark on GPUs is lower operational expense. Given that GPUs offer much greater throughput per machine, companies can achieve equal – or better – results with fewer servers, which keeps costs down and reduces power consumption. This makes big-data analytics more affordable and sustainable – increasingly important concerns for enterprises.

Finally, all of this is achievable without rewriting code or modifying workflows, as technologies like the NVIDIA RAPIDS Accelerator integrate smoothly with Spark. Easier adoption removes a major barrier to unlocking the capabilities of GPUs, so users can prioritize rapid value delivery.

When should you rely on traditional CPUs?

It is important to note that not all workloads in Spark will benefit equally from GPU acceleration. 

Firstly, GPUs aren’t efficient for workloads on small data sets, since the overhead of transferring data between CPU and GPU memory can outweigh the benefit of GPU acceleration. Small workloads simply don’t expose enough parallelism to exploit the strengths of GPUs. Likewise, workloads that involve constant data shuffling within the cluster may not be well suited, because shuffling forces costly data movement between CPU and GPU memory, effectively slowing down operations.

Another good reason to stay on CPUs is if your Spark jobs rely significantly on user-defined functions that are not supported or optimized for execution on GPUs.

Similarly, if your workloads involve operations that work directly on Resilient Distributed Datasets (RDDs), GPUs might not be the best choice: the RAPIDS Accelerator currently cannot handle these operations and will run them on the CPU instead. Finally, you will also need to make sure that your environment meets the hardware and configuration requirements for GPU acceleration.

To find out whether GPU acceleration is useful in your chosen environment, it’s worth carefully profiling and benchmarking your workloads. 
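One low-effort starting point, assuming the RAPIDS Accelerator is already on your classpath, is to ask the plugin itself which operators would fall back to the CPU by setting its spark.rapids.sql.explain option (a sketch; reuse the submit command shown later in this post):

spark-client.spark-submit \
    ... \
    --conf spark.plugins=com.nvidia.spark.SQLPlugin \
    --conf spark.rapids.sql.explain=NOT_ON_GPU \
    ...

With this set, the driver log reports every operator that cannot be placed on the GPU, which makes it much easier to judge whether a given job is a good fit before committing to GPU hardware.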

How to launch Spark jobs with GPUs

Our charm for Apache Spark works with Kubernetes as a cluster manager, so to enable GPUs on Apache Spark we will need to work with pods and containers.


First, you will need to deploy Charmed Apache Spark’s OCI image that supports the Apache Spark RAPIDS plugin. Read our guide to find out how.

Once you’ve completed the deployment and you’re ready to launch your first job, you’ll need to create a pod template to limit the number of GPUs per container. To do so, create a pod manifest file (gpu_executor_template.yaml) with the following content:

apiVersion: v1
kind: Pod
spec:
  containers:
    - name: executor
      resources:
        limits:
          nvidia.com/gpu: 1

With the spark-client snap, we can submit the desired Spark job, adding some configuration options for enabling GPU acceleration:

spark-client.spark-submit \
    ... \ 
    --conf spark.executor.resource.gpu.amount=1 \
    --conf spark.task.resource.gpu.amount=1 \
    --conf spark.rapids.memory.pinnedPool.size=1G \
    --conf spark.plugins=com.nvidia.spark.SQLPlugin \
    --conf spark.executor.resource.gpu.discoveryScript=/opt/getGpusResources.sh \
    --conf spark.executor.resource.gpu.vendor=nvidia.com \
    --conf spark.kubernetes.container.image=ghcr.io/canonical/charmed-spark-gpu:3.4-22.04_edge \
    --conf spark.kubernetes.executor.podTemplateFile=gpu_executor_template.yaml \
    ...

With the Spark Client snap, you can configure the Apache Spark settings at the service account level so they automatically apply to every job. Find out how to manage options at the service account level in our guide.
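As a purely hypothetical sketch of what that could look like with the snap’s service-account-registry tooling (the exact subcommands and flags may differ, so treat this as illustrative and check the guide):

# Create a service account whose jobs inherit the GPU settings by default
spark-client.service-account-registry create \
    --username spark --namespace spark \
    --conf spark.plugins=com.nvidia.spark.SQLPlugin \
    --conf spark.executor.resource.gpu.amount=1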

Spark with GPUs: the takeaway

In short, NVIDIA RAPIDS GPU acceleration offers Apache Spark enormous performance boosts, enabling faster data processing and cost savings without code changes. This means data scientists can process bigger data sets and heavier models more efficiently, generating insights faster than before. Not all workloads benefit equally, however: small data sets, excessive data shuffling, or unsupported functions can limit GPU advantages. Careful profiling is needed to determine when GPUs are a cost-effective choice. Overall, Spark on GPUs offers a powerful way to accelerate data science and drive innovation.

on June 26, 2025 02:55 PM

E353 Momento Flector a Meio Vão

Podcast Ubuntu Portugal

This week we run our eyes over what’s new in Firefox 140, how we can free our memory from tabs, and how to plug in new search engines; we review the end of life of the Oracle Oriole, calculate when Miguel’s shelf will come crashing down, and visit the absence of any explanation about Flutter on the new Ubuntu developer portal and the addition of Wayland by default; we are eager to get our hands on a new rubbish bin, do penance over “fake news” from Denmark, go over the events on the calendar, and we even said very nice things about Google, from which we can only expect good things - move along, citizens, nothing to see here!

You know the drill: listen, subscribe, and share!

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel, and Tiago Carrondo, and edited by Senhor Podcast. The website is produced by Tiago Carrondo, and its open-source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. The sound effects in this episode carry the following licenses: Apito Naval - Boatswains whistle 1.wav by waterboy920 – https://freesound.org/s/207329/ – License: Attribution 4.0; Isto é um Alerta Ubuntu - Breaking news intro music by humanoide9000 – https://freesound.org/s/760770/ – License: Attribution 4.0. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorization. The episode art was commissioned from Shizamura - artist, illustrator, and comics author. You can get to know Shizamura better in Ciberlândia and on her website.

on June 26, 2025 12:00 AM

June 25, 2025

So, I’ve been in the job market for a bit over a year. I was part of a layoff cycle at my last company, and finding a new gig has been difficult. I haven’t been able to find something as of yet, but it’s been a learning experience. The market is not what it was in the last couple of years. With AI in the mix, lots of roles have been eliminated, or have shifted toward places where human intervention is needed to interpret or verify what AI produces. Job hunting is a job in and of itself, and can even take on a 9-to-5 schedule. I know a lot of people who have gone through the same process as myself, and I wanted to share some of the insights and tips I’ve learned throughout the last year.

Leveraging your network

First, and I think most important, understand that there are a lot of great people around that you might have worked with. You can always ask for recommendations, touch base, or even have a small chat to see how things are going on their end. Conversations can be very refreshing, and can help you get a new perspective on how industries are shifting, where you might want to learn new skills, or how to improve your positioning in the market. Folks can ask around and see if there are additional positions where you might be a good fit, and it’s always good to have a helping hand (or a few). At the end of the day, these folks are your own community. I’ve gotten roles in the past by being referred, and these connections have been critical to my understanding of how different businesses may approach the same problem, or even of how to solve internal conflicts. So, reach out to people you know!

Understanding the market

Like I mentioned in the opening paragraph, the market is evolving constantly. AI has taken on a very solid role nowadays, and lots of companies ask about how you’ve used AI recently. Part of understanding the market is understanding the bleeding-edge tools that are used to improve workflows and day-to-day efficiency. Research the up-and-coming tools that are shaping the market.

To give you an example: haven’t tried AI yet? Give it a spin, even for simple questions. Understand where it works, where it fails, and how you, as a human, can make it work for you. Get a sense of the pitfalls, and where human intervention is needed to interpret or verify the data that’s in there. Like one of my former managers said, “trust, but verify”. Or you might even get to the point of not trusting the data, and sharing that as a story!

Apply thoughtfully

Someone gave me the recommendation to apply to everything where I “could be a fit”. While this might have its upsides, you might also end up in situations where you are not actually a fit, or where you don’t know the company and what it does. Always take the time, at least a few minutes, to understand the company you’re applying to: research their values and how they align with yours. Read about the product they’re creating, selling, or offering, and see if it’s a product where you could contribute your skills. Then you can make the decision to apply. While doing this, you may discover that you are applying to a position in a sector you’re not interested in, or where your skillset might not be used to its full potential, and that you might be missing out on other opportunities that are significantly better aligned with you.

Also take the time to fully review the job description. JDs are pretty descriptive, and you might stumble upon certain details that don’t align with you, such as the salary, hours, or location, or certain expectations that you feel don’t fit within the role or that you are not ready for.

Prepare for your interviews

You landed an interview – congratulations! Make sure you’ve researched the company before heading in. If you took a look at the company and the role before applying, take another look. You might find more interesting things, and it will demonstrate that you are actually preparing for the interview. Also, interviewing is a two-way street: make sure that you have some questions at the end. Double-check your interviewer’s role in the company, and ensure that you have questions tailored to their particular role. Think about what you want to get out of the interview (other than the job!).

Job sourcing

There are many great job sources today – LinkedIn being the biggest of them all. Throughout my searches I’ve also found that weworkremotely.com and hnhiring.com are great sources. I strongly advise that you expand your search and find sources that are relevant to your particular role or industry. This has opened up a lot of opportunities for me!

Take some time for yourself

I know that having a job is important. However, it’s also important to take time for yourself. Your mental health matters. You can use this time to develop some skills, play some games, take care of your garden, or even reorganize your home. Find a hobby and distract yourself every now and then. Take breaks, and make sure you’re not over-stressing yourself. Read a bit about burnout and take care of yourself, as burnout can also come from job hunting. And if you need a breather, take one, but don’t overdo it! Time is valuable, so it’s all about finding the right balance.

Hopefully this is helpful for some folks who are going through the same situation as me. What other things have worked for you? Do you have any other tips you could share? I’d be happy to read about them! Share them with me on LinkedIn. I’m also happy to chat – you can always find me at jose@ubuntu.com.

on June 25, 2025 09:39 PM

June 23, 2025

Welcome to the Ubuntu Weekly Newsletter, Issue 897 for the week of June 15 – 21, 2025. The full version of this issue is available here.

In this issue we cover:

  • Developer Membership Board: process and membership changes
  • Ubuntu Stats
  • Hot in Support
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • What are our partners building for device makers? Explore the highlights from Ubuntu IoT Day Singapore
  • Announcing the 10th Edition of UbuconLA: Cuenca, Ecuador
  • LoCo Events
  • Fixes available for local privilege escalation vulnerability in libblockdev using udisks
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Blogosphere
  • Featured Audio and Video
  • Updates and Security for Ubuntu 22.04, 24.04, 24.10, and 25.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on June 23, 2025 10:02 PM

June 19, 2025

My first tag2upload upload

Jonathan Carter

Tag2upload?

The tag2upload service has finally gone live for Debian Developers in an open beta.

If you’ve never heard of tag2upload before, here is a great primer presented by Ian Jackson and prepared by Ian Jackson and Sean Whitton.

In short, the world has moved on to hosting and working with source code in Git repositories. In Debian, we work with source packages that are used to generate the binary artifacts that users know as .deb files, and there is a great deal of tooling and culture built around this. For example, our workflow passes what we call the island test – you could take every source package in Debian along with you to an island with no Internet, and you’d still be able to rebuild or modify every package. When changing workflows, you risk losing benefits like this, and over the years there have been a number of different ideas on how to move to a purely or partially Git-based flow for Debian, none of which really managed to gain enough momentum or project-wide support.

Tag2upload makes a lot of sense. It doesn’t take away any of the benefits of the current way of working (whether technical or social), but it does make some aspects of Debian packaging significantly simpler and faster. Even so, if you’re a Debian Developer and familiar with how the sausage is made, you’ll have noticed that this has been a very long road for the tag2upload maintainers. They’ve hit multiple speed bumps since 2019, but with a lot of patience, communication, and persistence from all involved (and almost even a GR), it is finally materializing.

Performing my first tag2upload

So, first, I need to choose which package I want to upload. We’re currently in hard freeze for the trixie release, so I’ll look for something simple that I can upload to experimental.

I chose bundlewrap: it’s quite a straightforward Python package, and updates are usually just as straightforward, so it’s probably a good package to work on without having to deal with extra complexities while learning how to use tag2upload.

So, I do the usual uscan and dch -i to update my package…

And then I realise that I still want to build a source package to test it in cowbuilder. Hmm, I remember that Helmut showed me that building a source package isn’t necessary with sbuild, but I have a habit of somehow breaking my sbuild configs, so I guess I should revisit that.

So, I do a dpkg-buildpackage -S -sa and test it out with cowbuilder, because that’s just how I roll (at least for now, fixing my local sbuild setup is yak shaving for another day, let’s focus!).
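For reference, that pair of steps looks roughly like this (a sketch; the .dsc filename is inferred from the version that appears later in this post):

# Build the source package, including the original tarball
dpkg-buildpackage -S -sa

# Test-build the result in a clean chroot
sudo cowbuilder --build ../bundlewrap_4.23.1-1.dsc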

I end up with a binary that looks good, so I’m satisfied that I can upload this package to the Debian archives. So, time to configure tag2upload.

The first step is to set up the webhook in Salsa. I was surprised to find two webhooks already configured:

I know of KGB, which posts to IRC – I didn’t know before that this was the mechanism it uses. Nice! I also don’t know what the tagpending one does; I’ll go look into that some other time.

Configuring a tag2upload webhook is quite simple: add the URL, set the name to tag2upload, and select only tag push events:

I run the test webhook, and it returns a code 400 message about a missing ‘message’ header, which the documentation says is normal.

Next, I install git-debpush from experimental.
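Since git-debpush only lives in experimental at this point, that install is presumably something like (assuming experimental is already in your apt sources):

sudo apt install -t experimental git-debpush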

The wiki page simply states that you can use the git-debpush command to upload, but doesn’t give any examples of how to use it, and neither does its manpage. And when I run just git-debpush I get:

jonathan@lapcloud:~/devel/debian/python-team/bundlewrap/bundlewrap-4.23.1$ git-debpush
git-debpush: check failed: upstream tag upstream/4.22.0 is not an ancestor of refs/heads/debian/master; probably a mistake ('upstream-nonancestor' check)
pristine-tar is /usr/bin/pristine-tar
git-debpush: some check(s) failed; you can pass --force to ignore them

I have no idea what that’s supposed to mean. I was also not sure whether I should tag anything to begin with, or if some part of the tag2upload machinery automatically does it. I think I might have tagged debian/4.23-1 before tagging upstream/4.23, and perhaps it didn’t like that; I reverted and did it the other way around and got a new error message. Progress!

jonathan@lapcloud:~/devel/debian/python-team/bundlewrap/bundlewrap-4.23.1$ git-debpush
git-debpush: could not determine the git branch layout
git-debpush: please supply a --quilt= argument

Looking at the manpage, it looks like --quilt=baredebian matches my package the best, so I try that:

jonathan@lapcloud:~/devel/debian/python-team/bundlewrap/bundlewrap-4.23.1$ git-debpush --quilt=baredebian
Enumerating objects: 70, done.
Counting objects: 100% (70/70), done.
Delta compression using up to 12 threads
Compressing objects: 100% (37/37), done.
Writing objects: 100% (37/37), 8.97 KiB | 2.99 MiB/s, done.
Total 37 (delta 30), reused 0 (delta 0), pack-reused 0 (from 0)
To salsa.debian.org:python-team/packages/bundlewrap.git
6f55d99..3d5498f debian/master -> debian/master

 * [new tag] upstream/4.23.1 -> upstream/4.23.1
 * [new tag] debian/4.23.1-1_exp1 -> debian/4.23.1-1_exp1

Ooh! That looked like it did something! And a minute later I received the notification of the upload in my inbox:

So, I’m not 100% sure that this makes things much easier for me than doing a dput, but it’s not any more difficult or more work either (once you know how it works), so I’ll be using git-debpush from now on, and I’m sure that as I get more used to the git workflow of doing things I’ll understand more of the benefits. And at last, my one last use case for using FTP is now properly dead. RIP FTP :)

on June 19, 2025 07:49 PM

E352 Triunfo Do Kejserpingvin

Podcast Ubuntu Portugal

While the poles melt, we talked about MARP and creating slides with Markdown; how Ubuntu recreates the Treaty of Tordesillas with satellites; how Denmark will be the first step in the Great Expansion of the Emperor Penguin’s Empire; tips on how to avoid back doors in telecommunications, where all manner of spy stingrays slip in; how Apple made a watered-down Linux with 2% juice from concentrate; how the Oracle Oriole trips over its own shoelaces when it tries to update itself; we refreshed the events calendar for the coming weeks - and we went to the Bazaar.

You know the drill: listen, subscribe, and share!

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel, and Tiago Carrondo, and edited by Senhor Podcast. The website is produced by Tiago Carrondo, and its open-source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. The sound effects in this episode carry the following licenses: Apito Naval - Boatswains whistle 1.wav by waterboy920 – https://freesound.org/s/207329/ – License: Attribution 4.0; Isto é um Alerta Ubuntu - Breaking news intro music by humanoide9000 – https://freesound.org/s/760770/ – License: Attribution 4.0. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorization. The episode art was commissioned from Shizamura - artist, illustrator, and comics author. You can get to know Shizamura better in Ciberlândia and on her website.

on June 19, 2025 12:00 AM

June 18, 2025

This isn’t a tech-related post, so if you’re only here for the tech, feel free to skip over.

Any of y’all hate spiders? If you had asked me that last week, I would have said “no”. Turns out you just need to get in a fight with the wrong spider to change that. I’m in the central United States, so thankfully I don’t have to deal with the horror spiders that places like Australia have. But it turns out that even in my not-intrinsically-hostile-to-human-life area of the world, we have some horror spiders of our own. The two most common ones (the Brown Recluse and the Black Widow) are basically memes at this point because they get mentioned so often; I’ve now been bitten by both. The Brown Recluse bite wasn’t really that dramatic before, during, or after treatment, so there’s not really a story to tell there. The Black Widow bite on the other hand… oh boy. Holy moly.

I woke up last Saturday since the alternative was to sleep for 24 hours straight and that sounded awful. There are lots of good things to do with a Sabbath; why waste the day on sleep? Usually I spend (or at least am supposed to spend) this day with my family, generally doing Bible study and board games. Over the last few weeks though, I had been using the time to clean up various areas of the house that needed it, and this time I decided to clean up a room that had been flooded some time back. I entered the Room of Despair, with the Sword of Paper Towels in one hand and the Shield of Trash Bags in the other. In front of me stood the vast armies of UghYuck-Hai. (LotR fans will get the joke.1) Convinced that I was effectively invulnerable to anything the hordes could do to me, I entered the fray, and thus was the battle joined in the land of MyHome.

Fast forward two hours of sorting, scrubbing, and hauling. I had made a pretty decent dent in the mess. I was also pretty tired at that point, and our family’s dog needed me to take him outside, so I decided it was time to take a break. I put the leash on the dog, and headed into the great outdoors for a much-needed breath of fresh air.

It was at about that time I realized there was something that felt weird on my left hip. In my neck of the woods, we have to deal with pretty extreme concentrations of mosquitoes, so I figured I probably just had some of my blood repurposed by a flying mini-vampire. Upon closer inspection though, I didn’t see localized swelling indicating a mosquito bite (or any other bite for that matter). The troubled area was just far enough toward my back that I couldn’t see if it had a bite hole or not, and I didn’t notice any kind of discoloration to give me a heads-up either. All I knew was that there was a decent-sized patch of my left hip that HURT if I poked it lightly. I’d previously had random areas of my body hurt when poked (probably from minor bruises), so I just lumped this event in with the rest of the mystery injuries I’ve been through and went on with my day.

Upon coming back from helping the dog out, I still felt pretty winded. I chalked that up to doing strenuous work in an area with bad air for too long, and decided to spend some time in bed to recover. One hour in bed turned into two. Two turned into three. Regardless of how long I laid there, I still just felt exhausted. “Did I really work that hard?”, I wondered. It didn’t seem like I had done enough work to warrant this level of tiredness. Thankfully I did get to chat with my mom about Bible stuff for a good portion of that time, so I thought the day had been pretty successful nonetheless.

The sun went down. I was still unreasonably tired. Usually this was when me and my mom would play a board game together, but I just wasn’t up for it. I ended up needing to use the restroom, so I went to do that, and that’s when I noticed my now-even-sorer hip wasn’t the only thing that was wrong.

While in the restroom, I felt like my digestive system was starting to get sick. This too was pretty easily explainable, I had just worked in filth and probably got exposed to too much yuck for my system to handle. My temperature was a bit higher than normal. Whatever, not like I hadn’t had fevers before. My head felt sore and stuffed up, which again just felt like I was getting sick in general. My vision also wasn’t great, but for all I know that could have just been because I was focusing more on feeling bad and less on the wall of the bathroom I was looking at. At this point, I didn’t think that the sore hip and the sudden onset fever might be related.

After coming out of the bathroom, I huddled in bed to try to help the minor fever burn out whatever crud I had gotten into. My mom came to help take care of me while I was sick. To my surprise, the fever didn’t stay minor for long - I suddenly started shivering like crazy even though I wasn’t even remotely cold. My temperature skyrocketed, getting to the point where I was worried it could be dangerously high. I started aching all over and my muscles felt like they got a lot weaker. My heart started pounding furiously, and I felt short of breath. We always keep colloidal silver in the house since it helps with immunity, so my mom gave me some sprays of it and had me hold it under my tongue. I noticed I was salivating a bunch for absolutely no reason while trying to hold the silver spray there as long as I could. Things weren’t really improving, and I noticed my hip was starting to hurt more. I mentioned the sore hip issue to my mom, and we chose to put some aloe vera lotion and colloidal silver on it, just in case I had been bitten by a spider of some sort.

That turned out to be a very good, very very VERY painful idea. After rubbing in the lotion, the bitten area started experiencing severe, relentless stabbing pains, gradually growing in intensity as time progressed. For the first few minutes, I was thinking “wow, this really hurts, what in the world bit me?”, but that pretty quickly gave way to “AAAAA! AAAAA! AAAAAAAAAAAAAA!” I kept most of the screaming in my mind, but after a while it got so bad I just rocked back and forth and groaned for what felt like forever. I’d never had pain like this just keep going and going, so I thought if I just toughed it out for long enough it would eventually go away. This thing didn’t seem to work like that though. After who-knows-how-long, I finally realized this wasn’t going to go away on its own, and so, for reasons only my pain-deranged mind could understand, I tried rolling over on my left side to see if squishing the area would get it to shut up. Beyond all logic, that actually seemed to work, so I just stayed there for quite some time.

At this point, my mom realized the sore hip and the rest of my sickness might be related (I never managed to put the two together). The symptoms I had originally looked like scarlet fever plus random weirdness, but they turned out to match extremely well with the symptoms of a black widow bite (I didn’t have the sweating yet but that ended up happening too). The bite area also started looking discolored, so something was definitely not right. At about this point my kidneys started hurting pretty badly, not as badly as the bite but not too far from it.

I’ll try to go over the rest of the mess relatively quickly. In summary:

  • I passed out and fell over while trying to walk back from the restroom at one point. From what I remember, I had started blacking out while in the restroom, realized I needed to get back to bed ASAP, managed to clumsily walk out of the bathroom and most of the way into the bed, then felt myself fall, bump into a lamp, and land on the bed back-first (which was weird, my back wasn’t facing the bed yet). My mom on the other hand, who was not virtually unconscious, reports that I came around the corner, proceeded to fall face first into the lamp with arms outstretched like a zombie, had a minor seizure, and she had to pull me off the lamp and flip me around. All I can think is my brain must have still been active but lost all sensory input and motor control.

  • I couldn’t get out of bed for over 48 hours straight thereafter. I’d start blacking out if I tried to stand up for very long.

  • A dime-sized area around the bite turned purple, then black. So, great, I guess I can now say a part of me is dead :P At this point we were also able to clearly see dual fang marks, confirming that this was indeed a spider bite.

  • I ended up drinking way more water than usual. I usually only drink three or four cups a day, but I drank more like nine or ten cups the day after the bite.

  • I had some muscle paralysis that made it difficult to urinate. Thankfully that went away after a day.

  • My vision got very, very blurry, and my eyes had tons of pus coming out of them for no apparent reason. This was more of an annoyance than anything, I was keeping my eyes shut most of the time anyway, but the crud kept drying and gluing my eyes shut! It was easy enough to just pick off when that happened, but it was one of those things that makes you go “come on, really?”

  • On the third day of recovery, my whole body broke out in a rash that looked like a bunch of purple freckles. They didn’t hurt, didn’t bump up, didn’t even hardly itch, but they looked really weird. Patches of the rash proceeded to go away and come back every so often, which they’re still doing now.

  • I ended up missing three days of work while laid up.

We kept applying peppermint oil infused aloe vera lotion and colloidal silver to the bite, which helped reduce pain (well, except for the first time anyway :P) and seems to have helped keep the toxins from spreading too much.

A couple of questions come to mind at this point. For one, how do I know that it was a black widow that bit me? Unfortunately, I never saw or felt the spider, so I can’t know with absolute certainty that I was bitten by a black widow (some people report false widows can cause similar symptoms if they inject you with enough venom). But false widows don’t live anywhere even remotely close to where I live, and black widows are both known to live here and we’ve seen them here before. The symptoms certainly aren’t anything remotely close to a brown recluse bite, and while I am not a medical professional, they seem to match the symptoms of black widow bites very, very well. So even if by some chance this wasn’t a black widow, whatever bit me had just as bad of an effect on me as a black widow would have.

For two, why didn’t I go to a hospital? Number one, everything I looked up said the most they could do is give you antivenom (which can cause anaphylaxis, no thank you), or painkillers like fentanyl (which I don’t want anywhere near me, I’d rather feel like I’m dying from a spider bite than take a narcotic painkiller, thanks anyway). Number two, last time a family member had to go to the hospital, the ambulance just about killed him trying to get him there in the first place. I lost most of my respect for my city’s medical facilities that day; if I’m not literally dying, I don’t need a hospital, and if I am dying, my hospitals will probably just kill me off quicker.

I’m currently on day 4 of recovery (including the day I was bitten). I’m still lightheaded, but I can stand without passing out finally. The kidney pain went away, as did the stabbing pain in the bite area (though it still aches a bit, and hurts if you poke it). The fever is mostly gone, my eyes are working normally again and aren’t constantly trying to superglue themselves closed, and my breathing is mostly fine again. I’m definitely still feeling the effects of the bite, but they aren’t crippling anymore. I’ll probably be able to work from home in the morning (I’d try to do household chores too but my mom would probably have a heart attack since I just about killed myself trying to get out of the bathroom).

Speaking of working from home, it’s half past midnight here, I should be going to bed. Thanks for reading!

1

The army of Saruman sent against the fortress of Helm’s Deep was made up of half-man, half-orc creatures known as Uruk-hai. “Ugh, yuck!” and “Uruk” sounded humorously similar, so I just went with it.

on June 18, 2025 05:35 AM

June 16, 2025

Welcome to the Ubuntu Weekly Newsletter, Issue 896 for the week of June 8 – 14, 2025. The full version of this issue is available here.

In this issue we cover:

  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • LXD: Weekly news #398
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • LoCo Events
  • Canonical News
  • In the Blogosphere
  • Featured Audio and Video
  • Updates and Security for Ubuntu 22.04, 24.04, 24.10, and 25.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Din Mušić – LXD
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on June 16, 2025 10:26 PM

The Promised LAN

Paul Tagliamonte

The Internet has changed a lot in the last 40+ years. Fads have come and gone. Network protocols have been designed, deployed, adopted, and abandoned. Industries have come and gone. The types of people on the internet have changed a lot. The number of people on the internet has changed a lot, creating an information medium unlike anything ever seen before in human history. There’s a lot of good things about the Internet as of 2025, but there’s also an inescapable hole in what it used to be, for me.

I miss being able to throw a site up to send around to friends to play with without worrying about hordes of AI-feeding HTML combine harvesters DoS-ing my website, costing me thousands in network transfer for the privilege. I miss being able to put a lightly authenticated game server up and not worry too much at night – wondering if that process is now mining bitcoin. I miss being able to run a server in my home closet. Decades of cat and mouse games have rendered running a mail server nearly impossible. Those who are “brave” enough to try are met with weekslong stretches of delivery failures and countless hours yelling ineffectually into a pipe that leads from the cheerful lobby of some disinterested corporation directly into a void somewhere 4 layers below ground level.

I miss the spirit of curiosity, exploration, and trying new things. I miss building things for fun without having to worry about being too successful, after which “security” offices start demanding my supplier paperwork in triplicate as heartfelt thanks from their engineering teams. I miss communities that are run because it is important to them, not for ad revenue. I miss community operated spaces and having more than four websites that are all full of nothing except screenshots of each other.

Every other page I find myself on now has an AI-generated click-bait title, shared for rage-clicks, all brought-to-you-by-our-sponsors, completely covered wall-to-wall with popup modals telling me how much they respect my privacy, with the real content hidden at the bottom, bracketed by deceptive ads served by companies that definitely know which new coffee shop I went to last month.

This is wrong, and those who have seen what was know it.

I can’t keep doing it. I’m not doing it any more. I reject the notion that this is as it needs to be. It is wrong. The hole left in what the Internet used to be must be filled. I will fill it.

What comes before part b?

Throughout the 2000s, some of my favorite memories were from LAN parties at my friends’ places. Dragging your setup somewhere, long nights playing games, goofing off, even building software all night to get something working—being able to do something fiercely technical in the context of a uniquely social activity. It wasn’t really much about the games or the projects—it was an excuse to spend time together, just hanging out. A huge reason I learned so much in college was that campus was a non-stop LAN party – we could freely stand up servers, talk between dorms on the LAN, and hit my dorm room computer from the lab. Things could go from individual to social in a matter of seconds. The Internet used to work this way—my dorm had public IPs handed out by DHCP, and my workstation could serve traffic from anywhere on the internet. I haven’t been back to campus in a few years, but I’d be surprised if this were still the case.

In December of 2021, three of us got together and connected our houses together in what we now call The Promised LAN. The idea is simple—fill the hole we feel is gone from our lives. Build our own always-on 24/7 nonstop LAN party. Build a space that is intrinsically social, even though we’re doing technical things. We can freely host insecure game servers or one-off side projects without worrying about what someone will do with it.

Over the years, it’s evolved very slowly—we haven’t pulled any all-nighters. Our mantra has become “old growth”, building each layer carefully. As of May 2025, the LAN is now 19 friends running around 25 network segments. Those 25 networks are connected to 3 backbone nodes, exchanging routes and IP traffic for the LAN. We refer to the set of backbone operators as “The Bureau of LAN Management”. Combined decades of operating critical infrastructure has driven The Bureau to make a set of well-understood, boring, predictable, interoperable and easily debuggable decisions to make this all happen. Nothing here is exotic or even technically interesting.
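None of the plumbing is specified here on purpose, but to illustrate just how boring “boring” can be: a hypothetical member segment’s link to a backbone node could be a single WireGuard tunnel (the post names no tooling; all names, keys, and addresses below are invented):

# /etc/wireguard/lan0.conf - one member segment, one backbone peer
[Interface]
PrivateKey = <this segment's private key>
Address = 10.42.7.1/24
ListenPort = 51820

[Peer]
PublicKey = <backbone node's public key>
AllowedIPs = 10.42.0.0/16
Endpoint = backbone.example.net:51820
PersistentKeepalive = 25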

Applications of trusting trust

The hardest part, however, is rejecting the idea that anything outside our own LAN is untrustworthy—nearly irreversible damage inflicted on us by the Internet. We have solved this by not solving it. We strictly control membership—the absolute hard minimum for joining the LAN requires 10 years of friendship with at least one member of the Bureau, with another 10 years of friendship planned. Members of the LAN can veto new members even if all other criteria are met. Even with those strict rules, there’s no shortage of friends that meet the qualifications—but we are not equipped to take that many folks on. It’s hard to join—both socially and technically. Doing something malicious on the LAN requires a lot of highly technical effort upfront, and it would endanger a decade of friendship. We have relied on those human, social, interpersonal bonds to bring us all together. It’s worked for the last 4 years, and it should continue working until we think of something better.

We assume roommates, partners, kids, and visitors all have access to The Promised LAN. If they’re let into our friends' network, there is a level of trust that works transitively for us—I trust them to be on mine. This LAN is not for “security”, rather, the network border is a social one. Benign “hacking”—in the original sense of misusing systems to do fun and interesting things—is encouraged. Robust ACLs and firewalls on the LAN are, by definition, an interpersonal—not technical—failure. We all trust every other network operator to run their segment in a way that aligns with our collective values and norms.

Over the last 4 years, we’ve grown our own culture and fads—around half of the people on the LAN have thermal receipt printers with open access, for printing out quips or jokes on each other’s counters. It’s incredible how much network transport and a trusting culture gets you—there’s a 3-node IRC network, exotic hardware to gawk at, radios galore, a NAS storage swap, LAN only email, and even a SIP phone network of “redphones”.

DIY

We do not wish to, nor will we, rebuild the internet. We do not wish to, nor will we, scale this. We will never be friends with enough people, as hard as we may try. Participation hinges on us all having fun. As a result, membership will never be open, and we will never have enough connected LANs to deal with the technical and social problems that start to happen with scale. This is a feature, not a bug.

This is a call for you to do the same. Build your own LAN. Connect it with friends’ homes. Remember what is missing from your life, and fill it in. Use software you know how to operate and get it running. Build slowly. Build your community. Do it with joy. Remember how we got here. Rebuild a community space that doesn’t need to be mediated by faceless corporations and ad revenue. Build something sustainable that brings you joy. Rebuild something you use daily.

Bring back what we’re missing.

on June 16, 2025 03:58 PM

June 11, 2025

Apple has introduced a new open-source Swift framework named Containerization, designed to fundamentally reshape how Linux containers are run on macOS. In a detailed presentation, Apple revealed a new architecture that prioritizes security, privacy, and performance, moving away from traditional methods to offer a more integrated and efficient experience for developers.

The new framework aims to provide each container with the same level of robust isolation previously reserved for large, monolithic virtual machines, but with the speed and efficiency of a lightweight solution.


The Old Way: A Single, Heavy Virtual Machine

Previously, running Linux containers on macOS typically meant starting one large, shared virtual machine and running every container inside it. That approach came with several drawbacks:

  • Resource Inefficiency: The large VM had resources like CPU and memory allocated to it upfront, regardless of how many containers were running.
  • Security & Privacy Concerns: Sharing files from the Mac with a container was a two-step process; files were first shared with the entire VM, and then to the specific container, potentially exposing data more broadly than intended.
  • Maintenance Overhead: The large VM contained a full Linux distribution with core utilities, dynamic libraries, and a libc implementation, increasing the attack surface and requiring constant updates.

A New Vision: Security, Privacy, and Performance

The Containerization framework was built with three core goals to address these challenges:

  1. Security: Provide every single container with its own isolated virtual machine. This dramatically reduces the attack surface by eliminating shared kernels and system utilities between containers.
  2. Privacy: Enable file and directory sharing on a strict, per-container basis. Only the container that requests access to a directory will receive it.
  3. Performance: Achieve sub-second start times for containers while respecting the user’s system resources. If no containers are running, no resources are allocated.

Under the Hood: How Containerization Works

Containerization is more than just an API; it’s a complete rethinking of the container runtime on macOS.

Lightweight, Per-Container Virtual Machines

The most significant architectural shift is that each container runs inside its own dedicated, lightweight virtual machine. This approach provides profound benefits:

  • Strong Isolation: Each container is sandboxed within its own VM, preventing processes in one container from viewing or interfering with the host or other containers.
  • Dedicated Networking: Every container gets its own dedicated IP address, which improves network performance and eliminates the cumbersome need for port mapping.
  • Efficient Filesystems: Containerization exposes the image’s filesystem to the Linux VM as a block device formatted with EXT4. Apple has even developed a Swift package to manage the creation and population of these EXT4 filesystems directly from macOS.

vminitd: The Swift-Powered Heart of the Container

Once a VM starts, a minimal initial process called vminitd takes over. This is not a standard Linux init system; it’s a custom-built solution with remarkable characteristics:

  • Built in Swift: vminitd is written entirely in Swift and runs as the first process inside the VM.
  • Extremely Minimal Environment: To maximize security, the filesystem vminitd runs in is barebones. It contains no core utilities (like ls, cp), no dynamic libraries, and no libc implementation.
  • Statically Compiled: To run in such a constrained environment, vminitd is cross-compiled from a Mac into a single, static Linux executable. This is achieved using Swift’s Static Linux SDK and musl, a libc implementation optimized for static linking.

vminitd is responsible for setting up the entire container environment, including assigning IP addresses, mounting the container’s filesystem, and supervising all processes that run within the container.

Getting Started: The container Command-Line Tool

To showcase the power of the framework, Apple has also released an open-source command-line tool simply called container. This tool allows developers to immediately begin working with Linux containers in this new, secure environment.

  • Pulling an image:
container image pull alpine:latest
  • Running an interactive shell:
container run -ti alpine:latest sh

Within milliseconds, the user is dropped into a shell running inside a fully isolated Linux environment. Running the ps aux command from within the container reveals only the shell process and the ps process itself, a clear testament to the powerful process isolation at work.
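Putting the earlier commands together, a first session might look like this (a sketch using only the commands shown above; output will vary):

# Pull a minimal image
container image pull alpine:latest

# Start a shell inside a fully isolated, VM-backed container
container run -ti alpine:latest sh

# Inside the container, verify the isolation: only sh and ps are visible
ps aux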


An Open Invitation to the Community

Both the Containerization framework and the container tool are available on GitHub. Apple is inviting developers to explore the source code, integrate the framework into their own projects, and contribute to its future by submitting issues and pull requests.

This move signals a strong commitment from Apple to making macOS a first-class platform for modern, Linux container-based development, offering a solution that is uniquely secure, private, and performant.


The post Apple Unveils “Containerization” for macOS: A New Era for Linux Containers on macOS appeared first on Utappia.

on June 11, 2025 10:06 PM
KDE Mascot

Release notes: https://kde.org/announcements/gear/25.04.2/

Now available in the snap store!
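If you want to try the refreshed apps, and assuming they are published under their usual snap names, installing (or refreshing) is a one-liner:

# Install the updated KDE Gear apps from the snap store
sudo snap install ark kasts

# Or, if they are already installed, refresh to the new release
sudo snap refresh ark kasts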

Along with that, I have fixed some outstanding bugs:

Ark: can now open/save files on removable media

Kasts: Once again has sound

WIP: Updating Qt6 to 6.9 and frameworks to 6.14

Enjoy everyone!

Unlike our software, life is not free. Please consider a donation, thanks!

on June 11, 2025 01:14 PM

June 09, 2025

Thanks, Mailbox!

Simon Quigley


A gentleman by the name of Arif Ali reached out to me on LinkedIn. I won’t share the actual text of the message, but I’ll paraphrase:
“I hope everything is going well with you. I’m applying to be an Ubuntu ‘Per Package Uploader’ for the SOS package, and I was wondering if you could endorse my application.”

Arif, thank you! I have always appreciated our chats, and I truly believe you’re doing great work. I don’t want to interfere with anything by jumping on the wiki, but just know you have my full backing.

“So, who actually lets Arif upload new versions of SOS to Ubuntu, and what is it?”
Great question!

Firstly, I realized that I needed some more info on what SOS is, so I can explain it to you all. On a quick search, this was the first result.

Okay, so genuine question…

Why does the first DuckDuckGo result for “sosreport” point to an article for a release of Red Hat Enterprise Linux that is two versions old? In other words, hey DuckDuckGo, your grass is starting to get long. Or maybe Red Hat? Can’t tell, I give you both the benefit of the doubt, in good faith.

So, I clarified the search and found this. Canonical, you’ve done a great job. Red Hat, you could work on your SEO so I can actually find the RHEL 10 docs quicker, but hey… B+ for effort. ;)

Anyway, let me tell you about Arif. Just from my own experiences.

He’s incredible. He shows love to others, and whenever I would sponsor one of his packages during my time in Ubuntu, he was always incredibly receptive to feedback. I really appreciate the way he reached out to me, as well. That was really kind, and to be honest, I needed it.

As for character, he has my +1. In terms of the members of the DMB (aside from one person who I will not mention by name, who has caused me immense trouble elsewhere), here’s what I’d tell you if you asked me privately…

“It’s just PPU. Arif works on SOS as part of his job. Please, do still grill him. The test, and ensuring people know that they actually need to pass a test to get permissions, that’s pretty important.”

That being said, I think he deserves it.

Good luck, Arif. I wish you well in your meeting. I genuinely hope this helps. :)

And to my friends in Ubuntu, I miss you. Please reach out. I’d be happy to write you a public letter, too. Only if you want. :)

on June 09, 2025 05:20 PM

People in the Arena

Simon Quigley

Theodore Roosevelt is someone I have admired for a long time. I especially appreciate what has been coined the Man in the Arena speech.

A specific excerpt comes to mind after reading world news over the last twelve hours:

“It is well if a large proportion of the leaders in any republic, in any democracy, are, as a matter of course, drawn from the classes represented in this audience to-day; but only provided that those classes possess the gifts of sympathy with plain people and of devotion to great ideals. You and those like you have received special advantages; you have all of you had the opportunity for mental training; many of you have had leisure; most of you have had a chance for enjoyment of life far greater than comes to the majority of your fellows. To you and your kind much has been given, and from you much should be expected. Yet there are certain failings against which it is especially incumbent that both men of trained and cultivated intellect, and men of inherited wealth and position should especially guard themselves, because to these failings they are especially liable; and if yielded to, their- your- chances of useful service are at an end. Let the man of learning, the man of lettered leisure, beware of that queer and cheap temptation to pose to himself and to others as a cynic, as the man who has outgrown emotions and beliefs, the man to whom good and evil are as one. The poorest way to face life is to face it with a sneer. There are many men who feel a kind of twisted pride in cynicism; there are many who confine themselves to criticism of the way others do what they themselves dare not even attempt. There is no more unhealthy being, no man less worthy of respect, than he who either really holds, or feigns to hold, an attitude of sneering disbelief toward all that is great and lofty, whether in achievement or in that noble effort which, even if it fails, comes second to achievement. A cynical habit of thought and speech, a readiness to criticise work which the critic himself never tries to perform, an intellectual aloofness which will not accept contact with life’s realities — all these are marks, not, as the possessor would fain think, of superiority, but of weakness. They mark the men unfit to bear their part manfully in the stern strife of living, who seek, in the affectation of contempt for the achievements of others, to hide from others and from themselves their own weakness. The rôle is easy; there is none easier, save only the rôle of the man who sneers alike at both criticism and performance.”

The riots in LA are seriously concerning to me. If something doesn’t happen soon, this is going to get out of control.

If you are participating in these events, or know someone who is, tell them to calm down. Physical violence is never the answer, no matter your political party.

De-escalate immediately.

Be well. Show love to one another!

on June 09, 2025 05:58 AM

June 08, 2025

My Debian contributions this month were all sponsored by Freexian. Things were a bit quieter than usual, as for the most part I was sticking to things that seemed urgent for the upcoming trixie release.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

After my appeal for help last month to debug intermittent sshd crashes, Michel Casabona helped me put together an environment where I could reproduce it, which allowed me to track it down to a root cause and fix it. (I also found a misuse of strlcpy affecting at least glibc-based systems in passing, though I think that was unrelated.)

I worked with Daniel Kahn Gillmor to fix a regression in ssh-agent socket handling.

I fixed a reproducibility bug depending on whether passwd is installed on the build system, which would have affected security updates during the lifetime of trixie.

I backported openssh 1:10.0p1-5 to bookworm-backports.

I issued bookworm and bullseye updates for CVE-2025-32728.

groff

I backported a fix for incorrect output when formatting multiple documents as PDF/PostScript at once.

debmirror

I added a simple autopkgtest.

Python team

I upgraded these packages to new upstream versions:

  • automat
  • celery
  • flufl.i18n
  • flufl.lock
  • frozenlist
  • python-charset-normalizer
  • python-evalidate (including pointing out an upstream release handling issue)
  • python-pythonjsonlogger
  • python-setproctitle
  • python-telethon
  • python-typing-inspection
  • python-webargs
  • pyzmq
  • trove-classifiers (including a small upstream cleanup)
  • uncertainties
  • zope.testrunner

In bookworm-backports, I updated these packages:

  • python-django to 3:4.2.21-1 (issuing BSA-124)
  • python-django-pgtrigger to 4.14.0-1

I fixed problems building these packages reproducibly:

I backported fixes for some security vulnerabilities to unstable (since we’re in freeze now so it’s not always appropriate to upgrade to new upstream versions):

I fixed various other build/test failures:

I added non-superficial autopkgtests to these packages:

I packaged python-django-hashids and python-django-pgbulk, needed for new upstream versions of python-django-pgtrigger.

I ported storm to Python 3.14.

Science team

I fixed a build failure in apertium-oci-fra.

on June 08, 2025 12:20 AM

June 06, 2025

Hey everyone,

Get ready to dust off those virtual cobwebs and crack open a cold one (or a digital one, if you’re in a VM) because uCareSystem 25.05.06 has officially landed! And let me tell you, this release is so good, it’s practically a love letter to your Linux system – especially if that system happens to be chilling out in Windows Subsystem for Linux (WSL).

That’s right, folks, the big news is out: WSL support for uCareSystem has finally landed! We know you’ve been asking, we’ve heard your pleas, and we’ve stopped pretending we didn’t see you waving those “Free WSL” signs.

Now, your WSL instances can enjoy the same tender loving care that uCareSystem provides for your “bare metal” Ubuntu/Debian Linux setups. No more feeling left out, little WSLs! You can now join the cool kids at the digital spa.

Here is a video of it:

But wait, there’s more! (Isn’t there always?) We didn’t just stop at making friends with Windows. We also tackled some pesky gremlins that have been lurking in the shadows:

  • Apt-key dependency? Gone! We told it to pack its bags and hit the road. Less dependency drama, more system harmony.
  • Remember that time your internet check was slower than a sloth on a caffeine crash? We squashed that “Bug latency curl in internet check phase” bug. Your internet checks will now be snappier than a startled squirrel.
  • We fixed that “Wrong kernel cleanup” issue. Your kernels are now safe from accidental digital haircuts.
  • And for those of you who hit snags with Snap in WSL, kernel cleanup (again, because we’re thorough!), and other bits, we’ve applied some much-needed digital duct tape and elbow grease to fix those and more.
  • We even gave our code a good scrub, fixing those annoying shellcheck warnings. Because nobody likes a messy codebase, especially not us!
  • Oh, and the -k option? Yeah, that’s gone too. We decided it was useless so we had to retire it to a nice, quiet digital farm upstate.
  • Finally, for all you newcomers and memory-challenged veterans, we’ve added install and uninstall instructions to the README. Because sometimes, even we forget how to put things together after we’ve taken them apart.

So, what are you waiting for? Head over to utappia.org (or wherever you get your uCareSystem goodness) and give your system the pampering it deserves with uCareSystem 25.05.06. Your WSL instance will thank you, probably with a digital high-five.

Download the latest release and give it a spin. As always, feedback is welcome.

Acknowledgements

Thanks to the following users for their support:

  • P. Loughman – Thanks for your continued support
  • D. Emge – Thanks for your continued support
  • W. Schreinemachers – Thanks for your continued support
  • W. Schwartz
  • D. e Swarthout
  • D. Luchini
  • M. Stanley
  • N. Evangelista

Your involvement helps keep this project alive, evolving, and aligned with real-world needs. Thank you.

Happy maintaining!

Where can I download uCareSystem?

As always, I want to express my gratitude for your support over the past 15 years. I have received countless messages from inside and outside Greece about how useful they found the application. I hope you find the new version useful as well.

If you’ve found uCareSystem to be valuable and it has saved you time, consider showing your appreciation with a donation. You can contribute via PayPal or Debit/Credit Card by clicking on the banner.

Pay what you want: click the donate button, enter the amount you want to donate, and you will then be taken to the page with the latest version to download the installer.
Maybe next time: if you don’t want to donate this time, just click the download icon to go to the page with the latest version to download the installer.

Once installed, the updates for new versions will be installed along with your regular system updates.

The post uCareSystem 25.05.06: Because Even Your WSL Deserves a Spa Day! appeared first on Utappia.

on June 06, 2025 10:59 PM

What is Bazaar code hosting?

Bazaar is a distributed revision control system, originally developed by Canonical. It provides functionality similar to the now-dominant Git.

Bazaar code hosting is an offering from Launchpad that provides both a Bazaar backend for hosting code and a web frontend for browsing it. The frontend is provided by the Loggerhead application on Launchpad.

Sunsetting Bazaar

Bazaar passed its peak a decade ago. Breezy is a fork that has kept a form of Bazaar alive, but the last release of Bazaar itself was in 2016. Since then its use has declined, while modern replacements like Git have taken over.

Just keeping Bazaar running requires a non-trivial amount of development, operations time, and infrastructure resources – all of which could be better used elsewhere.

Launchpad will now begin the process of discontinuing support for Bazaar.

Timelines

We are aware that migrating repositories and updating workflows will take some time, which is why we have planned the sunsetting in two phases.

Phase 1

Loggerhead, the web frontend used to browse code in a web browser, will be shut down imminently. Analysis of access logs showed that hardly any requests still come from legitimate users; almost all of the traffic comes from scrapers and other abusers. Sunsetting Loggerhead will not affect the ability to pull, push, and merge changes.

Phase 2

From September 1st, 2025, we no longer intend to offer Bazaar code hosting. Users need to migrate all repositories from Bazaar to Git before this deadline.

Migration paths

The following blog post describes all the necessary steps on how to convert a Bazaar repository hosted on Launchpad to Git.

Migrate a Repository From Bazaar to Git
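For orientation, here is a minimal sketch of one common conversion route, assuming the Breezy (brz) client with its fast-import/fast-export support is installed; the branch name is hypothetical, and the linked post remains the authoritative guide:

brz branch lp:~owner/project/trunk project-bzr    # hypothetical branch URL
git init project-git
cd project-bzr
brz fast-export --plain . | git -C ../project-git fast-import

Once the history looks right in Git, push the new repository to Launchpad’s Git hosting and update any workflows that referenced the Bazaar branch.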

Call for action

Our users are extremely important to us. Ubuntu, for instance, has a long history of Bazaar usage, and we will need to work with the Ubuntu Engineering team on removing Ubuntu development’s reliance on the Bazaar integration. If you are also using Bazaar and you have a special use case, or you do not see a clear way forward, please reach out to us to discuss your use case and how we can help you.

You can reach us in #launchpad:ubuntu.com on Matrix, or submit a question or send us an e-mail via feedback@launchpad.net.

It is also recommended to join the ongoing discussion at https://discourse.ubuntu.com/t/phasing-out-bazaar-code-hosting/62189.

on June 06, 2025 09:26 AM

June 05, 2025

Announcing Incus 6.13

Stéphane Graber

The Incus team is pleased to announce the release of Incus 6.13!

This is a VERY busy release with a lot of new features of all sizes and for all kinds of different users, so there should be something for everyone!

The highlights for this release are:

  • Windows agent support
  • Improvements to incus-migrate
  • SFTP on custom volumes
  • Configurable instance external IP address on OVN networks
  • Ability to pin gateway MAC address on OVN networks
  • Clock handling in virtual machines
  • New get-client-certificate and get-client-token commands
  • DHCPv6 support for OCI
  • Network host tables configuration for routed NICs
  • Support for split image publishing
  • Preseed of certificates
  • Configuration of list format in the CLI
  • Add CLI aliases for create/add and delete/remove/rm
  • OS metrics are now included in Incus metrics when running on Incus OS
  • Converted more database logic to generated code
  • Converted more CLI list functions to using server side filtering
  • Converted more documentation to be generated from the code

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

on June 05, 2025 05:07 AM

June 04, 2025

If you’re looking for a low-power, always-on solution for streaming your personal media library, the Raspberry Pi makes a great Plex server. It’s compact, quiet, affordable, and perfect for handling basic media streaming—especially for home use.

In this post, I’ll guide you through setting up Plex Media Server on a Raspberry Pi, using Raspberry Pi OS (Lite or Full) or Debian-based distros like Ubuntu Server.


🧰 What You’ll Need

  • Raspberry Pi 4 or 5 (at least 2GB RAM, 4GB+ recommended)
  • microSD card (32GB+), or SSD via USB 3.0
  • External storage for media (USB HDD/SSD or NAS)
  • Ethernet or Wi-Fi connection
  • Raspberry Pi OS (Lite or Desktop)
  • A Plex account (free is enough)

⚙ Step 1: Prepare the Raspberry Pi

  1. Flash Raspberry Pi OS using Raspberry Pi Imager
  2. Enable SSH and set hostname (optional)
  3. Boot the Pi, log in, and update:
sudo apt update && sudo apt upgrade -y

📦 Step 2: Install Plex Media Server

Plex is available for ARM-based devices via their official repository.

  1. Add Plex repo and key:
curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo apt-key add -
echo deb https://downloads.plex.tv/repo/deb public main | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
sudo apt update
  2. Install Plex:
sudo apt install plexmediaserver -y
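Note that the apt-key command used in step 1 is deprecated on newer Debian/Ubuntu releases. Here is a keyring-based sketch of the same repository setup (the keyring path is an illustrative choice):

curl -fsSL https://downloads.plex.tv/plex-keys/PlexSign.key | sudo gpg --dearmor -o /usr/share/keyrings/plex.gpg
echo "deb [signed-by=/usr/share/keyrings/plex.gpg] https://downloads.plex.tv/repo/deb public main" | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
sudo apt update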

🔁 Step 3: Enable and Start the Service

Enable Plex on boot and start the service:

sudo systemctl enable plexmediaserver
sudo systemctl start plexmediaserver

Make sure it’s running:

sudo systemctl status plexmediaserver

🌐 Step 4: Access Plex Web Interface

Open your browser and go to:

http://<your-pi-ip>:32400/web

Log in with your Plex account and begin the setup wizard.


📂 Step 5: Add Your Media Library

Plug in your external HDD or mount a network share, then:

sudo mkdir -p /mnt/media
sudo mount /dev/sda1 /mnt/media

Make sure Plex can access it:

sudo chown -R plex:plex /mnt/media

Add the media folder during the Plex setup under Library > Add Library.
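To make the mount survive reboots, you can also add an /etc/fstab entry. A minimal sketch, assuming the drive is /dev/sda1 with an ext4 filesystem (check yours with lsblk -f):

echo '/dev/sda1  /mnt/media  ext4  defaults,nofail  0  2' | sudo tee -a /etc/fstab
sudo mount -a    # verify the entry mounts cleanly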


💡 Optional Tips

  • Transcoding: The Pi can handle direct play (no transcoding) well, but struggles with transcoding large files. Use compatible formats like H.264 (MP4).
  • USB Boot: For better performance, boot the Pi from an SSD instead of a microSD card.
  • Power Supply: Use a proper 5V/3A PSU to avoid crashes under heavy disk load.
  • Thermal: Add a heatsink or fan for the Pi if using Plex for long sessions.

🔐 Secure Your Server

  • Use your router to forward port 32400 only if you want remote access.
  • Set a strong Plex password.
  • Enable Tailscale or WireGuard for secure remote access without exposing ports.
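As a quick illustration of that last option, here is a minimal Tailscale sketch (the install script is Tailscale’s documented one; the tailnet IP is whatever Tailscale assigns your Pi):

curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
# then browse to http://<tailnet-ip-of-your-pi>:32400/web from another device on the tailnet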

✅ Conclusion

A Raspberry Pi might not replace a full-blown NAS or dedicated server, but for personal use or as a secondary Plex node, it’s surprisingly capable. With low energy usage and silent operation, it’s the perfect DIY home media solution.

If you’re running other services like Pi-hole or Home Assistant, the Pi can multitask well — just avoid overloading it with too much transcoding.

The post Building a Plex Media Server with Raspberry Pi appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

on June 04, 2025 09:30 PM

June 01, 2025

If you’re a Linux user craving a real-time strategy (RTS) game with the polish of Age of Empires and the historical depth of a university textbook—yet entirely free and open source—then you need to try 0 A.D. This epic project by Wildfire Games is not just an open-source alternative to mainstream RTS games—it’s a serious contender in its own right, crafted with passion, precision, and community spirit.

🎮 What is 0 A.D.?

0 A.D. (Zero Anno Domini) is a free, open-source, cross-platform RTS game that takes players deep into ancient history, allowing them to build and battle with civilizations from 500 B.C. to 500 A.D. The game is built using the custom Pyrogenesis engine, a modern 3D engine developed from scratch for this purpose, and available under the GPL license—yes, you can even tinker with the code yourself.

It’s not just a clone. 0 A.D. sets itself apart with:

  • 🛡 Historically accurate civilizations
  • 🗺 Dynamic and random map generation
  • ⚔ Tactical land and naval combat
  • 🏗 City-building with tech progression
  • 🧠 AI opponents and multiplayer support
  • 💬 Modding tools and community-created content

🐧 Why It’s Perfect for Linux Users

Linux gamers often get the short end of the stick when it comes to big-name games—but 0 A.D. feels like it was made for us. Here’s why Linux users should care:

✔ Native Linux Support

0 A.D. runs natively on Linux without the need for Wine, Proton, or compatibility layers. You can install it directly from your distro’s package manager or build it from source if you like full control.

For example:

# On Debian/Ubuntu
sudo apt install 0ad

# On Arch Linux
sudo pacman -S 0ad

# On Fedora
sudo dnf install 0ad

No weird dependencies. No workarounds. Just pure, native performance.

🎨 Vulkan Renderer and FSR Support

With Alpha 27 “Agni”, 0 A.D. now supports Vulkan, giving Linux users much better graphics performance, lower CPU overhead, and compatibility with modern GPU features. Plus, it includes AMD FidelityFX Super Resolution (FSR)—which boosts frame rates and visual quality even on low-end hardware.

This makes 0 A.D. one of the few FOSS games optimized for modern Linux graphics stacks like Mesa, Wayland, and PipeWire.

🔄 Rolling Updates and Dev Engagement

The development team and community are highly active, with new features, bug fixes, and optimizations arriving steadily. You don’t need to wait years for meaningful updates—0 A.D. grows with each alpha release, and Linux users are treated as first-class citizens.

Want to contribute a patch or translate the UI into Malay? You can. Everything is transparent and accessible.


🏛 What Makes the Gameplay So Good?

Let’s dive deeper into why the gameplay itself shines.

🏗 Realistic Economy and Base Building

Unlike many fast-paced arcade RTS games, 0 A.D. rewards planning and resource management. You’ll manage four resources—food, wood, stone, and metal—to construct buildings, raise armies, and advance through phases that represent a civilization’s growth. Advancing from village phase to town phase to city phase unlocks more units and structures.

Each civilization has unique architectural styles, tech trees, and military units. For example:

  • Romans have disciplined legionaries and siege weapons.
  • Persians boast fast cavalry and majestic palaces.
  • Athenians excel in naval warfare.

⚔ Intense Tactical Combat

Units in 0 A.D. aren’t just damage sponges. There’s formation control, terrain advantage, flanking tactics, and unit counters. The AI behaves strategically, and in multiplayer, experienced players can pull off devastating maneuvers.

Naval combat has received significant improvements recently, with better ship handling and water pathfinding—something many commercial RTS games still struggle with.

🗺 Endless Map Variety and Mod Support

0 A.D. includes:

  • Skirmish maps
  • Random maps (with different biomes and elevation)
  • Scenario maps (with scripted events)

And thanks to the integrated mod downloader, you can browse, install, and play with community mods in just a few clicks. Want to add new units, tweak balance, or add fantasy elements? You can.


🕹 Multiplayer and Replays

Play with friends over LAN, the Internet, or against the built-in AI. The game includes:

  • 🧭 Multiplayer save and resume support
  • 👁 Observer tools (with flares, commands, and overlays)
  • ⏪ Replay functionality to study your tactics or cast tournaments

There’s even an in-game lobby where players coordinate matches across all platforms.


👥 Community and Contribution

The 0 A.D. project thrives because of its community:

  • Developers contribute code via GitHub.
  • Artists create stunning 3D models and animations.
  • Historians help ensure cultural accuracy.
  • Translators localize the game into dozens of languages.
  • Players write guides, tutorials, and strategy posts.

If you’re a Linux user and want to contribute to an ambitious FOSS project, this is the perfect gateway into game development, design, or open collaboration.


🧑‍💻 How to Install on Linux

Here’s a quick reference:

Option 1: Package Manager (Recommended)

  • Debian/Ubuntu: sudo apt install 0ad
  • Arch Linux: sudo pacman -S 0ad
  • Fedora: sudo dnf install 0ad
  • openSUSE: sudo zypper install 0ad

Option 2: Compile from Source

Follow the official instructions at https://trac.wildfiregames.com/wiki/BuildInstructions


🎯 Final Thoughts

0 A.D. is more than just a game—it’s a testament to what free and open-source software can achieve. For Linux gamers, it’s a rare gem: a game that respects your platform, performs well, and lets you own your experience entirely.

So whether you’re a seasoned general or a curious strategist, download 0 A.D. today and relive history—on your terms.

👉 Visit https://play0ad.com to download and start playing.

The post 0 A.D. on Linux: A Stunning, Free RTS Experience That Rivals the Best appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

on June 01, 2025 04:14 PM

May 29, 2025

Ubuntu Studio 22.04 LTS has reached the end of its three years of supported life provided by the Ubuntu Studio team. All users are urged to upgrade to 24.04 LTS at this time.

This means that the KDE Plasma, audio, video, graphics, photography, and publishing components of your system will no longer receive updates, and the Ubuntu Studio team will not support the release after 29-May-2025. However, your base packages from Ubuntu will continue to receive security updates until 2027, since Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, and Ubuntu Core continue to receive updates.

See the Ubuntu Studio 24.04 LTS Release Notes for upgrade instructions.

No single release of any operating system can be supported indefinitely, and Ubuntu Studio is no exception to this rule.

Long-Term Support releases are identified by an even-numbered year of release and a month of release of April (04). Hence, the most recent Long-Term Support release is 24.04 (YY.MM = 2024.April), and the next will be 26.04 (2026.April). LTS releases of official Ubuntu flavors are supported for three years (unlike Ubuntu Desktop and Server, which are supported for five), meaning LTS flavor users are expected to upgrade after every LTS release, with a one-year buffer.

on May 29, 2025 04:50 PM

May 24, 2025

A SomewhatMaxSAT Solver

Julian Andres Klode

As you may recall from previous posts and elsewhere, I have been busy writing a new solver for APT. Today I want to share some of the latest changes in how to approach solving.

The idea for the solver was that manually installed packages are always protected from removals – in terms of SAT solving, they are facts. Automatically installed packages become optional unit clauses. Optional clauses are solved after the manual ones; they don’t partake in normal unit propagation.

This worked fine, say you had

A                                   # install request for A
B                                   # manually installed, keep it
A depends on: conflicts-B | C

Installing A on a system with B installed resulted in C being installed, since the solver was not allowed to install the conflicts-B package while B was installed.

However, I also introduced a mode to allow removing manually installed packages, and that’s where it broke down: now, instead of B being a fact, our clauses looked like:

A                               # install request for A
A depends on: conflicts-B | C
Optional: B                     # try to keep B installed

As a result, we installed conflicts-B and removed B; the steps the solver takes are:

  1. A is a fact, mark it
  2. A depends on: conflicts-B | C is the strongest clause, try to install conflicts-B
  3. We unit propagate that conflicts-B conflicts with B, so we mark not B
  4. Optional: B is reached, but not satisfiable, ignore it because it’s optional.

This isn’t correct: Just because we allow removing manually installed packages doesn’t mean that we should remove manually installed packages if we don’t need to.

Fixing this turns out to be surprisingly easy. In addition to adding our optional (soft) clauses, let’s first assume all of them!

But to explain how this works, we first need to explain some terminology:

  1. The solver operates on a stack of decisions
  2. “enqueue” means a fact is being added at the current decision level, and enqueued for propagation
  3. “assume” bumps the decision level, and then enqueues the assumed variable
  4. “propagate” looks at all the facts and sees if any clause becomes unit, and then enqueues it
  5. “unit” is when a clause has a single literal left to assign

To illustrate this in pseudo Python code:

  1. We introduce all our facts, and if they conflict, we are unsat:

    for fact in facts:
        enqueue(fact)
    if not propagate():
        return False
    
  2. For each optional literal, we register a soft clause and assume it. If the assumption fails, we ignore it. If it succeeds, but propagation fails, we undo the assumption.

    for optionalLiteral in optionalLiterals:
        registerClause(SoftClause([optionalLiteral]))
        if assume(optionalLiteral) and not propagate():
            undo()
    
  3. Finally we enter the main solver loop:

    while True:
        if not propagate():
            if not backtrack():
                return False
        elif <all clauses are satisfied>:
            return True
        elif it := find("best unassigned literal satisfying a hard clause"):
            assume(it)
        elif it := find("best literal satisfying a soft clause"):
            assume(it)
    

The key point to note is that the main loop will undo the assumptions in order; so if you assume A,B,C and B is not possible, we will have also undone C. But since C is also enqueued as a soft clause, we will then later find it again:

  1. Assume A: State=[Assume(A)], Clauses=[SoftClause([A])]
  2. Assume B: State=[Assume(A),Assume(B)], Clauses=[SoftClause([A]),SoftClause([B])]
  3. Assume C: State=[Assume(A),Assume(B),Assume(C)], Clauses=[SoftClause([A]),SoftClause([B]),SoftClause([C])]
  4. Solve finds a conflict, backtracks, and sets not C: State=[Assume(A),Assume(B),not(C)]
  5. Solve finds a conflict, backtracks, and sets not B: State=[Assume(A),not(B)] – C is no longer assumed either
  6. Solve, assume C as it satisfies SoftClause([C]) as next best literal: State=[Assume(A),not(B),Assume(C)]
  7. All clauses are satisfied, solution is A, not B, and C.

This is not (correct) MaxSAT, because we actually do not guarantee that we satisfy as many soft clauses as possible. Consider you have the following clauses:

Optional: A
Optional: B
Optional: C
B Conflicts with A
C Conflicts with A

There are two possible results here:

  1. {A} – If we assume A first, we are unable to satisfy B or C.
  2. {B,C} – If we assume either B or C first, A is unsat.

The question to ponder, though, is whether we actually need a global maximum or whether a local maximum is satisfactory in practice for a dependency solver. If you look at it, a naive MaxSAT solver needs to run the SAT solver 2**n times for n soft clauses, whereas our heuristic only needs n runs.

For dependency solving, it seems we do not have a strong need for a global maximum: There are various other preferences between our literals, say priorities; and empirically, from evaluating hundreds of regressions without the initial assumptions, I can say that the assumptions do fix those cases and the result is correct.

Further improvements exist, though, and we can look into them if they are needed, such as:

  • Use a better heuristic:

    If we assume 1 clause and solve, and we cause 2 or more clauses to become unsatisfiable, then that clause is a local minimum and can be skipped. This is a more common heuristic in MaxSAT solvers. It gives us a better local maximum, but not a global one.

    This is more or less what the Smart package manager did, except that in Smart, all packages were optional, and the entire solution was scored. It calculated a basic solution without optimization and then toggled each variable and saw if the score improved.

  • Implement an actual search for a global maximum:

    This involves reading the literature. There are various versions of this, for example:

    1. Find unsatisfiable cores and use those to guide relaxation of clauses.

    2. A bounds-based search, where we translate sum(satisfied clauses) > k into SAT, and then search in one of the following ways:

      1. from 0 upward
      2. from n downward
      3. perform a binary search on [0, k] satisfied clauses.

      Actually we do not even need to translate the sum constraints into CNF, because we can just add a specialized new type of constraint to our code.

on May 24, 2025 10:14 AM

May 22, 2025

What are Launchpad’s mailing lists?

Launchpad’s mailing lists are team-based mailing lists, which means that each team can have one of them. E-mails from Launchpad’s mailing lists contain `lists.launchpad.net` in their address.

For more information on the topic please see https://help.launchpad.net/ListHelp.

What are they not?

Please note that both lists.canonical.com and lists.ubuntu.com are not managed by Launchpad, but by Canonical Information Systems.

Timeline

Launchpad will no longer offer mailing lists as of the end of October 2025, which aligns with the end of the 25.10 cycle.

Migration paths

Depending on your use case, there are different alternatives available.

For a couple of years now, Discourse has been a viable alternative for most scenarios. Launchpad also offers the Answers feature for discussions. If it is not so much about communication, but more about receiving information, e.g. for updates on a bug report, you should be aware that you can also subscribe teams to bugs.

Call for action

We are aware that your use case may be very different from the above listed ones. If you are using Launchpad’s mailing lists today and you do not see a clear way forward, please reach out to us to discuss your use case and how we can help you.

Please contact us on Matrix (#launchpad:ubuntu.com) or drop us a message via feedback@launchpad.net.

Please note that this is still work in progress, and we will provide more information over the upcoming weeks and months.

on May 22, 2025 05:42 PM

Snaps!

I actually released last week 🙂 I haven’t had time to blog, but today is my birthday and I’m taking some time to myself!

This release came with a major bugfix. As it turns out, our applications were very crashy on non-KDE platforms, including Ubuntu proper. Unfortunately this had been the case for years, and I didn’t know. Developers were closing the bug reports as invalid because users couldn’t provide a stacktrace. I have now convinced most developers to assign snap bugs to the Snap platform so I at least get a chance to try and fix them. So with that said, if you tried our snaps in the past and gave up in frustration, please do try them again! I also spent some time cleaning up our snaps to only have current releases in the store, as rumor has it snapcrafters will be responsible for any security issues. With 200+ snaps I maintain, that is a lot of responsibility. We’ll see if I can pull it off.

Life!

My last surgery was a success! I am finally healing and out of a sling for the first time in almost a year. I have also lined up a good amount of web work for next month and hopefully beyond. I have decided to drop the piece work for donations and will only accept per project proposals for open source work. I will continue to maintain KDE snaps for as long as time allows. A big thank you to everyone that has donated over the last year to fund my survival during this broken arm fiasco. I truly appreciate it!

With that said,  if you want to drop me a donation for my work, birthday or well-being until I get paid for the aforementioned web work please do so here:

on May 22, 2025 12:49 PM

May 18, 2025


Are you using Kubuntu 25.04 Plucky Puffin, our current stable release? Or are you already running our development builds of the upcoming 25.10 (Questing Quokka)?

We currently have Plasma 6.3.90 (Plasma 6.4 Beta1) available in our Beta PPA for Kubuntu 25.04 and for the 25.10 development series.

However, this is a Beta release, and we should reiterate the disclaimer:



DISCLAIMER: This release contains untested and unstable software. It is highly recommended you do not use this version in a production environment and do not use it as your daily work environment. You risk crashes and loss of data.



6.4 Beta1 packages and required dependencies are available in our Beta PPA. The PPA should work whether you are currently using our backports PPA or not. If you are prepared to test via the PPA, then add the beta PPA and then upgrade:

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt full-upgrade -y

Then reboot.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.
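For example, a minimal revert sketch (ppa-purge is available in the Ubuntu archive):

sudo apt install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/beta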

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on Matrix [1], or mailing lists [2].
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the planned feature list, release announcement and changelog.

[Test Case]
* General tests:
– Does plasma desktop start as normal with no apparent regressions over 6.3?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.
* Specific tests:
– Identify items with front/user facing changes capable of specific testing.
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing may involve some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.

Thanks!

Please stop by the Kubuntu-devel Matrix channel if you need clarification of any of the steps to follow.

[1] – https://matrix.to/#/#kubuntu-devel:ubuntu.com
[2] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

on May 18, 2025 09:26 AM

May 16, 2025

Rooming with Mark

Oliver Grawert

on May 16, 2025 02:34 PM

May 08, 2025

We are pleased to announce that the Plasma 6.3.5 bugfix update is now available for Kubuntu 25.04 Plucky Puffin in our backports PPA.

As usual with our PPAs, there is the caveat that the PPA may receive additional updates and new releases of KDE Plasma, Gear (Apps), and Frameworks, plus other apps and required libraries. Users should always review proposed updates to decide whether they wish to receive them.

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt full-upgrade

We hope you enjoy using Plasma 6.3.5!

Issues with Plasma itself can be reported on the KDE bugtracker [1]. In the case of packaging or other issues, please provide feedback on our mailing list [2], and/or file a bug against our PPA packages [3].

1. KDE bugtracker::https://bugs.kde.org
2. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

on May 08, 2025 06:28 PM

May 04, 2025

About 90% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Request for OpenSSH debugging help

Following the OpenSSH work described below, I have an open report about the sshd server sometimes crashing when clients try to connect to it. I can’t reproduce this myself, and arm’s-length debugging is very difficult, but three different users have reported it. For the time being I can’t pass it upstream, as it’s entirely possible it’s due to a Debian patch.

Is there anyone reading this who can reproduce this bug and is capable of doing some independent debugging work, most likely involving bisecting changes to OpenSSH? I’d suggest first seeing whether a build of the unmodified upstream 10.0p2 release exhibits the same bug. If it does, then bisect between 9.9p2 and 10.0p2; if not, then bisect the list of Debian patches. This would be extremely helpful, since at the moment it’s a bit like trying to look for a needle in a haystack from the next field over by sending instructions to somebody with a magnifying glass.
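For anyone willing to try, here is a rough sketch of the upstream bisect, assuming the openssh-portable GitHub mirror and its usual V_X_Y_PZ tag naming (adjust the tags if they differ):

git clone https://github.com/openssh/openssh-portable
cd openssh-portable
git bisect start V_10_0_P2 V_9_9_P2    # bad release first, then good
autoreconf && ./configure && make      # rebuild at each bisect step
# run your sshd reproducer, then mark the step: git bisect good | git bisect bad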

OpenSSH

I upgraded the Debian packaging to OpenSSH 10.0p1 (now designated 10.0p2 by upstream due to a mistake in the release process, but they’re the same thing), fixing CVE-2025-32728. This also involved a diffoscope bug report due to the version number change.

I enabled the new --with-linux-memlock-onfault configure option to protect sshd against being swapped out, but this turned out to cause test failures on riscv64, so I disabled it again there. Debugging this took some time since I needed to do it under emulation, and in the process of setting up a testbed I added riscv64 support to vmdb2.

In coordination with the wtmpdb maintainer, I enabled the new Y2038-safe native wtmpdb support in OpenSSH, so wtmpdb last now reports the correct tty.

I fixed a couple of packaging bugs:

I reviewed and merged several packaging contributions from others:

dput-ng

Since we added dput-ng integration to Debusine recently, I wanted to make sure that it was in good condition in trixie, so I fixed dput-ng: will FTBFS during trixie support period. Previously a similar bug had been fixed by just using different Ubuntu release names in tests; this time I made the tests independent of the current supported release data returned by distro_info, so this shouldn’t come up again.

We also ran into dput-ng: --override doesn’t override profile parameters, which needed somewhat more extensive changes since it turned out that that option had never worked. I fixed this after some discussion with Paul Tagliamonte to make sure I understood the background properly.

man-db

I released man-db 2.13.1. This just included various small fixes and a number of translation updates, but I wanted to get it into trixie in order to include a contribution to increase the MAX_NAME constant, since that was now causing problems for some pathological cases of manual pages in the wild that documented a very large number of terms.

debmirror

I fixed one security bug: debmirror prints credentials with --progress.

Python team

I upgraded these packages to new upstream versions:

In bookworm-backports, I updated these packages:

  • python-django to 3:4.2.20-1 (issuing BSA-123)
  • python-django-pgtrigger to 4.13.3

I dropped a stale build-dependency from python-aiohttp-security that kept it out of testing (though unfortunately too late for the trixie freeze).

I fixed or helped to fix various other build/test failures:

I packaged python-typing-inspection, needed for a new upstream version of pydantic.

I documented the architecture field in debian/tests/autopkgtest-pkg-pybuild.conf files.

I fixed other odds and ends of bugs:

Science team

I fixed various build/test failures:

on May 04, 2025 03:38 PM

May 01, 2025

Incus is a manager for virtual machines and system containers.

A virtual machine (VM) is an instance of an operating system that runs on a computer alongside the main operating system. A virtual machine uses hardware virtualization features for separation from the main operating system. In a virtual machine, a full operating system boots up. While in most cases you would run Linux in a VM without a desktop environment, you can also run Linux with a desktop environment (as in VirtualBox and VMware).

In How to run a Windows virtual machine on Incus on Linux we saw how to run a Windows VM on Incus. In this post we see how to run a Linux desktop virtual machine on Incus.

Updates

No updates yet.

Prerequisites

  1. You should have a system that runs Incus.
  2. A system with support for hardware virtualization so that it can run virtual machines.
  3. A virtual machine image of your preferred Linux desktop distribution.

Cheat sheet

You should specify how much RAM you are giving to the VM. The default is only 1GiB, which is not enough for desktop VMs. The --console=vga flag launches the Remote Viewer GUI application for you, so you can use the desktop in a window.

$ incus image list images:desktop       # List all available desktop images
$ incus launch --vm images:ubuntu/jammy/desktop mydesktop -c limits.memory=3GiB --console=vga
$ incus console mydesktop --type=vga    # Reconnect to already running instance
$ incus start mydesktop --console=vga   # Start an existing desktop VM

Availability of images

Currently, Incus provides you with the following VM images of Linux desktop distributions. The architecture is x86_64.

Run the following command to list all available Linux desktop images. incus image is the section of Incus that deals with the management of images. The list command lists the available images of a remote/repository, the default being images: (run incus remote list for the full list of remotes). After the colon (:), you type filter keywords; in this case we typed desktop to show only images that have the word desktop in them. We are interested in a few columns only, therefore -c ldt shows only the columns for the Alias, the Description, and the Type.

$ incus image list images:desktop -c ldt
+------------------------------------------+---------------------------+-----------------+
|                  ALIAS                   |      DESCRIPTION          |      TYPE       |
+------------------------------------------+---------------------------+-----------------+
| archlinux/desktop-gnome (3 more)         | Archlinux current amd64   | VIRTUAL-MACHINE |
+------------------------------------------+---------------------------+-----------------+
| opensuse/15.5/desktop-kde (1 more)       | Opensuse 15.5 amd64       | VIRTUAL-MACHINE |
+------------------------------------------+---------------------------+-----------------+
| opensuse/15.6/desktop-kde (1 more)       | Opensuse 15.6 amd64       | VIRTUAL-MACHINE |
+------------------------------------------+---------------------------+-----------------+
| opensuse/tumbleweed/desktop-kde (1 more) | Opensuse tumbleweed amd64 | VIRTUAL-MACHINE |
+------------------------------------------+---------------------------+-----------------+
| ubuntu/24.10/desktop (3 more)            | Ubuntu oracular amd64     | VIRTUAL-MACHINE |
+------------------------------------------+---------------------------+-----------------+
| ubuntu/focal/desktop (3 more)            | Ubuntu focal amd64        | VIRTUAL-MACHINE |
+------------------------------------------+---------------------------+-----------------+
| ubuntu/jammy/desktop (3 more)            | Ubuntu jammy amd64        | VIRTUAL-MACHINE |
+------------------------------------------+---------------------------+-----------------+
| ubuntu/noble/desktop (3 more)            | Ubuntu noble amd64        | VIRTUAL-MACHINE |
+------------------------------------------+---------------------------+-----------------+
| ubuntu/plucky/desktop (1 more)           | Ubuntu plucky amd64       | VIRTUAL-MACHINE |
+------------------------------------------+---------------------------+-----------------+
$ 

These images have been generated with the distrobuilder utility, https://github.com/lxc/distrobuilder. The purpose of the utility is to prepare the images so that when we launch them, we immediately get the desktop environment and do not have to perform any manual configuration. The configuration files for distrobuilder to create these images can be found at https://github.com/lxc/lxc-ci/tree/main/images. For example, the archlinux.yaml configuration file has a section to create the desktop image, along with the container and other virtual machine images.

The full list of Incus images is also available on the Web through the website https://images.linuxcontainers.org/. It is possible to generate more such desktop images by following the steps of the existing configuration files. Perhaps a Kali Linux desktop image would be very useful. On the https://images.linuxcontainers.org/ website you can also view the build logs that were generated while building the images, and figure out what parameters distrobuilder needs to build them (along with the actual configuration file). For example, here are the logs for the ArchLinux desktop image: https://images.linuxcontainers.org/images/archlinux/current/amd64/desktop-gnome/

Up to this point we got a list of the available virtual machine images that are provided by Incus. We are ready to boot them.

Booting a desktop Linux VM on Incus

When launching a VM, Incus provides by default 1GiB of RAM and 10GiB of disk space. The disk space is generally OK, but the RAM is too little for a desktop image (it’s OK for non-desktop images). For example, an Ubuntu desktop instance requires about 1.2GB of memory just to start up, and obviously more to run other programs. Therefore, if we do not specify more RAM, the VM will struggle to make do with the mere 1GiB.

Booting the Ubuntu desktop image on Incus

Here is the command to launch a desktop image. We use incus launch to launch the image. It’s a VM, hence --vm. We are using the image from the images: remote, the one called ubuntu/plucky/desktop (it’s the last from the list of the previous section). We configure a new limit for the memory usage, -c limits.memory=3GiB, so that the instance will be able to run successfully. Finally, the console is not textual but graphical. We specify that with --console=vga which means that Incus will launch the remote desktop utility for us.

$ incus launch --vm images:ubuntu/plucky/desktop mydesktop -c limits.memory=3GiB --console=vga
Launching mydesktop

Here is a screenshot of the new window with the running desktop virtual machine.

Screenshot of images:ubuntu/plucky/desktop

Now we closed the wizard.

Screenshot of images:ubuntu/plucky/desktop after we close the wizard.

Booting the ArchLinux desktop image on Incus

I cannot get this image to show the desktop. If someone can make this work, please post in a comment.

$ incus launch --vm images:archlinux/desktop-gnome mydesktop -c limits.memory=3GiB --console=vga -c security.secureboot=false
Launching mydesktop

Booting the OpenSUSE desktop image on Incus

$ incus launch --vm images:opensuse/15.5/desktop-kde mydesktop -c limits.memory=3GiB --console=vga
Launching mydesktop

Troubleshooting

I closed the desktop window but the VM is running. How do I get it back up?

If you closed the Remote Viewer window, you can get Incus to start it again with the following command. By doing so, you are actually reconnecting back to the VM and continue working from where you left off.

We are using the incus console action to connect to the running mydesktop instance and request access through the Remote Viewer (rather than a text console).

$ incus console mydesktop --type=vga

Error: This console is already connected. Force is required to take it over.

You are already connected to the desktop VM with the Remote Viewer and you are trying to connect again. Either go to the existing Remote Viewer window, or add the parameter --force to close the existing Remote Viewer window and open a new one.

Error: Instance is not running

You are trying to connect to a desktop VM with the Remote Viewer but the instance (which already exists) is not running. Use the action incus start to start the virtual machine, along with the --type=vga parameter to get Incus to launch the Remote Viewer for you.

$ incus start mydesktop --console=vga

I get no audio from the desktop VM! How do I get sound in the desktop VM?

This requires extra steps which I do not show yet. There are three options. The first is to use the QEMU device emulation to emulate a sound device in the VM. The second is to somehow push an audio device into the VM so that this audio device is used exclusively in the VM (have not tried this but I think it’s possible). The third and perhaps best option is to use network audio with PulseAudio/Pipewire. You enable network audio on your desktop and then configure the VM instance to connect to that network audio server. I have tried that and it worked well for me. The downside is that the Firefox snap package in the VM could not figure out that there is network audio there and I could not get audio in that application.
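For the adventurous, here is a rough sketch of the network audio option, assuming PulseAudio (or PipeWire’s PulseAudio shim) on the host; module-native-protocol-tcp and PULSE_SERVER are stock PulseAudio mechanisms, but the ACL range is an assumption you should adapt to your network:

# On the host: accept audio clients from the VM's network
pactl load-module module-native-protocol-tcp auth-ip-acl=10.0.0.0/8

# In the VM: point audio clients at the host
export PULSE_SERVER=tcp:<host-ip>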

How do I shutdown the desktop VM?

Use the desktop UI to perform the shutdown. The VM will shut down cleanly.

Error: Failed instance creation: The image used by this instance is incompatible with secureboot. Please set security.secureboot=false on the instance

You tried to launch a virtual machine with SecureBoot enabled but the image does not support SecureBoot. You need to disable SecureBoot when you launch this image. The instance has been created but is unable to run unless you disable SecureBoot. You can either disable SecureBoot through an Incus configuration for this image, or just delete the instance, and try again with the parameter -c security.secureboot=false.

Here is how to disable SecureBoot; then try to incus start that instance.

$ incus config set mydesktop security.secureboot=false

Here is how you would set that flag when you launch such a VM.

incus launch --vm images:archlinux/desktop-gnome mydesktop -c limits.memory=3GiB --console=vga -c security.secureboot=false

Note that official Ubuntu images can work with SecureBoot enabled; most others cannot. It has to do with whether the Linux kernel is digitally signed by a recognized certificate authority.

Error: Failed instance creation: Failed creating instance record: Add instance info to the database: Failed to create “instances” entry: UNIQUE constraint failed: instances.project_id, instances.name

This error message is a bit cryptic. It just means that you are trying to create or launch an instance while the instance already exists. Read as Error: The instance name already exists.
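For example, to confirm and resolve it (standard incus CLI commands):

$ incus list mydesktop
$ incus start mydesktop --console=vga    # reuse the existing instance
$ incus delete --force mydesktop         # or remove it and launch again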

on May 01, 2025 10:51 PM

April 25, 2025

Announcing Incus 6.12

Stéphane Graber

The Incus team is pleased to announce the release of Incus 6.12!

This release comes with some very long-awaited improvements, such as online growth of virtual machine memory, network address sets for easier network ACLs, revamped logging support, and more!

On top of the new features, this release also features quite a few welcome performance improvements, especially for systems with a lot of snapshots and with extra performance enhancements for those using ZFS.

The highlights for this release are:

  • Network address sets
  • Memory hotplug support in VMs
  • Reworked logging handling & remote syslog
  • SNAT support on complex network forwards
  • Authentication through access_token parameter
  • Improved server-side filtering in the CLI
  • More generated documentation

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

on April 25, 2025 04:05 AM

April 19, 2025

Ubuntu MATE 25.04 is ready to soar! 🪽 Celebrating our 10th anniversary as an official Ubuntu flavour with the reliable MATE Desktop experience you love, built on the latest Ubuntu foundations. Read on to learn more 👓️

A Decade of MATE

This release marks the 10th anniversary of Ubuntu MATE becoming an official Ubuntu flavour. From our humble beginnings, we’ve developed a loyal following of users who value a traditional desktop experience with modern capabilities. Thanks to our amazing community, contributors, and users who have been with us throughout this journey. Here’s to many more years of Ubuntu MATE! 🥂

What changed in Ubuntu MATE 25.04?

Here are the highlights of what’s new in the Plucky Puffin release:

  • Celebrating 10 years as an official Ubuntu flavour! 🎂
  • Optional full disk encryption in the installer 🔐
    • Enhanced advanced partitioning options
    • Better interaction with existing BitLocker-enabled Windows installations
    • Improved experience when installing alongside other operating systems

Major Applications

Accompanying MATE Desktop 🧉 and Linux 6.14 🐧 are Firefox 137 🔥🦊, Evolution 3.56 📧, LibreOffice 25.2.2 📚

See the Ubuntu 25.04 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 25.04

Available for 64-bit desktop computers!

Download

Upgrading to Ubuntu MATE 25.04

The upgrade process to Ubuntu MATE 25.04 is the same as for Ubuntu.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.
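
For reference, here is a minimal sketch of the standard Ubuntu network upgrade path, run from a fully updated 24.10 system:

# Bring the current release fully up to date first
sudo apt update && sudo apt full-upgrade

# Then start the interactive release upgrade
sudo do-release-upgrade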

on April 19, 2025 04:48 AM

April 17, 2025

The Xubuntu team is happy to announce the immediate release of Xubuntu 25.04.

Xubuntu 25.04, codenamed Plucky Puffin, is a regular release and will be supported for 9 months, until January 2026.

Xubuntu 25.04 features the latest Xfce 4.20, GNOME 48, and MATE 1.26 updates. Xfce 4.20 features many bug fixes and minor improvements, modernizing the Xubuntu desktop while maintaining a familiar look and feel. GNOME 48 apps are tightly integrated and have full support for dark mode. Users of QEMU and KVM will be delighted to find new stability with the desktop session—the long-running X server crash has been resolved in Xubuntu 25.04 and backported to all supported Xubuntu releases.

The final release images for Xubuntu Desktop and Xubuntu Minimal are available as torrents and direct downloads from xubuntu.org/download/.

As the main server might be busy the first few days after the release, we recommend using the torrents if possible.

We want to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues

Highlights

  • Xfce 4.20, released in December 2024, is included and contains many new features. Early Wayland support has been added, but is not available in Xubuntu.
  • GNOME 48 apps, including Font Viewer (gnome-font-viewer) and Mines (gnome-mines), include a refreshed appearance and usability improvements.

Known Issues

  • The shutdown prompt may not be displayed at the end of the installation. Instead, you might just see a Xubuntu logo, a black screen with an underscore in the upper left-hand corner, or a black screen. Press Enter, and the system will reboot into the installed environment. (LP: #1944519)
  • You may experience choppy audio or poor system performance while playing audio, but only in some virtual machines (observed in VMware and VirtualBox).
  • OEM installation options are not currently supported or available.

Please refer to the Xubuntu Release Notes for more obscure known issues, information on bugs affecting the release, bug fixes, and a list of new package versions.

The main Ubuntu Release Notes cover many other packages we carry and more generic issues.

Support

For support with the release, navigate to Help & Support for a complete list of methods to get help.

on April 17, 2025 08:59 PM
The Lubuntu Team is proud to announce Lubuntu 25.04, codenamed Plucky Puffin. Lubuntu 25.04 is the 28th release of Lubuntu, the 14th release of Lubuntu with LXQt as the default desktop environment. With 25.04 being an interim release, it will be supported until January of 2026. If you're a 24.10 user, please upgrade to 25.04 […]
on April 17, 2025 06:27 PM

The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 25.04 code-named “Plucky Puffin”. This marks Ubuntu Studio’s 36th release. This release is a Regular release and as such, it is supported for 9 months, until January 2026.

Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a more complete list of changes and known issues. Listed here are some of the major highlights.

This release is dedicated to the memory of Steve Langasek. Without Steve, Ubuntu Studio would not be where it is today. He provided invaluable guidance, insight, and instruction to our leader, Erich Eickmeyer, who not only learned how to package applications but learned how to do it properly. We owe him an eternal debt of gratitude.

You can download Ubuntu Studio 25.04 from our download page.

Special Notes

The Ubuntu Studio 25.04 disk image (ISO) exceeds 4 GB, so it cannot be saved to file systems such as FAT32 (which limit file size to 4 GB) and may not fit on a standard single-layer DVD. For this reason, we recommend downloading to a compatible file system and, when creating boot media, using a bootable USB stick or a dual-layer DVD.

Minimum installation media requirements: Dual-Layer DVD or 8GB USB drive.

Images can be obtained from this link: https://cdimage.ubuntu.com/ubuntustudio/releases/25.04/release/

Fully updated information, including upgrade instructions, is available in the Release Notes.

Upgrades from 24.10 should be enabled within a month after release, so we appreciate your patience. Upgrades from 24.04 LTS will be enabled after 24.10 reaches end-of-life in July 2025.

New This Release

GIMP 3.0: Wilber logo by Aryeom

GIMP 3.0!

The long-awaited GIMP 3.0 is included by default. GIMP is now capable of non-destructive editing with filters, better Photoshop PSD export, and so very much more! Check out the GIMP 3.0 release announcement for more information.

Pencil2D

Ubuntu Studio now includes Pencil2D! This is a 2D animation and drawing application that is sure to be helpful to animators. You can use basic clipart to make animations!

The basic features of Pencil2D are:

  • layers support (separate layers for the bitmap, vector, and sound parts)
  • bitmap drawing
  • vector drawing
  • sound support

LibreOffice No Longer in Minimal Install

The LibreOffice suite is now part of the full desktop installation only. This saves space for those who want a more minimal setup for their needs.

Invada Studio Plugins

Beginning with this release, we are including the Invada Studio Plugins, first created by Invada Records Australia. This includes distortion, delay, dynamics, filter, phaser, reverb, and utility audio plugins.

PipeWire 1.2.7

This release contains PipeWire 1.2.7. One major feature it has over 1.2.4 is that v4l2loopback support is available via the pipewire-v4l2 package, which is not installed by default.
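
Since it is opt-in, enabling v4l2loopback support is a one-line install (a sketch, assuming the package is available in your enabled archives):

sudo apt install pipewire-v4l2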

PipeWire’s JACK compatibility is configured to work out of the box and is zero-latency internally. System latency is configurable via Ubuntu Studio Audio Configuration.

However, if you would rather use straight JACK 2 instead, that’s also possible. Ubuntu Studio Audio Configuration can disable and enable PipeWire’s JACK compatibility on-the-fly. From there, you can simply use JACK via QJackCtl.

Ardour 8.12

This is, as of this writing, the latest release of Ardour, packed with the latest bugfixes.

To help support Ardour’s funding, you may obtain later versions directly from ardour.org. To do so, please make a one-time purchase or subscribe to Ardour on their website. If you wish to get later versions of Ardour from us, you will have to wait until the next regular release of Ubuntu Studio, due in October 2025.

Deprecation of Mailing Lists

Our mailing lists are inundated with spam, and there is no proper way to fix the filtering since the list server runs an outdated version of Mailman. This release announcement will therefore be the last one we send out via email. For support, we encourage using Ubuntu Discourse; for community news, click the notification bell in the Ubuntu Studio category there.

Frequently Asked Questions

Q: Does Ubuntu Studio contain snaps?
A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu could no longer distribute Firefox as a native .deb package. We have found that, after numerous improvements, the Firefox snap now performs just as well as the native .deb package did.

Thunderbird also became a snap so that the maintainers can get security patches delivered faster.

Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be built from a traditional Debian source package. While such apps do have a build system that can produce a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.

We have additional snaps that are Ubuntu-specific, such as the Firmware Updater and the Security Center. Contrary to popular myth, Ubuntu does not have any plans to switch all packages to snaps, nor do we.

Q: Will you make an ISO with {my favorite desktop environment}?
A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.

Q: What if I don’t want all these packages installed on my machine?
A: Simply use the Ubuntu Studio Installer to remove the features of Ubuntu Studio you don’t want or need!

Get Involved!

A wonderful way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!

Our project leader, Erich Eickmeyer, is now working on Ubuntu Studio at least part-time, and is hoping that the users of Ubuntu Studio can give enough to generate a monthly part-time income. We’re not there, but if every Ubuntu Studio user donated monthly, we’d be there! Your donations are appreciated! If other distributions can do it, surely we can! See the sidebar for ways to give!

Special Thanks

Huge special thanks for this release go to:

  • Eylul Dogruel: Artwork, Graphics Design
  • Ross Gammon: Upstream Debian Developer, Testing, Email Support
  • Sebastien Ramacher: Upstream Debian Developer
  • Dennis Braun: Upstream Debian Developer
  • Rik Mills: Kubuntu Council Member, help with Plasma desktop
  • Scarlett Moore: Kubuntu Project Lead, help with Plasma desktop
  • Len Ovens: Testing, insight
  • Mauro Gaspari: Tutorials, Promotion, and Documentation, Testing, keeping Erich sane
  • Simon Quigley: Qt6 Megastuff
  • Erich Eickmeyer: Project Leader, Packaging, Development, Direction, Treasurer
  • Steve Langasek: You are missed.
on April 17, 2025 05:08 PM

April 16, 2025

Recently, I was involved in an event where a video was shown, and the event was filmed. It would be nice to put the video of the event up somewhere so other people who weren't there could watch it. Obvious answer: upload it to YouTube. However, the video that was shown at the event is Copyrighted Media Content and therefore is disallowed by YouTube and the copyright holder; it's not demonetised (which wouldn't be a problem), it's flat-out blocked. So YouTube is out.

I'd like the video I'm posting to stick around for a long time; this is a sort of archival, reference thing where not many people will ever want to watch it but those that do might want to do so in ten years. So I'm loath to find some other random video hosting site, which will probably go bust, or pivot to selling online AI shoes or something. And the best way to ensure that something keeps going long-term is to put it on your own website, and use decent HTML, because that means that even in ten or twenty years it'll still work where the latest flavour-of-the-month thing will go the way of other old technologies and fade away and stop working over time. HTML won't do that.

But... it's an hour long and in full HD. 2.6GB of video. And one of the benefits of YouTube is that they'll make the video adaptive: it'll fit the screen, and the bandwidth, of whatever device someone's watching it on. If someone wants to look at this from their phone and its slightly-shaky two bars of 4G connection, they probably don't want to watch the loading spinner for an hour while it buffers a full HD video; they can ideally get a cut down, lower-quality but quicker to serve, version. But... how is this possible?

There are two aspects to doing this. One is that you serve up different resolutions of video, based on the viewer's screen size. This is exactly the same problem as is solved for images by the <picture> element to provide responsive images (where if you're on a 400px-wide screen you get a 400px version of the background image, not the 2000px full-res version), and indeed the magic words to search for here are responsive video. And the person you will find who is explaining all this is Scott Jehl, who has written a good description of how to do responsive video which explains it all in detail. You make versions of the video at different resolutions, and serve whichever one best matches the screen you're on, just like responsive images. Nice work; just what the doctor ordered.
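
As a rough sketch of that encoding step (my own command lines, not Scott's exact recipe; the file names, sizes and bitrates are illustrative), ffmpeg can produce the per-resolution renditions:

# One H.264/AAC rendition per target resolution; scale=-2 keeps the width an even number
ffmpeg -i talk.mp4 -vf scale=-2:1080 -c:v libx264 -b:v 5000k -c:a aac talk-1080p.mp4
ffmpeg -i talk.mp4 -vf scale=-2:720 -c:v libx264 -b:v 2800k -c:a aac talk-720p.mp4
ffmpeg -i talk.mp4 -vf scale=-2:480 -c:v libx264 -b:v 1400k -c:a aac talk-480p.mp4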

But there's also a second aspect to this: responsive video adapts to screen size, but it doesn't adapt to bandwidth. What we want, in addition to the responsive stuff, is that on poor connections the viewer gets a lower-bandwidth version as well as a lower-resolution version, and that the viewer's browser can dynamically switch from moment to moment between different versions of the video to match their current network speed. This task is the job of HTTP Live Streaming, or HLS. To do this, you essentially encode the video in a bunch of different qualities and screen sizes, so you've got a bunch of separate videos (which you've probably already done above for the responsive part) and then (and this is the key) you chop up each video into a load of small segments. That way, instead of the browser downloading the whole one hour of video at a particular resolution, it only downloads the next segment at its current choice of resolution, and then if you suddenly get more (or less) bandwidth, it can switch to getting segment 2 from a different version of the video which better matches where you currently are.

Doing this sounds hard. Fortunately, all hard things to do with video are handled by ffmpeg. There's a nice writeup by Mux on how to convert an mp4 video to HLS with ffmpeg, and it works great. I put together a little Python script to construct the ffmpeg command line to do it, but you can do it yourself; the script just does some of the boilerplate for you. Very useful.
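
The segmenting step itself, per rendition, is a single command. Here's a minimal sketch (not the Mux write-up verbatim, and the file names are illustrative):

# Split an already-encoded rendition into ~6-second segments plus a VOD playlist
ffmpeg -i talk-720p.mp4 -c copy \
  -hls_time 6 -hls_playlist_type vod \
  -hls_segment_filename 'talk-720p_%03d.ts' \
  talk-720p.m3u8

Repeat for each rendition, then point a master playlist at the per-rendition playlists so the browser can switch between them on the fly.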

So now I can serve up a video which adapts to the viewer's viewing conditions, and that's just what I wanted. I have to pay for the bandwidth now (which is the other benefit of having YouTube do it, and one I now don't get) but that's worth it for this, I think. Cheers to Scott and Mux for explaining all this stuff.

on April 16, 2025 08:26 AM

April 15, 2025

Ubuntu Budgie 25.04 (Plucky Puffin) is a Standard Release with 9 months of support by your distro maintainers and Canonical, from April 2025 to Jan 2026. These release notes showcase the key takeaways for 24.10 upgraders to 25.04. Please note – there is no direct upgrade path from 24.04.2 to 25.04; you must uplift to 24.10 first or perform a fresh install. In these release notes the areas…

Source

on April 15, 2025 05:31 PM

April 06, 2025

Ubuntu MATE 24.10 is more of what you like, stable MATE Desktop on top of current Ubuntu. Read on to learn more 👓️

Ubuntu MATE 24.10

Thank you! 🙇

My sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏 I’d like to acknowledge the close collaboration with the Ubuntu Foundations team and the Ubuntu flavour teams, in particular Erich Eickmeyer who pushed critical fixes while I was travelling. Thank you! 💚

What changed since the Ubuntu MATE 24.04 LTS?

Here are the highlights of what’s changed since the release of Ubuntu MATE 24.04

  • Ships stable MATE Desktop 1.26.2 with a handful of bug fixes 🐛
  • Switched back to Slick Greeter (replacing Arctica Greeter) due to a race condition in the boot process which resulted in the display manager failing to initialise.
    • Returning to Slick Greeter reintroduces the ability to easily configure the login screen via a graphical application, something users have been requesting be re-instated 👍
  • Ubuntu MATE 24.10 .iso 📀 is now 3.3GB 🤏, down from 4.1GB in the 24.04 LTS release.
    • This is thanks to some fixes in the installer that no longer require as many packages in the live-seed.

Login Window Configuration

What didn’t change since the Ubuntu MATE 24.04 LTS?

If you follow upstream MATE Desktop development, then you’ll have noticed that Ubuntu MATE 24.10 doesn’t ship with the recently released MATE Desktop 1.28 🧉

I have prepared packaging for MATE Desktop 1.28, along with the associated components, but encountered some bugs and regressions 🐞 I wasn’t able to get things to a standard I’m happy to ship by default, so it is tried and true MATE 1.26.2 one last time 🪨

Major Applications

Accompanying MATE Desktop 1.26.2 🧉 and Linux 6.11 🐧 are Firefox 131 🔥🦊, Celluloid 0.27 🎥, Evolution 3.54 📧, LibreOffice 24.8.2 📚

See the Ubuntu 24.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 24.10

Available for 64-bit desktop computers!

Download

Upgrading to Ubuntu MATE 24.10

The upgrade process to Ubuntu MATE 24.10 is the same as for Ubuntu.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.

on April 06, 2025 04:54 PM

April 03, 2025

A couple weeks ago I was playing around with a multiple architecture CI setup with another team, and that led me to pull out my StarFive VisionFive 2 SBC again to see where I could make it this time with an install.

I left off about a year ago when I succeeded in getting an older version of Debian on it, but attempts to get the tooling to install a more broadly supported version of U-Boot to the SPI flash were unsuccessful. Then I got pulled away to other things, effectively just bringing my VF2 around to events as a prop for my multiarch talks – which it did beautifully! I even had one conference attendee buy one to play with while sitting in the audience of my talk. Cool.

I was delighted to learn how much progress had been made since I last looked. Canonical has published more formalized documentation: Install Ubuntu on the StarFive VisionFive 2 in the place of what had been a rather cluttered wiki page. So I got all hooked up and began my latest attempt.

My first step was to grab the pre-installed server image. I got that installed, but struggled a little with persistence once I unplugged the USB UART adapter and rebooted. I then decided just to move forward with the Install U-Boot to the SPI flash instructions. I struggled a bit here for two reasons:

  1. The documentation today leads off with having you download the livecd, but you actually want the pre-installed server image to flash U-Boot; the livecd step doesn’t come until later. Admittedly, the instructions do say this, but I wasn’t reading carefully enough and was more focused on the steps.
  2. I couldn’t get the 24.10 pre-installed image to work for flashing U-Boot, but once I went back to the 24.04 pre-installed image it worked.

And then I had to fly across the country. We’re spending a couple weeks around spring break here at our vacation house in Philadelphia, but the good thing about SBCs is that they’re incredibly portable and I just tossed my gear into my backpack and brought it along.

Thanks to Emil Renner Berthing (esmil) on the Ubuntu Matrix server for providing me with enough guidance to figure out where I had gone wrong above, which got me on my way just a few days after we arrived in Philly.

With the newer U-Boot installed, I was able to use the Ubuntu 24.04 livecd image on a micro SD Card to install Ubuntu 24.04 on an NVMe drive! That’s another new change since I last looked at installation, using my little NVMe drive as a target was a lot simpler than it would have been a year ago. In fact, it was rather anticlimactic, hah!

And with that, I was fully logged in to my new system.

elizabeth@r2kt:~$ cat /proc/cpuinfo
processor : 0
hart : 2
isa : rv64imafdc_zicntr_zicsr_zifencei_zihpm_zba_zbb
mmu : sv39
uarch : sifive,u74-mc
mvendorid : 0x489
marchid : 0x8000000000000007
mimpid : 0x4210427
hart isa : rv64imafdc_zicntr_zicsr_zifencei_zihpm_zba_zbb

It has 4 cores, so here’s the full output: vf2-cpus.txt

What will I do with this little single board computer? I don’t know yet. I joked with my husband that I’d “install Debian on it and forget about it like everything else” but I really would like to get past that. I have my little multiarch demo CI project in the wings, and I’ll probably loop it into that.

Since we were in Philly, I had a look over at my long-neglected Raspberry Pi 1B that I have here. When we first moved in, I used it as an ssh tunnel to get to this network from California. It was great for that! But now we have a more sophisticated network setup between the houses with a VLAN that connects them, so the ssh tunnel is unnecessary. In fact, my poor Raspberry Pi fell off the WiFi network when we switched to 802.1X just over a year ago and I never got around to getting it back on the network. I connected it to a keyboard and monitor and started some investigation. Honestly, I’m surprised the little guy was still running, but it’s doing fine!

And it had been chugging along running Raspbian based on Debian 9. Well, that’s worth an upgrade. But not just an upgrade; I didn’t want to stress the device and SD card, so I figured flashing it with the latest version of Raspberry Pi OS was the right way to go. It turns out, it’s been a long time since I’ve done a Raspberry Pi install.

I grabbed the Raspberry Pi Imager and went on my way. It’s really nice. I went with the Raspberry Pi OS Lite install since it’s the Pi 1B and I didn’t want a GUI. The imager asked the usual installation questions, loaded up my SSH key, and I was ready to pop the card into my Pi.

The only thing I need to finish sorting out is networking. The old USB WiFi adapter I have in it doesn’t initialize until after boot, so wpa_supplicant can’t negotiate with the access point during startup. I’ll have to play around with it. And what will I use this for once I do, now that it’s not an SSH tunnel? I’m not sure yet.

I realize this blog post isn’t very deep or technical, but I guess that’s the point. We’ve come a long way in recent years in support for non-x86 architectures, so installation has gotten a lot easier across several of them. If you’re new to playing around with architectures, I’d say it’s a really good time to start. You can hit the ground running with some wins, and then play around as you go with various things you want to help get working. It’s a lot of fun, and the years I spent playing around with Debian on Sparc back in the day definitely laid the groundwork for the job I have at IBM working on mainframes. You never know where a bit of technical curiosity will get you.

on April 03, 2025 08:43 PM

March 27, 2025

Thanks to the hard work of our contributors, we are happy to announce the release of Lubuntu's Plucky Beta, which will become Lubuntu 25.04. This is a snapshot of the daily images. Approximately two months ago, we posted an Alpha-level update. While some information is duplicated below, that contains an accurate, concise technical summary of […]
on March 27, 2025 09:02 PM

March 22, 2025

The Open Source Initiative has two classes of board seats: Affiliate seats and Individual Member seats.

In the upcoming election, each affiliate can nominate a candidate and cast a vote for the Affiliate candidates, but there's only one Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a platform similar to mine, especially with regard to the OSAID, I decided to run as an Individual Member on an aligned "ticket" to avoid contention for the single Affiliate seat.

Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February. 

I was dismayed when I received the following mail from Nick Vidal:

Dear Luke,

Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.

We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.

Best regards,
OSI Election Teams

Nowhere on the "OSI’s board of directors in 2025: details about the elections" page do they list a timezone for closure of nominations; they simply list Monday 17 February. 

The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.

Accordingly, I was not able to participate in the "potential board director" info sessions, but people who attended heard that the importance of accommodating differing time zones was discussed during the info session, and that OSI representatives mentioned they try to accommodate everyone's time zones. This seems in sharp contrast with the above policy.

I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle. 

Upd, N.B.: to people writing about this, I use they/them pronouns

on March 22, 2025 04:30 PM