June 16, 2025

Canonical, the publisher of Ubuntu and trusted open source solutions provider, is proud to sponsor HPE Discover Las Vegas 2025. Join us from June 23–26 to explore how our collaboration with Hewlett Packard Enterprise (HPE) is transforming the future of enterprise IT, from virtualization and cloud infrastructure to AI/ML workloads.

Register for HPE Discover Las Vegas 2025

    What to expect at booth #2235

    Stop by our booth to engage with our team and get a closer look at our latest innovations. Here’s what’s in store:

    • Live Demos: Experience our solutions in action.
    • Expert Sessions: Learn directly from the teams shaping next-gen open source infrastructure.
    • 1:1 Consultations: Discuss your unique challenges with our team and discover how our joint technologies can help optimize your IT strategy.

    Speaking sessions at the Canonical booth

    Visit our booth and attend sessions led by industry experts covering a range of open source solutions. Plus, all attendees will receive a special gift!

    Transform your data center into a cloud-native powerhouse with OpenStack

    Discover how to gain control over your infrastructure, optimize costs, and automate operations while building a flexible, secure foundation that scales seamlessly with your business growth – whether integrated into your GreenLake multi-cloud strategy or deployed as a standalone private cloud.

    Accelerate your AI strategy with Canonical’s portfolio

    From Kubeflow for MLOps to Charmed Kubernetes for orchestration, see how open source AI infrastructure drives innovation while reducing complexity and costs.

    Ubuntu: a trusted foundation for HPE VM Essentials

    Learn how Ubuntu powers HPE VM Essentials to deliver the simplicity, security, and scalability your business demands – making enterprise virtualization accessible to organizations of every size.

    Driving innovation together: Canonical and HPE

    As a strategic partner of HPE and a member of the HPE Technology Partner Program, Canonical brings decades of open source innovations to enterprise-grade solutions. Together, we deliver a full-stack experience — with integrated, secure, and cost-effective platforms that scale with your business.

    Through our joint collaboration, organizations gain:

    • A single point of contact for the entire scope of their project.
    • The combined expertise from HPE and Canonical throughout the project lifecycle.

    Learn more about our offerings and how Canonical and HPE can propel your business forward.

    Learn how UIDAI and HPE worked with Canonical to transition from a monolithic code base to a microservice architecture.

    Want to see more?
    Stop by booth #2235 to speak to our experts.

    Are you interested in setting up a meeting with our team?
    Reach out to our Alliance Business Director:
    Valerie Noto – valerie.noto@canonical.com

    on June 16, 2025 10:51 PM

    Welcome to the Ubuntu Weekly Newsletter, Issue 896 for the week of June 8 – 14, 2025. The full version of this issue is available here.

    In this issue we cover:

    • Welcome New Members and Developers
    • Ubuntu Stats
    • Hot in Support
    • LXD: Weekly news #398
    • Other Meeting Reports
    • Upcoming Meetings and Events
    • LoCo Events
    • Canonical News
    • In the Blogosphere
    • Featured Audio and Video
    • Updates and Security for Ubuntu 22.04, 24.04, 24.10, and 25.04
    • And much more!

    The Ubuntu Weekly Newsletter is brought to you by:

    • Krytarik Raido
    • Bashing-om
    • Chris Guiver
    • Wild Man
    • Din Mušić – LXD
    • And many others

    If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

    on June 16, 2025 10:26 PM

    The Promised LAN

    Paul Tagliamonte

    The Internet has changed a lot in the last 40+ years. Fads have come and gone. Network protocols have been designed, deployed, adopted, and abandoned. Industries have come and gone. The types of people on the internet have changed a lot. The number of people on the internet has changed a lot, creating an information medium unlike anything ever seen before in human history. There are a lot of good things about the Internet as of 2025, but there’s also an inescapable hole in what it used to be, for me.

    I miss being able to throw a site up to send around to friends to play with without worrying about hordes of AI-feeding HTML combine harvesters DoS-ing my website, costing me thousands in network transfer for the privilege. I miss being able to put a lightly authenticated game server up and not worry too much at night – wondering if that process is now mining bitcoin. I miss being able to run a server in my home closet. Decades of cat and mouse games have rendered running a mail server nearly impossible. Those who are “brave” enough to try are met with weekslong stretches of delivery failures and countless hours yelling ineffectually into a pipe that leads from the cheerful lobby of some disinterested corporation directly into a void somewhere 4 layers below ground level.

    I miss the spirit of curiosity, exploration, and trying new things. I miss building things for fun without having to worry about being too successful, after which “security” offices start demanding my supplier paperwork in triplicate as heartfelt thanks from their engineering teams. I miss communities that are run because it is important to them, not for ad revenue. I miss community operated spaces and having more than four websites that are all full of nothing except screenshots of each other.

    Every other page I find myself on now has an AI generated click-bait title, shared for rage-clicks all brought-to-you-by-our-sponsors–completely covered wall-to-wall with popup modals, telling me how much they respect my privacy, with the real content hidden at the bottom bracketed by deceptive ads served by companies that definitely know which new coffee shop I went to last month.

    This is wrong, and those who have seen what was know it.

    I can’t keep doing it. I’m not doing it any more. I reject the notion that this is as it needs to be. It is wrong. The hole left in what the Internet used to be must be filled. I will fill it.

    What comes before part b?

    Throughout the 2000s, some of my favorite memories were from LAN parties at my friends’ places. Dragging your setup somewhere, long nights playing games, goofing off, even building software all night to get something working—being able to do something fiercely technical in the context of a uniquely social activity. It wasn’t really much about the games or the projects—it was an excuse to spend time together, just hanging out. A huge reason I learned so much in college was that campus was a non-stop LAN party – we could freely stand up servers, talk between dorms on the LAN, and hit my dorm room computer from the lab. Things could go from individual to social in a matter of seconds. The Internet used to work this way—my dorm had public IPs handed out by DHCP, and my workstation could serve traffic from anywhere on the internet. I haven’t been back to campus in a few years, but I’d be surprised if this were still the case.

    In December of 2021, three of us got together and connected our houses together in what we now call The Promised LAN. The idea is simple—fill the hole we feel is gone from our lives. Build our own always-on 24/7 nonstop LAN party. Build a space that is intrinsically social, even though we’re doing technical things. We can freely host insecure game servers or one-off side projects without worrying about what someone will do with it.

    Over the years, it’s evolved very slowly—we haven’t pulled any all-nighters. Our mantra has become “old growth”, building each layer carefully. As of May 2025, the LAN is now 19 friends running around 25 network segments. Those 25 networks are connected to 3 backbone nodes, exchanging routes and IP traffic for the LAN. We refer to the set of backbone operators as “The Bureau of LAN Management”. Combined decades of operating critical infrastructure have driven The Bureau to make a set of well-understood, boring, predictable, interoperable and easily debuggable decisions to make this all happen. Nothing here is exotic or even technically interesting.

    Applications of trusting trust

    The hardest part, however, is rejecting the idea that anything outside our own LAN is untrustworthy—nearly irreversible damage inflicted on us by the Internet. We have solved this by not solving it. We strictly control membership—the absolute hard minimum for joining the LAN requires 10 years of friendship with at least one member of the Bureau, with another 10 years of friendship planned. Members of the LAN can veto new members even if all other criteria are met. Even with those strict rules, there’s no shortage of friends that meet the qualifications—but we are not equipped to take that many folks on. It’s hard to join—both socially and technically. Doing something malicious on the LAN requires a lot of highly technical effort upfront, and it would endanger a decade of friendship. We have relied on those human, social, interpersonal bonds to bring us all together. It’s worked for the last 4 years, and it should continue working until we think of something better.

    We assume roommates, partners, kids, and visitors all have access to The Promised LAN. If they’re let into our friends' network, there is a level of trust that works transitively for us—I trust them to be on mine. This LAN is not for “security”, rather, the network border is a social one. Benign “hacking”—in the original sense of misusing systems to do fun and interesting things—is encouraged. Robust ACLs and firewalls on the LAN are, by definition, an interpersonal—not technical—failure. We all trust every other network operator to run their segment in a way that aligns with our collective values and norms.

    Over the last 4 years, we’ve grown our own culture and fads—around half of the people on the LAN have thermal receipt printers with open access, for printing out quips or jokes on each other’s counters. It’s incredible how much network transport and a trusting culture gets you—there’s a 3-node IRC network, exotic hardware to gawk at, radios galore, a NAS storage swap, LAN only email, and even a SIP phone network of “redphones”.

    DIY

    We do not wish to, nor will we, rebuild the internet. We do not wish to, nor will we, scale this. We will never be friends with enough people, as hard as we may try. Participation hinges on us all having fun. As a result, membership will never be open, and we will never have enough connected LANs to deal with the technical and social problems that start to happen with scale. This is a feature, not a bug.

    This is a call for you to do the same. Build your own LAN. Connect it with friends’ homes. Remember what is missing from your life, and fill it in. Use software you know how to operate and get it running. Build slowly. Build your community. Do it with joy. Remember how we got here. Rebuild a community space that doesn’t need to be mediated by faceless corporations and ad revenue. Build something sustainable that brings you joy. Rebuild something you use daily.

    Bring back what we’re missing.

    on June 16, 2025 03:58 PM

    June 12, 2025

    E351 Beringela Nuclear

    Podcast Ubuntu Portugal

    Still wrestling with e-books and smart rings, Miguel and Diogo deliver fine lessons on how to Reduce, Reuse and Recycle, involving little birds and fried eggs; they speak ill of Windows 11 and explain how to say goodbye to Windows 10 in the best possible way – and still find time, between very LoCo meetings, to set off Canonical’s latest bombshell – which is giving out prizes! – but also involves leaving X.org behind by the side of the road. Then we review the news on assorted summits, new dates for your calendars, and what to expect from the upcoming versions of Ubuntu Touch and Questing Cueca (that is how you say it, isn’t it…?).

    You know the drill: listen, subscribe and share!

    Attribution and licenses

    This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, licensed under the CC0 1.0 Universal License. The sound effects in this episode carry the following licenses: dry-joke laughter: patrons laughing.mp3 by pbrproductions – https://freesound.org/s/418831/ – License: Attribution 3.0; trombone: wah wah sad trombone.wav by kirbydx – https://freesound.org/s/175409/ – License: Creative Commons 0; who won?: 01 WINNER.mp3 by jordanielmills – https://freesound.org/s/167535/ – License: Creative Commons 0; this is an Ubuntu Alert: Breaking news intro music by humanoide9000 – https://freesound.org/s/760770/ – License: Attribution 4.0. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorization. The episode art was commissioned from Shizamura – artist, illustrator and comic book author. You can get to know Shizamura better in Ciberlândia and on her website.

    on June 12, 2025 12:00 AM

    June 11, 2025

    Apple has introduced a new open-source Swift framework named Containerization, designed to fundamentally reshape how Linux containers are run on macOS. In a detailed presentation, Apple revealed a new architecture that prioritizes security, privacy, and performance, moving away from traditional methods to offer a more integrated and efficient experience for developers.

    The new framework aims to provide each container with the same level of robust isolation previously reserved for large, monolithic virtual machines, but with the speed and efficiency of a lightweight solution.

    Here is the video:

    The Old Way: A Single, Heavy Virtual Machine

    • Resource Inefficiency: The large VM had resources like CPU and memory allocated to it upfront, regardless of how many containers were running.
    • Security & Privacy Concerns: Sharing files from the Mac with a container was a two-step process; files were first shared with the entire VM, and then to the specific container, potentially exposing data more broadly than intended.
    • Maintenance Overhead: The large VM contained a full Linux distribution with core utilities, dynamic libraries, and a libc implementation, increasing the attack surface and requiring constant updates.

    A New Vision: Security, Privacy, and Performance

    The Containerization framework was built with three core goals to address these challenges:

    1. Security: Provide every single container with its own isolated virtual machine. This dramatically reduces the attack surface by eliminating shared kernels and system utilities between containers.
    2. Privacy: Enable file and directory sharing on a strict, per-container basis. Only the container that requests access to a directory will receive it.
    3. Performance: Achieve sub-second start times for containers while respecting the user’s system resources. If no containers are running, no resources are allocated.

    Under the Hood: How Containerization Works

    Containerization is more than just an API; it’s a complete rethinking of the container runtime on macOS.

    Lightweight, Per-Container Virtual Machines

    The most significant architectural shift is that each container runs inside its own dedicated, lightweight virtual machine. This approach provides profound benefits:

    • Strong Isolation: Each container is sandboxed within its own VM, preventing processes in one container from viewing or interfering with the host or other containers.
    • Dedicated Networking: Every container gets its own dedicated IP address, which improves network performance and eliminates the cumbersome need for port mapping.
    • Efficient Filesystems: Containerization exposes the image’s filesystem to the Linux VM as a block device formatted with EXT4. Apple has even developed a Swift package to manage the creation and population of these EXT4 filesystems directly from macOS.

    vminitd: The Swift-Powered Heart of the Container

    Once a VM starts, a minimal initial process called vminitd takes over. This is not a standard Linux init system; it’s a custom-built solution with remarkable characteristics:

    • Built in Swift: vminitd is written entirely in Swift and runs as the first process inside the VM.
    • Extremely Minimal Environment: To maximize security, the filesystem vminitd runs in is barebones. It contains no core utilities (like ls, cp), no dynamic libraries, and no libc implementation.
    • Statically Compiled: To run in such a constrained environment, vminitd is cross-compiled from a Mac into a single, static Linux executable. This is achieved using Swift’s Static Linux SDK and musl, a libc implementation optimized for static linking.
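    For the curious, the general shape of that cross-compilation workflow is public. A minimal sketch using Swift’s Static Linux SDK follows – the SDK bundle URL is a placeholder (check swift.org for the current release), and this is not Apple’s actual build recipe for vminitd:

```shell
# Install the Swift Static Linux SDK (bundle URL is a placeholder --
# see swift.org for the current artifact bundle and its checksum).
swift sdk install <static-linux-sdk-bundle-url>

# Cross-compile the package from macOS into a statically linked Linux
# executable; musl is linked in, so the binary needs no libc at runtime.
swift build --swift-sdk x86_64-swift-linux-musl -c release
```

    The result is a single self-contained executable that can run in a filesystem with no core utilities, dynamic libraries, or libc – exactly the kind of barebones environment described above.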

    vminitd is responsible for setting up the entire container environment, including assigning IP addresses, mounting the container’s filesystem, and supervising all processes that run within the container.

    Getting Started: The container Command-Line Tool

    To showcase the power of the framework, Apple has also released an open-source command-line tool simply called container. This tool allows developers to immediately begin working with Linux containers in this new, secure environment.

    • Pulling an image:
    container image pull alpine:latest
    • Running an interactive shell:
    container run -ti alpine:latest sh

    Within milliseconds, the user is dropped into a shell running inside a fully isolated Linux environment. Running the ps aux command from within the container reveals only the shell process and the ps process itself, a clear testament to the powerful process isolation at work.

    An Open Invitation to the Community

    Both the Containerization framework and the container tool are available on GitHub. Apple is inviting developers to explore the source code, integrate the framework into their own projects, and contribute to its future by submitting issues and pull requests.

    This move signals a strong commitment from Apple to making macOS a first-class platform for modern, Linux container-based development, offering a solution that is uniquely secure, private, and performant.

    The post Apple Unveils “Containerization” for macOS: A New Era for Linux Containers on macOS appeared first on Utappia.

    on June 11, 2025 10:06 PM
    KDE Mascot

    Release notes: https://kde.org/announcements/gear/25.04.2/

    Now available in the snap store!

    Along with that, I have fixed some outstanding bugs:

    Ark: can now open and save files on removable media

    Kasts: Once again has sound

    WIP: Updating Qt6 to 6.9 and frameworks to 6.14

    Enjoy everyone!

    Unlike our software, life is not free. Please consider a donation, thanks!

    on June 11, 2025 01:14 PM

    Reference architectures speed up time to market for agentic AI projects

    To ease the path of enterprise AI adoption and accelerate the conversion of AI insights into business value, NVIDIA recently published the NVIDIA Enterprise AI Factory validated design, an ecosystem of solutions that integrates seamlessly with enterprise systems, data sources, and security infrastructure. The NVIDIA templates for hardware and software design are tailored for modern AI projects, including Physical AI and HPC, with a focus on agentic AI workloads.

    Canonical is proud to be included in the NVIDIA Enterprise AI Factory validated design. Canonical Kubernetes orchestration supports the process of efficiently building, deploying, and managing a diverse and evolving suite of AI agents on high-performance infrastructure. The Ubuntu operating system is at the heart of NVIDIA Certified Systems across OEM partnerships like Dell. Canonical also collaborates with NVIDIA to ensure the stability and security of open source software across the AI Factory by securing agentic AI dependencies within NVIDIA’s artifact repository.

    Canonical’s focus on open source, model-driven operations and ease of use offers enterprises flexible options for building their AI Factory on NVIDIA-accelerated infrastructure.

    Canonical Kubernetes

    Canonical Kubernetes is a securely designed and supported foundational platform. It unifies the management of a complex stack – including NVIDIA AI Enterprise, storage, networking, and observability tools – onto a single platform. 

    Within the NVIDIA Enterprise AI Factory validated design, Kubernetes is used to independently develop, update, and scale microservice-based agents, coupled with automated CI/CD pipelines. Kubernetes also handles the significant and often burstable compute demands for training AI models and scales inference services for deployed agents based on real-time needs. 

    Based on upstream Kubernetes, Canonical Kubernetes is integrated with the NVIDIA GPU and Networking Operators to leverage NVIDIA hardware acceleration and supports the deployment of NVIDIA AI Enterprise, enabling AI workloads with NVIDIA NIM and accelerated libraries. 
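    As an illustration of what wiring up that hardware acceleration typically involves, here is a generic sketch using NVIDIA’s public GPU Operator Helm chart – not Canonical’s validated configuration, and the namespace name is simply a convention:

```shell
# Add NVIDIA's Helm repository and install the GPU Operator, which
# deploys drivers, the container toolkit, and the device plugin on GPU nodes.
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace

# Once the operator is running, nodes advertise GPUs as a schedulable resource.
kubectl get nodes -o jsonpath='{.items[*].status.allocatable.nvidia\.com/gpu}'
```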

    Canonical Kubernetes provides full-lifecycle automation and long-term support, including the recently announced 12 years of security maintenance.

    Security updates for the OS and popular AI toolchains and libraries

    Ubuntu is the most widely used operating system for AI workloads. Choosing Ubuntu as the base OS for NVIDIA AI Factory gives organizations a trusted repository for all their open source, not just the OS. With Ubuntu Pro, customers get up to 12 years of security maintenance for thousands of open source packages including the most widely used libraries and toolchains, like Python, R and others. Organizations can complement that with Canonical’s Container Build Service to get custom containers built to spec, and security maintenance for their entire open source dependency tree. 

    To learn more about what the NVIDIA Enterprise AI Factory validated design could do for you, get in touch with our team – we’d love to hear about your project.

    Visit us at our booth E03 at NVIDIA GTC Paris on June 11-12 for an in-person conversation about how NVIDIA Enterprise AI Factory validated designs can accelerate your AI projects.

    Further reading

    on June 11, 2025 11:04 AM

    June 09, 2025

    Welcome to the Ubuntu Weekly Newsletter, Issue 895 for the week of June 1 – 7, 2025. The full version of this issue is available here.

    In this issue we cover:

    • Call for nominations: DMB appointment process
    • Ubuntu Stats
    • Hot in Support
    • LXD: Weekly news #397
    • Other Meeting Reports
    • Upcoming Meetings and Events
    • LoCo Events
    • Ubuntu Server Gazette – Issue 4: Stable Release Updates – The Misunderstood process
    • Lubuntu Council Elections 2025
    • Kernel HWE update for the upcoming Noble 24.04.3 point release
    • Phasing out Bazaar code hosting
    • Canonical News
    • In the Press
    • In the Blogosphere
    • Featured Audio and Video
    • Updates and Security for Ubuntu 22.04, 24.04, 24.10, and 25.04
    • And much more!

    The Ubuntu Weekly Newsletter is brought to you by:

    • Krytarik Raido
    • Bashing-om
    • Chris Guiver
    • Wild Man
    • Din Mušić – LXD
    • And many others

    If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

    on June 09, 2025 10:46 PM

    Thanks, Mailbox!

    Simon Quigley

    A gentleman by the name of Arif Ali reached out to me on LinkedIn. I won’t share the actual text of the message, but I’ll paraphrase:
    “I hope everything is going well with you. I’m applying to be an Ubuntu ‘Per Package Uploader’ for the SOS package, and I was wondering if you could endorse my application.”

    Arif, thank you! I have always appreciated our chats, and I truly believe you’re doing great work. I don’t want to interfere with anything by jumping on the wiki, but just know you have my full backing.

    “So, who actually lets Arif upload new versions of SOS to Ubuntu, and what is it?”
    Great question!

    Firstly, I realized that I needed some more info on what SOS is, so I can explain it to you all. On a quick search, this was the first result.

    Okay, so genuine question…

    Why does the first DuckDuckGo result for “sosreport” point to an article for a release of Red Hat Enterprise Linux that is two versions old? In other words, hey DuckDuckGo, your grass is starting to get long. Or maybe Red Hat? Can’t tell, I give you both the benefit of the doubt, in good faith.

    So, I clarified the search and found this. Canonical, you’ve done a great job. Red Hat, you could work on your SEO so I can actually find the RHEL 10 docs quicker, but hey… B+ for effort. ;)

    Anyway, let me tell you about Arif. Just from my own experiences.

    He’s incredible. He shows love to others, and whenever I would sponsor one of his packages during my time in Ubuntu, he was always incredibly receptive to feedback. I really appreciate the way he reached out to me, as well. That was really kind, and to be honest, I needed it.

    As for character, he has my +1. In terms of the members of the DMB (aside from one person who I will not mention by name, who has caused me immense trouble elsewhere), here’s what I’d tell you if you asked me privately…

    “It’s just PPU. Arif works on SOS as part of his job. Please, do still grill him. The test, and ensuring people know that they actually need to pass a test to get permissions, that’s pretty important.”

    That being said, I think he deserves it.

    Good luck, Arif. I wish you well in your meeting. I genuinely hope this helps. :)

    And to my friends in Ubuntu, I miss you. Please reach out. I’d be happy to write you a public letter, too. Only if you want. :)

    on June 09, 2025 05:20 PM

    People in the Arena

    Simon Quigley

    Theodore Roosevelt is someone I have admired for a long time. I especially appreciate what has been coined the Man in the Arena speech.

    A specific excerpt comes to mind after reading world news over the last twelve hours:

    “It is well if a large proportion of the leaders in any republic, in any democracy, are, as a matter of course, drawn from the classes represented in this audience to-day; but only provided that those classes possess the gifts of sympathy with plain people and of devotion to great ideals. You and those like you have received special advantages; you have all of you had the opportunity for mental training; many of you have had leisure; most of you have had a chance for enjoyment of life far greater than comes to the majority of your fellows. To you and your kind much has been given, and from you much should be expected. Yet there are certain failings against which it is especially incumbent that both men of trained and cultivated intellect, and men of inherited wealth and position should especially guard themselves, because to these failings they are especially liable; and if yielded to, their – your – chances of useful service are at an end. Let the man of learning, the man of lettered leisure, beware of that queer and cheap temptation to pose to himself and to others as a cynic, as the man who has outgrown emotions and beliefs, the man to whom good and evil are as one. The poorest way to face life is to face it with a sneer. There are many men who feel a kind of twisted pride in cynicism; there are many who confine themselves to criticism of the way others do what they themselves dare not even attempt. There is no more unhealthy being, no man less worthy of respect, than he who either really holds, or feigns to hold, an attitude of sneering disbelief toward all that is great and lofty, whether in achievement or in that noble effort which, even if it fails, comes second to achievement.
A cynical habit of thought and speech, a readiness to criticise work which the critic himself never tries to perform, an intellectual aloofness which will not accept contact with life’s realities – all these are marks, not, as the possessor would fain think, of superiority, but of weakness. They mark the men unfit to bear their part manfully in the stern strife of living, who seek, in the affectation of contempt for the achievements of others, to hide from others and from themselves their own weakness. The rôle is easy; there is none easier, save only the rôle of the man who sneers alike at both criticism and performance.”

    The riots in LA are seriously concerning to me. If something doesn’t happen soon, this is going to get out of control.

    If you are participating in these events, or know someone who is, tell them to calm down. Physical violence is never the answer, no matter your political party.

    De-escalate immediately.

    Be well. Show love to one another!

    on June 09, 2025 05:58 AM

    June 08, 2025

    My Debian contributions this month were all sponsored by Freexian. Things were a bit quieter than usual, as for the most part I was sticking to things that seemed urgent for the upcoming trixie release.

    You can also support my work directly via Liberapay or GitHub Sponsors.

    OpenSSH

    After my appeal for help last month to debug intermittent sshd crashes, Michel Casabona helped me put together an environment where I could reproduce it, which allowed me to track it down to a root cause and fix it. (I also found a misuse of strlcpy affecting at least glibc-based systems in passing, though I think that was unrelated.)

    I worked with Daniel Kahn Gillmor to fix a regression in ssh-agent socket handling.

    I fixed a reproducibility bug depending on whether passwd is installed on the build system, which would have affected security updates during the lifetime of trixie.

    I backported openssh 1:10.0p1-5 to bookworm-backports.

    I issued bookworm and bullseye updates for CVE-2025-32728.

    groff

    I backported a fix for incorrect output when formatting multiple documents as PDF/PostScript at once.

    debmirror

    I added a simple autopkgtest.

    Python team

    I upgraded these packages to new upstream versions:

    • automat
    • celery
    • flufl.i18n
    • flufl.lock
    • frozenlist
    • python-charset-normalizer
    • python-evalidate (including pointing out an upstream release handling issue)
    • python-pythonjsonlogger
    • python-setproctitle
    • python-telethon
    • python-typing-inspection
    • python-webargs
    • pyzmq
    • trove-classifiers (including a small upstream cleanup)
    • uncertainties
    • zope.testrunner

    In bookworm-backports, I updated these packages:

    • python-django to 3:4.2.21-1 (issuing BSA-124)
    • python-django-pgtrigger to 4.14.0-1

    I fixed problems building these packages reproducibly:

    I backported fixes for some security vulnerabilities to unstable (since we’re in freeze now so it’s not always appropriate to upgrade to new upstream versions):

    I fixed various other build/test failures:

    I added non-superficial autopkgtests to these packages:

    I packaged python-django-hashids and python-django-pgbulk, needed for new upstream versions of python-django-pgtrigger.

    I ported storm to Python 3.14.

    Science team

    I fixed a build failure in apertium-oci-fra.

    on June 08, 2025 12:20 AM

    June 06, 2025

    Hey everyone,

    Get ready to dust off those virtual cobwebs and crack open a cold one (or a digital one, if you’re in a VM) because uCareSystem 25.05.06 has officially landed! And let me tell you, this release is so good, it’s practically a love letter to your Linux system – especially if that system happens to be chilling out in Windows Subsystem for Linux (WSL).

    That’s right, folks, the big news is out: WSL support for uCareSystem has finally landed! We know you’ve been asking, we’ve heard your pleas, and we’ve stopped pretending we didn’t see you waving those “Free WSL” signs.

    Now, your WSL instances can enjoy the same tender loving care that uCareSystem provides for your “bare metal” Ubuntu/Debian Linux setups. No more feeling left out, little WSLs! You can now join the cool kids at the digital spa.

    Here is a video of it:

    But wait, there’s more! (Isn’t there always?) We didn’t just stop at making friends with Windows. We also tackled some pesky gremlins that have been lurking in the shadows:

    • Apt-key dependency? Gone! We told it to pack its bags and hit the road. Less dependency drama, more system harmony.
    • Remember that time your internet check was slower than a sloth on a caffeine crash? We squashed that “Bug latency curl in internet check phase” bug. Your internet checks will now be snappier than a startled squirrel.
    • We fixed that “Wrong kernel cleanup” issue. Your kernels are now safe from accidental digital haircuts.
    • And for those of you who hit snags with Snap in WSL, kernel cleanup (again, because we’re thorough!), and other bits, we’ve applied some much-needed digital duct tape and elbow grease to fix those and more.
    • We even gave our code a good scrub, fixing those annoying shellcheck warnings. Because nobody likes a messy codebase, especially not us!
    • Oh, and the -k option? Yeah, that’s gone too. We decided it was useless so we had to retire it to a nice, quiet digital farm upstate.
    • Finally, for all you newcomers and memory-challenged veterans, we’ve added install and uninstall instructions to the README. Because sometimes, even we forget how to put things together after we’ve taken them apart.

    So, what are you waiting for? Head over to utappia.org (or wherever you get your uCareSystem goodness) and give your system the pampering it deserves with uCareSystem 25.05.06. Your WSL instance will thank you, probably with a digital high-five.

    Download the latest release and give it a spin. As always, feedback is welcome.

    Acknowledgements

    Thanks to the following users for their support:

    • P. Loughman – Thanks for your continued support
    • D. Emge – Thanks for your continued support
    • W. Schreinemachers – Thanks for your continued support
    • W. Schwartz
    • D. e Swarthout
    • D. Luchini
    • M. Stanley
    • N. Evangelista

    Your involvement helps keep this project alive, evolving, and aligned with real-world needs. Thank you.

    Happy maintaining!

    Where can I download uCareSystem?

    As always, I want to express my gratitude for your support over the past 15 years. I have received countless messages from inside and outside Greece about how useful they found the application. I hope you find the new version useful as well.

    If you’ve found uCareSystem to be valuable and it has saved you time, consider showing your appreciation with a donation. You can contribute via PayPal or Debit/Credit Card by clicking on the banner.

    • Pay what you want: click the donate button and enter the amount you want to donate. You will then be taken to the page with the latest version, where you can download the installer.
    • Maybe next time: if you don’t want to donate this time, just click the download icon to be taken to the same download page.

    Once installed, the updates for new versions will be installed along with your regular system updates.

    The post uCareSystem 25.05.06: Because Even Your WSL Deserves a Spa Day! appeared first on Utappia.

    on June 06, 2025 10:59 PM

    What is Bazaar code hosting?

    Bazaar is a distributed revision control system, originally developed by Canonical. It provides functionality similar to that of the now-dominant Git.

    Bazaar code hosting is an offering from Launchpad that provides both a Bazaar backend for hosting code and a web frontend for browsing it. The frontend is provided by the Loggerhead application on Launchpad.

    Sunsetting Bazaar

    Bazaar passed its peak a decade ago. Breezy, a fork of Bazaar, has kept a form of it alive, but the last release of Bazaar itself was in 2016. Since then its impact has declined, and modern replacements like Git are available.

    Just keeping Bazaar running requires a non-trivial amount of development, operations time, and infrastructure resources – all of which could be better used elsewhere.

    Launchpad will now begin the process of discontinuing support for Bazaar.

    Timelines

    We are aware that migrating repositories and updating workflows will take some time, which is why we have planned the sunsetting in two phases.

    Phase 1

    Loggerhead, the web frontend used to browse code in a web browser, will be shut down imminently. Analysis of the access logs showed that hardly any requests come from legitimate users; almost all of the traffic comes from scrapers and other abusers. Sunsetting Loggerhead will not affect the ability to pull, push and merge changes.

    Phase 2

    From September 1st, 2025, we intend to shut down Bazaar, the code hosting backend, entirely. Users need to migrate all their repositories from Bazaar to Git before this deadline.

    Migration paths

    The following blog post describes all the steps needed to convert a Bazaar repository hosted on Launchpad to Git.

    Migrate a Repository From Bazaar to Git

    Call for action

    Our users are extremely important to us. Ubuntu, for instance, has a long history of Bazaar usage, and we will need to work with the Ubuntu Engineering team to find ways to remove the reliance on Bazaar integration for the development of Ubuntu. If you are also using Bazaar and you have a special use case, or you do not see a clear way forward, please reach out to us to discuss your use case and how we can help you.

    You can reach us in #launchpad:ubuntu.com on Matrix, or submit a question or send us an e-mail via feedback@launchpad.net.

    It is also recommended to join the ongoing discussion at https://discourse.ubuntu.com/t/phasing-out-bazaar-code-hosting/62189.

    on June 06, 2025 09:26 AM

    June 05, 2025

    Announcing Incus 6.13

    Stéphane Graber

    The Incus team is pleased to announce the release of Incus 6.13!

    This is a VERY busy release with a lot of new features of all sizes and for all kinds of different users, so there should be something for everyone!

    The highlights for this release are:

    • Windows agent support
    • Improvements to incus-migrate
    • SFTP on custom volumes
    • Configurable instance external IP address on OVN networks
    • Ability to pin gateway MAC address on OVN networks
    • Clock handling in virtual machines
    • New get-client-certificate and get-client-token commands
    • DHCPv6 support for OCI
    • Network host tables configuration for routed NICs
    • Support for split image publishing
    • Preseed of certificates
    • Configuration of list format in the CLI
    • Add CLI aliases for create/add and delete/remove/rm
    • OS metrics are now included in Incus metrics when running on Incus OS
    • Converted more database logic to generated code
    • Converted more CLI list functions to using server side filtering
    • Converted more documentation to be generated from the code

    The full announcement and changelog can be found here.
    And for those who prefer videos, here’s the release overview video:

    You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

    And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

    Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

    Enjoy!

    on June 05, 2025 05:07 AM

    E350 a Senhora Dos Anéis

    Podcast Ubuntu Portugal

    Making a triumphant return from Oppidum Sena, where they caught a heat wave and a bellyful of kid goat, cheese and wine, our heroes bring news from Wikicon Portugal 2025 and tell us about their technological adventures, which include stripping down a Cervantes and catching felines named Felicity in strange corners of the Internet. To welcome them back, Princess Leia was present, a.k.a. Joana Simões, a.k.a. The Lady of the Rings, back from a glorious mission in Tokyo and about to leave for Mexico – the conversation will shake the ground beneath your feet and the satellites above your heads!

    You know the drill: listen, subscribe and share!

    Attribution and licenses

    This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo, and the open-source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorization. The episode art was commissioned from Shizamura – artist, illustrator and comic book author. You can get to know Shizamura better in Ciberlândia and on her website.

    on June 05, 2025 12:00 AM

    June 04, 2025

    If you’re looking for a low-power, always-on solution for streaming your personal media library, the Raspberry Pi makes a great Plex server. It’s compact, quiet, affordable, and perfect for handling basic media streaming—especially for home use.

    In this post, I’ll guide you through setting up Plex Media Server on a Raspberry Pi, using Raspberry Pi OS (Lite or Full) or Debian-based distros like Ubuntu Server.


    🧰 What You’ll Need

    • Raspberry Pi 4 or 5 (at least 2GB RAM, 4GB+ recommended)
    • microSD card (32GB+), or SSD via USB 3.0
    • External storage for media (USB HDD/SSD or NAS)
    • Ethernet or Wi-Fi connection
    • Raspberry Pi OS (Lite or Desktop)
    • A Plex account (free is enough)

    ⚙ Step 1: Prepare the Raspberry Pi

    1. Flash Raspberry Pi OS using Raspberry Pi Imager
    2. Enable SSH and set hostname (optional)
    3. Boot the Pi, log in, and update:
    sudo apt update && sudo apt upgrade -y
    

    📦 Step 2: Install Plex Media Server

    Plex is available for ARM-based devices via their official repository.

    1. Add the Plex repository and signing key (apt-key is deprecated on current Debian/Ubuntu releases, so store the key in a dedicated keyring instead):
    curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo gpg --dearmor -o /usr/share/keyrings/plexmediaserver.gpg
    echo "deb [signed-by=/usr/share/keyrings/plexmediaserver.gpg] https://downloads.plex.tv/repo/deb public main" | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
    sudo apt update
    
    2. Install Plex:
    sudo apt install plexmediaserver -y
    

    🔁 Step 3: Enable and Start the Service

    Enable Plex on boot and start the service:

    sudo systemctl enable plexmediaserver
    sudo systemctl start plexmediaserver
    

    Make sure it’s running:

    sudo systemctl status plexmediaserver
    

    🌐 Step 4: Access Plex Web Interface

    Open your browser and go to:

    http://<your-pi-ip>:32400/web
    

    Log in with your Plex account and begin the setup wizard.


    📂 Step 5: Add Your Media Library

    Plug in your external HDD or mount a network share, then:

    sudo mkdir -p /mnt/media
    sudo mount /dev/sda1 /mnt/media
    

    Make sure Plex can access it:

    sudo chown -R plex:plex /mnt/media
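    The mount above will not survive a reboot. To make it persistent, you can add a line to /etc/fstab along these lines (a sketch only: the UUID is a placeholder for your drive’s actual UUID, and the filesystem is assumed to be ext4 – check both with lsblk -f):

    ```
    # Mount the media drive at boot; nofail prevents boot from hanging if the drive is unplugged
    UUID=<your-drive-uuid>  /mnt/media  ext4  defaults,nofail  0  2
    ```

    After editing, sudo mount -a applies the entry without a reboot.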
    

    Add the media folder during the Plex setup under Library > Add Library.


    💡 Optional Tips

    • Transcoding: The Pi can handle direct play (no transcoding) well, but struggles with transcoding large files. Use compatible formats like H.264 (MP4).
    • USB Boot: For better performance, boot the Pi from an SSD instead of a microSD card.
    • Power Supply: Use a proper 5V/3A PSU to avoid crashes under heavy disk load.
    • Thermal: Add a heatsink or fan for the Pi if using Plex for long sessions.

    🔐 Secure Your Server

    • Use your router to forward port 32400 only if you want remote access.
    • Set a strong Plex password.
    • Enable Tailscale or WireGuard for secure remote access without exposing ports.

    ✅ Conclusion

    A Raspberry Pi might not replace a full-blown NAS or dedicated server, but for personal use or as a secondary Plex node, it’s surprisingly capable. With low energy usage and silent operation, it’s the perfect DIY home media solution.

    If you’re running other services like Pi-hole or Home Assistant, the Pi can multitask well — just avoid overloading it with too much transcoding.

    The post Building a Plex Media Server with Raspberry Pi appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

    on June 04, 2025 09:30 PM

    June 01, 2025

    If you’re a Linux user craving a real-time strategy (RTS) game with the polish of Age of Empires and the historical depth of a university textbook—yet entirely free and open source—then you need to try 0 A.D.. This epic project by Wildfire Games is not just an open-source alternative to mainstream RTS games—it’s a serious contender in its own right, crafted with passion, precision, and community spirit.

    🎮 What is 0 A.D.?

    0 A.D. (Zero Anno Domini) is a free, open-source, cross-platform RTS game that takes players deep into ancient history, allowing them to build and battle with civilizations from 500 B.C. to 500 A.D. The game is built using the custom Pyrogenesis engine, a modern 3D engine developed from scratch for this purpose, and available under the GPL license—yes, you can even tinker with the code yourself.

    It’s not just a clone. 0 A.D. sets itself apart with:

    • 🛡 Historically accurate civilizations
    • 🗺 Dynamic and random map generation
    • ⚔ Tactical land and naval combat
    • 🏗 City-building with tech progression
    • 🧠 AI opponents and multiplayer support
    • 💬 Modding tools and community-created content

    🐧 Why It’s Perfect for Linux Users

    Linux gamers often get the short end of the stick when it comes to big-name games—but 0 A.D. feels like it was made for us. Here’s why Linux users should care:

    ✔ Native Linux Support

    0 A.D. runs natively on Linux without the need for Wine, Proton, or compatibility layers. You can install it directly from your distro’s package manager or build it from source if you like full control.

    For example:

    # On Debian/Ubuntu
    sudo apt install 0ad
    
    # On Arch Linux
    sudo pacman -S 0ad
    
    # On Fedora
    sudo dnf install 0ad
    

    No weird dependencies. No workarounds. Just pure, native performance.

    🎨 Vulkan Renderer and FSR Support

    With Alpha 27 “Agni”, 0 A.D. now supports Vulkan, giving Linux users much better graphics performance, lower CPU overhead, and compatibility with modern GPU features. Plus, it includes AMD FidelityFX Super Resolution (FSR)—which boosts frame rates and visual quality even on low-end hardware.

    This makes 0 A.D. one of the few FOSS games optimized for modern Linux graphics stacks like Mesa, Wayland, and PipeWire.

    🔄 Rolling Updates and Dev Engagement

    The development team and community are highly active, with new features, bug fixes, and optimizations arriving steadily. You don’t need to wait years for meaningful updates—0 A.D. grows with each alpha release, and Linux users are treated as first-class citizens.

    Want to contribute a patch or translate the UI into Malay? You can. Everything is transparent and accessible.


    🏛 What Makes the Gameplay So Good?

    Let’s dive deeper into why the gameplay itself shines.

    🏗 Realistic Economy and Base Building

    Unlike many fast-paced arcade RTS games, 0 A.D. rewards planning and resource management. You’ll manage four resources—food, wood, stone, and metal—to construct buildings, raise armies, and advance through phases that represent a civilization’s growth. Advancing from village phase to town phase to city phase unlocks more units and structures.

    Each civilization has unique architectural styles, tech trees, and military units. For example:

    • Romans have disciplined legionaries and siege weapons.
    • Persians boast fast cavalry and majestic palaces.
    • Athenians excel in naval warfare.

    ⚔ Intense Tactical Combat

    Units in 0 A.D. aren’t just damage sponges. There’s formation control, terrain advantage, flanking tactics, and unit counters. The AI behaves strategically, and in multiplayer, experienced players can pull off devastating maneuvers.

    Naval combat has received significant improvements recently, with better ship handling and water pathfinding—something many commercial RTS games still struggle with.

    🗺 Endless Map Variety and Mod Support

    0 A.D. includes:

    • Skirmish maps
    • Random maps (with different biomes and elevation)
    • Scenario maps (with scripted events)

    And thanks to the integrated mod downloader, you can browse, install, and play with community mods in just a few clicks. Want to add new units, tweak balance, or add fantasy elements? You can.


    🕹 Multiplayer and Replays

    Play with friends over LAN, the Internet, or against the built-in AI. The game includes:

    • 🧭 Multiplayer save and resume support
    • 👁 Observer tools (with flares, commands, and overlays)
    • ⏪ Replay functionality to study your tactics or cast tournaments

    There’s even an in-game lobby where players coordinate matches across all platforms.


    👥 Community and Contribution

    The 0 A.D. project thrives because of its community:

    • Developers contribute code via GitHub.
    • Artists create stunning 3D models and animations.
    • Historians help ensure cultural accuracy.
    • Translators localize the game into dozens of languages.
    • Players write guides, tutorials, and strategy posts.

    If you’re a Linux user and want to contribute to an ambitious FOSS project, this is the perfect gateway into game development, design, or open collaboration.


    🧑‍💻 How to Install on Linux

    Here’s a quick reference:

    Option 1: Package Manager (Recommended)

    • Debian/Ubuntu: sudo apt install 0ad
    • Arch Linux: sudo pacman -S 0ad
    • Fedora: sudo dnf install 0ad
    • openSUSE: sudo zypper install 0ad

    Option 2: Compile from Source

    Follow the official instructions at https://trac.wildfiregames.com/wiki/BuildInstructions


    🎯 Final Thoughts

    0 A.D. is more than just a game—it’s a testament to what free and open-source software can achieve. For Linux gamers, it’s a rare gem: a game that respects your platform, performs well, and lets you own your experience entirely.

    So whether you’re a seasoned general or a curious strategist, download 0 A.D. today and relive history—on your terms.

    👉 Visit https://play0ad.com to download and start playing.

    The post 0 A.D. on Linux: A Stunning, Free RTS Experience That Rivals the Best appeared first on Hamradio.my - Amateur Radio, Tech Insights and Product Reviews by 9M2PJU.

    on June 01, 2025 04:14 PM

    May 29, 2025

    Ubuntu Studio 22.04 LTS has reached the end of its three years of supported life provided by the Ubuntu Studio team. All users are urged to upgrade to 24.04 LTS at this time.

    This means that the KDE Plasma, audio, video, graphics, photography, and publishing components of your system will no longer receive updates, and the Ubuntu Studio team won’t support the release after 29-May-2025. However, your base packages from Ubuntu will continue to receive security updates until 2027, since Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud and Ubuntu Core continue to receive updates.

    See the Ubuntu Studio 24.04 LTS Release Notes for upgrade instructions.

    No single release of any operating system can be supported indefinitely, and Ubuntu Studio is no exception to this rule.

    Long-Term Support releases are identified by an even-numbered year of release and a month of release of April (04). Hence, the most recent Long-Term Support release is 24.04 (YY.MM = 2024.April), and the next Long-Term Support release will be 26.04 (2026.April). LTS releases of official Ubuntu flavors are supported for three years (unlike Ubuntu Desktop and Server, which are supported for five), meaning LTS users are expected to upgrade after every LTS release, with a one-year buffer.

    on May 29, 2025 04:50 PM

    May 24, 2025

    A SomewhatMaxSAT Solver

    Julian Andres Klode

    As you may recall from previous posts and elsewhere, I have been busy writing a new solver for APT. Today I want to share some of the latest changes in how it approaches solving.

    The idea for the solver was that manually installed packages are always protected from removals – in terms of SAT solving, they are facts. Automatically installed packages become optional unit clauses. Optional clauses are solved after the manual ones; they don’t partake in normal unit propagation.

    This worked fine, say you had

    A                                   # install request for A
    B                                   # manually installed, keep it
    A depends on: conflicts-B | C
    

    On a system with B installed, installing A resulted in C being installed, as the solver was not allowed to install the conflicts-B package while B was present.

    However, I also introduced a mode to allow removing manually installed packages, and that’s where it broke down: now, instead of B being a fact, our clauses looked like:

    A                               # install request for A
    A depends on: conflicts-B | C
    Optional: B                     # try to keep B installed
    

    As a result, we installed conflicts-B and removed B; the steps the solver takes are:

    1. A is a fact, mark it
    2. A depends on: conflicts-B | C is the strongest clause, try to install conflicts-B
    3. We unit propagate that conflicts-B conflicts with B, so we mark not B
    4. Optional: B is reached but is not satisfiable; we ignore it because it’s optional.

    This isn’t correct: Just because we allow removing manually installed packages doesn’t mean that we should remove manually installed packages if we don’t need to.

    Fixing this turns out to be surprisingly easy. In addition to adding our optional (soft) clauses, let’s first assume all of them!

    But to explain how this works, we first need to explain some terminology:

    1. The solver operates on a stack of decisions
    2. “enqueue” means a fact is being added at the current decision level, and enqueued for propagation
    3. “assume” bumps the decision level, and then enqueues the assumed variable
    4. “propagate” looks at all the facts and sees if any clause becomes unit, and then enqueues it
    5. “unit” is when a clause has a single literal left to assign

    To illustrate this in pseudo Python code:

    1. We introduce all our facts, and if they conflict, we are unsat:

      for fact in facts:
          enqueue(fact)
      if not propagate():
          return False
      
    2. For each optional literal, we register a soft clause and assume it. If the assumption fails, we ignore it. If it succeeds, but propagation fails, we undo the assumption.

      for optionalLiteral in optionalLiterals:
          registerClause(SoftClause([optionalLiteral]))
          if assume(optionalLiteral) and not propagate():
              undo()
      
    3. Finally we enter the main solver loop:

      while True:
          if not propagate():
              if not backtrack():
                  return False
          elif <all clauses are satisfied>:
              return True
          elif it := find("best unassigned literal satisfying a hard clause"):
              assume(it)
          elif it := find("best literal satisfying a soft clause"):
              assume(it)
      

    The key point to note is that the main loop will undo the assumptions in order; so if you assume A,B,C and B is not possible, we will have also undone C. But since C is also enqueued as a soft clause, we will then later find it again:

    1. Assume A: State=[Assume(A)], Clauses=[SoftClause([A])]
    2. Assume B: State=[Assume(A),Assume(B)], Clauses=[SoftClause([A]),SoftClause([B])]
    3. Assume C: State=[Assume(A),Assume(B),Assume(C)], Clauses=[SoftClause([A]),SoftClause([B]),SoftClause([C])]
    4. Solve finds a conflict, backtracks, and sets not C: State=[Assume(A),Assume(B),not(C)]
    5. Solve finds a conflict, backtracks, and sets not B: State=[Assume(A),not(B)] – C is no longer assumed either
    6. Solve, assume C as it satisfies SoftClause([C]) as next best literal: State=[Assume(A),not(B),Assume(C)]
    7. All clauses are satisfied, solution is A, not B, and C.

    This is not (correct) MaxSAT, because we actually do not guarantee that we satisfy as many soft clauses as possible. Consider you have the following clauses:

    Optional: A
    Optional: B
    Optional: C
    B Conflicts with A
    C Conflicts with A
    

    There are two possible results here:

    1. {A} – If we assume A first, we are unable to satisfy B or C.
    2. {B,C} – If we assume either B or C first, A is unsat.
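    This order-dependence is easy to check with a small brute-force sketch (a standalone illustration of the assume-then-undo heuristic on the example above, not actual APT code; greedy and completable are hypothetical helper names):

    ```python
    from itertools import product

    VARS = ["A", "B", "C"]  # each has a soft clause "Optional: X"

    def hard_ok(assign):
        # Hard clauses: B conflicts with A, C conflicts with A
        return not (assign["A"] and assign["B"]) and not (assign["A"] and assign["C"])

    def completable(partial):
        """Can the unassigned variables still be set so the hard clauses hold?"""
        free = [v for v in VARS if v not in partial]
        return any(hard_ok({**partial, **dict(zip(free, vals))})
                   for vals in product([False, True], repeat=len(free)))

    def greedy(order):
        """Assume each soft literal in turn; undo the assumption if it cannot hold."""
        assign = {}
        for v in order:
            assign[v] = True
            if not completable(assign):
                assign[v] = False  # the assumption failed, drop it
        return sum(assign.values())  # number of soft clauses satisfied

    # Exhaustive search for the true MaxSAT optimum
    best = max(sum(vals) for vals in product([False, True], repeat=len(VARS))
               if hard_ok(dict(zip(VARS, vals))))

    print(greedy(["A", "B", "C"]))  # 1: assuming A first blocks both B and C
    print(greedy(["B", "C", "A"]))  # 2: this order happens to reach the optimum
    print(best)                     # 2
    ```

    With n soft clauses, the greedy pass calls the (here trivial) satisfiability check n times, while the exhaustive search enumerates all 2**n assignments, matching the trade-off described below.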

    The question to ponder, though, is whether we actually need a global maximum, or whether a local maximum is satisfactory in practice for a dependency solver. A naive MaxSAT solver needs to run the SAT solver 2**n times for n soft clauses, whereas our heuristic only needs n runs.

    For dependency solving, it seems we do not have a strong need for a global maximum: there are various other preferences between our literals, say priorities; and empirically, from evaluating hundreds of regressions without the initial assumptions, I can say that the assumptions do fix those cases and the result is correct.

    Further improvements exist, though, and we can look into them if they are needed, such as:

    • Use a better heuristic:

      If we assume 1 clause and solve, and we cause 2 or more clauses to become unsatisfiable, then that clause is a local minimum and can be skipped. This is a more common heuristic for MaxSAT solvers. It gives us a better local maximum, but not a global one.

      This is more or less what the Smart package manager did, except that in Smart, all packages were optional, and the entire solution was scored. It calculated a basic solution without optimization and then toggled each variable and saw if the score improved.

    • Implement an actual search for a global maximum:

      This involves reading the literature. There are various versions of this, for example:

      1. Find unsatisfiable cores and use those to guide relaxation of clauses.

      2. A bounds-based search, where we translate sum(satisfied clauses) > k into SAT, and then search in one of the following ways:

        1. from 0 upward
        2. from n downward
        3. perform a binary search on [0, k] satisfied clauses.

        Actually, we do not even need to translate sum constraints into CNF, because we can just add a specialized new type of constraint to our code.

    on May 24, 2025 10:14 AM

    May 22, 2025

    What are Launchpad’s mailing lists?

    Launchpad’s mailing lists are team-based mailing lists, which means that each team can have one of them. E-mails from Launchpad’s mailing lists contain `lists.launchpad.net` in their address.

    For more information on the topic please see https://help.launchpad.net/ListHelp.

    What are they not?

    Please note that both lists.canonical.com and lists.ubuntu.com are not managed by Launchpad, but by Canonical Information Systems.

    Timeline

    Launchpad will no longer offer mailing lists as of the end of October 2025, which aligns with the end of the 25.10 cycle.

    Migration paths

    Depending on your use case, there are different alternatives available.

    For a couple of years now, Discourse has been a viable alternative for most scenarios. Launchpad also offers the Answers feature for discussions. If it is not so much about communication, but more about receiving information, e.g. updates on a bug report, you should be aware that you can also subscribe teams to bugs.

    Call for action

    We are aware that your use case may be very different from the above listed ones. If you are using Launchpad’s mailing lists today and you do not see a clear way forward, please reach out to us to discuss your use case and how we can help you.

    Please contact us on Matrix (#launchpad:ubuntu.com) or drop us a message via feedback@launchpad.net.

    Please note that this is still work in progress, and we will provide more information over the upcoming weeks and months.

    on May 22, 2025 05:42 PM

    Snaps!

    I actually released last week 🙂 I haven’t had time to blog, but today is my birthday and taking some time to myself!

    This release came with a major bugfix. As it turns out, our applications had been very crashy on non-KDE platforms, including Ubuntu proper – unfortunately for years, and I didn’t know. Developers were closing the bug reports as invalid because users couldn’t provide a stacktrace. I have now convinced most developers to assign snap bugs to the Snap platform, so I at least get a chance to try and fix them. So with that said, if you tried our snaps in the past and gave up in frustration, please do try them again! I also spent some time cleaning up our snaps to only have current releases in the store, as rumor has it snapcrafters will be responsible for any security issues. With the 200+ snaps I maintain, that is a lot of responsibility. We’ll see if I can pull it off.

    Life!

    My last surgery was a success! I am finally healing and out of a sling for the first time in almost a year. I have also lined up a good amount of web work for next month and hopefully beyond. I have decided to drop the piece work for donations and will only accept per project proposals for open source work. I will continue to maintain KDE snaps for as long as time allows. A big thank you to everyone that has donated over the last year to fund my survival during this broken arm fiasco. I truly appreciate it!

    With that said,  if you want to drop me a donation for my work, birthday or well-being until I get paid for the aforementioned web work please do so here:

    on May 22, 2025 12:49 PM

    May 18, 2025


    Are you using Kubuntu 25.04 Plucky Puffin, our current stable release? Or are you already running our development builds of the upcoming 25.10 (Questing Quokka)?

    We currently have Plasma 6.3.90 (Plasma 6.4 Beta1) available in our Beta PPA for Kubuntu 25.04 and for the 25.10 development series.

    However this is a Beta release, and we should re-iterate the disclaimer:

    

    DISCLAIMER: This release contains untested and unstable software. It is highly recommended you do not use this version in a production environment and do not use it as your daily work environment. You risk crashes and loss of data.

    

    6.4 Beta1 packages and required dependencies are available in our Beta PPA. The PPA should work whether you are currently using our backports PPA or not. If you are prepared to test via the PPA, then add the beta PPA and then upgrade:

    sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt full-upgrade -y

    Then reboot.

    In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.
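    A typical revert with ppa-purge might look like this (a sketch; run it only if you decide to drop the beta packages):

    ```shell
    # Install ppa-purge if it is not already present
    sudo apt install ppa-purge
    # Disable the beta PPA and downgrade its packages to the archive versions
    sudo ppa-purge ppa:kubuntu-ppa/beta
    ```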

    Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

    • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on Matrix [1], or mailing lists [2].
    • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

    Please review the planned feature list, release announcement and changelog.

    [Test Case]
    * General tests:
    – Does plasma desktop start as normal with no apparent regressions over 6.3?
    – General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.
    * Specific tests:
    – Identify items with front/user facing changes capable of specific testing.
    – Test the ‘fixed’ functionality or ‘new’ feature.

    Testing may involve some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

    Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

    We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.

    Thanks!

    Please stop by the Kubuntu-devel Matrix channel [1] if you need clarification on any of the steps to follow.

    [1] – https://matrix.to/#/#kubuntu-devel:ubuntu.com
    [2] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

    on May 18, 2025 09:26 AM

    May 16, 2025

    Rooming with Mark

    Oliver Grawert

    on May 16, 2025 02:34 PM

    May 08, 2025

    We are pleased to announce that the Plasma 6.3.5 bugfix update is now available for Kubuntu 25.04 Plucky Puffin in our backports PPA.

    As usual with our PPAs, there is the caveat that the PPA may receive additional updates and new releases of KDE Plasma, Gear (Apps), and Frameworks, plus other apps and required libraries. Users should always review proposed updates to decide whether they wish to receive them.

    To upgrade:

    Add the following repository to your software sources list:

    ppa:kubuntu-ppa/backports

    or if it is already added, the updates should become available via your preferred update method.

    The PPA can be added manually in the Konsole terminal with the command:

    sudo add-apt-repository ppa:kubuntu-ppa/backports

    and packages then updated with

    sudo apt full-upgrade

    We hope you enjoy using Plasma 6.3.5!

    Issues with Plasma itself can be reported on the KDE bugtracker [1]. In the case of packaging or other issues, please provide feedback on our mailing list [2], and/or file a bug against our PPA packages [3].

    1. KDE bugtracker::https://bugs.kde.org
    2. Kubuntu-devel mailing list: https://lists.u
    3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

    on May 08, 2025 06:28 PM

    May 04, 2025

    About 90% of my Debian contributions this month were sponsored by Freexian.

    You can also support my work directly via Liberapay.

    Request for OpenSSH debugging help

    Following the OpenSSH work described below, I have an open report about the sshd server sometimes crashing when clients try to connect to it. I can’t reproduce this myself, and arm’s-length debugging is very difficult, but three different users have reported it. For the time being I can’t pass it upstream, as it’s entirely possible it’s due to a Debian patch.

    Is there anyone reading this who can reproduce this bug and is capable of doing some independent debugging work, most likely involving bisecting changes to OpenSSH? I’d suggest first seeing whether a build of the unmodified upstream 10.0p2 release exhibits the same bug. If it does, then bisect between 9.9p2 and 10.0p2; if not, then bisect the list of Debian patches. This would be extremely helpful, since at the moment it’s a bit like trying to look for a needle in a haystack from the next field over by sending instructions to somebody with a magnifying glass.
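    For anyone willing to help, the upstream half of that bisection might look roughly like this (a sketch only; the tag names follow upstream’s V_X_Y_PZ convention and should be double-checked against the repository):

    ```shell
    # Clone portable OpenSSH and bisect between the known-good and known-bad releases
    git clone https://github.com/openssh/openssh-portable
    cd openssh-portable
    git bisect start V_10_0_P2 V_9_9_P2   # <bad> <good>
    # At each step: build, run your sshd crash reproducer, then tell git the result
    autoreconf -i && ./configure && make -j"$(nproc)"
    git bisect good   # or: git bisect bad
    ```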

    OpenSSH

    I upgraded the Debian packaging to OpenSSH 10.0p1 (now designated 10.0p2 by upstream due to a mistake in the release process, but they’re the same thing), fixing CVE-2025-32728. This also involved a diffoscope bug report due to the version number change.

    I enabled the new --with-linux-memlock-onfault configure option to protect sshd against being swapped out, but this turned out to cause test failures on riscv64, so I disabled it again there. Debugging this took some time since I needed to do it under emulation, and in the process of setting up a testbed I added riscv64 support to vmdb2.

    In coordination with the wtmpdb maintainer, I enabled the new Y2038-safe native wtmpdb support in OpenSSH, so wtmpdb last now reports the correct tty.

    I fixed a couple of packaging bugs:

    I reviewed and merged several packaging contributions from others:

    dput-ng

    Since we added dput-ng integration to Debusine recently, I wanted to make sure that it was in good condition in trixie, so I fixed dput-ng: will FTBFS during trixie support period. Previously a similar bug had been fixed by just using different Ubuntu release names in tests; this time I made the tests independent of the current supported release data returned by distro_info, so this shouldn’t come up again.

    We also ran into dput-ng: --override doesn’t override profile parameters, which needed somewhat more extensive changes since it turned out that that option had never worked. I fixed this after some discussion with Paul Tagliamonte to make sure I understood the background properly.

    man-db

    I released man-db 2.13.1. This just included various small fixes and a number of translation updates, but I wanted to get it into trixie in order to include a contribution to increase the MAX_NAME constant, since that was now causing problems for some pathological cases of manual pages in the wild that documented a very large number of terms.

    debmirror

    I fixed one security bug: debmirror prints credentials with --progress.

    Python team

    I upgraded these packages to new upstream versions:

    In bookworm-backports, I updated these packages:

    • python-django to 3:4.2.20-1 (issuing BSA-123)
    • python-django-pgtrigger to 4.13.3

    I dropped a stale build-dependency from python-aiohttp-security that kept it out of testing (though unfortunately too late for the trixie freeze).

    I fixed or helped to fix various other build/test failures:

    I packaged python-typing-inspection, needed for a new upstream version of pydantic.

    I documented the architecture field in debian/tests/autopkgtest-pkg-pybuild.conf files.

    I fixed other odds and ends of bugs:

    Science team

    I fixed various build/test failures:

    on May 04, 2025 03:38 PM

    May 01, 2025

    Incus is a manager for virtual machines and system containers.

    A virtual machine (VM) is an instance of an operating system that runs on a computer alongside the main operating system. A virtual machine uses hardware virtualization features to stay separated from the main operating system, and a full guest operating system boots inside it. While in most cases you would run Linux in a VM without a desktop environment, you can also run Linux with a desktop environment (as in VirtualBox and VMware).

    In How to run a Windows virtual machine on Incus on Linux we saw how to run a Windows VM on Incus. In this post, we see how to run a Linux desktop virtual machine on Incus.

    Table of Contents

    Updates

    No updates yet.

    Prerequisites

    1. You should have a system that runs Incus.
    2. A system with support for hardware virtualization so that it can run virtual machines.
    3. A virtual machine image of your preferred Linux desktop distribution.

    Cheat sheet

    You should specify how much RAM you are giving to the VM; the default is only 1GiB, which is not enough for desktop VMs. The --console=vga option launches the Remote Viewer GUI application for you, so that you can use the desktop in a window.

    $ incus image list images:desktop       # List all available desktop images
    $ incus launch --vm images:ubuntu/jammy/desktop mydesktop -c limits.memory=3GiB --console=vga
    $ incus console mydesktop --type=vga    # Reconnect to already running instance
    $ incus start mydesktop --console=vga   # Start an existing desktop VM

    Availability of images

    Currently, Incus provides you with the following VM images of Linux desktop distributions. The architecture is x86_64.

    Run the following command to list all available Linux desktop images. incus image is the section of Incus that deals with image management. The list command lists the available images of a remote/repository, the default being images: (run incus remote list for the full list of remotes). After the colon (:), you type filter keywords; in this case we typed desktop to show only the images that have the word desktop in them. Since we are interested in a few columns only, -c ldt shows just the Alias, Description, and Type columns.

    $ incus image list images:desktop -c ldt
    +------------------------------------------+---------------------------+-----------------+
    |                  ALIAS                   |      DESCRIPTION          |      TYPE       |
    +------------------------------------------+---------------------------+-----------------+
    | archlinux/desktop-gnome (3 more)         | Archlinux current amd64   | VIRTUAL-MACHINE |
    +------------------------------------------+---------------------------+-----------------+
    | opensuse/15.5/desktop-kde (1 more)       | Opensuse 15.5 amd64       | VIRTUAL-MACHINE |
    +------------------------------------------+---------------------------+-----------------+
    | opensuse/15.6/desktop-kde (1 more)       | Opensuse 15.6 amd64       | VIRTUAL-MACHINE |
    +------------------------------------------+---------------------------+-----------------+
    | opensuse/tumbleweed/desktop-kde (1 more) | Opensuse tumbleweed amd64 | VIRTUAL-MACHINE |
    +------------------------------------------+---------------------------+-----------------+
    | ubuntu/24.10/desktop (3 more)            | Ubuntu oracular amd64     | VIRTUAL-MACHINE |
    +------------------------------------------+---------------------------+-----------------+
    | ubuntu/focal/desktop (3 more)            | Ubuntu focal amd64        | VIRTUAL-MACHINE |
    +------------------------------------------+---------------------------+-----------------+
    | ubuntu/jammy/desktop (3 more)            | Ubuntu jammy amd64        | VIRTUAL-MACHINE |
    +------------------------------------------+---------------------------+-----------------+
    | ubuntu/noble/desktop (3 more)            | Ubuntu noble amd64        | VIRTUAL-MACHINE |
    +------------------------------------------+---------------------------+-----------------+
    | ubuntu/plucky/desktop (1 more)           | Ubuntu plucky amd64       | VIRTUAL-MACHINE |
    +------------------------------------------+---------------------------+-----------------+
    $ 

    These images have been generated with the distrobuilder utility (https://github.com/lxc/distrobuilder). The purpose of the utility is to prepare the images so that when we launch them, we immediately get the desktop environment without performing any manual configuration. The configuration files for distrobuilder to create these images can be found at https://github.com/lxc/lxc-ci/tree/main/images. For example, the archlinux.yaml configuration file has a section to create the desktop image, along with the container and other virtual machine images.

    The full list of Incus images is also available on the Web through https://images.linuxcontainers.org/. It is possible to generate more such desktop images by following the steps of the existing configuration files; perhaps a Kali Linux desktop image would be very useful. On that website you can also view the build logs that were generated while building the images, and figure out what parameters distrobuilder needs to build them (along with the actual configuration file). For example, here are the logs for the ArchLinux desktop image: https://images.linuxcontainers.org/images/archlinux/current/amd64/desktop-gnome/

    Up to this point we got a list of the available virtual machine images that are provided by Incus. We are ready to boot them.

    Booting a desktop Linux VM on Incus

    When launching a VM, Incus provides by default 1GiB of RAM and 10GiB of disk space. The disk space is generally OK, but the RAM is too little for a desktop image (it is fine for non-desktop images). For example, an Ubuntu desktop image requires about 1.2GB of memory just to start up, and obviously more to run other programs. Therefore, if we do not specify more RAM, the VM will struggle to make do with a mere 1GiB.
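    As a concrete illustration, the memory limit can be set at launch time or changed later on an existing instance (mydesktop is just an example instance name; the 4GiB value below is an arbitrary choice):

    ```shell
    # Set the memory limit when launching the VM
    incus launch --vm images:ubuntu/plucky/desktop mydesktop -c limits.memory=3GiB --console=vga
    # Or raise it later on an existing instance
    incus config set mydesktop limits.memory=4GiB
    ```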

    Booting the Ubuntu desktop image on Incus

    Here is the command to launch a desktop image. We use incus launch to launch the image. It’s a VM, hence --vm. We are using the image from the images: remote, the one called ubuntu/plucky/desktop (it’s the last from the list of the previous section). We configure a new limit for the memory usage, -c limits.memory=3GiB, so that the instance will be able to run successfully. Finally, the console is not textual but graphical. We specify that with --console=vga which means that Incus will launch the remote desktop utility for us.

    $ incus launch --vm images:ubuntu/plucky/desktop mydesktop -c limits.memory=3GiB --console=vga
    Launching mydesktop
    

    Here is a screenshot of the new window with the running desktop virtual machine.

    Screenshot of images:ubuntu/plucky/desktop

    Now we closed the wizard.

    Screenshot of images:ubuntu/plucky/desktop after we close the wizard.

    Booting the ArchLinux desktop image on Incus

    I cannot get this image to show the desktop. If someone can make this work, please post in a comment.

    $ incus launch --vm images:archlinux/desktop-gnome mydesktop -c limits.memory=3GiB --console=vga -c security.secureboot=false
    Launching mydesktop
    

    Booting the OpenSUSE desktop image on Incus

    $ incus launch --vm images:opensuse/15.5/desktop-kde mydesktop -c limits.memory=3GiB --console=vga
    Launching mydesktop
    

    Troubleshooting

    I closed the desktop window but the VM is running. How do I get it back up?

    If you closed the Remote Viewer window, you can get Incus to start it again with the following command. By doing so, you are reconnecting to the VM and can continue working from where you left off.

    We are using the incus console action to connect to the running mydesktop instance and request access through the Remote Viewer (rather than a text console).

    $ incus console mydesktop --type=vga
    

    Error: This console is already connected. Force is required to take it over.

    You are already connected to the desktop VM with the Remote Viewer and you are trying to connect again. Either go to the existing Remote Viewer window, or add the parameter --force to close the existing Remote Viewer window and open a new one.
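    For example, to take over the existing session (using the instance name from earlier in this post):

    ```shell
    # Close the existing Remote Viewer window and open a new one
    incus console mydesktop --type=vga --force
    ```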

    Error: Instance is not running

    You are trying to connect to a desktop VM with the Remote Viewer but the instance (which already exists) is not running. Use the action incus start to start the virtual machine, along with the --type=vga parameter to get Incus to launch the Remote Viewer for you.

    $ incus start mydesktop --console=vga

    I get no audio from the desktop VM! How do I get sound in the desktop VM?

    This requires extra steps which I do not show yet. There are three options. The first is to use QEMU device emulation to emulate a sound device in the VM. The second is to somehow push an audio device into the VM so that this audio device is used exclusively by the VM (I have not tried this, but I think it’s possible). The third and perhaps best option is to use network audio with PulseAudio/PipeWire: you enable network audio on your desktop and then configure the VM instance to connect to that network audio server. I have tried that and it worked well for me. The downside is that the Firefox snap package in the VM could not detect the network audio, so I could not get audio in that application.

    How do I shutdown the desktop VM?

    Use the desktop UI to perform the shutdown. The VM will shut down cleanly.

    Error: Failed instance creation: The image used by this instance is incompatible with secureboot. Please set security.secureboot=false on the instance

    You tried to launch a virtual machine with SecureBoot enabled, but the image does not support SecureBoot. The instance has been created, but it cannot run until you disable SecureBoot, either by changing the instance’s Incus configuration or by deleting the instance and launching it again with the parameter -c security.secureboot=false.

    Here is how to disable SecureBoot on the existing instance; afterwards, run incus start on it.

    $ incus config set mydesktop security.secureboot=false

    Here is how you would set that flag when launching such a VM.

    incus launch --vm images:archlinux/desktop-gnome mydesktop -c limits.memory=3GiB --console=vga -c security.secureboot=false

    Note that official Ubuntu images can work with SecureBoot enabled, while most others cannot. This has to do with the Linux kernel being digitally signed by a certificate authority.

    Error: Failed instance creation: Failed creating instance record: Add instance info to the database: Failed to create “instances” entry: UNIQUE constraint failed: instances.project_id, instances.name

    This error message is a bit cryptic. It just means that you are trying to create or launch an instance while an instance with that name already exists. Read it as: Error: The instance name already exists.
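    A minimal way out, assuming the clashing instance is named mydesktop and you no longer need it:

    ```shell
    # See which instances already exist
    incus list
    # Remove the clashing instance (add --force if it is currently running)
    incus delete mydesktop
    ```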

    on May 01, 2025 10:51 PM

    April 25, 2025

    Announcing Incus 6.12

    Stéphane Graber

    The Incus team is pleased to announce the release of Incus 6.12!

    This release comes with some long-awaited improvements, such as online growth of virtual machine memory, network address sets for easier network ACLs, revamped logging support, and more!

    On top of the new features, this release also brings quite a few welcome performance improvements, especially for systems with a lot of snapshots, with extra performance enhancements for those using ZFS.

    The highlights for this release are:

    • Network address sets
    • Memory hotplug support in VMs
    • Reworked logging handling & remote syslog
    • SNAT support on complex network forwards
    • Authentication through access_token parameter
    • Improved server-side filtering in the CLI
    • More generated documentation

    The full announcement and changelog can be found here.
    And for those who prefer videos, here’s the release overview video:

    You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

    And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

    Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

    Enjoy!

    on April 25, 2025 04:05 AM

    April 19, 2025

    Ubuntu MATE 25.04 is ready to soar! 🪽 Celebrating our 10th anniversary as an official Ubuntu flavour with the reliable MATE Desktop experience you love, built on the latest Ubuntu foundations. Read on to learn more 👓️

    A Decade of MATE

    This release marks the 10th anniversary of Ubuntu MATE becoming an official Ubuntu flavour. From our humble beginnings, we’ve developed a loyal following of users who value a traditional desktop experience with modern capabilities. Thanks to our amazing community, contributors, and users who have been with us throughout this journey. Here’s to many more years of Ubuntu MATE! 🥂

    What changed in Ubuntu MATE 25.04?

    Here are the highlights of what’s new in the Plucky Puffin release:

    • Celebrating 10 years as an official Ubuntu flavour! 🎂
    • Optional full disk encryption in the installer 🔐
      • Enhanced advanced partitioning options
      • Better interaction with existing BitLocker-enabled Windows installations
      • Improved experience when installing alongside other operating systems

    Major Applications

    Accompanying MATE Desktop 🧉 and Linux 6.14 🐧 are Firefox 137 🔥🦊, Evolution 3.56 📧, LibreOffice 25.2.2 📚

    See the Ubuntu 25.04 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

    Download Ubuntu MATE 25.04

    Available for 64-bit desktop computers!

    Download

    Upgrading to Ubuntu MATE 25.04

    The upgrade process to Ubuntu MATE 25.04 is the same as Ubuntu.

    There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.

    on April 19, 2025 04:48 AM

    April 17, 2025

    The Xubuntu team is happy to announce the immediate release of Xubuntu 25.04.

    Xubuntu 25.04, codenamed Plucky Puffin, is a regular release and will be supported for 9 months, until January 2026.

    Xubuntu 25.04, featuring the latest updates from Xfce 4.20 and GNOME 48.

    Xubuntu 25.04 features the latest Xfce 4.20, GNOME 48, and MATE 1.26 updates. Xfce 4.20 features many bug fixes and minor improvements, modernizing the Xubuntu desktop while maintaining a familiar look and feel. GNOME 48 apps are tightly integrated and have full support for dark mode. Users of QEMU and KVM will be delighted to find new stability with the desktop session—the long-running X server crash has been resolved in Xubuntu 25.04 and backported to all supported Xubuntu releases.

    The final release images for Xubuntu Desktop and Xubuntu Minimal are available as torrents and direct downloads from xubuntu.org/download/.

    As the main server might be busy the first few days after the release, we recommend using the torrents if possible.

    We want to thank everybody who contributed to this release of Xubuntu!

    Highlights and Known Issues

    Highlights

    • Xfce 4.20, released in December 2024, is included and contains many new features. Early Wayland support has been added, but is not available in Xubuntu.
    • GNOME 48 apps, including Font Viewer (gnome-font-viewer) and Mines (gnome-mines), include a refreshed appearance and usability improvements.

    Known Issues

    • The shutdown prompt may not be displayed at the end of the installation. Instead, you might just see a Xubuntu logo, a black screen with an underscore in the upper left-hand corner, or a black screen. Press Enter, and the system will reboot into the installed environment. (LP: #1944519)
    • You may experience choppy audio or poor system performance while playing audio, but only in some virtual machines (observed in VMware and VirtualBox).
    • OEM installation options are not currently supported or available.

    Please refer to the Xubuntu Release Notes for more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions.

    The main Ubuntu Release Notes cover many other packages we carry and more generic issues.

    Support

    For support with the release, navigate to Help & Support for a complete list of methods to get help.

    on April 17, 2025 08:59 PM
    The Lubuntu Team is proud to announce Lubuntu 25.04, codenamed Plucky Puffin. Lubuntu 25.04 is the 28th release of Lubuntu, the 14th release of Lubuntu with LXQt as the default desktop environment. With 25.04 being an interim release, it will be supported until January of 2026. If you're a 24.10 user, please upgrade to 25.04 […]
    on April 17, 2025 06:27 PM

    The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 25.04 code-named “Plucky Puffin”. This marks Ubuntu Studio’s 36th release. This release is a Regular release and as such, it is supported for 9 months, until January 2026.

    Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a more complete list of changes and known issues. Listed here are some of the major highlights.

    This release is dedicated to the memory of Steve Langasek. Without Steve, Ubuntu Studio would not be where it is today. He provided invaluable guidance, insight, and instruction to our leader, Erich Eickmeyer, who not only learned how to package applications but learned how to do it properly. We owe him an eternal debt of gratitude.

    You can download Ubuntu Studio 25.04 from our download page.

    Special Notes

    The Ubuntu Studio 25.04 disk image (ISO) exceeds 4 GB and cannot be downloaded to some file systems such as FAT32 and may not be readable when burned to a standard DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick with the ISO image or burning to a Dual-Layer DVD.

    Minimum installation media requirements: Dual-Layer DVD or 8GB USB drive.

    Images can be obtained from this link: https://cdimage.ubuntu.com/ubuntustudio/releases/25.04/release/
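    As an illustration of the bootable USB route, a common approach is dd; the ISO filename and /dev/sdX below are placeholders, and dd will overwrite the target device, so verify it with lsblk first:

    ```shell
    # Write the Ubuntu Studio ISO to a USB stick (DESTROYS all data on /dev/sdX)
    sudo dd if=ubuntustudio-25.04-dvd-amd64.iso of=/dev/sdX bs=4M status=progress oflag=sync
    ```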

    Full updated information, including Upgrade Instructions, are available in the Release Notes.

    Upgrades from 24.10 should be enabled within a month after release, so we appreciate your patience. Upgrades from 24.04 LTS will be enabled after 24.10 reaches End-Of-Life in July 2025.

    New This Release

    GIMP 3.0: Wilber logo by Aryeom

    GIMP 3.0!

    The long-awaited GIMP 3.0 is included by default. GIMP is now capable of non-destructive editing with filters, better Photoshop PSD export, and so very much more! Check out the GIMP 3.0 release announcement for more information.

    Pencil2D Icon

    Pencil2D

    Ubuntu Studio now includes Pencil2D! This is a 2D animation and drawing application that is sure to be helpful to animators. You can use basic clipart to make animations!

    The basic features of Pencil2D are:

    • layers support (separate layers for the bitmap, vector, and sound parts)
    • bitmap drawing
    • vector drawing
    • sound support

    LibreOffice No Longer in Minimal Install

    The LibreOffice suite is now included only in the full desktop install. This will save space for those who want a minimal setup.

    Invada Studio Plugins

    Beginning this release we are including the Invada Studio Plugins first created by Invada Records Australia. This includes distortion, delay, dynamics, filter, phaser, reverb, and utility audio plugins.

    PipeWire 1.2.7

    This release contains PipeWire 1.2.7. One major feature this has over 1.2.4 is that v4l2loopback support is available via the pipewire-v4l2 package which is not installed by default.

    PipeWire’s JACK compatibility is configured to work out-of-the-box and is zero-latency internally. System latency is configurable via Ubuntu Studio Audio Configuration.

    However, if you would rather use straight JACK 2 instead, that’s also possible. Ubuntu Studio Audio Configuration can disable and enable PipeWire’s JACK compatibility on-the-fly. From there, you can simply use JACK via QJackCtl.

    Ardour 8.12

    This is, as of this writing, the latest release of Ardour, packed with the latest bugfixes.

    To help support Ardour’s funding, you may obtain later versions directly from ardour.org. To do so, please make a one-time purchase or subscribe to Ardour on their website. If you wish to get later versions of Ardour from us, you will have to wait until the next regular release of Ubuntu Studio, due in October 2025.

    Deprecation of Mailing Lists

    Our mailing lists are being inundated with spam, and there is no proper way to fix the filtering, as they run on an outdated version of Mailman. This release announcement will therefore be the last we send out via email. For support, we encourage using Ubuntu Discourse, and for community news, click the notification bell in the Ubuntu Studio category there.

    Frequently Asked Questions

    Q: Does Ubuntu Studio contain snaps?
    A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.

    Thunderbird also became a snap so that the maintainers can get security patches delivered faster.

    Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be packaged as a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.

    We have additional snaps that are Ubuntu-specific, such as the Firmware Updater and the Security Center. Contrary to popular myth, Ubuntu does not have any plans to switch all packages to snaps, nor do we.

    Q: Will you make an ISO with {my favorite desktop environment}?
    A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.

    Q: What if I don’t want all these packages installed on my machine?
    A: Simply use the Ubuntu Studio Installer to remove the features of Ubuntu Studio you don’t want or need!

    Get Involved!

    A wonderful way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!

    Our project leader, Erich Eickmeyer, is now working on Ubuntu Studio at least part-time, and is hoping that the users of Ubuntu Studio can give enough to generate a monthly part-time income. We’re not there, but if every Ubuntu Studio user donated monthly, we’d be there! Your donations are appreciated! If other distributions can do it, surely we can! See the sidebar for ways to give!

    Special Thanks

    Huge special thanks for this release go to:

    • Eylul Dogruel: Artwork, Graphics Design
    • Ross Gammon: Upstream Debian Developer, Testing, Email Support
    • Sebastien Ramacher: Upstream Debian Developer
    • Dennis Braun: Upstream Debian Developer
    • Rik Mills: Kubuntu Council Member, help with Plasma desktop
    • Scarlett Moore: Kubuntu Project Lead, help with Plasma desktop
    • Len Ovens: Testing, insight
    • Mauro Gaspari: Tutorials, Promotion, and Documentation, Testing, keeping Erich sane
    • Simon Quigley: Qt6 Megastuff
    • Erich Eickmeyer: Project Leader, Packaging, Development, Direction, Treasurer
    • Steve Langasek: You are missed.
    on April 17, 2025 05:08 PM

    April 16, 2025

    Recently, I was involved in an event where a video was shown, and the event was filmed. It would be nice to put the video of the event up somewhere so other people who weren't there could watch it. Obvious answer: upload it to YouTube. However, the video that was shown at the event is Copyrighted Media Content and therefore is disallowed by YouTube and the copyright holder; it's not demonetised (which wouldn't be a problem), it's flat-out blocked. So YouTube is out.

    I'd like the video I'm posting to stick around for a long time; this is a sort of archival, reference thing where not many people will ever want to watch it but those that do might want to do so in ten years. So I'm loath to find some other random video hosting site, which will probably go bust, or pivot to selling online AI shoes or something. And the best way to ensure that something keeps going long-term is to put it on your own website, and use decent HTML, because that means that even in ten or twenty years it'll still work where the latest flavour-of-the-month thing will go the way of other old technologies and fade away and stop working over time. HTML won't do that.

    But... it's an hour long and in full HD. 2.6GB of video. And one of the benefits of YouTube is that they'll make the video adaptive: it'll fit the screen, and the bandwidth, of whatever device someone's watching it on. If someone wants to look at this from their phone and its slightly-shaky two bars of 4G connection, they probably don't want to watch the loading spinner for an hour while it buffers a full HD video; they can ideally get a cut down, lower-quality but quicker to serve, version. But... how is this possible?

    There are two aspects to doing this. One is that you serve up different resolutions of video, based on the viewer's screen size. This is exactly the same problem as is solved for images by the <picture> element to provide responsive images (where if you're on a 400px-wide screen you get a 400px version of the background image, not the 2000px full-res version), and indeed the magic words to search for here are responsive video. And the person you will find explaining all this is Scott Jehl, who has written a good description of how to do responsive video which explains it all in detail. You make versions of the video at different resolutions, and serve whichever one best matches the screen you're on, just like responsive images. Nice work; just what the doctor ordered.

    But there's also a second aspect to this: responsive video adapts to screen size, but it doesn't adapt to bandwidth. What we want, in addition to the responsive stuff, is that on poor connections the viewer gets a lower-bandwidth version as well as a lower-resolution version, and that the viewer's browser can dynamically switch from moment to moment between different versions of the video to match their current network speed. This task is the job of HTTP Live Streaming, or HLS. To do this, you essentially encode the video in a bunch of different qualities and screen sizes, so you've got a bunch of separate videos (which you've probably already done above for the responsive part) and then (and this is the key) you chop up each video into a load of small segments. That way, instead of the browser downloading the whole one hour of video at a particular resolution, it only downloads the next segment at its current choice of resolution, and then if you suddenly get more (or less) bandwidth, it can switch to getting segment 2 from a different version of the video which better matches where you currently are.
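    What the browser actually fetches first is a master playlist listing every variant with its bandwidth and resolution; each variant then has its own playlist listing the individual segments. A minimal illustrative sketch of a master playlist (the rendition names and numbers here are made up, not taken from this post):

    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
    1080p/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
    720p/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=854x480
    480p/index.m3u8

    The player re-measures its throughput as it downloads and is free to fetch the next segment from a different variant, which is exactly the moment-to-moment switching described above.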

    Doing this sounds hard. Fortunately, all hard things to do with video are handled by ffmpeg. There's a nice writeup by Mux on how to convert an mp4 video to HLS with ffmpeg, and it works great. I put together a little Python script to construct the ffmpeg command line to do it, but you can do it yourself; the script just does some of the boilerplate for you. Very useful.
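    The actual script isn't reproduced here, but a hypothetical sketch of such a command-line builder might look like the following; the rendition ladder, bitrates, and flag choices are my own assumptions, not the author's script:

    ```python
    # Hypothetical sketch: build an ffmpeg command line that encodes one
    # input into several HLS renditions, each chopped into short segments,
    # plus a master playlist. The ladder below is illustrative only.
    RENDITIONS = [
        # (name, width, height, video bitrate)
        ("1080p", 1920, 1080, "5000k"),
        ("720p", 1280, 720, "2800k"),
        ("480p", 854, 480, "1400k"),
    ]

    def build_hls_command(src, out_dir, segment_seconds=6):
        """Return an ffmpeg argv list producing HLS output for `src`."""
        cmd = ["ffmpeg", "-i", src]
        # Map the input video/audio once per rendition, with per-stream
        # scaling and bitrate flags.
        for i, (_name, w, h, vbr) in enumerate(RENDITIONS):
            cmd += [
                "-map", "0:v:0", "-map", "0:a:0",
                f"-s:v:{i}", f"{w}x{h}",
                f"-b:v:{i}", vbr,
            ]
        cmd += [
            "-f", "hls",
            "-hls_time", str(segment_seconds),   # segment length in seconds
            "-hls_playlist_type", "vod",
            "-master_pl_name", "master.m3u8",    # top-level variant playlist
            "-var_stream_map", " ".join(
                f"v:{i},a:{i},name:{name}"
                for i, (name, *_rest) in enumerate(RENDITIONS)
            ),
            f"{out_dir}/%v/index.m3u8",          # one playlist per rendition
        ]
        return cmd

    print(" ".join(build_hls_command("talk.mp4", "hls")))
    ```

    Running the printed command requires ffmpeg with its HLS muxer; each rendition ends up in its own directory of segments alongside a master.m3u8.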

    So now I can serve up a video which adapts to the viewer's viewing conditions, and that's just what I wanted. I have to pay for the bandwidth now (which is the other benefit of having YouTube do it, and one I now don't get) but that's worth it for this, I think. Cheers to Scott and Mux for explaining all this stuff.

    on April 16, 2025 08:26 AM

    April 15, 2025

    Ubuntu Budgie 25.04 (Plucky Puffin) is a Standard Release with 9 months of support by your distro maintainers and Canonical, from April 2025 to Jan 2026. These release notes showcase the key takeaways for 24.10 upgraders to 25.04. Please note – there is no direct upgrade path from 24.04.2 to 25.04; you must uplift to 24.10 first or perform a fresh install. In these release notes the areas…

    Source

    on April 15, 2025 05:31 PM

    April 06, 2025

    Ubuntu MATE 24.10 is more of what you like, stable MATE Desktop on top of current Ubuntu. Read on to learn more 👓️

    Ubuntu MATE 24.10

    Thank you! 🙇

    My sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏 I’d like to acknowledge the close collaboration with the Ubuntu Foundations team and the Ubuntu flavour teams, in particular Erich Eickmeyer who pushed critical fixes while I was travelling. Thank you! 💚

    What changed since the Ubuntu MATE 24.04 LTS?

    Here are the highlights of what’s changed since the release of Ubuntu MATE 24.04

    • Ships stable MATE Desktop 1.26.2 with a handful of bug fixes 🐛
    • Switched back to Slick Greeter (replacing Arctica Greeter) due to a race condition in the boot process that resulted in the display manager failing to initialise.
      • Returning to Slick Greeter reintroduces the ability to easily configure the login screen via a graphical application, something users have been requesting be re-instated 👍
    • Ubuntu MATE 24.10 .iso 📀 is now 3.3GB 🤏 Down from 4.1GB in the 24.04 LTS release.
      • This is thanks to some fixes in the installer that no longer require as many packages in the live-seed.

    Login Window Configuration

    What didn’t change since the Ubuntu MATE 24.04 LTS?

    If you follow upstream MATE Desktop development, then you’ll have noticed that Ubuntu MATE 24.10 doesn’t ship with the recently released MATE Desktop 1.28 🧉

    I have prepared packaging for MATE Desktop 1.28, along with the associated components, but encountered some bugs and regressions 🐞 I wasn’t able to get things to a standard I’m happy to ship by default, so it’s the tried and true MATE 1.26.2 one last time 🪨

    Major Applications

    Accompanying MATE Desktop 1.26.2 🧉 and Linux 6.11 🐧 are Firefox 131 🔥🦊, Celluloid 0.27 🎥, Evolution 3.54 📧, and LibreOffice 24.8.2 📚

    See the Ubuntu 24.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

    Download Ubuntu MATE 24.10

    Available for 64-bit desktop computers!

    Download

    Upgrading to Ubuntu MATE 24.10

    The upgrade process to Ubuntu MATE 24.10 is the same as for Ubuntu.

    There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.

    on April 06, 2025 04:54 PM

    April 03, 2025

    A couple of weeks ago I was playing around with a multiple architecture CI setup with another team, and that led me to pull out my StarFive VisionFive 2 SBC again to see how far I could get this time with an install.

    I left off about a year ago when I succeeded in getting an older version of Debian on it, but attempts to get the tooling to install a more broadly supported version of U-Boot to the SPI flash were unsuccessful. Then I got pulled away to other things, effectively just bringing my VF2 around to events as a prop for my multiarch talks – which it did beautifully! I even had one conference attendee buy one to play with while sitting in the audience of my talk. Cool.

    I was delighted to learn how much progress had been made since I last looked. Canonical has published more formalized documentation: Install Ubuntu on the StarFive VisionFive 2 in place of what had been a rather cluttered wiki page. So I got all hooked up and began my latest attempt.

    My first step was to grab the pre-installed server image. I got that installed, but struggled a little with persistence once I unplugged the USB UART adapter and rebooted. I then decided just to move forward with the Install U-Boot to the SPI flash instructions. I struggled a bit here for two reasons:

    1. The documentation today leads off with having you download the livecd, but you actually want the pre-installed server image to flash U-Boot; the livecd step doesn’t come until later. Admittedly, the instructions do say this, but I wasn’t reading carefully enough and was more focused on the steps.
    2. I couldn’t get the 24.10 pre-installed image to work for flashing U-Boot, but once I went back to the 24.04 pre-installed image it worked.

    And then I had to fly across the country. We’re spending a couple weeks around spring break here at our vacation house in Philadelphia, but the good thing about SBCs is that they’re incredibly portable and I just tossed my gear into my backpack and brought it along.

    Thanks to Emil Renner Berthing (esmil) on the Ubuntu Matrix server for providing me with enough guidance to figure out where I had gone wrong above and getting me on my way just a few days after we arrived in Philly.

    With the newer U-Boot installed, I was able to use the Ubuntu 24.04 livecd image on a micro SD Card to install Ubuntu 24.04 on an NVMe drive! That’s another new change since I last looked at installation, using my little NVMe drive as a target was a lot simpler than it would have been a year ago. In fact, it was rather anticlimactic, hah!

    And with that, I was fully logged in to my new system.

    elizabeth@r2kt:~$ cat /proc/cpuinfo
    processor : 0
    hart : 2
    isa : rv64imafdc_zicntr_zicsr_zifencei_zihpm_zba_zbb
    mmu : sv39
    uarch : sifive,u74-mc
    mvendorid : 0x489
    marchid : 0x8000000000000007
    mimpid : 0x4210427
    hart isa : rv64imafdc_zicntr_zicsr_zifencei_zihpm_zba_zbb

    It has 4 cores, so here’s the full output: vf2-cpus.txt

    What will I do with this little single board computer? I don’t know yet. I joked with my husband that I’d “install Debian on it and forget about it like everything else” but I really would like to get past that. I have my little multiarch demo CI project in the wings, and I’ll probably loop it into that.

    Since we were in Philly, I had a look over at my long-neglected Raspberry Pi 1B that I have here. When we first moved in, I used it as an ssh tunnel to get to this network from California. It was great for that! But now we have a more sophisticated network setup between the houses with a VLAN that connects them, so the ssh tunnel is unnecessary. In fact, my poor Raspberry Pi fell off the WiFi network when we switched to 802.1X just over a year ago and I never got around to getting it back on the network. I connected it to a keyboard and monitor and started some investigation. Honestly, I’m surprised the little guy was still running, but it’s doing fine!

    And it had been chugging along running Raspbian based on Debian 9. Well, that’s worth an upgrade. But not just an upgrade, I didn’t want to stress the device and SD card, so I figured flashing it with the latest version of Raspberry Pi OS was the right way to go. It turns out, it’s been a long time since I’ve done a Raspberry Pi install.

    I grabbed the Raspberry Pi Imager and went on my way. It’s really nice. I went with the Raspberry Pi OS Lite install since it’s a Pi 1 and I didn’t want a GUI. The imager asked the usual installation questions, loaded up my SSH key, and I was ready to boot it up in my Pi.

    The only thing I need to finish sorting out is networking. The old USB WiFi adapter I have it in doesn’t initialize until after it’s booted up, so wpa_supplicant on boot can’t negotiate with the access point. I’ll have to play around with it. And what will I use this for once I do, now that it’s not an SSH tunnel? I’m not sure yet.

    I realize this blog post isn’t very deep or technical, but I guess that’s the point. We’ve come a long way in recent years in support for non-x86 architectures, so installation has gotten a lot easier across several of them. If you’re new to playing around with architectures, I’d say it’s a really good time to start. You can hit the ground running with some wins, and then play around as you go with various things you want to help get working. It’s a lot of fun, and the years I spent playing around with Debian on Sparc back in the day definitely laid the groundwork for the job I have at IBM working on mainframes. You never know where a bit of technical curiosity will get you.

    on April 03, 2025 08:43 PM

    March 27, 2025

    Thanks to the hard work of our contributors, we are happy to announce the release of Lubuntu's Plucky Beta, which will become Lubuntu 25.04. This is a snapshot of the daily images. Approximately two months ago, we posted an Alpha-level update. While some information is duplicated below, that contains an accurate, concise technical summary of […]
    on March 27, 2025 09:02 PM

    March 22, 2025

    The Open Source Initiative has two classes of board seats: Affiliate seats, and Individual Member seats. 

    In the upcoming election, each affiliate can nominate a candidate, and each affiliate can cast a vote for the Affiliate candidates, but there's only 1 Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a similar platform to me, especially with regards to the OSAID, I decided to run as part of an aligned "ticket" as an Individual Member to avoid contention for the 1 Affiliate seat.

    Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February. 

    I was dismayed when I received the following mail from Nick Vidal:

    Dear Luke,

    Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.

    We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.

    Best regards,
    OSI Election Teams

    Nowhere on the "OSI’s board of directors in 2025: details about the elections" page do they list a timezone for closure of nominations; they simply list Monday 17 February. 

    The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.

    I was not able to participate in the "potential board director" info sessions accordingly, but people who attended heard that the importance of accommodating differing TZ's was discussed during the info session, and that OSI representatives mentioned they try to accommodate TZ's of everyone. This seems in sharp contrast with the above policy. 

    I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle. 

    Upd, N.B.: to people writing about this, I use they/them pronouns

    on March 22, 2025 04:30 PM

    February 22, 2025

    Xubuntu Development Update February 2025

    Better late than never, here’s your Xubuntu February 2025 update! This month, Xubuntu 24.04.2 was released with a new Linux kernel and bug fixes in tow. 25.04 Feature Freeze is now in effect, with many package updates arriving just before the deadline. Xubuntu’s first RISC-V package support has landed. And so much more!

    February Schedule

    The Plucky Puffin Release Schedule for February includes Feature Freeze on the 20th and an optional testing week at the end of the month. It also features the next point release for Xubuntu 24.04 "Noble Numbat".

    Date Milestones
    February 06
    February 13 24.04.2 Point Release (delayed to 2/20)
    February 20 Feature Freeze, Debian Import Freeze
    February 27 Ubuntu Testing Week (optional)

    Major Package Updates in Xubuntu 24.04.2

    Xubuntu 24.04.2 was released on Thursday, February 20, 2025. It features a modest set of changes in addition to the 6.11 Linux kernel and updated graphics stack. xfce4-panel 4.18.4-1ubuntu0.1 features a fix for the crash-on-exit bug that plagued users with unexplainable error notifications (LP: #2064846).

    Changes from Xubuntu 24.04.1 to 24.04.2

    Package 24.04.1 24.04.2
    firefox 134 135
    libdrm 2.4.120 2.4.122
    libreoffice 24.2.5 24.2.7
    Linux kernel 6.8.0 6.11.0
    Mesa 24.0.9 24.2.8
    xfce4-panel 4.18.4-1build2 4.18.4-1ubuntu0.1

    Major Package Updates in Xubuntu 25.04

    There was a flurry of upload activity in the lead-up to the 25.04 Feature Freeze. GIMP 3.0.0 was bumped from RC2 to RC3. The Linux kernel was updated from 6.11.0 to 6.12.0. A handful of Xfce components also saw new releases to further stabilize Xubuntu’s base. For more information on the progress of Xubuntu 25.04, check out the following:

    Package January 1, 2025 February 22, 2025
    blueman 2.4.3 2.4.4
    gigolo 0.5.3 0.5.4
    gimp 3.0.0 RC2 3.0.0 RC3
    gnome-mines 40.1 48 Alpha
    libgtk-3 3.24.43 3.24.48
    libgtk-4 4.17.1 4.17.4
    libreoffice 24.8.4 25.2.1
    libxfce4windowing 4.20.0 4.20.2
    lightdm 1.30.0 1.32.0
    Linux kernel 6.11.0 6.12.0
    Mesa 24.2.8 24.3.4
    parole 4.18.1 4.18.2
    python 3.12.8 3.13.1
    rhythmbox 3.4.7 3.4.8
    snapd 2.66.1 2.67.1
    synaptic 0.91.3 0.91.5
    thunar 4.20.0 4.20.2
    xfce4-notifyd 0.9.6 0.9.7
    xfce4-panel 4.20.0 4.20.3
    xfce4-panel-profiles 1.0.14 1.0.15
    xfce4-screensaver 4.18.3 4.18.4
    xfce4-taskmanager 1.5.7 1.5.8
    xfce4-whiskermenu-plugin 2.8.3 2.9.1
    xfce4-xkb-plugin 0.8.3 0.8.5
    xfwm4 4.18.0 4.20.0
    xubuntu-default-settings 25.04.0 25.04.1
    xubuntu-meta 2.265 2.266

    In the interest of keeping this post short, I’m only going to dive into Xubuntu’s package updates...

    xubuntu-default-settings 25.04.1

    xubuntu-default-settings 25.04.1 includes a minor improvement: The gtk-print-preview-command setting has been correctly associated with Atril (LP: #2025332). Will Thompson’s Evince, Flatpak, and GTK print previews provides more context on this feature:

    GTK provides API for applications to print documents. This presents the user with a print dialog, with many knobs to control how your document will be printed. That dialog has a Preview button; when you press it, the dialog vanishes and another one appears, showing a preview of your document. You can press Print on that dialog to print the document, or close it to cancel.

    xubuntu-meta 2.266

    xubuntu-meta 2.266 adds support for the riscv64 architecture (LP: #2092942)! Similar to the arm architectures, this isn’t supported by the Xubuntu team and you won’t find downloadable ISOs. Instead, this enables package builds so the xubuntu-desktop metapackage will be found on the riscv64 archives. If you put in the effort, you should now be able to get Xubuntu up and running on your riscv hardware. Let me know if you do!

    The 25.04 To-Do List

    The Xubuntu 25.04 project board lists many outstanding tasks that we’d like to review this cycle. If you’d like to contribute to Xubuntu, pick a task and get to work. :) There are many other ways to contribute to Xubuntu listed on our website.

    Xubuntu development marches steadily onward!

    on February 22, 2025 04:40 AM

    February 20, 2025

    boot2kier

    Paul Tagliamonte

    I can’t remember exactly the joke I was making at the time in my work’s slack instance (I’m sure it wasn’t particularly funny, though; and not even worth re-reading the thread to work out), but it wound up with me writing a UEFI binary for the punchline. Not to spoil the ending but it worked - no pesky kernel, no messing around with “userland”. I guess the only part of this you really need to know for the setup here is that it was a Severance joke, which is some fantastic TV. If you haven’t seen it, this post will seem perhaps weirder than it actually is. I promise I haven’t joined any new cults. For those who have seen it, the payoff to my joke is that I wanted my machine to boot directly to an image of Kier Eagan.

    As for how to do it – I figured I’d give the uefi crate a shot, and see how it is to use, since this is a low stakes way of trying it out. In general, this isn’t the sort of thing I’d usually post about – except this wound up being easier and way cleaner than I thought it would be. That alone is worth sharing, in the hopes someone comes across this in the future and feels like they, too, can write something fun targeting the UEFI.

    First things first – gotta create a rust project (I’ll leave that part to you depending on your life choices), and add the uefi crate to your Cargo.toml. You can either use cargo add or add a line like this by hand:

    uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }
    

    We also need to teach cargo about how to go about building for the UEFI target, so we need to create a rust-toolchain.toml with one (or both) of the UEFI targets we’re interested in:

    [toolchain]
    targets = ["aarch64-unknown-uefi", "x86_64-unknown-uefi"]
    

    Unfortunately, I wasn’t able to use the image crate, since it won’t build against the uefi target. This looks like it’s because rustc had no way to compile the required floating point operations within the image crate without hardware floating point instructions specifically. Rust tends to punt a lot of that to libm usually, so this isn’t entirely shocking given we’re no_std for a non-hardfloat target.

    So-called “softening” requires a software floating point implementation that the compiler can use to “polyfill” (feels weird to use the term polyfill here, but I guess it’s spiritually right?) the lack of hardware floating point operations, which rust hasn’t implemented for this target yet. As a result, I changed tactics, and figured I’d use ImageMagick to pre-compute the pixels from a jpg, rather than doing it at runtime. A bit of a bummer, since I need to do more out of band pre-processing and hardcoding, and updating the image kinda sucks as a result – but it’s entirely manageable.

    $ convert -resize 1280x900 kier.jpg kier.full.jpg
    $ convert -depth 8 kier.full.jpg rgba:kier.bin
    

    This will take our input file (kier.jpg), resize it to get as close to the desired resolution as possible while maintaining aspect ratio, then convert it from a jpg to a flat array of 4 byte RGBA pixels. Critically, it’s also important to remember that the size of the kier.full.jpg file may not actually be the requested size – it will not change the aspect ratio, so be sure to make a careful note of the resulting size of the kier.full.jpg file.

    Last step with the image is to compile it into our Rust binary, since we don’t want to struggle with trying to read this off disk, which is thankfully real easy to do.

    const KIER: &[u8] = include_bytes!("../kier.bin");
    const KIER_WIDTH: usize = 1280;
    const KIER_HEIGHT: usize = 641;
    const KIER_PIXEL_SIZE: usize = 4;
    

    Remember to use the width and height from the final kier.full.jpg file as the values for KIER_WIDTH and KIER_HEIGHT. KIER_PIXEL_SIZE is 4, since we have 4 byte wide values for each pixel as a result of our conversion step into RGBA. We’ll only use RGB, and if we ever drop the alpha channel, we can drop that down to 3. I don’t entirely know why I kept alpha around, but I figured it was fine. My kier.full.jpg image winds up shorter than the requested height (which is also qemu’s default resolution for me) – which means we’ll get a semi-annoying black band under the image when we go to run it – but it’ll work.

    Anyway, now that we have our image as bytes, we can get down to work, and write the rest of the code to handle moving bytes around from in-memory as a flat block of pixels, and request that they be displayed using the UEFI GOP. We’ll just need to hack up a container for the image pixels and teach it how to blit to the display.

    /// RGB Image to move around. This isn't the same as an
    /// `image::RgbImage`, but we can associate the size of
    /// the image along with the flat buffer of pixels.
    struct RgbImage {
        /// Size of the image as a tuple, as the
        /// (width, height)
        size: (usize, usize),
        /// raw pixels we'll send to the display.
        inner: Vec<BltPixel>,
    }

    impl RgbImage {
        /// Create a new `RgbImage`.
        fn new(width: usize, height: usize) -> Self {
            RgbImage {
                size: (width, height),
                inner: vec![BltPixel::new(0, 0, 0); width * height],
            }
        }

        /// Take our pixels and request that the UEFI GOP
        /// display them for us.
        fn write(&self, gop: &mut GraphicsOutput) -> Result {
            gop.blt(BltOp::BufferToVideo {
                buffer: &self.inner,
                src: BltRegion::Full,
                dest: (0, 0),
                dims: self.size,
            })
        }
    }

    impl Index<(usize, usize)> for RgbImage {
        type Output = BltPixel;

        fn index(&self, idx: (usize, usize)) -> &BltPixel {
            let (x, y) = idx;
            &self.inner[y * self.size.0 + x]
        }
    }

    impl IndexMut<(usize, usize)> for RgbImage {
        fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
            let (x, y) = idx;
            &mut self.inner[y * self.size.0 + x]
        }
    }
    

    We also need to do some basic setup to get a handle to the UEFI GOP via the UEFI crate (using uefi::boot::get_handle_for_protocol and uefi::boot::open_protocol_exclusive for the GraphicsOutput protocol), so that we have the object we need to pass to RgbImage in order for it to write the pixels to the display. The only trick here is that the display on the booted system can really be any resolution – so we need to do some capping to ensure that we don’t write more pixels than the display can handle. Writing fewer than the display’s maximum seems fine, though.

    fn praise() -> Result {
        let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
        let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;

        // Get the (width, height) that is the minimum of
        // our image and the display we're using.
        let (width, height) = gop.current_mode_info().resolution();
        let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));

        let mut buffer = RgbImage::new(width, height);
        for y in 0..height {
            for x in 0..width {
                let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
                let pixel = &mut buffer[(x, y)];
                pixel.red = KIER[idx_r];
                pixel.green = KIER[idx_r + 1];
                pixel.blue = KIER[idx_r + 2];
            }
        }
        buffer.write(&mut gop)?;
        Ok(())
    }
    

    Not so bad! A bit tedious – we could solve some of this by turning KIER into an RgbImage at compile-time using some clever Cow and const tricks and implement blitting a sub-image of the image – but this will do for now. This is a joke, after all, let’s not go nuts. All that’s left with our code is for us to write our main function and try and boot the thing!

    #[entry]
    fn main() -> Status {
        uefi::helpers::init().unwrap();
        praise().unwrap();
        boot::stall(100_000_000);
        Status::SUCCESS
    }
    

    If you’re following along at home and so interested, the final source is over at gist.github.com. We can go ahead and build it using cargo (as is our tradition) by targeting the UEFI platform.

    $ cargo build --release --target x86_64-unknown-uefi
    

    Testing the UEFI Blob

    While I can definitely get my machine to boot these blobs to test, I figured I’d save myself some time by using QEMU to test without a full boot. If you’ve not done this sort of thing before, we’ll need two packages, qemu and ovmf. It’s a bit different than most invocations of qemu you may see out there – so I figured it’d be worth writing this down, too.

    $ doas apt install qemu-system-x86 ovmf
    

    qemu has a nice feature where it’ll create us an EFI partition as a drive and attach it to the VM off a local directory – so let’s construct an EFI partition file structure, and drop our binary into the conventional location. If you haven’t done this before, and are only interested in running this in a VM, don’t worry too much about it, a lot of it is convention and this layout should work for you.

    $ mkdir -p esp/efi/boot
    $ cp target/x86_64-unknown-uefi/release/*.efi \
     esp/efi/boot/bootx64.efi
    

    With all this in place, we can kick off qemu, booting it in UEFI mode using the ovmf firmware, attaching our EFI partition directory as a drive to our VM to boot off of.

    $ qemu-system-x86_64 \
     -enable-kvm \
     -m 2048 \
     -smbios type=0,uefi=on \
     -bios /usr/share/ovmf/OVMF.fd \
     -drive format=raw,file=fat:rw:esp
    

    If all goes well, soon you’ll be met with the all knowing gaze of Chosen One, Kier Eagan. The thing that really impressed me about all this is this program worked first try – it all went so boringly normal. Truly, kudos to the uefi crate maintainers, it’s incredibly well done.

    Booting a live system

    Sure, we could stop here, but anyone can open up an app window and see a picture of Kier Eagan, so I knew I needed to finish the job and boot a real machine up with this. In order to do that, we need to format a USB stick. BE SURE /dev/sda IS CORRECT IF YOU’RE COPY AND PASTING. All my drives are NVMe, so BE CAREFUL – if you use SATA, it may very well be your hard drive! Please do not destroy your computer over this.

    $ doas fdisk /dev/sda
    Welcome to fdisk (util-linux 2.40.4).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.
    Command (m for help): n
    Partition type
    p primary (0 primary, 0 extended, 4 free)
    e extended (container for logical partitions)
    Select (default p): p
    Partition number (1-4, default 1):
    First sector (2048-4014079, default 2048):
    Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4014079, default 4014079):
    Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
    Command (m for help): t
    Selected partition 1
    Hex code or alias (type L to list all): ef
    Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
    Command (m for help): w
    The partition table has been altered.
    Calling ioctl() to re-read partition table.
    Syncing disks.
    

    Once that looks good (depending on your flavor of udev you may or may not need to unplug and replug your USB stick), we can go ahead and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR USB STICK) and write our EFI directory to it.

    $ doas mkfs.fat /dev/sda1
    $ doas mount /dev/sda1 /mnt
    $ cp -r esp/efi /mnt
    $ find /mnt
    /mnt
    /mnt/efi
    /mnt/efi/boot
    /mnt/efi/boot/bootx64.efi
    

    Of course, naturally, devotion to Kier shouldn’t mean backdooring your system. Disabling Secure Boot runs counter to the Core Principles, such as Probity, and not doing this would surely run counter to Verve, Wit and Vision. This bit does require that you’ve already enrolled a MOK and know how to use it; right about now is when we can use sbsign to sign the UEFI binary we want to boot from, so Secure Boot stays enforcing. The details of how this command should be run are likely something you’ll need to work out depending on how you’ve decided to manage your MOK.

    $ doas sbsign \
     --cert /path/to/mok.crt \
     --key /path/to/mok.key \
     target/x86_64-unknown-uefi/release/*.efi \
     --output esp/efi/boot/bootx64.efi
    

    I figured I’d leave a signed copy of boot2kier at /boot/efi/EFI/BOOT/KIER.efi on my Dell XPS 13. With Secure Boot enabled and enforcing, it just took a matter of going into my BIOS to add the right boot option, which was no sweat. I’m sure there is a way to do it using efibootmgr, but I wasn’t smart enough to do that quickly. I let ’er rip, and it booted up and worked great!
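    For the record, the efibootmgr incantation is probably something along these lines – a hedged sketch, since the disk, partition number, and label here are assumptions you’d adapt to wherever your ESP actually lives:

    ```shell
    # Create a firmware boot entry pointing at the signed binary on the ESP.
    # --disk: the disk holding the ESP (assumed NVMe here)
    # --part: the ESP's partition number (assumed 1)
    # --label: the name shown in the firmware boot menu
    # --loader: path on the ESP, backslash-separated as UEFI expects
    sudo efibootmgr --create \
      --disk /dev/nvme0n1 \
      --part 1 \
      --label "Kier" \
      --loader '\EFI\BOOT\KIER.efi'
    ```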

    It was a bit hard to get a video of my laptop, though – but lucky for me, I have a Minisforum Z83-F sitting around (which, until a few weeks ago, was running the annual HTTP server to control my Christmas tree), so I grabbed it out of the Christmas bin, wired it up to a video capture card I had sitting around, and figured I’d grab a video of me booting a physical device off the boot2kier USB stick.

    Attentive readers will notice the image of Kier is smaller than on the qemu-booted system – which just means our real machine has a larger GOP display resolution than qemu, which makes sense! We could write some fancy resize code (sounds annoying), center the image (can’t be assed, but it’s probably the easy way out here), or resize the original image (a pretty hardware-specific workaround). Additionally, you can make out the image that was written to the display before ours (the Minisforum logo) behind Kier, which is really cool stuff. If we were real fancy we could write blank pixels to the display before blitting Kier, but, again, I don’t think I care to do that much work.
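    Centering, at least, is just arithmetic: derive the top-left blit corner from the difference between the GOP mode’s resolution and the image dimensions. A minimal sketch of that offset math in plain Rust (the function name and tuple shapes are mine; the actual blit would still go through the uefi crate’s GOP protocol):

    ```rust
    /// Compute the top-left (x, y) at which to blit an image so it is
    /// centered on a display of the given resolution. If the image is
    /// larger than the display, clamp the offset to the corner (0, 0).
    fn centered_origin(
        display: (usize, usize), // (width, height) of the current GOP mode
        image: (usize, usize),   // (width, height) of the decoded image
    ) -> (usize, usize) {
        let x = display.0.saturating_sub(image.0) / 2;
        let y = display.1.saturating_sub(image.1) / 2;
        (x, y)
    }

    fn main() {
        // 1920x1080 panel, 800x600 image -> blit at (560, 240)
        assert_eq!(centered_origin((1920, 1080), (800, 600)), (560, 240));
        // Image larger than the display: clamp rather than underflow
        assert_eq!(centered_origin((640, 480), (800, 600)), (0, 0));
    }
    ```

    `saturating_sub` is doing the real work there: it keeps an oversized image from underflowing the unsigned subtraction instead of panicking.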

    But now I must away

    If I wanted to keep this joke going, I’d likely try to find a copy of the original video from when Helly 100%s her file and boot into that – or maybe play a terrible MIDI PC speaker rendition of Kier, Chosen One, Kier after rendering the image. I, unfortunately, don’t have any friends involved with production (yet?), so I reckon all that’s out for now. I’ll likely stop playing with this – the joke was done, and I’m only writing this post because of how great everything was along the way.

    All in all, this reminds me so much of building a homebrew kernel to boot a system into – but like, good, though, and it’s a nice reminder of both how fun this stuff can be, and how far we’ve come. UEFI protocols are light-years better than how we did it in the dark ages, and the tooling for this is SO much more mature. Booting a custom UEFI binary is miles ahead of trying to boot your own kernel, and I can’t believe how good the uefi crate is specifically.

    Praise Kier! Kudos to everyone involved in making this so delightful ❤️.

    on February 20, 2025 02:40 PM

    February 18, 2025

    Wireshark is an essential tool for network analysis, and staying up to date with the latest releases ensures access to new features, security updates, and bug fixes. While Ubuntu’s official repositories provide stable versions, they are often not the most recent.

    Wearing both my Wireshark Core Developer and Debian/Ubuntu package maintainer hats, I’m happy to help the Wireshark team provide updated packages for all supported Ubuntu versions through dedicated PPAs. This post outlines how you can install the latest stable and nightly Wireshark builds on Ubuntu.

    Latest Stable Releases

    For users who want the most up-to-date stable Wireshark version, we maintain a PPA with backports of the latest releases:

    🔗 Stable Wireshark PPA:
    👉 https://launchpad.net/~wireshark-dev/+archive/ubuntu/stable

    Installation Instructions

    To install the latest stable Wireshark version, add the PPA and update your package list:

    sudo add-apt-repository ppa:wireshark-dev/stable
    sudo apt install wireshark

    Nightly Builds (Development Versions)

    For those who want to test new features before they are officially released, nightly builds are also available. These builds track the latest development code, and you can watch them cooking on their Launchpad recipe page.

    🔗 Nightly PPA:
    👉 https://code.launchpad.net/~wireshark-dev/+archive/ubuntu/nightly

    Installation Instructions

    To install the latest development version of Wireshark, use the following commands:

    sudo add-apt-repository ppa:wireshark-dev/nightly
    sudo apt install wireshark

    Note: Nightly builds may contain experimental features and are not guaranteed to be as stable as the official releases. Also, the nightly PPA targets only Ubuntu 24.04 and later, including the current development release.

    If you need to revert to the stable version later, remove the nightly PPA and reinstall Wireshark:

    sudo add-apt-repository --remove ppa:wireshark-dev/nightly
    sudo apt install wireshark
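    Note that removing the PPA only stops future updates from it – apt won’t downgrade an already-installed, higher-versioned nightly package on its own. The ppa-purge tool (available in the Ubuntu archive) handles that case: it disables the PPA and reverts its packages to the archive versions in one step.

    ```shell
    sudo apt install ppa-purge
    sudo ppa-purge ppa:wireshark-dev/nightly
    ```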

    Happy sniffing! 🙂

    on February 18, 2025 09:57 AM