Launchpad and the Open Documentation Academy Live in Málaga
Launchpad is a web-based platform to support collaborative software development for open source projects. It offers a comprehensive suite of tools, including bug tracking, code hosting, translation management, and package building.
Launchpad is tightly integrated with the Ubuntu ecosystem, serving as a central hub for Ubuntu development and community contributions. Its features are designed to streamline the process of managing, developing, and distributing software in a collaborative environment.
Launchpad aims to foster strong community engagement by providing features that support collaboration, community management, and user participation, positioning itself as a central hub for open source communities.
Canonical’s Open Documentation Academy is a collaboration between Canonical’s documentation team and open source newcomers, experts, and those in-between, to help us all improve documentation, become better writers, and better open source contributors.
A key aim of the project is to set the standard for inclusive and welcoming collaboration while providing real value for both the contributors and the projects involved in the programme.
Join us at OpenSouthCode in Málaga
Launchpad and the Open Documentation Academy will join forces at OpenSouthCode 2025 in the wonderful city of Málaga, Spain, on June 20–21, 2025.
The Open Documentation Academy will host a hands-on documentation workshop at the conference, where participants will learn how to make meaningful open source contributions with the help of the Diátaxis documentation framework.
Launchpad’s Jürgen Gmach will be on-site to help you land your first open source contribution.
Identity management is vitally important in cybersecurity. Every time someone tries to access your networks, systems, or resources, it’s critical to verify that these attempts are valid and legitimate, and that they match a real, authenticated user. In cybersecurity, this tends to be handled through Identity and Access Management (IAM), most commonly by using third-party Identity Providers (IdPs). After all, these IdPs are highly effective at verifying users, and offer robust security defenses against attempted attacks. However, like all third-party tools, they still carry security risks – and the fact that they are managed by a third party makes them seem somewhat incompatible with zero trust architecture (given that you’re handing over control of your IAM to an external organization).
In this article, I’ll explore an original and robust method for using third-party IdPs that allows you to maintain a zero trust security posture, thanks to Extra Factor Authentication. I’ll highlight the benefits of IdPs and explore the severe risks of ‘legitimate’ backdoors they pose, and give you a step-by-step framework that we used to implement an extra layer of control and authentication in our internal SSO (as a bonus, we’ll also share this implementation, which we offer to the community as a snap).
What is Identity and Access Management (IAM)?
IAM is a security framework that ensures that only legitimate and approved users, machines, or individuals can access resources. It verifies users and checks their credentials before allowing access to machines, networks, databases, or systems. Through this process, IAM prevents unauthorized access and reduces the risk of fraud, leaks, or breaches.
Why is IAM important?
The risks of poor IAM and access control are all too obvious.
In 2024, companies worldwide lost nearly $4.4 billion in fines for data breaches. Research from Verizon shows that 80% of data breaches stem from attackers guessing or stealing weak passwords; similarly, 61% of all breaches happen because credentials were compromised or misused. In fact, data from CrowdStrike indicates that identity-based breaches account for 80% of cyberattacks. All in all, the numbers show that poor access control is often at the root of breaches.
For most use cases, third-party Identity Providers (IdP) offer an easy-to-implement, hands-free, and generally reliable way to manage your organization’s access control without needing to build it from the ground up yourself.
What is an IdP?
An Identity Provider (IdP) is a service that manages user authentication and access to applications, networks, or systems within a distributed network. IdPs create and manage all the information used in accessing systems belonging to an organization. Third-party IdPs (for example, Okta or the Google Identity Platform) allow organizations to outsource and streamline their identity management to a trusted third party, who manages user identities and credentials, and authenticates requests to access organizational resources.
Typically, these work by:
Receiving an access request from a user or entity,
Checking that user’s credentials against a secure database of verified and authorized entities,
Assessing their permissions to access the requested network, system, or resource,
And then granting or denying access.
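As a rough illustration, the four steps above can be sketched in Python. Everything here – the function names, the in-memory “databases”, and the hashed-password comparison – is hypothetical and vendor-neutral, not any particular IdP’s API:

```python
# Hypothetical sketch of the IdP decision flow described above.
# Real IdPs use proper password hashing, token issuance, and audited
# stores; these dictionaries are stand-ins for illustration only.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    username: str
    password_hash: str
    resource: str

# Stand-ins for the IdP's secure databases of verified entities.
CREDENTIAL_DB = {"alice": "hash-of-alice-password"}
PERMISSION_DB = {"alice": {"billing-db", "wiki"}}

def authenticate(req: AccessRequest) -> bool:
    """Step 2: check credentials against the verified-user database."""
    return CREDENTIAL_DB.get(req.username) == req.password_hash

def authorize(req: AccessRequest) -> bool:
    """Step 3: check the user's permissions for the requested resource."""
    return req.resource in PERMISSION_DB.get(req.username, set())

def handle(req: AccessRequest) -> str:
    """Steps 1 and 4: receive the request, then grant or deny access."""
    if authenticate(req) and authorize(req):
        return "granted"
    return "denied"
```

Note that authentication (who you are) and authorization (what you may touch) are separate checks; a valid user can still be denied a resource they have no permission for.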
Single Sign-On (SSO) solutions are a collection of technologies that allow network users to provide a single set of credentials for all network services (rather than having a different log-in for each), and today they’re widely used in IAM: roughly 70% of organizations have either implemented an SSO solution or are planning to. You can read more about how they streamline IAM in our knowledge base article about SSO.
The benefits of third-party identity providers
Third-party IdPs are very popular with large organizations for a number of reasons.
IdPs are easy to use. They can be rolled out at scale to give organizations a single place to manage access to any number of websites, databases, or resources. This has two benefits: first, users no longer need to remember multiple passwords, reuse passwords, or create weak ones; and second, it makes it easy to secure your systems and resources at scale.
IdPs enhance your security. They often come with features like multi-factor authentication (MFA), detailed event analysis, adaptive authentication, and powerful heuristics and attack-detection capabilities, which make it much harder for unauthorized users to gain access.
IdPs free up developer resources. IAM systems can be incredibly challenging to build yourself, and time-consuming to manage around the clock. By using third-party IdPs, you no longer need dedicated internal resources for this, allowing your developers to focus on mission-critical work.
The risks of third-party identity providers
As with all third-party tools you do not control, there are always risks.
Beyond the obvious risks of unseen gaps, flaws, or attack vectors in these third-party tools, there’s a new and frightening risk of using them: a backdoor into your resources.
Backdoors into your resources, networks, or systems can happen in several ways.
An unauthorized account is added to your databases
An account’s credentials (username, password, access token, machine, etc) are stolen or spoofed
A rogue employee or IdP admin creates backdoor access
Seemingly “legitimate” users are added to access databases. For example, IdPs might create a backdoor of their own, or be forced by courts or governments to create a “legitimate” backdoor
Normally, audits and access controls exist in abundance to ensure that the first three attack vectors do not occur. The fourth is much harder to guard against.
If a court order adds an “employee” to your database, or impersonates a privileged user, then your IdP is no longer a defense layer but an attack vector – and worse, an attack vector with privileged access, where even traditional additional layers like 2FA or MFA will not provide protection. Given this risk, it’s easy to see why many cybersecurity experts consider third-party IdPs incompatible with Zero Trust Security (ZTS).
What is Zero Trust Security?
Zero Trust Security is a relatively new approach to cybersecurity. With ZTS, the system by default does not trust any user, application, service, request, or entity; instead, every request for access is checked and authenticated when it happens, regardless of who made the request or where it came from. For this reason, ZTS is the growing gold standard in cybersecurity, as it offers the most robust security posture at all times against attack attempts.
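As a toy illustration of this principle, here is a hypothetical access check in Python in which every request is re-authenticated when it happens, with no ambient trust or session state carried over. The token value and function names are illustrative only:

```python
# Toy sketch of the zero trust principle: no allowlist of "trusted"
# callers, no cached sessions – the credential is validated on every
# single request, regardless of origin.

def credential_is_valid(token: str) -> bool:
    """Placeholder check; a real system would validate a signed,
    short-lived credential (and its scope) on every call."""
    return token == "valid-token"

def zero_trust_access(resource: str, token: str) -> str:
    # The check runs here, at request time – not once at login.
    if not credential_is_valid(token):
        return "denied"
    return f"access to {resource} granted"
```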
However, this onerous scrutiny and readiness comes at a cost: it may preclude the use of third-party tools (as these are outside of the organization’s full control), and it may require intensive developer effort to sustain – if third-party tools are off the table, the work shifts in-house.
This means that ZTS often carries additional burdens in terms of time, cost, efforts, and resource requirements. As a result, a balanced approach that allows simultaneous use of third-party tools and zero trust systems is highly desirable for organizations looking to maximize their security and minimize the costs of doing so. In the next section, I will outline how we at Canonical implemented ZTS into our IdP usage to get the best of both worlds.
How to implement ZTS into your third-party IdP
Your typical IAM flow works like this:
Someone tries to log in to your service
Their request is passed to the IdP
They do their normal login
They pass some form of 2FA or MFA
They get a go/no-go response, and are allowed or blocked
In Canonical’s implementation of the IdP loop, we add an extra step: a passkey stored by your organization (which we refer to as ‘Extra Factor Authentication’). This happens outside of the IdP loop – the third-party provider isn’t even aware that it’s happening. The normal authentication flow runs, but when the go/no-go response returns from the IdP, you prompt for this extra factor. If the user presents an enrolled passkey, we can verify that the person is legitimate, and give them access to the system.
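A minimal sketch of this gate, in Python, might look like the following. The names and the enrollment store are hypothetical, and verify_passkey() stands in for a real WebAuthn/FIDO2 challenge-response verification; the point is only the control flow – the extra check runs after the IdP’s go/no-go, against a store the IdP never sees:

```python
# Hypothetical sketch of the "Extra Factor Authentication" gate.
# In practice the passkey step is a WebAuthn/FIDO2 ceremony (e.g. via
# a library such as python-fido2); here it is reduced to a simple
# lookup so the control flow stays self-contained.

# Enrollment store held by YOUR organization – not the IdP.
ENROLLED_PASSKEYS = {"alice": "alice-passkey-credential-id"}

def idp_decision(username: str) -> bool:
    """Placeholder for the third-party IdP's go/no-go response."""
    return True  # assume the IdP approved the login

def verify_passkey(username: str, presented: str) -> bool:
    """Check the presented passkey against our own enrollment store.
    A backdoored IdP account fails here: it has no enrolled passkey."""
    return ENROLLED_PASSKEYS.get(username) == presented

def login(username: str, presented_passkey: str) -> bool:
    # Steps 1-4: the normal IdP flow runs first, outside our control...
    if not idp_decision(username):
        return False
    # Extra step: ...then we demand the factor the IdP knows nothing about.
    return verify_passkey(username, presented_passkey)
```

The design point is that the final authorization decision never leaves your organization: even a “legitimate” account injected at the IdP cannot present a passkey you never enrolled.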
You can do this in a number of ways, using multiple potential open source components. With our internal IAM solution, we made use of the following identity management projects:
This stack allows us to self-host our own SSO, which redirects to a third-party IdP, before coming back to us for the final passkey verification.
If you’d like to explore this tool for your own use, you can access it on Charmhub, where we have packaged these tools into a set of Juju charms (Canonical’s version of Kubernetes operators).
The benefits of Canonical’s hybrid Zero Trust IdP model
Our hybrid implementation of ZTS and IdPs comes with several benefits.
You get the benefits and protections of third-party IdPs. IdPs offer robust protections against the vast majority of attack attempts, and so you can enjoy these protections, combined with the ease-of-use and scalability of IdPs.
You retain full control over access permissions. By retaining full control over the authorization decision, you effectively eliminate the risk of “legitimate” backdoors created by your IdP.
Extra Factor Authentication offers an additional security layer. Our implementation offers an additional layer of authentication and access control in your IAM, making it much harder for attackers or unauthorized users to access your systems, networks, or resources.
In conclusion, IAM is a tricky and time-consuming process, and modern third-party IdPs offer a powerful and reliable way to outsource this activity securely – for the most part. However, risks still exist with IdPs, meaning that if you want to implement zero trust architecture in your IAM, you need to take extra precautions so that you’re protected from both unwanted intruders and the third-party IdPs themselves. With just one simple additional verification step, you get the best of both worlds: all the benefits of third-party IdPs, none of the potential black-box backdoors, and a solid zero trust posture.
I was just released from the hospital after a 3-day stay for my (hopefully) last surgery. There was concern with massive blood loss and low heart rate. I have stabilized and have come home. Unfortunately, they had to prescribe many medications this round and they are extremely expensive and used up all my funds. I need gas money to get to my post-op doctors appointments, and food would be cool. I would appreciate any help, even just a dollar!
I am already back to work, and continued work on the crashy KDE snaps in a non-KDE environment. (This also affects anyone using kde-neon extensions, such as FreeCAD.) I hope to have a fix in the next day or so.
The Launchpad project is almost 21 years old! Many people have contributed to the project over this lifetime, and we are thankful for all of them. We understand the value of a strong community and we are taking steps to reinvigorate Launchpad’s once-thriving community.
There are two common suggestions for getting started in open source: fixing bugs and contributing to documentation. Early in 2024, Canonical launched the Canonical Open Documentation Academy: an initiative that aims to break down barriers to open source contribution, and work with the community to raise the bar for documentation practice. The Open Documentation Academy has been helping people get involved in open source and has also been helping projects achieve ever higher standards in documentation. Launchpad is one such project.
Today, we recognize and celebrate our community contributors. We hope they enjoyed contributing to Launchpad as much as we enjoyed working with them!
– gerryRcom
– Jared Nielsen
– Adriaan Van Niekerk
– Nathan Barbarick
Thank you for helping to make Launchpad great!
commit f980cfb3c78b72b464a054116eea9658ef906782
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Mon Oct 14 15:39:27 2024 -0400
Add debugging doc; fix broken links (#108)
* Add debugging doc; fix broken links
* fix broken links in debugging.rst
* fix spelling errors
* fix spelling errors
* fix spelling errors
* fix debugging link
* fix lots of formatting on recovered debugging.rst page
* add debugging.rst page into Launchpad development tips
---------
Co-authored-by: Alvaro Crespo <alvarocrespo.se@gmail.com>
commit c690ef5c7ed2d63d989c1f91b2883ed947904228
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Wed Oct 9 14:32:59 2024 -0400
Add database table page; fix broken link (#107)
* Add database table page; fix broken link
* add spell check errors to custom_wordlist
* add rename-database-table to how-to/index.rst
* fix reference link to rename-database-table page in live-patching.rst explanation doc
* format rename-database-table to show as sql code
---------
Co-authored-by: Jared Nielsen <nielsen.jared@gmail.com>
Co-authored-by: Alvaro Crespo <alvaro.crespo@canonical.com>
commit 5b319ab2899a326b7e96a5c001965e486a445448
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Wed Oct 9 12:20:24 2024 -0400
Add missing codehosting doc; fix broken link (#106)
* Add missing codehosting doc; fix broken link
* add codehosting-locally to index.rst
* add spell check errors to custom_wordlist
* fix reference link for codehosting-locally in code.rstexplanation section
---------
Co-authored-by: Jared Nielsen <nielsen.jared@gmail.com>
Co-authored-by: Alvaro Crespo <alvaro.crespo@canonical.com>
commit 1fcb3a9588bcb62132ce0004bb98f54e28c6561c
Author: Nathan Barbarick <nathanclaybarbarick@gmail.com>
Date: Mon Sep 30 11:08:39 2024 -0700
Group articles of the Explanation section into proper subsections (#97)
* Remove How to go about writing a web application, per jugmac00.
* Group articles in the Explanation section into subsections, add introductory text.
* Add new sections for remaining ToC headings.
* Add codehosting.png, fix broken link (#104)
* add codehosting.png, fix broken link
* delete linkcheck_ignore item
* remove accessibility, upstream, and schema links (#102)
* add concepts.rst, fix broken link in code.rst (#105)
* add concepts.rst, fix broken link in code.rst
* add spellcheck errors to custom_wordlist
* add concepts to index.rst
* Add descriptions in the explanation index and move new concepts page.
---------
Co-authored-by: Jared Nielsen <nielsen.jared@gmail.com>
commit ce5408a8ba919d22c5f5f01ff0396e1eb982d359
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Thu Sep 12 08:11:00 2024 -0400
add concepts.rst, fix broken link in code.rst (#105)
* add concepts.rst, fix broken link in code.rst
* add spellcheck errors to custom_wordlist
* add concepts to index.rst
commit eb5a0b185af6122720d44791aa8c98d52daf93e5
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Fri Sep 6 04:00:51 2024 -0400
remove accessibility, upstream, and schema links (#102)
commit 766dc568b06e49afbb831c25a6163be31ab5064a
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Thu Sep 5 03:09:19 2024 -0400
Add codehosting.png, fix broken link (#104)
* add codehosting.png, fix broken link
* delete linkcheck_ignore item
commit 317437262dd6d21bbb832e9603e4f84dbd4095b6
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Fri Aug 16 15:02:25 2024 -0400
add 'Soyuz' link (#103)
commit f238c1f4e2322d5ad31c9d86615108856c9f8dfc
Author: gerryRcom <gerryr@gerryr.com>
Date: Wed Jul 24 06:01:27 2024 +0100
oda spelling check on code doc (#90)
* oda spelling check on code doc
* oda spelling check on code doc
* Update .custom_wordlist.txt
---------
Co-authored-by: Jürgen Gmach <juergen.gmach@canonical.com>
commit ff237feec8ee9fd6530ccd0aa1f940939ddedee0
Author: Adriaan Van Niekerk <144734475+sfadriaan@users.noreply.github.com>
Date: Tue Jul 23 14:44:29 2024 +0200
Check Spelling errors (Storm migration guide) (#92)
* Remove Storm Migration Guide from exclusion list
* Update code inline formatting and correct spelling errors
* Add accepted words
commit 8500de5b96e4949b23d6c646c65272b9c8180424
Author: Adriaan Van Niekerk <144734475+sfadriaan@users.noreply.github.com>
Date: Tue Jul 23 11:05:04 2024 +0200
Check Spelling (Database Performance page) (#91)
* Remove database performance page from exclusion
* Add accepted words
* Correct spelling errors
commit 06401ea4f554bd8eff483a03c5dea2508f942bdd
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Wed Jul 17 11:13:05 2024 +0200
Correct spelling errors
commit 9eb17247c1100dc7c23dcb2a0275064ed1dc7a19
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Wed Jul 17 11:11:13 2024 +0200
Add accepted words
commit a539b047d012d5078b097041d9072937d2247704
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Wed Jul 17 11:10:59 2024 +0200
Remove "Security Policy" from exclusion list
commit 7708a5fa7b6ed6c0856fa2722f917228c9127eb0
Author: Adriaan Van Niekerk <144734475+sfadriaan@users.noreply.github.com>
Date: Wed Jul 17 08:13:34 2024 +0200
Spell check (URL traversal + Navigation Menus) (#87)
* Remove Navigation Menu page from exclusion
* Add words to be excluded from spell check
* Correct spelling errors
* Remove "url-traversal" from exclusion list
* Update list of accepted words
* Update formatting and correct errors
---------
Co-authored-by: Jürgen Gmach <juergen.gmach@canonical.com>
commit e952eb0aa98fe33a20517b82640d88c2c6a8fc5f
Author: gerryRcom <gerryr@gerryr.com>
Date: Mon Jul 15 20:17:36 2024 +0100
oda spelling check on branches doc
commit 46170ead6fe34fde518fe8848e3d321b57506875
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Mon Jul 15 11:02:57 2024 +0200
Update formatting of URLs
commit 124245b2b4b5699596e7039f09f6d1f3211b409f
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Mon Jul 15 11:00:22 2024 +0200
Remove Launchpad Mail page from exclusion list
commit 141aa07f62d47e7b25581c113fe222679ca9135d
Author: gerryRcom <gerryr@gerryr.com>
Date: Wed Jul 10 20:12:47 2024 +0100
oda spelling check on ppa doc
commit bdea1e1d11e88255eed19e335d840a278cefb134
Author: gerryRcom <gerryr@gerryr.com>
Date: Wed Jul 10 20:08:37 2024 +0100
oda spelling check on ppa doc
commit 7a960016415d32bae99bccac8e7ee634d7034ce7
Merge: 1c6506b 3e12837
Author: gerryRcom <gerryr@gerryr.com>
Date: Tue Jul 9 17:47:06 2024 +0100
Merge branch 'main' into spelling-feature-flags-doc
commit 1c6506b7e971fed802b3dfc85abc29bc0a075450
Author: gerryRcom <gerryr@gerryr.com>
Date: Fri Jul 5 20:06:05 2024 +0100
oda spelling check on feature-flags doc
commit 27b2aa62c48dde374d4e27fae671b061eb97a46f
Merge: acb3847 d32c826
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Fri Jul 5 16:03:01 2024 +0200
Merge branch 'main' of https://github.com/canonical/launchpad-manual into javascript-buildsystem-page
commit 3dc90949b0bd2136347916be1b4b05e0041b2d54
Merge: 053a960 f193109
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Fri Jul 5 14:07:59 2024 +0200
Merge branch 'main' of https://github.com/canonical/launchpad-manual into fix-spelling-issues
commit 053a96086a8e649f0b135aa6eeb942b858f7ba5b
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Fri Jul 5 13:59:34 2024 +0200
Add word to resolve conflict in pull request
commit f19310999278be18a3d92443a7b22cf1b0e7e441
Author: gerryRcom <gerryr@gerryr.com>
Date: Thu Jul 4 21:18:04 2024 +0100
oda spelling check on testing doc
commit 93e5fb8d8356b70b52401c69e7884a1dea2e8b46
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 18:44:24 2024 +0200
Remove exclusion added via rebase
commit d75ca31d26bd1731db6fad08c94c7d99bebc02c3
Merge: 54b74c2 5a2f090
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 18:09:04 2024 +0200
Merge branch 'fix-spelling-issues' of https://github.com/sfadriaan/launchpad-manual into fix-spelling-issues
commit 54b74c252952c5de24c0e232bbbe560f9c4c416e
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 10:50:08 2024 +0200
Correct spelling errors, verified by external documentation, converted to en-gb and corrected formatting
commit f1c66b1ce59f6af9a678f86f6b4fa637df91bcb3
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 10:48:48 2024 +0200
Add correctly spelled words picked up by spell checker
commit 73f12ca01f9cce4414702674cd24dc3d38e49304
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 10:47:42 2024 +0200
Remove javascript-integration-testing page from the exclusion list
commit acb384767214e3d432eafe062a2fb646f3c31938
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 16:07:25 2024 +0200
Update mailing list URL, spelling error correction
commit da06505e8a3431d50a815d16ca4f89a5d66c7a41
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 16:06:52 2024 +0200
Remove javascript-buildsystem from exclusion list
commit 2318addb0ea19de7813b5f6b16efc43d21584659
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 16:06:24 2024 +0200
Add words to exclusion list
commit 5a2f090a2da9083b3c3b658592ec43595e78eb0e
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 10:50:08 2024 +0200
Correct spelling errors, verified by external documentation, converted to en-gb and corrected formatting
commit ce333446e7c7501629d3ceab239183aed64af319
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 10:48:48 2024 +0200
Add correctly spelled words picked up by spell checker
commit 7649b104c9439dda5f938b2e0153e4d1c45f21b4
Author: Adriaan van Niekerk <adriaan.vanniekerk@canonical.com>
Date: Thu Jul 4 10:47:42 2024 +0200
Remove javascript-integration-testing page from the exclusion list
commit 017d19761d96d9c04a1ea61ac0e77bcf6a7b7cab
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Wed Jul 3 11:42:33 2024 -0400
Fix 'Loggerhead' link
commit fda0691919cd849ff4c6ee24e4dc1e3d5e6b1682
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Wed Jul 3 11:32:15 2024 -0400
Fix 'UI/CssSprites' link
commit f26faaef61e5ef48140bd2f84630c5d624041dad
Author: gerryRcom <gerryr@gerryr.com>
Date: Wed Jul 3 09:18:02 2024 +0100
oda spelling check on translations doc
commit 13cb12c45e1a5826d27eaf497b7e6a2605d7ec6d
Author: gerryRcom <gerryr@gerryr.com>
Date: Tue Jul 2 19:41:38 2024 +0100
oda spelling check on unittesting doc
commit cdab34e61a7c1009852a642e978b9027c2aad3d2
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Tue Jul 2 12:07:06 2024 -0400
Fix 'Running' link
commit dbe279acfef9eb736735b04ba474801d3f58a3f0
Author: Nathan Barbarick <nathanclaybarbarick@gmail.com>
Date: Fri Jun 28 19:55:08 2024 -0700
Restructure navigation menu using subsections in how-to.
commit 8592ed544881d50877f036073a6eec9de2e6356d
Author: gerryRcom <gerryr@gerryr.com>
Date: Sat Jun 29 09:49:34 2024 +0100
oda spelling check on css doc
commit 90608989d15cf2dbdf9a538a03517c03d87a3658
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Sat Jun 29 03:54:27 2024 -0400
Fix 'JavascriptUnitTesting' link (#72)
Co-authored-by: Jürgen Gmach <juergen.gmach@canonical.com>
commit 61ab3a36a51cb6ee40d6132cc1028779115b8efd
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Sat Jun 29 03:43:47 2024 -0400
Fix 'Help' link (#70)
Co-authored-by: Jürgen Gmach <juergen.gmach@canonical.com>
commit 89f08619f4c1cbb6e82bc95fd3cdc30b802e9c37
Author: gerryRcom <gerryr@gerryr.com>
Date: Fri Jun 28 19:52:32 2024 +0100
oda spelling check on live-patching doc
commit 96924bd1cf580875d76ed28afa3db83d0d642247
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Fri Jun 28 08:44:30 2024 -0400
Fix 'Getting'
commit be6124ff67fc89a604ebad566805e7e535a01377
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Fri Jun 28 09:00:41 2024 -0400
Fix 'JavaScriptIntegrationTesting' link
commit da7f6bfa597f2ea1e8df57dbbec7217fd746268f
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Fri Jun 28 07:46:05 2024 -0400
Fix 'FixBugs'
commit 2ca5b808797ccd2c24cfb65a06d98e1db844b1b1
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Thu Jun 27 11:02:31 2024 -0400
remove underscores
commit 7577f7674066d4e1d974e956ab2506e0d6f5a89b
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Tue Jun 25 13:22:07 2024 -0400
Fix '../Trunk'
commit deb42beb594b860356dfe11297516d26609d1018
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Thu Jun 27 11:52:33 2024 -0400
Fix 'Database/LivePatching'
commit ded351427d3f694d16855f3b4c44e085eb4e551c
Author: gerryRcom <gerryr@gerryr.com>
Date: Thu Jun 27 19:47:05 2024 +0100
oda spelling check on merge-reviews doc
commit c07847f039bc9414410ebf134d263174004a0a67
Author: gerryRcom <gerryr@gerryr.com>
Date: Thu Jun 27 08:22:23 2024 +0100
oda spelling check on db-devel doc
commit 6a54f46fedfcfdb3385dd8ff5c2f1d4a9ce45f15
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Tue Jun 25 12:32:41 2024 -0400
remove updated link from linkcheck_ignore
commit 6eedaa9f3d5eaee21242280b1ead71c376698c4e
Author: Jared Nielsen <nielsen.jared@gmail.com>
Date: Sat Jun 22 12:59:24 2024 -0400
Fix 'PolicyAndProcess/DatabaseSchemaChangesProcess'
commit 92d1b15eafc2a90a88e24afd5a6938f277314d8a
Author: gerryRcom <gerryr@gerryr.com>
Date: Wed Jun 26 19:30:14 2024 +0100
oda spelling check on css-sprites doc
commit aeb7e5c2d4186ba45cb3279e24c3716e7752b32c
Author: gerryRcom <gerryr@gerryr.com>
Date: Tue Jun 25 20:06:46 2024 +0100
oda spelling check on registry doc
commit 13eb716d534b41ee60ac6adbf8b9d8fb96ca96cd
Author: gerryRcom <gerryr@gerryr.com>
Date: Mon Jun 24 20:00:43 2024 +0100
oda spelling check on triage-bugs doc
commit b7ad120ca563e3a1ac82f5ec7c7742874b53d88b
Author: gerryRcom <gerryr@gerryr.com>
Date: Mon Jun 24 19:51:08 2024 +0100
oda spelling check on triage-bugs doc
commit a83419e47f21071ae53a7036210a7c650195e8ef
Author: gerryRcom <gerryr@gerryr.com>
Date: Fri Jun 21 21:54:21 2024 +0100
oda spelling check on schema-changes doc
commit 486b54241a46ec42f48a05a0081b238699c0557b
Author: gerryRcom <gerryr@gerryr.com>
Date: Thu Jun 20 20:36:01 2024 +0100
oda spelling check on submitting-a-patch doc
commit a890a576681258d647d20b8fdc5c80b14f490d94
Author: gerryRcom <gerryr@gerryr.com>
Date: Tue Jun 18 20:09:14 2024 +0100
oda spelling check on database-setup doc
commit b52d850a0d2456f7925a91cb3e2ff4a8c44711a5
Author: gerryRcom <gerryr@gerryr.com>
Date: Mon Jun 17 12:18:09 2024 +0100
oda spelling check on contribute-to doc
commit 074e13a662821ba17d1c99e2814ef38fe2206a01
Author: gerryRcom <gerryr@gerryr.com>
Date: Fri Jun 14 13:17:53 2024 +0100
oda spelling check on getting-help-hacking
commit 81b6f8025aecf35c48b6660510447e07910d4b8e
Author: gerryRcom <gerryr@gerryr.com>
Date: Thu Jun 13 20:58:20 2024 +0100
oda spelling check on explanation-hacking
The Incus team is pleased to announce the release of Incus 6.12!
This release comes with some long-awaited improvements, such as online growth of virtual machine memory, network address sets for easier network ACLs, revamped logging support, and more!
On top of the new features, this release also includes quite a few welcome performance improvements, especially for systems with a lot of snapshots, and extra performance enhancements for those using ZFS.
The highlights for this release are:
Network address sets
Memory hotplug support in VMs
Reworked logging handling & remote syslog
SNAT support on complex network forwards
Authentication through access_token parameter
Improved server-side filtering in the CLI
More generated documentation
The full announcement and changelog can be found here. And for those who prefer videos, here’s the release overview video:
And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus
Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.
One of my favourite authors, Douglas Adams, once said that “we are stuck with technology when what we really want is just stuff that works.” Whilst Adams is right about a lot of things, he got this one wrong – at least when it comes to infrastructure. As our Infra Masters 2025 event demonstrated, infrastructure is the technology that makes everything work – from managing a satellite in outer space, to, say, livestreaming an event.
Held at Canonical’s London office on March 31st, Infra Masters 2025 brought together operations leaders and architects to explain how to build infrastructure that transforms industries.
If you didn’t attend the event, don’t worry – and, naturally, “DON’T PANIC.” You’ve come to the right place to find out what you might have missed. You can watch the full talks on YouTube, or read this article for an overview of everything that took place, from key insights from ESA on modernizing infrastructure to BT’s network cloud transformation.
So, without further ado, what can we learn from Infra Masters 2025?
1. Choose a partner that helps you speed up innovation
Modernizing infrastructure isn’t just about choosing the right software. It’s also about who you choose to work with, and how you work with them. According to BT, it was a fundamental shift from vendor-consumer to partnership with Canonical that invigorated their efforts to modernize their infrastructure.
As their representatives disclosed in the first talk of the day, BT have worked with Canonical since 2019 on their ongoing infrastructure transformation. Despite the challenges posed by the transition, the collaboration between BT and Canonical has been marked by open communication, shared goals, and regular training sessions to upskill engineers. The secret to a successful partnership?
“It’s the collaborative approach, working together, working on shared goals.”
– Curtis Haslam, Network Cloud Senior Manager, BT Group
BT was keen to emphasize that transparent collaboration requires willingness to offer and accept constructive feedback. Haslam explains, “we’re very honest”, because “acknowledging mistakes from both sides” is the best way to fix errors. Great collaboration becomes possible through this kind of mutual trust.
2. Double your output with Kubernetes
The European Space Agency (ESA) plans to double the number of missions they run by 2030 – no easy feat. With critical projects covering everything from searching for habitable worlds, to clearing the 130 million pieces of orbiting debris that threaten satellites, each mission has its own individual compute and infrastructure needs, making their goal a particularly ambitious one. For organizations interested in how to increase output by modernizing their infrastructure, ESA’s presentation may provide some helpful tips.
As Michael Hawkshaw, ESA Mission Operations Infrastructure IT Service Manager at ESOC (European Space Operations Centre), explains, with Canonical Kubernetes ESA has been able to automate the deployment of both infrastructure and “all the software needed for those missions as well.” Canonical Kubernetes readily plugs into Ceph and PostgreSQL, for instance, which are part of ESA’s stack. These automations have, naturally, freed the team to work on other mission-critical tasks. Likewise, by increasing availability and reducing “wasted space” on database servers, Canonical Kubernetes has helped ESA to support more missions.
Want to try Kubernetes but don’t think you have the capacity? ESA was in the same position. Michael acknowledges that Kubernetes is fast-moving and has a steep learning curve. However, with Canonical managing the systems for them, on call “24/7, 7 days a week”, ESA can always “get the support” they need, “with active monitoring” of their setup.
3. Open source gives you flexibility and keeps costs down
For one thing, open source software gives organizations the flexibility to scale and adapt to shifting requirements. To return to our initial Adams quote, while it may be true that with proprietary solutions you can become “stuck with technology”, you’re never stuck with open source.
For BT, open source was critical to building the Network Cloud, the infrastructure project that helped them to achieve their goal of bringing 5G to the UK. The Network Cloud replaced a variety of disparate, proprietary vertically-integrated stacks. These stacks each required individual management, oversight, compliance, authentication, and deployment, making them time- and cost-intensive. The challenge was to replace these with infrastructure that was highly dependable and automated, providing consistent high performance.
The answer was consolidating their infrastructure into a single, trusted open source stack – including MAAS for bare-metal provisioning, Ceph for storage, LXD for container management, and Juju for automation. James Cawte, BT Group’s Cloud Network Principal Engineer, noted that by streamlining operations in this way, app developers were free to ‘focus solely on their application development’, rather than trying to make the infrastructure work. For more details on BT’s partnership with Canonical, explore the case study.
As Canonical’s Thibaut Rouffineau noted, open source software helps organizations to scale resources whilst keeping costs down. Moving away from proprietary software reduces expensive licences and contracts, whilst enabling companies to optimize their infrastructure – rather than getting stuck with technology, it just works.
4. The future is in the clouds, on the edge, and managed by Kubernetes
…When it comes to infrastructure, that is.
Moving forward, BT’s focus is on enhancing edge computing capabilities for better 5G performance, optimizing infrastructure for containerized applications, and integrating serverless computing to improve developer workflows.
This shift to edge computing seems likely to become increasingly common as organizations choose to move away from the public cloud and data centers, and distribute their infrastructure across edge devices. Kubernetes, and the automations it makes possible, will form a key part of managing this infrastructure, offering a new approach to how we think about the future of technology.
Meanwhile, ESA’s plan to double the number of satellites it currently flies by 2030 relies on Kubernetes and cloud-native computing. AI tooling managed by Canonical Kubernetes has increased the amount of data that can be stored and retrieved, whilst Ceph and PostgreSQL support these cloud-based workloads. As space exploration continues to evolve, ESA can more easily scale its workloads thanks to these tools.
So, to take a final leaf from Adams, and “summarize the summary of the summary,” what can we learn from Infra Masters?
The relationship between vendor and client is critical, and moving towards a more collaborative partnership can improve innovation, efficiency, and in-house skillsets. Equally, migration support from a company like Canonical can take the pressure off in-house engineers and avoid disrupting workflows.
The automations enabled by Canonical’s infrastructure portfolio improve efficiency, reduce costs and take the pressure off managing infrastructure. Choosing a managed solution can help organizations to get these benefits, without worrying about capacity or skill shortages.
Open source software is an increasingly important part of the stack for many companies, providing cost savings, the opportunity to scale, and even the ability to create a unified platform and integrate infrastructure seamlessly across different environments.
As the company behind Ubuntu, Canonical’s software is widely used, trusted, and provides a great option for organizations looking to explore open source options for their infrastructure – whether it’s to lower costs, gain architecture freedom or cloudify their data center. And that’s all, folks. So long, and thanks for all the fish – and by fish, I mean “humoring my … creative references.” See you at the next Infra Masters!
We came back from the Easter season much fatter, after stuffing ourselves with all sorts of sweets, stews and Ubuntu 25.04 Plucky Puffin releases, a.k.a. Fradinho. The conversation revolved around religious rituals, like catching diseases from the neighbours’ spit, how to build home-made Medium Wave transmitters, tricks and workarounds with e-ink e-book readers, plus all the new features of 25.04, therapy sessions with the Wellbeing Panel, dangerous expeditions to the Berço, and promises of great merriment with drinking games during the legislative elections.
The dipole program is part of the Yagi-Uda project, a collection of tools designed for the analysis and optimization of Yagi-Uda antennas. This particular tool calculates the impedance of a single dipole, making it a useful utility for antenna engineers and amateur radio enthusiasts.
Installation on Ubuntu/Debian
To install the Yagi-Uda software suite, including dipole, run the following command:
sudo apt install yagiuda
This package includes several tools for Yagi-Uda antenna analysis and design, making it a valuable addition for those working with antennas.
Usage
To compute the impedance of a dipole, use the following command:
dipole <frequency> <length> <diameter>
For example, to calculate the impedance of a dipole at 7.1 MHz with a length of 20 meters and a diameter of 1.5 mm, run:
dipole 7.100mhz 20m 1.5mm
Example Output:
Self impedance of a dipole:
7.100000 MHz, length 20.000000 m, diameter 1.500000 mm, is
Z = 62.418686 -48.363233 jX Ohms
This output indicates:
Frequency: 7.1 MHz
Length: 20 meters
Diameter: 1.5 mm
Impedance (Z): 62.42 − j48.36 Ω
The negative reactance (−48.36 Ω) indicates the dipole is capacitive, meaning it is slightly too short at this frequency. To achieve resonance (purely resistive impedance), the dipole length should be increased slightly.
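As a rough cross-check on that last point, the textbook half-wave formula (with the usual ~5% end-effect shortening; the exact factor depends on the element's length-to-diameter ratio) puts resonance just above 20 m at 7.1 MHz. A small Python sketch, not part of the Yagi-Uda suite:

```python
# Approximate physical length of a resonant half-wave dipole.
# k = 0.95 is the common rule-of-thumb end-effect factor; the real
# value varies with conductor diameter, which is why tools like
# dipole compute the full impedance instead.
C = 299_792_458  # speed of light, m/s

def resonant_length_m(freq_mhz, k=0.95):
    half_wave = C / (freq_mhz * 1e6) / 2  # free-space half wavelength
    return k * half_wave

print(round(resonant_length_m(7.1), 2))  # ~20.06 m, a touch longer than 20 m
```

That agrees with the tool's output: at 20 m the dipole is just below resonance, hence the capacitive reactance.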
Related Tools
The Yagi-Uda project includes additional tools that help with various aspects of antenna design and optimization:
first – Initial calculations for antenna design
input – Processes input parameters for analysis
output – Displays calculated results
optimise – Helps refine antenna parameters for better performance
Each of these tools contributes to designing and analyzing Yagi-Uda antennas effectively.
Supported Platforms
The Yagi-Uda project was primarily developed for UNIX-based systems, including Linux distributions such as Ubuntu and Debian. While efforts were made to port it to other operating systems, its primary focus remains on UNIX environments.
Reporting Bugs
If you encounter any issues while using dipole or other Yagi-Uda tools, you can report them to Dr. David Kirkby (G8WRB) at david.kirkby@onetel.net. Providing clear, reproducible steps will help ensure that reported bugs are addressed efficiently.
Conclusion
For amateur radio operators and engineers working with Yagi-Uda antennas, the dipole program is a valuable tool for analyzing a single dipole’s impedance. With an easy installation process on Debian-based systems, it is an accessible and practical choice for antenna analysis.
Ubuntu MATE 25.04 is ready to soar! 🪽 Celebrating our 10th anniversary as an official Ubuntu flavour with the reliable MATE Desktop experience you love, built on the latest Ubuntu foundations. Read on to learn more 👓️
A Decade of MATE
This release marks the 10th anniversary of Ubuntu MATE becoming an official Ubuntu flavour. From our humble beginnings, we’ve developed a loyal following of users who value a traditional desktop experience with modern capabilities. Thanks to our amazing community, contributors, and users who have been with us throughout this journey. Here’s to many more years of Ubuntu MATE! 🥂
What changed in Ubuntu MATE 25.04?
Here are the highlights of what’s new in the Plucky Puffin release:
Celebrating 10 years as an official Ubuntu flavour! 🎂
Optional full disk encryption in the installer 🔐
Enhanced advanced partitioning options
Better interaction with existing BitLocker-enabled Windows installations
Improved experience when installing alongside other operating systems
Major Applications
Accompanying MATE Desktop 🧉 and Linux 6.14 🐧 are Firefox 137 🔥🦊, Evolution 3.56 📧, LibreOffice 25.2.2 📚
See the Ubuntu 25.04 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.
There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.
The Xubuntu team is happy to announce the immediate release of Xubuntu 25.04.
Xubuntu 25.04, codenamed Plucky Puffin, is a regular release and will be supported for 9 months, until January 2026.
Xubuntu 25.04 features the latest Xfce 4.20, GNOME 48, and MATE 1.26 updates. Xfce 4.20 features many bug fixes and minor improvements, modernizing the Xubuntu desktop while maintaining a familiar look and feel. GNOME 48 apps are tightly integrated and have full support for dark mode. Users of QEMU and KVM will be delighted to find new stability with the desktop session—the long-running X server crash has been resolved in Xubuntu 25.04 and backported to all supported Xubuntu releases.
The final release images for Xubuntu Desktop and Xubuntu Minimal are available as torrents and direct downloads from xubuntu.org/download/.
As the main server might be busy the first few days after the release, we recommend using the torrents if possible.
We want to thank everybody who contributed to this release of Xubuntu!
Highlights and Known Issues
Highlights
Xfce 4.20, released in December 2024, is included and contains many new features. Early Wayland support has been added, but is not available in Xubuntu.
GNOME 48 apps, including Font Viewer (gnome-font-viewer) and Mines (gnome-mines), include a refreshed appearance and usability improvements.
Known Issues
The shutdown prompt may not be displayed at the end of the installation. Instead, you might just see a Xubuntu logo, a black screen with an underscore in the upper left-hand corner, or a black screen. Press Enter, and the system will reboot into the installed environment. (LP: #1944519)
You may experience choppy audio or poor system performance while playing audio, but only in some virtual machines (observed in VMware and VirtualBox).
OEM installation options are not currently supported or available.
Please refer to the Xubuntu Release Notes for more obscure known issues, information on bugs affecting the release, bug fixes, and a list of new package versions.
The main Ubuntu Release Notes cover many other packages we carry and more generic issues.
Support
For support with the release, navigate to Help & Support for a complete list of methods to get help.
In addition to all the regular testing, I am testing our snaps in a non-KDE environment; so far it is not looking good in Xubuntu. We have kernel/glibc crashes on startup for some snaps, and on file open for others. I am working on a hopeful fix.
Next week I will have (I hope) my final surgery. If you can spare any change to help bring me over the finish line, I will be forever grateful.
The Lubuntu Team is proud to announce Lubuntu 25.04, codenamed Plucky Puffin. Lubuntu 25.04 is the 28th release of Lubuntu, the 14th release of Lubuntu with LXQt as the default desktop environment. With 25.04 being an interim release, it will be supported until January of 2026. If you're a 24.10 user, please upgrade to 25.04 […]
The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 25.04 code-named “Plucky Puffin”. This marks Ubuntu Studio’s 36th release. This release is a Regular release and as such, it is supported for 9 months, until January 2026.
Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a more complete list of changes and known issues. Listed here are some of the major highlights.
This release is dedicated to the memory of Steve Langasek. Without Steve, Ubuntu Studio would not be where it is today. He provided invaluable guidance, insight, and instruction to our leader, Erich Eickmeyer, who not only learned how to package applications but learned how to do it properly. We owe him an eternal debt of gratitude.
You can download Ubuntu Studio 25.04 from our download page.
Special Notes
The Ubuntu Studio 25.04 disk image (ISO) exceeds 4 GB and cannot be downloaded to some file systems such as FAT32 and may not be readable when burned to a standard DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick with the ISO image or burning to a Dual-Layer DVD.
Minimum installation media requirements: Dual-Layer DVD or 8GB USB drive.
Full updated information, including Upgrade Instructions, are available in the Release Notes.
Upgrades from 24.10 should be enabled within a month after release, so we appreciate your patience. Upgrades from 24.04 LTS will be enabled after 24.10 reaches End-Of-Life in July 2025.
New This Release
GIMP 3.0!
The long-awaited GIMP 3.0 is included by default. GIMP is now capable of non-destructive editing with filters, better Photoshop PSD export, and so very much more! Check out the GIMP 3.0 release announcement for more information.
Pencil2D
Ubuntu Studio now includes Pencil2D! This is a 2D animation and drawing application that is sure to be helpful to animators. You can use basic clipart to make animations!
The basic features of Pencil2D are:
layers support (separate layers for bitmap, vector and sound parts)
bitmap drawing
vector drawing
sound support
LibreOffice No Longer in Minimal Install
The LibreOffice suite is now included only in the full desktop install. This saves space for those wishing for a minimal setup suited to their needs.
Invada Studio Plugins
Beginning this release we are including the Invada Studio Plugins first created by Invada Records Australia. This includes distortion, delay, dynamics, filter, phaser, reverb, and utility audio plugins.
PipeWire 1.2.7
This release contains PipeWire 1.2.7. One major feature this has over 1.2.4 is that v4l2loopback support is available via the pipewire-v4l2 package which is not installed by default.
PipeWire’s JACK compatibility is configured to work out-of-the-box and is zero-latency internally. System latency is configurable via Ubuntu Studio Audio Configuration.
However, if you would rather use straight JACK 2 instead, that’s also possible. Ubuntu Studio Audio Configuration can disable and enable PipeWire’s JACK compatibility on-the-fly. From there, you can simply use JACK via QJackCtl.
Ardour 8.12
This is, as of this writing, the latest release of Ardour, packed with the latest bugfixes.
To help support Ardour’s funding, you may obtain later versions directly from ardour.org with a one-time purchase or a subscription. If you wish to get later versions of Ardour from us, you will have to wait until the next regular release of Ubuntu Studio, due in October 2025.
Deprecation of Mailing Lists
Our mailing lists are getting inundated with spam, and there is no proper way to fix the filtering since they run on an outdated version of MailMan, so this release announcement will be the last we send out via email. For support, we encourage using Ubuntu Discourse, and for community news, click the notification bell in the Ubuntu Studio category there.
Frequently Asked Questions
Q: Does Ubuntu Studio contain snaps? A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.
Thunderbird also became a snap so that the maintainers can get security patches delivered faster.
Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be built from a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.
We have additional snaps that are Ubuntu-specific, such as the Firmware Updater and the Security Center. Contrary to popular myth, Ubuntu does not have any plans to switch all packages to snaps, nor do we.
Q: Will you make an ISO with {my favorite desktop environment}? A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.
Q: What if I don’t want all these packages installed on my machine? A: Simply use the Ubuntu Studio Installer to remove the features of Ubuntu Studio you don’t want or need!
Get Involved!
A wonderful way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!
Our project leader, Erich Eickmeyer, is now working on Ubuntu Studio at least part-time, and is hoping that the users of Ubuntu Studio can give enough to generate a monthly part-time income. We’re not there, but if every Ubuntu Studio user donated monthly, we’d be there! Your donations are appreciated! If other distributions can do it, surely we can! See the sidebar for ways to give!
Special Thanks
Huge special thanks for this release go to:
Eylul Dogruel: Artwork, Graphics Design
Ross Gammon: Upstream Debian Developer, Testing, Email Support
Sebastien Ramacher: Upstream Debian Developer
Dennis Braun: Upstream Debian Developer
Rik Mills: Kubuntu Council Member, help with Plasma desktop
Scarlett Moore: Kubuntu Project Lead, help with Plasma desktop
Len Ovens: Testing, insight
Mauro Gaspari: Tutorials, Promotion, and Documentation, Testing, keeping Erich sane
Simon Quigley: Qt6 Megastuff
Erich Eickmeyer: Project Leader, Packaging, Development, Direction, Treasurer
The Kubuntu Team is happy to announce that Kubuntu 25.04 has been released.
Codenamed “Plucky Puffin”, Kubuntu 25.04 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.
The release features the latest KDE Plasma 6.3 desktop, KDE Gear 24.12.3, kernel 6.14, and many other updated applications and libraries.
Applications for core day-to-day usage are included and updated, such as Firefox, and LibreOffice.
In addition to the applications on our install media, 25.04 benefits from the huge number of applications in the Ubuntu archive, plus those installable via snap or other methods.
Please refer to our release notes for further details.
Note: For upgrades from 24.10, there may be a delay of a few hours to days between the official release announcement and the Ubuntu Release Team enabling upgrades.
We had an interesting and hellish week, reading electoral programmes about free software while cats rubbed themselves against microphones; we talked about home automation news and music assistants; fresh, free and private meta search engines; GIMP’s update to 3.0; mouth-watering anticipation of Ubuntu 25.04, out this Thursday; who wins in a fight between Fedora and Ubuntu; and also some silliness about Adobe creepy-crawlies and metaphorical methadone.
You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.
If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.
Recently, I was involved in an event where a video was shown, and the event was filmed. It would be nice to put the video of the event up somewhere so other people who weren't there could watch it. Obvious answer: upload it to YouTube. However, the video that was shown at the event is Copyrighted Media Content and therefore is disallowed by YouTube and the copyright holder; it's not demonetised (which wouldn't be a problem), it's flat-out blocked. So YouTube is out.
I'd like the video I'm posting to stick around for a long time; this is a sort of archival, reference thing where not many people will ever want to watch it but those that do might want to do so in ten years. So I'm loath to find some other random video hosting site, which will probably go bust, or pivot to selling online AI shoes or something. And the best way to ensure that something keeps going long-term is to put it on your own website, and use decent HTML, because that means that even in ten or twenty years it'll still work where the latest flavour-of-the-month thing will go the way of other old technologies and fade away and stop working over time. HTML won't do that.
But... it's an hour long and in full HD. 2.6GB of video. And one of the benefits of YouTube is that they'll make the video adaptive: it'll fit the screen, and the bandwidth, of whatever device someone's watching it on. If someone wants to look at this from their phone and its slightly-shaky two bars of 4G connection, they probably don't want to watch the loading spinner for an hour while it buffers a full HD video; they can ideally get a cut down, lower-quality but quicker to serve, version. But... how is this possible?
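To put numbers on that, a quick back-of-the-envelope in Python (assuming a straight 2.6 GiB over one hour, ignoring container overhead) shows why a shaky connection would struggle:

```python
# Average bitrate of a 2.6 GB, one-hour video.
size_bits = 2.6 * 1024**3 * 8   # 2.6 GiB expressed in bits
duration_s = 60 * 60            # one hour
avg_mbps = size_bits / duration_s / 1e6
print(round(avg_mbps, 1))       # ~6.2 Mbit/s, sustained, for the whole hour
```

Two bars of 4G will not reliably sustain six-plus megabits for an hour, which is exactly the case for serving a smaller rendition.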
There are two aspects to doing this. One is that you serve up different resolutions of video, based on the viewer's screen size. This is exactly the same problem as is solved for images by the <picture> element to provide responsive images (where if you're on a 400px-wide screen you get a 400px version of the background image, not the 2000px full-res version), and indeed the magic words to search for here are responsive video. And the person you will find who is explaning all this is Scott Jehl, who has written a good description of how to do responsive video which explains it all in detail. You make versions of the video at different resolutions, and serve whichever one best matches the screen you're on, just like responsive images. Nice work; just what the doctor ordered.
But there's also a second aspect to this: responsive video adapts to screen size, but it doesn't adapt to bandwidth. What we want, in addition to the responsive stuff, is that on poor connections the viewer gets a lower-bandwidth version as well as a lower-resolution version, and that the viewer's browser can dynamically switch from moment to moment between different versions of the video to match their current network speed. This task is the job of HTTP Live Streaming, or HLS. To do this, you essentially encode the video in a bunch of different qualities and screen sizes, so you've got a bunch of separate videos (which you've probably already done above for the responsive part) and then (and this is the key) you chop up each video into a load of small segments. That way, instead of the browser downloading the whole one hour of video at a particular resolution, it only downloads the next segment at its current choice of resolution, and then if you suddenly get more (or less) bandwidth, it can switch to getting segment 2 from a different version of the video which better matches where you currently are.
Doing this sounds hard. Fortunately, all hard things to do with video are handled by ffmpeg. There's a nice writeup by Mux on how to convert an mp4 video to HLS with ffmpeg, and it works great. I put myself together a little Python script to construct the ffmpeg command line to do it, but you can do it yourself; the script just does some of the boilerplate for you. Very useful.
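The post's helper script isn't shown, but the shape of it is easy to imagine. Here's a minimal Python sketch of that kind of boilerplate-builder, assuming hypothetical file names and a single 720p rendition (a real script would loop over several resolutions); the ffmpeg flags follow the common HLS recipe from write-ups like Mux's:

```python
# Build (not run) an ffmpeg command line that transcodes one input into
# a single HLS rendition: scaled to a target height, re-encoded, and
# chopped into short .ts segments listed in an .m3u8 playlist.
def hls_command(src, height, video_bitrate, out_dir):
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",              # keep aspect ratio, even width
        "-c:v", "libx264", "-b:v", video_bitrate,
        "-c:a", "aac", "-b:a", "128k",
        "-hls_time", "6",                         # ~6-second segments
        "-hls_playlist_type", "vod",
        "-hls_segment_filename", f"{out_dir}/{height}p_%03d.ts",
        f"{out_dir}/{height}p.m3u8",
    ]

cmd = hls_command("event-video.mp4", 720, "3000k", "hls")
print(" ".join(cmd))
```

Run one of these per rendition, then hand the browser a master playlist that lists them all so it can hop between qualities segment by segment.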
So now I can serve up a video which adapts to the viewer's viewing conditions, and that's just what I wanted. I have to pay for the bandwidth now (which is the other benefit of having YouTube do it, and one I now don't get) but that's worth it for this, I think. Cheers to Scott and Mux for explaining all this stuff.
Ubuntu Budgie 25.04 (Plucky Puffin) is a Standard Release with 9 months of support by your distro maintainers and Canonical, from April 2025 to Jan 2026. These release notes showcase the key takeaways for 24.10 upgraders to 25.04. Please note – there is no direct upgrade path from 24.04.2 to 25.04; you must uplift to 24.10 first or perform a fresh install. In these release notes the areas…
Watch a conversation reflecting on the 20th anniversary of Git, the version control system created by Linus Torvalds. He discusses his initial motivations for developing Git as a response to the limitations of existing systems like CVS and BitKeeper, and his desire to establish a better tool for the open-source community. Torvalds explains the processes […]
I’m pleased to announce uCareSystem 25.04.09, the latest version of the all-in-one system maintenance tool for Ubuntu, Linux Mint, Debian and their derivatives, used by thousands! This release brings some major internal changes, fixes and improvements under the hood. A new version of uCareSystem is out, and this time the focus is […]
Ubuntu MATE 24.10 is more of what you like, stable MATE Desktop on top of current Ubuntu. Read on to learn more 👓️
Ubuntu MATE 24.10
Thank you! 🙇
My sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏
I’d like to acknowledge the close collaboration with the Ubuntu Foundations team and the Ubuntu flavour teams, in particular Erich Eickmeyer who pushed critical fixes while I was travelling.
Thank you! 💚
Ships stable MATE Desktop 1.26.2 with a handful of bug fixes 🐛
Switched back to Slick Greeter (replacing Arctica Greeter) due to a race condition in the boot process which resulted in the display manager failing to initialise.
Returning to Slick Greeter reintroduces the ability to easily configure the login screen via a graphical application, something users have been requesting be re-instated 👍
Ubuntu MATE 24.10 .iso 📀 is now 3.3GB 🤏 Down from 4.1GB in the 24.04 LTS release.
This is thanks to some fixes in the installer that no longer require as many packages in the live-seed.
What didn’t change since the Ubuntu MATE 24.04 LTS?
If you follow upstream MATE Desktop development, then you’ll have noticed that Ubuntu MATE 24.10 doesn’t ship with the recently released MATE Desktop 1.28 🧉
I have prepared packaging for MATE Desktop 1.28, along with the associated components, but encountered some bugs and regressions 🐞 I wasn’t able to get things to a standard I’m happy to ship by default, so it is tried and true MATE 1.26.2 one last time 🪨
Major Applications
Accompanying MATE Desktop 1.26.2 🧉 and Linux 6.11 🐧 are Firefox 131 🔥🦊, Celluloid 0.27 🎥, Evolution 3.54 📧, LibreOffice 24.8.2 📚
See the Ubuntu 24.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.
There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.
The Linux Containers project maintains Long Term Support (LTS) releases for its core projects. Those come with 5 years of support from upstream with the first two years including bugfixes, minor improvements and security fixes and the remaining 3 years getting only security fixes.
This is now the fourth round of bugfix releases for LXC, LXCFS and Incus 6.0 LTS.
LXC
LXC is the oldest Linux Containers project and the basis for almost every other one of our projects. This low-level container runtime and library was first released in August 2008, led to the creation of projects like Docker and today is still actively used directly or indirectly on millions of systems.
New LXC_IPV6_ENABLE lxc-net configuration key to turn IPv6 on/off
Fixed ability to attach to application containers with non-root entry point
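The release notes don't show the syntax for the new IPv6 key; presumably it lives alongside the other lxc-net settings in the distribution's defaults file. A hedged sketch (the path and the boolean value format are assumptions, so check your distribution's lxc-net documentation):

```shell
# /etc/default/lxc-net -- sourced as shell by the lxc-net script
USE_LXC_BRIDGE="true"
LXC_IPV6_ENABLE="true"    # assumed value syntax; key new in this release
```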
LXCFS
LXCFS is a FUSE filesystem used to workaround some shortcomings of the Linux kernel when it comes to reporting available system resources to processes running in containers. The project started in late 2014 and is still actively used by Incus today as well as by some Docker and Kubernetes users.
Properly handle SLAB reclaimable memory in meminfo
Handle empty cpuset strings
Fix potential sleep interval overflows
Incus
Incus is our most actively developed project. This virtualization platform is just over a year old but has already seen over 3500 commits by over 120 individual contributors. Its first LTS release made it usable in production environments and significantly boosted its user base.
Distrobuilder
Due to the nature of this tool, it doesn’t get LTS releases as its feature set is extremely stable, but it still needs to receive very frequent updates to handle changes in the various Linux distributions that it builds. Distrobuilder 3.2 was released at the same time as the LTS releases, providing an up-to-date snapshot of that project.
systemd generator handles newer Linux distributions
Support for Alpaquita
What’s next?
We’re expecting another LTS bugfix release for the 6.0 branches in the third quarter of 2025. In the meantime, Incus will keep going with its usual monthly feature release cadence.
Thanks
This LTS release update was made possible thanks to funding provided by the Sovereign Tech Fund (now part of the Sovereign Tech Agency).
The Sovereign Tech Fund supports the development, improvement, and maintenance of open digital infrastructure. Its goal is to sustainably strengthen the open source ecosystem, focusing on security, resilience, technological diversity, and the people behind the code.
A couple of weeks ago I was playing around with a multiple-architecture CI setup with another team, and that led me to pull out my StarFive VisionFive 2 SBC again to see how far I could get with an install this time.
I left off about a year ago when I succeeded in getting an older version of Debian on it, but attempts to get the tooling to install a more broadly supported version of U-Boot to the SPI flash were unsuccessful. Then I got pulled away to other things, effectively just bringing my VF2 around to events as a prop for my multiarch talks – which it did beautifully! I even had one conference attendee buy one to play with while sitting in the audience of my talk. Cool.
I was delighted to learn how much progress had been made since I last looked. Canonical has published more formalized documentation: Install Ubuntu on the StarFive VisionFive 2 in the place of what had been a rather cluttered wiki page. So I got all hooked up and began my latest attempt.
My first step was to grab the pre-installed server image. I got that installed, but struggled a little with persistence once I unplugged the USB UART adapter and rebooted. I then decided just to move forward with the Install U-Boot to the SPI flash instructions. I struggled a bit here for two reasons:
The documentation today leads off with having you download the livecd, but you actually want the pre-installed server image to flash U-Boot; the livecd step doesn’t come until later. Admittedly, the instructions do say this, but I wasn’t reading carefully enough and was more focused on the steps.
I couldn’t get the 24.10 pre-installed image to work for flashing U-Boot, but once I went back to the 24.04 pre-installed image it worked.
And then I had to fly across the country. We’re spending a couple weeks around spring break here at our vacation house in Philadelphia, but the good thing about SBCs is that they’re incredibly portable and I just tossed my gear into my backpack and brought it along.
Thanks to Emil Renner Berthing (esmil) on the Ubuntu Matrix server for providing enough guidance for me to figure out where I had gone wrong above, which got me on my way just a few days after we arrived in Philly.
With the newer U-Boot installed, I was able to use the Ubuntu 24.04 livecd image on a micro SD Card to install Ubuntu 24.04 on an NVMe drive! That’s another new change since I last looked at installation, using my little NVMe drive as a target was a lot simpler than it would have been a year ago. In fact, it was rather anticlimactic, hah!
And with that, I was fully logged in to my new system.
It has 4 cores, so here’s the full output: vf2-cpus.txt
What will I do with this little single board computer? I don’t know yet. I joked with my husband that I’d “install Debian on it and forget about it like everything else” but I really would like to get past that. I have my little multiarch demo CI project in the wings, and I’ll probably loop it into that.
Since we were in Philly, I had a look over at my long-neglected Raspberry Pi 1B that I have here. When we first moved in, I used it as an ssh tunnel to get to this network from California. It was great for that! But now we have a more sophisticated network setup between the houses with a VLAN that connects them, so the ssh tunnel is unnecessary. In fact, my poor Raspberry Pi fell off the WiFi network when we switched to 802.1X just over a year ago and I never got around to getting it back on the network. I connected it to a keyboard and monitor and started some investigation. Honestly, I’m surprised the little guy was still running, but it’s doing fine!
And it had been chugging along running Raspbian based on Debian 9. Well, that’s worth an upgrade. But not just an upgrade: I didn’t want to stress the device and SD card, so I figured flashing it with the latest version of Raspberry Pi OS was the right way to go. It turns out it’s been a long time since I’ve done a Raspberry Pi install.
I grabbed the Raspberry Pi Imager and went on my way. It’s really nice. I went with the Raspberry Pi OS Lite install since it’s the Pi 1B and I didn’t want a GUI. The imager asked the usual installation questions, loaded up my SSH key, and I was ready to boot it on my Pi.
The only thing I still need to sort out is networking. The old USB WiFi adapter I have on it doesn’t initialize until after boot, so wpa_supplicant can’t negotiate with the access point at startup. I’ll have to play around with it. And what will I use this for once I do, now that it’s not an SSH tunnel? I’m not sure yet.
I realize this blog post isn’t very deep or technical, but I guess that’s the point. We’ve come a long way in recent years in support for non-x86 architectures, so installation has gotten a lot easier across several of them. If you’re new to playing around with architectures, I’d say it’s a really good time to start. You can hit the ground running with some wins, and then play around as you go with various things you want to help get working. It’s a lot of fun, and the years I spent playing around with Debian on Sparc back in the day definitely laid the groundwork for the job I have at IBM working on mainframes. You never know where a bit of technical curiosity will get you.
python-mastodon (in the course of which I found #1101140 in blurhash-python and proposed a small cleanup to slidge)
python-model-bakery
python-multidict
python-pip
python-rsyncmanager
python-service-identity
python-setproctitle
python-telethon
python-trio
python-typing-extensions
responses
setuptools-scm
trove-classifiers
zope.testrunner
In bookworm-backports, I updated python-django to 3:4.2.19-1.
Although Debian’s upgrade to python-click 8.2.0 was
reverted for the time being, I fixed a
number of related problems anyway since we’re going to have to deal with it eventually:
dh-python dropped its dependency on python3-setuptools in 6.20250306, which
was long overdue, but it had quite a bit of fallout; in most cases this was
simply a question of adding build-dependencies on python3-setuptools, but in
a few cases there was a missing build-dependency on
python3-typing-extensions which had previously been pulled in as a
dependency of python3-setuptools. I fixed these bugs resulting from this:
There was a dnspython autopkgtest regression on
s390x. I independently tracked that down
to a pylsqpack bug and came up with a reduced test case before realizing
that Pranav P had already been working on it; we then worked together on it
and I uploaded their patch to Debian.
I finally gave in and joined the Debian Science
Team this month, since it often has
a lot of overlap with the Python team, and Freexian maintains several
packages under it.
I fixed a uscan error in hdf5-blosc (maintained by Freexian), and upgraded
it to a new upstream version.
I fixed a build failure with GCC 15 in
yubihsm-shell (maintained by Freexian).
Prompted by a CI failure in
debusine, I submitted a
large batch of spelling fixes and some improved static analysis to incus
(#1777,
#1778) and
distrobuilder.
Thanks to the hard work of our contributors, we are happy to announce the release of Lubuntu's Plucky Beta, which will become Lubuntu 25.04. This is a snapshot of the daily images. Approximately two months ago, we posted an Alpha-level update. While some information is duplicated below, that contains an accurate, concise technical summary of […]
The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 25.04, codenamed “Plucky Puffin”.
While this beta is reasonably free of any showstopper installer bugs, you will find some bugs within. This image is, however, mostly representative of what you will find when Ubuntu Studio 25.04 is released on April 17, 2025.
We encourage everyone to try this image and report bugs to improve our final release.
Special Notes
The Ubuntu Studio 25.04 image (ISO) exceeds 4 GB, so it cannot be stored on some file systems such as FAT32 and will not fit on a single-layer DVD. For this reason, we recommend downloading to a compatible file system. When creating boot media, we recommend writing the ISO image to a bootable USB stick or burning it to a dual-layer DVD.
Full updated information, including Upgrade Instructions, is available in the Release Notes.
New Features This Release
This release is more evolutionary than revolutionary. While we work hard to bring new features, this was not a release with anything major to report. Here are a few highlights:
Plasma 6.3 is now the default desktop environment, an upgrade from Plasma 6.1.
PipeWire continues to improve with every release; this release includes version 1.2.7.
The Default Panel Icons are back. The default panel now populates based on which applications are available, so there are never empty icons if you choose the minimal install and then install one or more of our featured applications. This refresh of the default happens on every reboot rather than as a live update; to refresh it manually from the user side, re-select the Global Theme or remove the panel and add “Ubuntu Studio Default Panel”.
While not included in this Beta, Darktable will be upgraded to 5.0.0 before final release.
Major Package Upgrades
Ardour version 8.12.0
Qtractor version 1.5.3
Audacity version 3.7.3
digiKam version 8.5.0
Kdenlive version 24.12.3
Krita version 5.2.9
GIMP version 3.0.0
There are many other improvements, too numerous to list here. We encourage you to look around the freely-downloadable ISO image.
Known Issues
The installer was supposed to be able to keep the screen from locking, but this will still happen after 15 minutes. Please keep the screen active during installation. As a workaround if you know you will be keeping your machine unattended during installation, press Alt-Space to invoke Krunner (this even works from the Install Ubuntu Studio versus the Try Ubuntu Studio live environment) and type “System Settings”. From there, search for “Screen Locking” and deactivate “Lock automatically after…”.
Another possible workaround is to click on “Switch User” and then re-login as “Live User” without a password if this happens.
The Installer background and slideshow still show the Oracular Oriole mascot. This is work in progress, to be fixed in a daily release sometime between now and final release.
Additionally, we need financial contributions. Our project lead, Erich Eickmeyer, is working long hours on this project and trying to generate a part-time income. Go here to see how you can contribute financially (options are also in the sidebar).
Frequently Asked Questions
Q: Does Ubuntu Studio contain snaps? A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox as a native .deb package. We have found that, after numerous improvements, the Firefox snap now performs just as well as the native .deb package did.
Thunderbird is also a snap this cycle in order for the maintainers to get security patches delivered faster.
Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be built from a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.
Also, to keep theming consistent, all included themes are snapped in addition to the included .deb versions so that snaps stay consistent with our themes.
We are working with Canonical to make sure that the quality of snaps goes up with each release, so we please ask that you give snaps a chance instead of writing them off completely.
Q: If I install this Beta release, will I have to reinstall when the final release comes out? A: No. If you keep it updated, your installation will automatically become the final release. However, if Audacity returns to the Ubuntu repositories before the final release, you might end up with a double installation of Audacity. Removal instructions for one or the other will be made available in a future post.
Q: Will you make an ISO with {my favorite desktop environment}? A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.
Q: What if I don’t want all these packages installed on my machine? A: We now include a minimal install option. Install using the minimal install option, then use Ubuntu Studio Installer to install what you need for your very own content creation studio.
CHIRP is a powerful open-source tool for programming amateur radios, supporting brands like Baofeng, Kenwood, and Yaesu. With the transition from chirp-daily to chirp-next, Ubuntu users need a new approach to install the latest version. This guide provides a step-by-step method to install CHIRP, configure dependencies, and troubleshoot common issues.
Step 1: Install Required Dependencies
Before installing CHIRP, ensure your system has the necessary dependencies. Open a terminal and execute:
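A typical invocation might look like the following. The package names and the wheel file name are assumptions for illustration; CHIRP publishes dated wheels, so substitute the one you actually downloaded:

```shell
# Assumed dependency set: pipx plus the wxPython GTK bindings CHIRP uses
sudo apt update
sudo apt install -y pipx python3-wxgtk4.0

# Install the downloaded chirp-next wheel into an isolated pipx environment,
# letting it see the system-installed wxPython (file name is a placeholder)
pipx install --system-site-packages ./chirp-YYYYMMDD-py3-none-any.whl
```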
(Ensure you use the correct file name for your version.)
After installation, CHIRP should be available system-wide.
To add a shortcut for CHIRP to your application menu after installing it via pipx, first create a desktop entry file in the ~/.local/share/applications/ directory. Open a terminal and run nano ~/.local/share/applications/chirp.desktop to create a new file. In this file, add the following content:
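The entry below is a sketch of what that file might contain; the Exec path is a placeholder you will adjust in the next step:

```ini
[Desktop Entry]
Name=CHIRP
Comment=Amateur radio programming tool
Exec=/home/YOUR_USERNAME/.local/bin/chirp
Terminal=false
Type=Application
Categories=Utility;HamRadio;
```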
Make sure to replace /home/YOUR_USERNAME/.local/bin/chirp with the correct path to the CHIRP executable. Once the file is created, save it and make it executable by running chmod +x ~/.local/share/applications/chirp.desktop. After that, refresh the application menu by running update-desktop-database ~/.local/share/applications or restarting your desktop environment. Your CHIRP application should now appear in the application menu, ready to launch with a custom shortcut.
Step 4: Ensure CHIRP Is in Your PATH
If CHIRP is not recognized as a command, update your PATH:
pipx ensurepath
Restart your terminal or log out and log back in. You can now run CHIRP using:
chirp
Step 5: Configure Serial Port Permissions
If CHIRP cannot detect your radio, you may need to grant serial port access.
Identify your radio’s device name: run dmesg | grep ttyUSB. This should return something like /dev/ttyUSB0.
Grant access to your user: sudo usermod -a -G $(stat -c %G /dev/ttyUSB0) $USER
Log out and back in or reboot your system for the changes to take effect.
Updating CHIRP
To update CHIRP in the future:
Download the latest .whl file.
Uninstall the current version: pipx uninstall chirp
Reinstall using the latest .whl following Step 3 above.
Troubleshooting Common Issues
CHIRP Doesn’t Start
Ensure pipx ensurepath has been executed.
Restart your terminal or log out and back in.
Serial Port Access Denied
Check user group permissions with: ls -l /dev/ttyUSB0
Add your user to the required group (e.g., dialout or uucp).
If wxPython is missing or outdated, install it manually: pip3 install -U -f https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-20.04 wxPython
Final Thoughts
By following this guide, you’ll have the latest CHIRP version running smoothly on Ubuntu. Whether you’re programming Baofeng, Kenwood, or other compatible radios, CHIRP simplifies configuration and channel management.
The Open Source Initiative has two classes of board seats: Affiliate seats, and Individual Member seats.
In the upcoming election, each affiliate can nominate a candidate, and each affiliate can cast a vote for the Affiliate candidates, but there's only 1 Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a similar platform to me, especially with regards to the OSAID, I decided to run as part of an aligned "ticket" as an Individual Member to avoid contention for the 1 Affiliate seat.
Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February.
I was dismayed when I received the following mail from Nick Vidal:
Dear Luke,
Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.
We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.
The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.
Accordingly, I was not able to participate in the "potential board director" info sessions, but people who attended heard that the importance of accommodating differing time zones was discussed during the info session, and that OSI representatives mentioned they try to accommodate everyone's time zones. This seems in sharp contrast with the above policy.
I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle.
Upd, N.B.: to people writing about this, I use they/them pronouns
Most of my Debian contributions this month were
sponsored by Freexian.
You can also support my work directly via
Liberapay.
OpenSSH
OpenSSH upstream released
9.9p2 with fixes for
CVE-2025-26465 and CVE-2025-26466. I got a heads-up on this in advance from
the Debian security team, and prepared updates for all of testing/unstable,
bookworm (Debian 12), bullseye (Debian 11), buster (Debian 10, LTS), and
stretch (Debian 9, ELTS). jessie (Debian 8) is also still in ELTS for a few
more months, but wasn’t affected by either vulnerability.
Although I’m not particularly active in the Perl team, I fixed a
libnet-ssleay-perl build failure because
it was blocking openssl from migrating to testing, which in turn was
blocking the above openssh fixes.
A lot of my Python team work is driven by its maintainer
dashboard.
Now that we’ve finished the transition to Python 3.13 as the default
version, and inspired by a recent debian-devel thread started by
Santiago, I
thought it might be worth spending a bit of time on the “uscan error”
section. uscan is typically
scraping upstream web sites to figure out whether new versions are
available, and so it’s easy for its configuration to become outdated or
broken. Most of this work is pretty boring, but it can often reveal
situations where we didn’t even realize that a Debian package was out of
date. I fixed these packages:
cssutils (this in particular was very out of date due to a new and active
upstream maintainer since 2021)
In bookworm-backports, I updated python-django to 3:4.2.18-1 (issuing
BSA-121)
and added new backports of python-django-dynamic-fixture and
python-django-pgtrigger, all of which are dependencies of
debusine.
I can’t remember exactly the joke I was making at the time in my
work’s slack instance (I’m sure it wasn’t particularly
funny, though; and not even worth re-reading the thread to work out), but it
wound up with me writing a UEFI binary for the punchline. Not to spoil the
ending but it worked - no pesky kernel, no messing around with “userland”. I
guess the only part of this you really need to know for the setup here is that
it was a Severance joke,
which is some fantastic TV. If you haven’t seen it, this post will seem perhaps
weirder than it actually is. I promise I haven’t joined any new cults. For
those who have seen it, the payoff to my joke is that I wanted my machine to
boot directly to an image of
Kier Eagan.
As for how to do it – I figured I’d give the uefi
crate a shot, and see how it is to use,
since this is a low stakes way of trying it out. In general, this isn’t the
sort of thing I’d usually post about – except this wound up being easier and
way cleaner than I thought it would be. That alone is worth sharing, in the
hopes someone comes across this in the future and feels like they, too, can
write something fun targeting the UEFI.
First things first – gotta create a rust project (I’ll leave that part to you
depending on your life choices), and to add the uefi crate to your
Cargo.toml. You can either use cargo add or add a line like this by hand:
uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }
We also need to teach cargo about how to go about building for the UEFI target,
so we need to create a rust-toolchain.toml with one (or both) of the UEFI
targets we’re interested in:
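A minimal rust-toolchain.toml along these lines does the trick (both UEFI targets are listed here; drop whichever you don't need):

```toml
[toolchain]
targets = ["x86_64-unknown-uefi", "aarch64-unknown-uefi"]
```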
Unfortunately, I wasn’t able to use the
image crate,
since it won’t build against the uefi target. This looks like it’s
because rustc had no way to compile the required floating point operations
within the image crate without hardware floating point instructions
specifically. Rust tends to punt a lot of that to libm usually, so this isn’t
entirely shocking given we’re no_std for a non-hardfloat target.
So-called “soft float” requires a software floating point implementation that
the compiler can use to “polyfill” (feels weird to use the term polyfill here,
but I guess it’s spiritually right?) the lack of hardware floating point
operations, which rust hasn’t implemented for this target yet. As a result, I
changed tactics, and figured I’d use ImageMagick to pre-compute the pixels
from a jpg, rather than doing it at runtime. A bit of a bummer, since I need
to do more out of band pre-processing and hardcoding, and updating the image
kinda sucks as a result – but it’s entirely manageable.
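A pair of ImageMagick commands along these lines does the conversion; the 1280x720 resolution is an assumption here, so match it to your display:

```shell
# Resize, preserving aspect ratio (so the result may be shorter than requested)
$ convert kier.jpg -resize 1280x720 kier.full.jpg
# Flatten the jpg to raw 4-byte-per-pixel RGBA data
$ convert kier.full.jpg -depth 8 kier.full.rgba
```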
This will take our input file (kier.jpg), resize it to get as close to the
desired resolution as possible while maintaining aspect ratio, then convert it
from a jpg to a flat array of 4 byte RGBA pixels. Critically, it’s also
important to remember that the size of the kier.full.jpg file may not actually
be the requested size – it will not change the aspect ratio, so be sure to
make a careful note of the resulting size of the kier.full.jpg file.
Last step with the image is to compile it into our Rust binary, since we
don’t want to struggle with trying to read this off disk, which is thankfully
real easy to do.
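One way to do that is include_bytes!, with the image geometry recorded alongside it. The dimensions below are placeholders; use the real size of your converted file:

```rust
// Pixel data produced by the ImageMagick conversion step.
const KIER: &[u8] = include_bytes!("../kier.full.rgba");
// Placeholder dimensions -- substitute the actual size of kier.full.jpg.
const KIER_WIDTH: usize = 1280;
const KIER_HEIGHT: usize = 641;
const KIER_PIXEL_SIZE: usize = 4;
```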
Remember to use the width and height from the final kier.full.jpg file as the
values for KIER_WIDTH and KIER_HEIGHT. KIER_PIXEL_SIZE is 4, since we
have 4 byte wide values for each pixel as a result of our conversion step into
RGBA. We’ll only use RGB, and if we ever drop the alpha channel, we can drop
that down to 3. I don’t entirely know why I kept alpha around, but I figured it
was fine. My kier.full.jpg image winds up shorter than the requested height
(which is also qemu’s default resolution for me) – which means we’ll get a
semi-annoying black band under the image when we go to run it – but it’ll
work.
Anyway, now that we have our image as bytes, we can get down to work, and
write the rest of the code to handle moving bytes around from in-memory
as a flat block of pixels, and request that they be displayed using the
UEFI GOP. We’ll just need to hack up a container
for the image pixels and teach it how to blit to the display.
/// RGB Image to move around. This isn't the same as an
/// `image::RgbImage`, but we can associate the size of
/// the image along with the flat buffer of pixels.
struct RgbImage {
    /// Size of the image as a tuple, as the
    /// (width, height)
    size: (usize, usize),
    /// raw pixels we'll send to the display.
    inner: Vec<BltPixel>,
}

impl RgbImage {
    /// Create a new `RgbImage`.
    fn new(width: usize, height: usize) -> Self {
        RgbImage {
            size: (width, height),
            inner: vec![BltPixel::new(0, 0, 0); width * height],
        }
    }

    /// Take our pixels and request that the UEFI GOP
    /// display them for us.
    fn write(&self, gop: &mut GraphicsOutput) -> Result {
        gop.blt(BltOp::BufferToVideo {
            buffer: &self.inner,
            src: BltRegion::Full,
            dest: (0, 0),
            dims: self.size,
        })
    }
}

impl Index<(usize, usize)> for RgbImage {
    type Output = BltPixel;

    fn index(&self, idx: (usize, usize)) -> &BltPixel {
        let (x, y) = idx;
        &self.inner[y * self.size.0 + x]
    }
}

impl IndexMut<(usize, usize)> for RgbImage {
    fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
        let (x, y) = idx;
        &mut self.inner[y * self.size.0 + x]
    }
}
We also need to do some basic setup to get a handle to the UEFI
GOP via the UEFI crate (using
uefi::boot::get_handle_for_protocol
and
uefi::boot::open_protocol_exclusive
for the GraphicsOutput
protocol), so that we have the object we need to pass to RgbImage in order
for it to write the pixels to the display. The only trick here is that the
display on the booted system can really be any resolution – so we need to do
some capping to ensure that we don’t write more pixels than the display can
handle. Writing fewer than the display’s maximum seems fine, though.
fn praise() -> Result {
    let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
    let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;

    // Get the (width, height) that is the minimum of
    // our image and the display we're using.
    let (width, height) = gop.current_mode_info().resolution();
    let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));

    let mut buffer = RgbImage::new(width, height);
    for y in 0..height {
        for x in 0..width {
            let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
            let pixel = &mut buffer[(x, y)];
            pixel.red = KIER[idx_r];
            pixel.green = KIER[idx_r + 1];
            pixel.blue = KIER[idx_r + 2];
        }
    }
    buffer.write(&mut gop)?;
    Ok(())
}
Not so bad! A bit tedious – we could solve some of this by turning
KIER into an RgbImage at compile-time using some clever Cow and
const tricks and implement blitting a sub-image of the image – but this
will do for now. This is a joke, after all, let’s not go nuts. All that’s
left with our code is for us to write our main function and try and boot
the thing!
#[entry]
fn main() -> Status {
    uefi::helpers::init().unwrap();
    praise().unwrap();
    boot::stall(100_000_000);
    Status::SUCCESS
}
If you’re following along at home and so interested, the final source is over at
gist.github.com.
We can go ahead and build it using cargo (as is our tradition) by targeting
the UEFI platform.
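For the 64-bit x86 target that looks like:

```shell
$ cargo build --target x86_64-unknown-uefi
```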
While I can definitely get my machine to boot these blobs to test, I figured
I’d save myself some time by using QEMU to test without a full boot.
If you’ve not done this sort of thing before, we’ll need two packages,
qemu and ovmf. It’s a bit different than most invocations of qemu you
may see out there – so I figured it’d be worth writing this down, too.
$ doas apt install qemu-system-x86 ovmf
qemu has a nice feature where it’ll create us an EFI partition as a drive and
attach it to the VM off a local directory – so let’s construct an EFI
partition file structure, and drop our binary into the conventional location.
If you haven’t done this before, and are only interested in running this in a
VM, don’t worry too much about it, a lot of it is convention and this layout
should work for you.
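Assuming the crate is named boot2kier, something like this sets up the conventional removable-media layout (bootx64.efi is the path firmware looks for by default):

```shell
$ mkdir -p esp/efi/boot
$ cp target/x86_64-unknown-uefi/debug/boot2kier.efi esp/efi/boot/bootx64.efi
```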
With all this in place, we can kick off qemu, booting it in UEFI mode using
the ovmf firmware, attaching our EFI partition directory as a drive to
our VM to boot off of.
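An invocation along these lines works; the OVMF path below is where Debian's ovmf package puts the firmware, so it may differ on your distro:

```shell
$ qemu-system-x86_64 \
    -bios /usr/share/ovmf/OVMF.fd \
    -drive format=raw,file=fat:rw:esp
```

The fat:rw: drive syntax is the qemu feature mentioned above: it exposes the esp directory to the VM as a virtual FAT partition.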
If all goes well, soon you’ll be met with the all knowing gaze of
Chosen One, Kier Eagan. The thing that really impressed me about all
this is this program worked first try – it all went so boringly
normal. Truly, kudos to the uefi crate maintainers, it’s incredibly
well done.
Booting a live system
Sure, we could stop here, but anyone can open up an app window and see a
picture of Kier Eagan, so I knew I needed to finish the job and boot a real
machine up with this. In order to do that, we need to format a USB stick.
BE SURE /dev/sda IS CORRECT IF YOU’RE COPY AND PASTING. All my drives
are NVMe, so BE CAREFUL – if you use SATA, it may very well be your
hard drive! Please do not destroy your computer over this.
$ doas fdisk /dev/sda
Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4014079, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4014079, default 4014079):
Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): ef
Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Once that looks good (depending on your flavor of udev you may or
may not need to unplug and replug your USB stick), we can go ahead
and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR
USB STICK) and write our EFI directory to it.
$ doas mkfs.fat /dev/sda1
$ doas mount /dev/sda1 /mnt
$ cp -r esp/efi /mnt
$ find /mnt
/mnt
/mnt/efi
/mnt/efi/boot
/mnt/efi/boot/bootx64.efi
Of course, naturally, devotion to Kier shouldn’t mean backdooring your system.
Disabling Secure Boot runs counter to the Core Principles, such as Probity, and
not doing this would surely run counter to Verve, Wit and Vision. This bit does
require that you’ve taken the step to enroll a
MOK and know how
to use it, right about now is when we can use sbsign to sign our UEFI binary
we want to boot from, to continue enforcing Secure Boot. The details of how this
command should be run are likely something you’ll need to work out depending on
how you’ve decided to manage your MOK.
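As a sketch, with hypothetical MOK key and certificate file names:

```shell
$ sbsign --key MOK.key --cert MOK.crt \
    --output KIER.efi \
    target/x86_64-unknown-uefi/debug/boot2kier.efi
```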
I figured I’d leave a signed copy of boot2kier at
/boot/efi/EFI/BOOT/KIER.efi on my Dell XPS 13, with Secure Boot enabled
and enforcing; it just took a matter of going into my BIOS to add the right
boot option, which was no sweat. I’m sure there is a way to do it using
efibootmgr, but I wasn’t smart enough to do that quickly. I let ‘er rip,
and it booted up and worked great!
It was a bit hard to get a video of my laptop, though – but lucky for me, I
have a Minisforum Z83-F sitting around (which, until a few weeks ago was running
the annual http server to control my christmas tree
) – so I grabbed it out of the christmas bin, wired it up to a video capture
card I have sitting around, and figured I’d grab a video of me booting a
physical device off the boot2kier USB stick.
Attentive readers will notice the image of Kier is smaller than the qemu booted
system – which just means our real machine has a larger GOP display
resolution than qemu, which makes sense! We could write some fancy resize code
(sounds annoying), center the image (can’t be assed but should be the easy way
out here) or resize the original image (pretty hardware specific workaround).
Additionally, you can make out the image being written to the display before us
(the Minisforum logo) behind Kier, which is really cool stuff. If we were real
fancy we could write blank pixels to the display before blitting Kier, but,
again, I don’t think I care to do that much work.
But now I must away
If I wanted to keep this joke going, I’d likely try and find a copy of the
original
video when Helly 100%s her file
and boot into that – or maybe play a terrible midi PC speaker rendition of
Kier, Chosen One, Kier after
rendering the image. I, unfortunately, don’t have any friends involved with
production (yet?), so I reckon all that’s out for now. I’ll likely stop playing
with this – the joke was done and I’m only writing this post because of how
great everything was along the way.
All in all, this reminds me so much of building a homebrew kernel to boot a
system into – but like, good, though, and it’s a nice reminder of both how
fun this stuff can be, and how far we’ve come. UEFI protocols are light-years
better than how we did it in the dark ages, and the tooling for this is SO
much more mature. Booting a custom UEFI binary is miles ahead of trying to
boot your own kernel, and I can’t believe how good the uefi crate is
specifically.
Praise Kier! Kudos, to everyone involved in making this so delightful ❤️.
Wireshark is an essential tool for network analysis, and staying up to date with the latest releases ensures access to new features, security updates, and bug fixes. While Ubuntu’s official repositories provide stable versions, they are often not the most recent.
Wearing both Wireshark Core Developer and Debian/Ubuntu package maintainer hats, I’m happy to help the Wireshark team in providing updated packages for all supported Ubuntu versions through dedicated PPAs. This post outlines how you can install the latest stable and nightly Wireshark builds on Ubuntu.
Latest Stable Releases
For users who want the most up-to-date stable Wireshark version, we maintain a PPA with backports of the latest releases:
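Adding the PPA and installing looks like the following — the PPA name here is my best recollection, so double-check it on Launchpad before adding:

```shell
# add the stable-backports PPA (name assumed; verify on Launchpad)
$ sudo add-apt-repository ppa:wireshark-dev/stable
$ sudo apt update
$ sudo apt install wireshark
```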
For those who want to test new features before they are officially released, nightly builds are also available. These builds track the latest development code and you can watch them cooking on their Launchpad recipe page.
Note: Nightly builds may contain experimental features and are not guaranteed to be as stable as the official releases. Also, they target only Ubuntu 24.04 and later, including the current development release.
If you need to revert to the stable version later, remove the nightly PPA and reinstall Wireshark:
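Assuming the nightly PPA follows the same naming scheme (verify against the PPA you actually added), ppa-purge handles the downgrade back to archive or stable-PPA versions:

```shell
# remove the nightly PPA and downgrade its packages (PPA name assumed)
$ sudo ppa-purge ppa:wireshark-dev/nightly
$ sudo apt install --reinstall wireshark
```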
Throughout my career, I’ve had the privilege of working with organizations that create widely-used open source tools. The popularity of these tools is evident through their impressive download statistics, strong community presence, and engagement both online and at events.
At InfluxData, I was part of the Telegraf team, where we witnessed substantial adoption through downloads and active usage, reflected in our vibrant bug tracker.
What makes Syft and Grype particularly exciting, beyond their permissive licensing, consistent release cycle, dedicated developer team, and distinctive mascots, is how they serve as building blocks for other tools and services.
Syft isn’t just a standalone SBOM generator - it’s a library that developers can integrate into their own tools. Some organizations even build their own SBOM generators and vulnerability tools directly from our open source foundation!
(I find it delightfully meta to discover syft inside other tools using syft itself)
This collaborative building upon existing tools mirrors how Linux distributions often build upon other Linux distributions. Like Ubuntu and Telegraf, we see countless individuals and organizations creating innovative solutions that extend beyond the core capabilities of Syft and Grype. It’s the essence of open source - a multiplier effect that comes from creating accessible, powerful tools.
While we may not always know exactly how and where these tools are being used (and sometimes, rightfully so, it’s not our business), there are many cases where developers and companies want to share their innovative implementations.
I’m particularly interested in these stories because they deserve to be shared. I’ve been exploring public repositories like the GitHub network dependents for syft, grype, sbom-action, and scan-action to discover where our tools are making an impact.
The adoption has been remarkable!
I reached out to several open source projects to learn about their implementations, and Nicolas Vuilamy from MegaLinter was the first to respond - which brings us full circle.
Tired of waiting for apt to finish installing packages? Wish there were a way to make your installations blazingly fast without caring about minor things like, oh, data integrity? Well, today is your lucky day!
I’m thrilled to introduce apt-eatmydata, now available for Debian and all supported Ubuntu releases!
What Is apt-eatmydata?
If you’ve ever used libeatmydata, you know it’s a nifty little hack that disables fsync() and friends, making package installations way faster by skipping unnecessary disk writes. Normally, you’d have to remember to wrap apt commands manually, like this:
eatmydata apt install texlive-full
But who has time for that? apt-eatmydata takes care of this automagically by integrating eatmydata seamlessly into apt itself! That means every package install is now turbocharged—no extra typing required.
How to Get It
Debian
If you’re on Debian unstable/testing (or possibly soon in stable-backports), you can install it directly with:
sudo apt install apt-eatmydata
Ubuntu
Ubuntu users already enjoy faster package installation thanks to zstd-compressed packages and to switch to even higher gear I’ve backported apt-eatmydata to all supported Ubuntu releases. Just add this PPA and install:
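The PPA name below is my best recollection — verify it on Launchpad before adding:

```shell
# add the backports PPA and install (PPA name assumed; check Launchpad)
$ sudo add-apt-repository ppa:firebuild/apt-eatmydata
$ sudo apt install apt-eatmydata
```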
And boom! Your apt install times are getting a serious upgrade. Let’s run some tests…
# pre-download package to measure only the installation
$ sudo apt install -d linux-headers-6.8.0-53-lowlatency
...
# installation time is 9.35s without apt-eatmydata:
$ sudo time apt install linux-headers-6.8.0-53-lowlatency
...
2.30user 2.12system 0:09.35elapsed 47%CPU (0avgtext+0avgdata 174680maxresident)k
32inputs+1495216outputs (0major+196945minor)pagefaults 0swaps
$ sudo apt install apt-eatmydata
...
$ sudo apt purge linux-headers-6.8.0-53-lowlatency
# installation time is 3.17s with apt-eatmydata:
$ sudo time eatmydata apt install linux-headers-6.8.0-53-lowlatency
2.30user 0.88system 0:03.17elapsed 100%CPU (0avgtext+0avgdata 174692maxresident)k
0inputs+205664outputs (0major+198099minor)pagefaults 0swaps
apt-eatmydata just made installing Linux headers 3x faster!
But Wait, There’s More!
If you’re automating CI builds, there’s even a GitHub Action to make your workflows faster. It essentially does what apt-eatmydata does, and sets itself up in less than a second! Check it out here: GitHub Marketplace: apt-eatmydata
Should You Use It?
Warning: apt-eatmydata is not for all production environments. If your system crashes mid-install, you might end up with a broken package database. But for throwaway VMs, containers, and CI pipelines? It’s an absolute game-changer. I use it on my laptop, too.
So go forth and install recklessly fast!
If you run into any issues, feel free to file a bug or drop a comment. Happy hacking!
Everyone's got a newsletter these days (like everyone's got a podcast). In general, I think this is OK: instead of going through a middleman publisher, have a direct connection from you to the people who want to read what you say, so that that audience can't be taken away from you.
On the other hand, I don't actually like newsletters. I don't really like giving my email address to random people1, and frankly an email app is not a great way to read long-form text! There are many apps which are a lot better at this.
There is a solution to this and the solution is called RSS. Andy Bell explains RSS and this is exactly how I read newsletters. If I want to read someone's newsletter and it's on Substack, or ghost.io, or buttondown.email, what I actually do is subscribe to their newsletter but what I'm actually subscribing to is their RSS feed. This sections off newsletter stuff into a completely separate app that I can catch up on when I've got the time, it means that the newsletter owner (or the site they're using) can't decide to "upsell" me on other stuff they do that I'm not interested in, and it's a better, nicer reading experience than my mail app.2
I use NetNewsWire on my iPhone, but there are a bunch of other newsreader apps for every platform and you should choose whichever one you want. Andy lists a bunch, above.
The question, of course, then becomes: how do you find the RSS feed for a thing you want to read?3 Well, it turns out... you don't have to.
When you want to subscribe to a newsletter, you literally just put the web address of the newsletter itself into your RSS reader, and that reader will take care of finding the feed and subscribing to it, for you. It's magic. Hooray! I've tested this with substack, with ghost.io, with buttondown.email, and it works with all of them. You don't need to do anything.
If that doesn't work, then there is one neat alternative you can try, though. Kill The Newsletter will give you an email address for any site you name, and provide the incoming emails to that as an RSS feed. So, if you've found a newsletter which doesn't exist on the web (boo hiss!) and doesn't provide an RSS feed, then you go to KTN, it gives you some randomly-generated email address, you subscribe to the intransigent newsletter with that email address, and then you can subscribe to the resultant feed in your RSS reader. It's dead handy.
If you run a newsletter and it doesn't have an RSS feed and you want it to have, then have a look at whatever newsletter software you use; it will almost certainly provide a way to create one, and you might have to tick a box. (You might also want to complain to the software creators that that box wasn't ticked by default.) If you've got an RSS feed for the newsletter that you write, but putting your site's address into an RSS reader doesn't find that RSS feed, then what you need is RSS autodiscovery, which is the "magic" alluded to above; you add a line to your site's HTML in the <head> section which reads <link rel="alternate" type="application/rss+xml" title="RSS" href="https://URL/of/your/feed"> and then it'll work.
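As a quick sanity check, you can grep your page's HTML for that autodiscovery tag. This little sketch is entirely my own (using a canned example page rather than a live fetch); it pulls out the advertised feed URL:

```shell
# toy example: extract the autodiscovery feed URL from a page's HTML
html='<head><link rel="alternate" type="application/rss+xml" title="RSS" href="https://example.com/feed.xml"></head>'
# isolate the rss+xml link tag, then pull out its href value
feed=$(printf '%s' "$html" | grep -o 'application/rss+xml[^>]*' \
    | grep -o 'href="[^"]*"' | sed 's/^href="//; s/"$//')
echo "$feed"
```

In real use you'd pipe in `curl -s https://your-site/` instead of the canned string.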
I like this. Read newsletters at my pace, in my choice of app, on my terms. More of that sort of thing.
despite how it's my business to do so and it's right there on the front page of the website, I know, I know ↩
Is all of this doable in my mail client? Sure. I could set up filters, put newsletters into their own folders/labels, etc. But that's working around a problem rather than solving it ↩
I suggested to Andy that he ought to write this post explaining how to do this and then realised that I should do it myself and stop being such a lazy snipe, so here it is ↩
For several years, DigitalOcean has been an important sponsor of Ubuntu Budgie. They provide the infrastructure we need to host our website at https://ubuntubudgie.org and our Discourse community forum at https://discourse.ubuntubudgie.org. Maybe you are familiar with them. Maybe you use them in your personal or professional life. Or maybe, like me, you didn’t really see how they would benefit you.
Whenever something touches the red cap, the system wakes up from suspend/s2idle.
I’ve used a ThinkPad T14 Gen 3 AMD for 2 years, and I recently purchased a T14 Gen 5 AMD. The previous Gen 3 system annoyed me so much because the laptop randomly woke up from suspend on its own, even inside a backpack, heated up the confined air in it, and drained the battery pretty fast as a consequence. Basically it’s too sensitive to any events. For example, the system wakes up from suspend whenever a USB Type-C cable is plugged in as a power source, or whenever something touches the TrackPoint, even if a display on a closed lid slightly makes contact with the red cap. It was uncontrollable.
I was hoping that Gen 5 would make a difference, and it did when it comes to the power source event. However, frequent wakeups due to the TrackPoint event remained the same so I started to dig in.
Disabling touchpad as a wakeup source on T14 Gen 5 AMD
Disabling touchpad events as a wakeup source is straightforward. The touchpad device, ELAN0676:00 04F3:3195 Touchpad, can be found in the udev device tree as follows.
And you can get all attributes including parent devices like the following.
$ udevadm info --attribute-walk -p /devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12
...
  looking at device '/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12':
    KERNEL=="input12"
    SUBSYSTEM=="input"
    DRIVER==""
    ...
    ATTR{name}=="ELAN0676:00 04F3:3195 Touchpad"
    ATTR{phys}=="i2c-ELAN0676:00"
    ...
  looking at parent device '/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00':
    KERNELS=="i2c-ELAN0676:00"
    SUBSYSTEMS=="i2c"
    DRIVERS=="i2c_hid_acpi"
    ATTRS{name}=="ELAN0676:00"
    ...
    ATTRS{power/wakeup}=="enabled"
The line I’m looking for is ATTRS{power/wakeup}=="enabled". By using the identifiers of the parent device that has ATTRS{power/wakeup}, I can make sure that /sys/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/power/wakeup is always disabled with the custom udev rule as follows.
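A minimal sketch of what that rule could look like — the filename is arbitrary, and the match keys are lifted from the udevadm output above:

```
# /etc/udev/rules.d/99-disable-touchpad-wakeup.rules (filename is my choice)
# match the i2c parent device that owns power/wakeup and force it off
ACTION=="add", SUBSYSTEM=="i2c", KERNEL=="i2c-ELAN0676:00", ATTR{power/wakeup}="disabled"
```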
Disabling TrackPoint as a wakeup source on T14 Gen 5 AMD
I’ve seen a pattern already as above so I should be able to apply the same method. The TrackPoint device, TPPS/2 Elan TrackPoint, can be found in the udev device tree.
$ udevadm info --attribute-walk -p /devices/platform/i8042/serio1/input/input5
...
  looking at device '/devices/platform/i8042/serio1/input/input5':
    KERNEL=="input5"
    SUBSYSTEM=="input"
    DRIVER==""
    ...
    ATTR{name}=="TPPS/2 Elan TrackPoint"
    ATTR{phys}=="isa0060/serio1/input0"
    ...
  looking at parent device '/devices/platform/i8042/serio1':
    KERNELS=="serio1"
    SUBSYSTEMS=="serio"
    DRIVERS=="psmouse"
    ATTRS{bind_mode}=="auto"
    ATTRS{description}=="i8042 AUX port"
    ATTRS{drvctl}=="(not readable)"
    ATTRS{firmware_id}=="PNP: LEN0321 PNP0f13"
    ...
    ATTRS{power/wakeup}=="disabled"
I hit the wall here. ATTRS{power/wakeup}=="disabled" for the i8042 AUX port is already there but the TrackPoint still wakes up the system from suspend. I had to do bisecting for all remaining wakeup sources.
Wakeup sources:
│ [/sys/devices/platform/USBC000:00/power_supply/ucsi-source-psy-USBC000:001/wakeup66]: enabled
│ [/sys/devices/platform/USBC000:00/power_supply/ucsi-source-psy-USBC000:002/wakeup67]: enabled
│ ACPI Battery [PNP0C0A:00]: enabled
│ ACPI Lid Switch [PNP0C0D:00]: enabled
│ ACPI Power Button [PNP0C0C:00]: enabled
│ ACPI Sleep Button [PNP0C0E:00]: enabled
│ AT Translated Set 2 keyboard [serio0]: enabled
│ Advanced Micro Devices, Inc. [AMD] ISA bridge [0000:00:14.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] Multimedia controller [0000:c4:00.5]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.2]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:03.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:04.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c4:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c4:00.4]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.4]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.5]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.6]: enabled
│ Mobile Broadband host interface [mhi0]: enabled
│ Plug-n-play Real Time Clock [00:01]: enabled
│ Real Time Clock alarm timer [rtc0]: enabled
│ Thunderbolt domain [domain0]: enabled
│ Thunderbolt domain [domain1]: enabled
│ USB4 host controller [0-0]: enabled
└─USB4 host controller [1-0]: enabled
Somehow, disabling SLPB “ACPI Sleep Button” stopped undesired wakeups by the TrackPoint.
  looking at parent device '/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00':
    KERNELS=="PNP0C0E:00"
    SUBSYSTEMS=="acpi"
    DRIVERS=="button"
    ATTRS{hid}=="PNP0C0E"
    ATTRS{path}=="\_SB_.SLPB"
    ...
    ATTRS{power/wakeup}=="enabled"
The final udev rule is the following. It also disables wakeup events from the keyboard as a side effect, but opening the lid or pressing the power button can still wake up the system so it works for me.
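A sketch of such a rule, matching the SLPB device from the output above (the filename is arbitrary):

```
# /etc/udev/rules.d/99-disable-slpb-wakeup.rules (filename is my choice)
# disable the ACPI Sleep Button (SLPB) as a wakeup source
ACTION=="add", SUBSYSTEM=="acpi", KERNEL=="PNP0C0E:00", ATTR{power/wakeup}="disabled"
```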
After solving the headache of frequent wakeups on the T14 Gen 5 AMD, I was curious whether I could apply the same fix to the Gen 3 AMD retrospectively. Gen 3 has the following wakeup sources active out of the box.
Wakeup sources:
│ ACPI Battery [PNP0C0A:00]: enabled
│ ACPI Lid Switch [PNP0C0D:00]: enabled
│ ACPI Power Button [LNXPWRBN:00]: enabled
│ ACPI Power Button [PNP0C0C:00]: enabled
│ ACPI Sleep Button [PNP0C0E:00]: enabled
│ AT Translated Set 2 keyboard [serio0]: enabled
│ Advanced Micro Devices, Inc. [AMD] ISA bridge [0000:00:14.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.2]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:04:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:04:00.4]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.0]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.4]: enabled
│ ELAN0678:00 04F3:3195 Mouse [i2c-ELAN0678:00]: enabled
│ Mobile Broadband host interface [mhi0]: enabled
│ Plug-n-play Real Time Clock [00:01]: enabled
└─Real Time Clock alarm timer [rtc0]: enabled
Disabling the touchpad event was straightforward. The only difference from Gen 5 was the ID of the device.
When it comes to the TrackPoint or power source events, nothing was able to stop them from waking up the system, even after disabling all wakeup sources. I came across a hidden gem named amd_s2idle.py. The “S0i3/s2idle analysis script for AMD systems” is full of domain knowledge about s2idle, like where to look in /proc or /sys, how to enable debugging, and which parts of the logs are important.
By running the script, I got the following output around the unexpected wakeup.
$ sudo python3 ./amd_s2idle.py --debug-ec --duration 30
Debugging script for s2idle on AMD systems
💻 LENOVO 21CF21CFT1 (ThinkPad T14 Gen 3) running BIOS 1.56 (R23ET80W (1.56 )) released 10/28/2024 and EC 1.32
🐧 Ubuntu 24.04.1 LTS
🐧 Kernel 6.11.0-12-generic
🔋 Battery BAT0 (Sunwoda ) is operating at 90.91% of design
Checking prerequisites for s2idle
✅ Logs are provided via systemd
✅ AMD Ryzen 7 PRO 6850U with Radeon Graphics (family 19 model 44)
...
Suspending system in 0:00:02
Suspending system in 0:00:01
Started at 2025-01-04 00:46:53.063495 (cycle finish expected @ 2025-01-04 00:47:27.063532)
Collecting data in 0:00:02
Collecting data in 0:00:01
Results from last s2idle cycle
💤 Suspend count: 1
💤 Hardware sleep cycle count: 1
○ GPIOs active: ['0']
🥱 Wakeup triggered from IRQ 9: ACPI SCI
🥱 Wakeup triggered from IRQ 7: GPIO Controller
🥱 Woke up from IRQ 7: GPIO Controller
❌ Userspace suspended for 0:00:14.031448 (< minimum expected 0:00:27)
💤 In a hardware sleep state for 0:00:10.566894 (75.31%)
🔋 Battery BAT0 lost 10000 µWh (0.02%) [Average rate 2.57W]
Explanations for your system
🚦 Userspace wasn't asleep at least 0:00:30
The system was programmed to sleep for 0:00:30, but woke up prematurely.
This typically happens when the system was woken up from a non-timer based source.
If you didn't intentionally wake it up, then there may be a kernel or firmware bug
I compared all the logs generated between the events of the power button, power source, TrackPoint, and touchpad. Except for the touchpad event, everything else was coming from GPIO pin #0, and there was no further information on how to distinguish those wakeup triggers. I ended up with the drastic approach of ignoring wakeup triggers from GPIO pin #0 completely with the following kernel option.
gpiolib_acpi.ignore_wake=AMDI0030:00@0
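On Ubuntu, one way to make this stick (assuming the stock GRUB setup) is to append the option to the kernel command line in /etc/default/grub and regenerate the config:

```
# /etc/default/grub (the existing "quiet splash" is the Ubuntu default)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash gpiolib_acpi.ignore_wake=AMDI0030:00@0"
```

Then run sudo update-grub and reboot.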
And I get the line on each boot.
kernel: amd_gpio AMDI0030:00: Ignoring wakeup on pin 0
That comes with obvious downsides. The system doesn’t wake up frequently any longer, which is good. However, nothing can wake it up after it gets into suspend. Opening the lid, pressing the power button, or pressing any key is simply ignored, since all of those go to GPIO pin #0. In the end, I had to explicitly re-enable the touchpad as a wakeup source so the system can wake up by tapping the touchpad. It’s far from ideal, but the touchpad is less sensitive than the TrackPoint, so I will keep it that way.
Every so often I need to keep my system from going to sleep until a process finishes. I can’t recall how I came across systemd inhibit, but
here’s my approach and a bit of motivation.
After some fiddling (not much, really), it starts directly once I log in, and I will be using it instead of a fully fledged Plex or the like; I just want to stream some videos from time to time from my home PC to my iPad :D using VLC.
The Hack
systemd-inhibit --who=foursixnine --why="maybe there be dragons" --mode=block \
bash -c 'while systemctl --user is-active -q rygel.service; do sleep 1h; done'
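For the “starts directly once I login” part: as far as I know, rygel ships a user unit on Ubuntu, so enabling it (and checking that the inhibitor lock is actually being held) looks like this:

```shell
# start rygel at login so the inhibitor loop has something to watch
$ systemctl --user enable --now rygel.service
# list active inhibitor locks to confirm ours is registered
$ systemd-inhibit --list
```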