May 18, 2022

Meet our public sector team from May 23-25 at Walter E. Washington Convention Center, Washington, DC

Our collaboration with AWS started in 2012, making 2022 our 10th year of working together to deliver premium open source solutions in the cloud. With our Public Sector designation, our goal is to continue supporting AWS by providing security and compliance for government agencies and contractors on AWS GovCloud and all other AWS regions.

AWS Summit Washington is a three-day, public sector-led event that represents one of the best opportunities to learn about the Amazon Web Services (AWS) platform. Suitable for both veterans and newcomers, this complimentary event will give you a chance to deepen your AWS knowledge and gain new skills for designing and deploying solutions in the cloud.

Canonical will be present throughout the AWS public sector summit to discuss our solutions on the platform – such as Ubuntu Pro, a premium Ubuntu image with out-of-the-box FIPS compliance.

Canonical in the public sector

Ubuntu is the world’s most popular cloud operating system across all major public clouds, powering 50% of Linux workloads globally, and this popularity extends to the public sector as well. Ubuntu is particularly well-suited to public sector workloads for three reasons: it is secure, compliant, and economical.

  • Secure – Ubuntu is built from the ground up with security in mind. Key security features include automated updates, live patching to apply kernel CVE fixes without downtime, and up to 10 years of security maintenance.
  • Compliant – While the default configuration of Ubuntu balances usability and security, public sector users can seamlessly enable additional hardening through Ubuntu Pro to meet compliance requirements. Ubuntu supports FIPS 140-2, DISA STIG, NIST, and more.
  • Economical – Ubuntu and Ubuntu Pro provide the best value for money for running workloads on AWS, with free and paid options available along with support.

Alongside Ubuntu, we deliver secure, upstream Kubernetes, Anbox, and managed services such as Apache Kafka and Kubeflow. Meanwhile, Canonical Kubernetes is one of the most secure and cost-effective options on the market for containerisation strategies.

Public sector customers can also benefit from our large portfolio of enterprise-grade management tools to accelerate time-to-value and minimise operating costs. 

Ubuntu Pro

While the free, AWS-optimised Ubuntu server image is already the first-choice operating system for countless public sector users, we developed the premium Ubuntu Pro image to take AWS security and compliance a step further to meet the unique requirements of Federal Government organisations, agencies, and their contractors. 

The Federal Information Processing Standard (FIPS) 140-2 is a US and Canadian government data protection standard which demands that cryptographic modules be validated against exacting security standards. Ubuntu Pro FIPS includes FIPS-certified components by default (along with certified components for FedRAMP, HIPAA, PCI, and ISO) to enable compliance without any additional configuration, making it the ideal foundation for federal programs and government contractors operating on AWS.

DISA-STIG and CIS industry security benchmarks are both notoriously time-consuming to satisfy manually. Ubuntu Pro solves this challenge through the Ubuntu Security Guide (USG), which brings automation tooling to drastically simplify hardening and auditing at scale. 

We know that stability is critical for public sector organisations, which is why we back Ubuntu Pro with security updates for 10 years, with a guaranteed upgrade path. And with Ubuntu Pro, security coverage is also extended to over 27,000 additional packages, including the most important open source applications such as Apache Kafka, NGINX, MongoDB, Redis and PostgreSQL.


Canonical’s public sector team – Khurrum Khan, Kelley Riggs, Carlos Bravo, and Carlos Falsiroli – will be on hand throughout the event for in-person conversations, and they will deliver a series of hands-on demonstrations. 

The first demo will walk attendees through the process of spinning up a fresh Ubuntu Pro FIPS AMI, so you can see what out-of-the-box compliance looks like in action.

The second demo will show you how to apply DISA STIG hardening with the Ubuntu Security Guide.

Book a one-on-one meeting with us for a deep dive into open source security and compliance on AWS, or to schedule a demo.

Book a meeting

on May 18, 2022 09:00 AM

Google I/O 2022 took place last week and brought with it a host of exciting news from the world of Google, including the announcement of Flutter 3 with long-awaited Linux Desktop support!

Flutter 3 is the next big step in Flutter’s journey to enable multi-platform application development across what is now six platforms: iOS, Android, Web, Windows, macOS and Linux. It features improved performance and additional profiling in Flutter DevTools, support for Material 3, Apple silicon, accessibility services and web app life-cycles on top of the snazzy new Flutter Casual Games Toolkit, and much more.

There’s truly something for everyone in this latest release.

Linux Desktop support is production-ready

Canonical and the Flutter team have partnered closely to bring desktop support to Linux. We’ve created packages that allow for deep integration with system services like dbus, gsettings, desktop notifications and network manager, enabling everything you need to deliver a high-quality desktop experience. You can even style your app with Ubuntu’s iconic Yaru theme.

For a full overview of Canonical’s packages check out our

As of this week, Linux desktop support has moved to stable, meaning it’s production-ready and enabled immediately once you create a new application.

You can check this by running ‘flutter doctor’ and looking for the following in the output.

 [✓] Linux toolchain - develop for Linux desktop

To run your application on Linux, simply type ‘flutter run -d linux’ from the command line.

Building it is as easy as ‘flutter build linux’.

In addition, if you have an existing Flutter project that you’d like to bring to Linux, all you need to do is run the following command from the root project directory:

flutter create --platforms=linux .

And then the above commands will become available.

You can find more information on developing for desktop OSs here.

Easy authentication with FlutterFire Auth

But that’s not all: our colleagues at Invertase also have some exciting news, with FlutterFire Auth also moving to stable. FlutterFire connects your desktop app to Google’s Firebase services, and FlutterFire Auth enables developers on Windows and Linux to authenticate users via email, phone number or various OAuth providers such as Google and GitHub.

Check out the FlutterFire Auth demo from our previous blog post.

To add FlutterFire Auth support to your project, simply run the following commands in your project root folder:

flutter pub add firebase_core_desktop
flutter pub add firebase_auth_desktop

You can also add it to your existing project with:

flutter pub add firebase_core
flutter pub add firebase_auth

For more details on how to take advantage of FlutterFire Auth, check out Invertase’s latest blog.

For more information on Invertase’s future plans for FlutterFire, you can follow their 2022 roadmap on their GitHub here.

Bring your Flutter desktop app to the Snap Store

The Snap Store serves millions of Linux users across 41 distributions!

So you’ve got a great Flutter app and you’ve built it for Linux. The next step is to get it in front of users!

The Snap Store is available on any Linux distribution running snapd and delivers a range of cutting-edge development tools and key productivity apps to millions of users all over the world. From IDEs like VS Code and Android Studio to messaging apps like Telegram, Discord and Slack, as well as tools for creators like Blender, Shotcut and OBS Studio. We’ve even got gamers covered with our new Steam snap.

Snaps are sandboxed, easily updated, and bundle all the dependencies your apps need to run across a range of Linux distributions. Packaging your Flutter app as a snap is simple, and the Flutter docs have a handy how-to guide to get you started.

Once you’ve published your snap, let us know in the snapcraft forum or the Ubuntu Desktop discourse where many community members are already experimenting with Flutter on Linux!

Explore Flutter 3

Everything we’ve talked about in this post is just the tip of the iceberg when it comes to the latest release of Flutter. To learn more, check out the official Flutter I/O page for a comprehensive breakdown of its new features along with a range of helpful talks and tutorials to help developers get the most out of this versatile toolset!

At Canonical we’re looking forward to welcoming a whole new community of Flutter developers to the wonderful world of Linux. It’s great to have you here!

on May 18, 2022 08:11 AM

May 16, 2022

Welcome to the Ubuntu Weekly Newsletter, Issue 735 for the week of May 8 – 14, 2022. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on May 16, 2022 11:14 PM

May 10, 2022

Here’s my (thirty-first) monthly but brief update about the activities I’ve done in the F/L/OSS world.


This was my 40th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

There’s a bunch of things I did this month but mostly non-technical, now that DC22 is around the corner. Here are the things I did:

Debian Uploads

  • Helped Andrius w/ FTBFS for php-text-captcha, reported via #977403.
    • I fixed the same in Ubuntu a couple of months ago and they copied over the patch here.

Other $things:

  • Volunteering for DC22 Content team.
  • Leading the Bursary team w/ Paulo.
  • Answering a bunch of questions of referees and attendees around bursary.
  • Being an AM for Arun Kumar, process #1024.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.


This was my 15th month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

I mostly worked on different things, I guess.

I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from the fall, as I was doing before. :D

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my thirty-first month as a Debian LTS and twentieth month as a Debian ELTS paid contributor.
I worked for 23.25 hours for LTS and 20.00 hours for ELTS.

LTS CVE Fixes and Announcements:

  • Issued DLA 2976-1, fixing CVE-2022-1271, for gzip.
    For Debian 9 stretch, these problems have been fixed in version 1.6-5+deb9u1.
  • Issued DLA 2977-1, fixing CVE-2022-1271, for xz-utils.
    For Debian 9 stretch, these problems have been fixed in version 5.2.2-1.2+deb9u1.
  • Working on src:tiff and src:mbedtls to fix the issues, still waiting for more issues to be reported, though.
  • Looking at src:mutt CVEs. Haven’t had the time to complete but shall roll out next month.

ELTS CVE Fixes and Announcements:

  • Issued ELA 593-1, fixing CVE-2022-1271, for gzip.
    For Debian 8 jessie, these problems have been fixed in version 1.6-4+deb8u1.
  • Issued ELA 594-1, fixing CVE-2022-1271, for xz-utils.
    For Debian 8 jessie, these problems have been fixed in version 5.1.1alpha+20120614-2+deb8u1.
  • Issued ELA 598-1, fixing CVE-2019-16935, CVE-2021-3177, and CVE-2021-4189, for python2.7.
    For Debian 8 jessie, these problems have been fixed in version 2.7.9-2-ds1-1+deb8u9.
  • Working on src:tiff and src:beep to fix the issues, still waiting for more issues to be reported for src:tiff and src:beep is a bit of a PITA, though. :)

Other (E)LTS Work:

  • Triaged gzip, xz-utils, tiff, beep, python2.7, python-django, and libgit2.
  • Signed up to be a Freexian Collaborator! \o/
  • Read through some bits around that.
  • Helped and assisted new contributors joining Freexian.
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
  • General and other discussions on LTS private and public mailing list.
  • Attended monthly Debian meeting. Held on Jitsi this month.

Debian LTS Survey

I’ve spent 18 hours on the LTS survey on the following bits:

  • Rolled out the announcement. Started the survey.
  • Answered a bunch of queries, people asked via e-mail.
  • Looked at another bunch of tickets.
  • Sent a reminder and fixed a few things here and there.
  • Gave a status update during the meeting.
  • Extended the duration of the survey.

Until next time.
:wq for today.

on May 10, 2022 05:41 AM

May 09, 2022

Welcome to the Ubuntu Weekly Newsletter, Issue 734 for the week of May 1 – 7, 2022. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on May 09, 2022 10:34 PM

May 05, 2022

Full Circle Weekly News #259

Full Circle Magazine

Release of the GNU Coreutils 9.1 set of core system utilities:

LXQt 1.1 User Environment Released:

Rsync 3.2.4 Released:

Celestial shuns snaps:

The SDL developers have canceled the default Wayland switch in the 2.0.22 release:

New versions of Box86 and Box64 emulators that allow you to run x86 games on ARM systems:

Release of the QEMU 7.0 emulator:

PPA proposed for Ubuntu to improve Wayland support in Qt:

Movement to include proprietary firmware in the Debian distribution:

Git 2.36 source control released:

oVirt 4.5.0 Virtualization Infrastructure Management System Release:

New versions of OpenWrt 21.02.3 and 19.07.10:

Ubuntu 22.04 LTS distribution release:

Valve has released Proton 7.0-2, for running Windows games on Linux:

Release of OpenBSD 7.1:

Summary of results of the election of the leader of the Debian project:

New release of the Silero speech synthesis system:

Release of KDE Gear 22.04:

Full Circle Magazine
Host:, @bardictriad
Bumper: Canonical
Theme Music: From The Dust - Stardust
on May 05, 2022 01:13 PM

May 04, 2022

Contributing to any open-source project is a great way to spend a few hours each month. I started more than 10 years ago, and it has ultimately shaped my career in ways I couldn’t have imagined!

GitLab logo as cover

The new GitLab logo, just announced on the 27th April 2022.

Nowadays, my contributions focus mostly on GitLab, so you will see many references to it in this blog post, but the content is quite generalizable; I would like to share my experience to highlight why you should consider contributing to an open-source project.

Writing blog posts, tweeting, and helping foster the community are nice ways to contribute to a project ;-)

And contributing doesn’t mean only coding: there are countless ways to help an open-source project: translating it to different languages, reporting issues, designing new features, writing the documentation, offering support on forums and StackOverflow, and so on.

Before deep diving into this wall of text, be aware there are mainly three parts to this blog post, after this introduction: a context section, where I describe my personal experience with open source, and what it means to me. Then, a nice list of reasons to contribute to any project. In closing, there are some tips on how to start contributing, both in general and specific to GitLab.


Ten years ago, I was fresh out of high school, with (almost) no knowledge about IT: however, I found out that I had a massive passion for it, so I enrolled in a Computer Engineering degree at university (boring!), and I started contributing to Ubuntu (cool!). I began with the Italian Local Community, and I soon moved to Ubuntu Touch.

I often considered rewriting that old article: however, I have a strange attachment to it as it is, with all those English mistakes. It was one of the first blog posts I had written, and it was really well received! We all know how it ended, but it still was a fantastic ride, with a lot of great moments: just take a look at the archive of this blog, and you can see the passion and the enthusiasm I had. I was so enthusiastic that I wrote a similar blog post to this one! I think it highlights really well some considerable differences 10 years make.

Back then, I wasn’t working, just studying, so I had a lot of spare time. My English was way worse. I was at the beginning of my journey in the computer world, and Ubuntu has ultimately shaped a big part of it. My knowledge was very limited, and I had never worked before. Contributing to Ubuntu gave me a glimpse of the real world; I met outstanding engineers who taught me a lot, and it boosted my CV, helping me land my first job.

Advocacy, as in this blog post, is a great way to contribute! You spread awareness, which helps find new contributors, and maybe inspires some young student to try it out! Since then, I completed a master’s degree in C.S., worked for different companies in three different countries, and became a professional. Nowadays, my contributions to open source are more sporadic (adulthood, yay), but given how much it meant to me, I am still a big fan, and I try to contribute when I can, and how I can.

Why contributing


During my years contributing to open-source software, I’ve met countless incredible people, with some of whom I’ve become friends. In the old blog post I mentioned David: in the last 9 years we have stayed in touch, and met on different occasions in different cities: last time was as recent as last summer. Back then, he was a manager in the Ubuntu Community Team at Canonical, and he has since become Director of Community Relations at GitLab. Small world!

The Ubuntu Touch Community Team in Malta, in 2014

The Ubuntu Touch Community Team in Malta, in 2014. It has been an incredible week, sponsored by Canonical!

One interesting thing is that people contribute to open-source projects from their homes, all around the world: when I travel, I usually know somebody living in my destination city, so I always have at least one night booked for a beer with somebody I’ve met only online; it’s a pleasure to speak with people from different backgrounds, and to get a glimpse into their lives, all united by one common passion.


Having fun is important! You cannot spend your leisure time getting bored or annoyed: contributing to open source is fun ‘cause you pick the problems you would like to work on, without all the bureaucracy and meetings that are often part of your daily job. You can be challenged, feel useful, and improve a product, without any manager on your shoulders, and at your own pace.

Being up-to-date on how things evolve

For example, the GitLab Handbook is a precious collection of resources, ideas, and methodologies on how to run a 1000 people company in a transparent, full remote, way. It’s a great reading, with a lot of wisdom.

Contributing to a project typically gives you an idea of how the teams behind it work, which technologies they use, and which methodologies. Many open-source projects use bleeding-edge technologies, or draw a path. Being in contact with new ideas is a great way to know where the industry is headed, and what the latest news is: this is especially true if you hang out in the channels where the community meets, be they Discord, forums, or IRC (well, IRC is not really bleeding-edge, but it is fun).


When contributing in an area that doesn’t match your expertise, you always learn something new: reviews are usually precise and on point, and projects of a remarkable size commonly have a coaching team that helps you start contributing, and guides you on how to land your first patches.

In GitLab, if you need help merging your code, there are the Merge Request Coaches! And for any type of help, you can always join Gitter, ask on the forum, or write to the dedicated email address.

Feel also free to ping me directly if you want some general guidance!

Giving back

I work as a Platform Engineer. My job is built on an incredible amount of open-source libraries, amazing FOSS services, and I basically have just to glue together different pieces. When I find some rough edge that could be improved, I try to do so.

Nowadays, I find it crucial to have well-maintained documentation, so after I have achieved something complex, I usually go back and try to improve the documentation where it is lacking. It is my tiny way of saying thanks, and giving back to a world that has really shaped my career.

This is also what most of my blog posts are about: after having completed something I spent real effort on, I find it nice to be able to share that information. Every so often, I find myself years later following my own guide, and I really appreciate it when other people find the content useful.


Who doesn’t like swag? :-) Numerous projects have delightful swag, starting from stickers, that they like to share with the whole community. Of course, it shouldn’t be your main driver, ‘cause you will soon notice that it is ultimately not worth the amount of time you spend contributing, but it is charming to have GitLab socks!

A GitLab branded mechanical keyboard

A GitLab branded mechanical keyboard, courtesy of the GitLab's security team! This very article has been typed with it!


I hope I inspired you to contribute to some open-source project (maybe GitLab!). Now, let’s talk about some small tricks on how to begin easily.

Find something you are passionate about

You must find a project you are passionate about, and that you use frequently. Looking forward to a release, knowing that your contributions will be included, is wonderfully satisfying, and can really push you to do more.

Moreover, if you already know the project you want to contribute to, you probably already know the biggest pain points, and where the project needs some contributions.

Start small and easy

You don’t need to do gigantic contributions to begin. Find something tiny, so you can get familiar with the project workflows, and how contributions are received.

Launchpad and bazaar instead of GitLab and git — down the memory lane! My journey with Ubuntu started correcting a typo in a README, and here I am, years later, having contributed to dozens of projects, and having a career in the C.S. field. Back then, I really had no idea of what my future would have held.

For GitLab, you can take a look at the issues marked as “good for new contributors”. They are designed to be addressed quickly, and onboard new people in the community. In this way, you don’t have to focus on the difficulties of the task at hand, but you can easily explore how the community works.

Writing issues is a good start

Writing high-quality issues is a great way to start contributing: maintainers of a project are not always aware of how the software is used, and cannot be aware of every issue. If you know that something could be improved, write it down: spend some time explaining what happens, what you expect, and how to reproduce the problem, and maybe suggest some solutions as well! Perhaps the first issue you write down could be the very first issue you resolve.

Not much time required!

Contributing to a project doesn’t necessarily require a lot of time. When I was younger, I definitely dedicated way more time to open-source projects, implementing gigantic features. Nowadays, I don’t do that anymore (life is much more than computers), but I like to think that my contributions are still useful. Still, I don’t spend more than a couple of hours a month, depending on my schedule, and on how much it rains (yep, in winter I definitely contribute more than in summer).

GitLab is super easy

Do you use GitLab? Then you should undoubtedly try to contribute to it. It is easy, it is fun, and there are many ways. Take a look at this guide, hang out on Gitter, and see you around. ;-)

Next week (9th-13th May 2022) there is also a GitLab Hackathon! It is a really fun and easy way to start contributing: many people are available to help you, there are video sessions talking about contributing, and just by making a small contribution you will receive a pretty prize.

And if I was able to do it with my few contributions, you can as well! And in time, if you are consistent in your contributions, you can become a GitLab Hero! How cool is that?

I really hope this wall of text made you consider contributing to an open-source project. If you have any question, or feedback, or if you would like some help, please leave a comment below, tweet me @rpadovani93 or write me an email at


on May 04, 2022 12:00 AM

May 02, 2022

Sorry, I should have posted this weeks ago to save others some time.

If you are running openconnect-sso to connect to a Cisco anyconnect VPN, then when you upgrade to Ubuntu Jammy, openssl 3.0 may stop openconnect from working. The easiest way to work around this is to use a custom configuration file as follows:

cat > $HOME/ssl.cnf << EOF
openssl_conf = openssl_init

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
Options = UnsafeLegacyRenegotiation
EOF

Then use this configuration file (only) when running openconnect:

OPENSSL_CONF=~/ssl.cnf openconnect-sso

on May 02, 2022 02:39 PM

Over the last few weeks, GStreamer’s RTP stack got a couple of new and quite useful features. As it is difficult to configure, mostly because there are so many different possible configurations, I decided to write about this a bit with some example code.

The features are RFC 6051-style rapid synchronization of RTP streams, which can be used for inter-stream (e.g. audio/video) synchronization as well as inter-device (i.e. network) synchronization, and the ability to easily retrieve absolute sender clock times per packet on the receiver side.

Note that each of these was already possible before with GStreamer via different mechanisms with different trade-offs. Obviously, not being able to have working audio/video synchronization would simply not be acceptable, and I previously talked about how to do inter-device synchronization with GStreamer before, for example at the GStreamer Conference 2015 in Düsseldorf.

The example code below will make use of the GStreamer RTSP Server library but can be applied to any kind of RTP workflow, including WebRTC. It is written in Rust, but the same can also be achieved in any other language. The full code can be found in this repository.

And for reference, the merge requests to enable all this are [1], [2] and [3]. You probably don’t want to backport those to an older version of GStreamer though as there are dependencies on various other changes elsewhere. All of the following needs at least GStreamer from the git main branch as of today, or the upcoming 1.22 release.

Baseline Sender / Receiver Code

The starting point of the example code can be found here in the baseline branch. All the important steps are commented so it should be relatively self-explanatory.


The sender is starting an RTSP server on the local machine on port 8554 and provides a media with H264 video and Opus audio on the mount point /test. It can be started with

$ cargo run -p rtp-rapid-sync-example-send

After starting the server it can be accessed via GStreamer with e.g. gst-play-1.0 rtsp:// or similarly via VLC or any other software that supports RTSP.

This does not do anything special yet but lays the foundation for the following steps. It creates an RTSP server instance with a custom RTSP media factory, which in turn creates custom RTSP media instances. All this is not needed at this point yet but will allow for the necessary customization later.

One important aspect here is that the base time of the media’s pipeline is set to zero.
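
A minimal sketch of what this looks like, assuming a handle to the media’s gst::Pipeline named pipeline (the exact snippet is not reproduced in this copy of the post):

```rust
// Force the base time to zero so that running time equals the pipeline
// clock time, and set the start time to NONE so the base time is not
// re-distributed on state changes.
pipeline.set_base_time(gst::ClockTime::ZERO);
pipeline.set_start_time(gst::ClockTime::NONE);
```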


This allows the timeoverlay element that is placed in the video part of the pipeline to render the clock time over the video frames. We’re going to use this later to confirm on the receiver that the clock time on the sender and the one retrieved on the receiver are the same.

let video_overlay = gst::ElementFactory::make("timeoverlay", None)
    .context("Creating timeoverlay")?;
video_overlay.set_property_from_str("time-mode", "running-time");

It actually only supports rendering the running time of each buffer, but in a live pipeline with the base time set to zero the running time and pipeline clock time are the same. See the documentation for some more details about the time concepts in GStreamer.

Overall this creates the following RTSP stream producer bin, which will be used also in all the following steps:


The receiver is a simple playbin pipeline that plays an RTSP URI given via command-line parameters and runs until the stream is finished or an error has happened.

It can be run with the following once the sender is started

$ cargo run -p rtp-rapid-sync-example-send -- "rtsp://"

Please don’t forget to replace the IP with the IP of the machine that is actually running the server.

All the code should be familiar to anyone who ever wrote a GStreamer application in Rust, except for one part that might need a bit more explanation

playbin.connect_closure(
    "source-setup",
    false,
    glib::closure!(|_playbin: &gst::Pipeline, source: &gst::Element| {
        source.set_property("latency", 40u32);
    }),
);

playbin is going to create an rtspsrc, and at that point it will emit the source-setup signal so that the application can do any additional configuration of the source element. Here we’re connecting a signal handler to that signal to do exactly that.

By default rtspsrc introduces 2 seconds of latency, which is a lot more than what is usually needed. For live, non-VOD RTSP streams this value should be around the network jitter, and here we’re configuring it to 40 milliseconds.

Retrieval of absolute sender clock times

Now as the first step we’re going to retrieve the absolute sender clock times for each video frame on the receiver. They will be rendered by the receiver at the bottom of each video frame and will also be printed to stdout. The changes between the previous version of the code and this version can be seen here and the final code here in the sender-clock-time-retrieval branch.

When running the sender and receiver as before, the video from the receiver should look similar to the following

The upper time that is rendered on the video frames is rendered by the sender, the bottom time is rendered by the receiver and both should always be the same unless something is broken here. Both times are the pipeline clock time when the sender created/captured the video frame.

In this configuration the absolute clock times of the sender are provided to the receiver via the NTP / RTP timestamp mapping provided by the RTCP Sender Reports. That’s also the reason why it takes about 5s for the receiver to know the sender’s clock time as RTCP packets are not scheduled very often and only after about 5s by default. The RTCP interval can be configured on rtpbin together with many other things.
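The mapping itself is just linear arithmetic: given one (NTP time, RTP timestamp) reference pair from a Sender Report and the stream's clock rate, any later RTP timestamp can be converted to the sender's clock time. A minimal sketch in plain Rust (the function name and structure are mine for illustration, not GStreamer API — rtpbin performs this conversion internally):

```rust
// Convert an RTP timestamp to the sender's clock time in nanoseconds,
// given one (NTP time, RTP timestamp) reference pair from an RTCP
// Sender Report. Illustrative only; rtpbin does this internally.
fn sender_clock_time_ns(ntp_ref_ns: u64, rtp_ref: u32, rtp: u32, clock_rate: u32) -> u64 {
    // RTP timestamps are 32 bit and wrap around; wrapping_sub handles the
    // wraparound as long as the two timestamps are close enough together.
    let ticks = rtp.wrapping_sub(rtp_ref) as u64;
    ntp_ref_ns + ticks * 1_000_000_000 / clock_rate as u64
}

fn main() {
    // 90 kHz video clock: 90_000 ticks after the reference == 1 second later.
    let t = sender_clock_time_ns(1_000_000_000, 0, 90_000, 90_000);
    println!("{t}"); // 2000000000
}
```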


On the sender-side the configuration changes are rather small and not even absolutely necessary.

rtpbin.set_property_from_str("ntp-time-source", "clock-time");

By default the NTP time used in the RTCP packets is based on the local machine’s walltime clock converted to the NTP epoch. While this works fine, this is not the clock that is used for synchronizing the media and as such there will be drift between the RTP timestamps of the media and the NTP time from the RTCP packets, which will be reset every time the receiver receives a new RTCP Sender Report from the sender.

Instead, we configure rtpbin here to use the pipeline clock as the source for the NTP timestamps used in the RTCP Sender Reports. This doesn’t give us (by default at least, see later) an actual NTP timestamp but it doesn’t have the drift problem mentioned before. Without further configuration, in this pipeline the used clock is the monotonic system clock.

rtpbin.set_property("rtcp-sync-send-time", false);

rtpbin normally uses the time when a packet is sent out for the NTP / RTP timestamp mapping in the RTCP Sender Reports. This is changed with this property to instead use the time when the video frame / audio sample was captured, i.e. it does not include all the latency introduced by encoding and other processing in the sender pipeline.

This doesn’t make any big difference in this scenario but usually one would be interested in the capture clock times and not the send clock times.


On the receiver-side there are a few more changes. First of all we have to opt-in to rtpjitterbuffer putting a reference timestamp metadata on every received packet with the sender’s absolute clock time.

    glib::closure!(|_playbin: &gst::Pipeline, source: &gst::Element| {
        source.set_property("latency", 40u32);
        source.set_property("add-reference-timestamp-meta", true);
    })

rtpjitterbuffer will start putting the metadata on packets once it knows the NTP / RTP timestamp mapping, i.e. after the first RTCP Sender Report is received in this case. Between the Sender Reports it is going to interpolate the clock times. The normal timestamps (PTS) on each packet are not affected by this and are still based on whatever clock is used locally by the receiver for synchronization.

To actually make use of the reference timestamp metadata we add a timeoverlay element as video-filter on the receiver:

let timeoverlay =
    gst::ElementFactory::make("timeoverlay", None).context("Creating timeoverlay")?;

timeoverlay.set_property_from_str("time-mode", "reference-timestamp");
timeoverlay.set_property_from_str("valignment", "bottom");

pipeline.set_property("video-filter", &timeoverlay);

This will then render the sender’s absolute clock times at the bottom of each video frame, as seen in the screenshot above.

And last we also add a pad probe on the sink pad of the timeoverlay element to retrieve the reference timestamp metadata of each video frame and print the sender’s clock time to stdout:

let sinkpad = timeoverlay
    .static_pad("video_sink")
    .expect("Failed to get timeoverlay sinkpad");

sinkpad
    .add_probe(gst::PadProbeType::BUFFER, |_pad, info| {
        if let Some(gst::PadProbeData::Buffer(ref buffer)) = info.data {
            if let Some(meta) = buffer.meta::<gst::ReferenceTimestampMeta>() {
                println!("Have sender clock time {}", meta.timestamp());
            } else {
                println!("Have no sender clock time");
            }
        }

        gst::PadProbeReturn::Ok
    })
    .expect("Failed to add pad probe");

Rapid synchronization via RTP header extensions

The main problem with the previous code is that the sender’s clock times are only known once the first RTCP Sender Report is received by the receiver. There are many ways to configure rtpbin to make this happen faster (e.g. by reducing the RTCP interval or by switching to the AVPF RTP profile) but in any case the information would be transmitted outside the actual media data flow and it can’t be guaranteed that it is actually known on the receiver from the very first received packet onwards. This is of course not a problem in every use-case, but for the cases where it is there is a solution for this problem.

RFC 6051 defines an RTP header extension that allows transmitting the NTP timestamp corresponding to an RTP packet directly together with that very packet. And that’s what the next changes to the code are making use of.
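The 64-bit variant of that extension carries the NTP timestamp in the classic 32.32 fixed-point format: seconds since the NTP epoch in the upper 32 bits, fractional seconds in the lower 32. A hedged sketch of decoding those 8 payload bytes into seconds and nanoseconds (plain Rust for illustration; GStreamer's payloaders and rtpbin do this for you):

```rust
// Decode the 8-byte payload of the RFC 6051 64-bit NTP time header
// extension: a big-endian 32.32 fixed-point NTP timestamp.
// Illustrative only; GStreamer handles this internally.
fn decode_ntp64(payload: [u8; 8]) -> (u32, u64) {
    let seconds = u32::from_be_bytes([payload[0], payload[1], payload[2], payload[3]]);
    let fraction = u32::from_be_bytes([payload[4], payload[5], payload[6], payload[7]]);
    // fraction / 2^32 seconds, expressed in nanoseconds.
    let nanos = (fraction as u64 * 1_000_000_000) >> 32;
    (seconds, nanos)
}

fn main() {
    // A fraction of 0x8000_0000 is exactly half a second.
    let (s, ns) = decode_ntp64([0, 0, 0, 2, 0x80, 0, 0, 0]);
    println!("{s}.{ns:09}"); // 2.500000000
}
```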

The changes between the previous version of the code and this version can be seen here and the final code here in the rapid-synchronization branch.


To add the header extension on the sender-side it is only necessary to add an instance of the corresponding header extension implementation to the payloaders.

let hdr_ext = gst_rtp::RTPHeaderExtension::create_from_uri(
    "urn:ietf:params:rtp-hdrext:ntp-64",
)
.context("Creating NTP 64-bit RTP header extension")?;
hdr_ext.set_id(1);
video_pay.emit_by_name::<()>("add-extension", &[&hdr_ext]);

This first instantiates the header extension based on the uniquely defined URI for it, then sets its ID to 1 (see RFC 5285) and then adds it to the video payloader. The same is then done for the audio payloader.

By default this will add the header extension to every RTP packet that has a different RTP timestamp than the previous one. In other words: on the first packet that corresponds to an audio or video frame. Via properties on the header extension this can be configured but generally the default should be sufficient.


On the receiver-side no changes would actually be necessary. The use of the header extension is signaled via the SDP (see RFC 5285) and it will be automatically made use of inside rtpbin as another source of NTP / RTP timestamp mappings in addition to the RTCP Sender Reports.

However, we configure one additional property on rtpbin:

    glib::closure!(|_rtspsrc: &gst::Element, rtpbin: &gst::Element| {
        rtpbin.set_property("min-ts-offset", gst::ClockTime::from_mseconds(1));
    })

Inter-stream audio/video synchronization

The reason for configuring the min-ts-offset property on the rtpbin is that the NTP / RTP timestamp mapping is not only used for providing the reference timestamp metadata but it is also used for inter-stream synchronization by default. That is, for getting correct audio / video synchronization.

With RTP alone there is no mechanism to synchronize multiple streams against each other, as the RTP timestamps of different streams have no correlation to each other. This is not too much of a problem, as usually the packets for audio and video are received approximately at the same time, but there’s still some inaccuracy in there.

One approach to fix this is to use the NTP / RTP timestamp mapping for each stream, either from the RTCP Sender Reports or from the RTP header extension, and that’s what is made use of here. And because the mapping is provided very often via the RTP header extension, but the RTP timestamps are only accurate up to the clock rate (1/90000 s for video and 1/48000 s for audio in this case), we configure a threshold of 1 ms for adjusting the inter-stream synchronization. Without this it would be adjusted almost continuously by a very small amount, back and forth.
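To put numbers on that granularity: one RTP tick is far smaller than the 1 ms threshold, which is why the threshold comfortably absorbs the per-packet rounding noise without masking real offsets. A quick plain-Rust check (illustrative only):

```rust
// Duration of one RTP timestamp tick in milliseconds for a given clock rate.
fn tick_ms(clock_rate: u32) -> f64 {
    1000.0 / clock_rate as f64
}

fn main() {
    println!("video tick: {:.4} ms", tick_ms(90_000)); // 0.0111 ms
    println!("audio tick: {:.4} ms", tick_ms(48_000)); // 0.0208 ms
}
```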

Other approaches for inter-stream synchronization are provided by RTSP itself before streaming starts (via the RTP-Info header), but due to a bug this is currently not made use of by GStreamer.

Yet another approach would be via the clock information provided by RFC 7273, about which I already wrote previously and which is also supported by GStreamer. This also allows inter-device network synchronization and is used for that purpose as part of e.g. AES67, Ravenna, SMPTE 2022 / 2110 and many other protocols.

Inter-device network synchronization

Now for the last part, we’re going to add actual inter-device synchronization to this example. The changes between the previous version of the code and this version can be seen here and the final code here in the network-sync branch. This does not use the clock information provided via RFC 7273 (which would be another option) but uses the same NTP / RTP timestamp mapping that was discussed above.

When starting the receiver multiple times on different (or the same) machines, each of them should play back the media synchronized to each other and exactly 2 seconds after the corresponding audio / video frames are produced on the sender.

For this, both the sender and all receivers are using an NTP clock instead of the local monotonic system clock for media synchronization (i.e. as the pipeline clock). Instead of an NTP clock it would also be possible to use any other mechanism for network clock synchronization, e.g. PTP or the GStreamer netclock.

// The NTP server here is an example; use whichever server is appropriate
// for your deployment.
let clock = gst_net::NtpClock::new(None, "pool.ntp.org", 123, gst::ClockTime::ZERO);

println!("Syncing to NTP clock");
clock
    .wait_for_sync(gst::ClockTime::from_seconds(5))
    .context("Syncing NTP clock")?;
println!("Synced to NTP clock");

This code instantiates a GStreamer NTP clock and then synchronously waits up to 5 seconds for it to synchronize. If that fails then the application simply exits with an error.


On the sender side all that is needed is to configure the RTSP media factory, and as such the pipeline used inside it, to use the NTP clock
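A sketch of that configuration, assuming a `clock` created as above and a `factory` of type `gst_rtsp_server::RTSPMediaFactory` (method name per the gstreamer-rs bindings):

```rust
// Make the RTSP media factory, and thus every pipeline it creates,
// use the shared NTP clock as its pipeline clock.
factory.set_clock(Some(&clock));
```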


This causes all media inside the sender’s pipeline to be synchronized according to this NTP clock and to also use it for the NTP timestamps in the RTCP Sender Reports and the RTP header extension.


On the receiver side the same has to happen
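A sketch of the corresponding receiver-side configuration (here `pipeline` is the playbin; `use_clock` forces a specific pipeline clock instead of letting GStreamer select one automatically):

```rust
// Use the shared NTP clock on the receiver as well.
pipeline.use_clock(Some(&clock));
```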


In addition a couple more settings have to be configured on the receiver though. First of all we configure a static latency of 2 seconds on the receiver’s pipeline.
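A sketch of that configuration, using `gst::Pipeline::set_latency` (the choice of 2 seconds is discussed below):

```rust
// Render every frame exactly 2 seconds after its sender-side capture time.
pipeline.set_latency(gst::ClockTime::from_seconds(2));
```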


This is necessary as GStreamer can’t know the latency of every receiver (e.g. different decoders might be used), and also because the sender latency can’t be automatically known. Each audio / video frame will be timestamped on the receiver with the NTP timestamp of when it was captured / created, but since then all the latency of the sender, the network and the receiver pipeline has passed, and for this some compensation must happen.

Which value to use here depends a lot on the overall setup, but 2 seconds is a (very) safe guess in this case. The value only has to be larger than the sum of sender, network and receiver latency and in the end has the effect that the receiver is showing the media exactly that much later than the sender has produced it.

And last we also have to tell rtpbin that

  1. sender and receiver clock are synchronized to each other, i.e. in this case both are using exactly the same NTP clock, and that no translation to the pipeline’s clock is necessary, and
  2. that the outgoing timestamps on the receiver should be exactly the sender timestamps and that this conversion should happen based on the NTP / RTP timestamp mapping

source.set_property_from_str("buffer-mode", "synced");
source.set_property("ntp-sync", true);

And that’s it.

A careful reader will also have noticed that all of the above would also work without the RTP header extension, but then the receivers would only be synchronized once the first RTCP Sender Report is received. That’s what the test-netclock.c / test-netclock-client.c example from the GStreamer RTSP server is doing.

As usual with RTP, the above is by far not the only way of doing this and GStreamer also supports various other synchronization mechanisms. Which one is the correct one for a specific use-case depends on a lot of factors.

on May 02, 2022 01:00 PM

April 29, 2022

This month:
* Command & Conquer
* How-To : Python, Blender and Latex
* Graphics : Inkscape
* Everyday Ubuntu : KDE Science
* Micro This Micro That
* Review : CutefishOS
* Review : FreeOffice 2021
* My Opinion : First Look At Ubuntu 22.04
* UBports Touch
* Ubuntu Games : Growbot
plus: News, My Story, The Daily Waddle, Q&A, and more.


Get it while it’s hot!

on April 29, 2022 07:35 PM

April 26, 2022

Ubuntu MATE 22.04 LTS is the culmination of 2 years of continual improvement 😅 to Ubuntu and MATE Desktop. As is tradition, the LTS development cycle has a keen focus on eliminating paper 🧻 cuts 🔪 but we’ve jammed in some new features and a fresh coat of paint too 🖌 The following is a summary of what’s new since Ubuntu MATE 21.10 and some reminders of how we got here from 20.04. Read on to learn more 🧑‍🎓

Thank you! 🙇

I’d like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this LTS release 👏 From reporting bugs, submitting translations, providing patches, contributing to our crowd funding, developing new features, creating artwork, offering community support, actively testing and providing QA feedback to writing documentation or creating this fabulous website. Thank you! Thank you all for getting out there and making a difference! 💚

Ubuntu MATE 22.04 LTS (Jammy Jellyfish) - Mutiny layout with Yaru-MATE-dark

What’s changed?

Here are the highlights of what’s changed recently.

MATE Desktop 1.26.1 🧉

Ubuntu MATE 22.04 features MATE Desktop 1.26.1. MATE Desktop 1.26.0 was introduced in 21.10 and benefits from significant effort 😅 in fixing bugs 🐛 in MATE Desktop, optimising performance ⚡ and plugging memory leaks. MATE Desktop 1.26.1 addresses the bugs we discovered following the initial 1.26.0 release. Our community also fixed some bugs in Plank and Brisk Menu 👍 and also fixed the screen reader during installs for visually impaired users 🥰 In all, over 500 bugs have been addressed in this release 🩹

Yaru 🎨

Ubuntu MATE 21.04 was the first release to ship with a MATE variant of the Yaru theme. A year later and we’ve been working hard with members of the Yaru and Ubuntu Desktop teams to bring full MATE compatibility to upstream Yaru, including all the accent colour varieties. All reported bugs 🐞 in the Yaru implementation for MATE have also been fixed 🛠

Yaru Themes in Ubuntu MATE 22.04 LTS

Ubuntu MATE 22.04 LTS ships with all the Yaru themes, including our own “chelsea cucumber” version 🥒 The legacy Ambiant/Radiant themes are no longer installed by default and neither are the stock MATE Desktop themes. We’ve added an automatic settings migration to transition users who upgrade to an appropriate Yaru MATE theme.

Cherries on top 🍒

In collaboration with Paul Kepinski 🇫🇷 (Yaru team) and Marco Trevisan 🇮🇹 (Ubuntu Desktop team) we’ve added dark/light panels and panel icons to Yaru for MATE Desktop and Unity. I’ve added a collection of new dark/light panel icons to Yaru for popular apps with indicators such as Steam, Dropbox, uLauncher, RedShift, Transmission, Variety, etc.

Light and Dark panels

I’ve added patches 🩹 to the Appearance Control Center that apply theme changes to Plank (the dock) and Pluma (text editor) and correctly toggle the colour scheme preference for GNOME 42 apps. When you choose a dark theme, everything will go dark in unison 🥷 and vice versa.

So, Ubuntu MATE 22.04 LTS is now using everything Yaru/Suru has to offer. 🎉

AI Generated wallpapers

My friend Simon Butcher 🇬🇧 is Head of Research Platforms at Queen Mary University of London managing the Apocrita HPC cluster service. He’s been creating AI 🤖 generated art using bleeding edge CLIP guided diffusion models 🖌 The results are pretty incredible and we’ve included the 3 top voted “Jammy Jellyfish” in our wallpaper selection as their vivid and vibrant styles complement the Yaru accent colour theme options very nicely indeed 😎

If you want the complete set, here’s a tarball of all 8 wallpapers at 3840x2160:

Ubuntu MATE stuff 🧉

Ubuntu MATE has a few distinctive apps and integrations of its own; here’s a run down of what’s new and shiny ✨

MATE Tweak

Switching layouts with MATE Tweak is its most celebrated feature. We’ve improved the reliability of desktop layout switching and restoring custom layouts is now 100% accurate 💯

Having your desktop your way in Ubuntu MATE

We’ve removed mate-netbook from the default installation of Ubuntu MATE and as a result the Netbook layout is no longer available. We did this because mate-maximus, a component of mate-netbook, is the cause of some compatibility issues with client side decorated (CSD) windows. There are still several panel layouts that offer efficient resolution use 📐 for those who need it.

MATE Tweak has refreshed its support for 3rd party compositors. Support for Compton has been dropped, as it is no longer actively maintained, and comprehensive support for picom has been added. picom has three compositor options: Xrender, GLX and Hybrid. All three can be selected via MATE Tweak, as the performance and compatibility of each varies depending on your hardware. Some people choose to use picom because they get better gaming performance or reduced screen tearing. Some just like the subtle animation effects picom adds 💖


Recent versions of rofi, the tool used by MATE HUD to visualise menu searches, have a new theme system. MATE HUD has been updated to support this new theme engine and comes with two MATE specific themes (mate-hud and mate-hud-rounded) that automatically adapt to match the currently selected GTK theme.

You can add your own rofi themes to ~/.local/share/rofi/themes. Should you want to, you can use any rofi theme in MATE HUD. Use Alt + F2 to run rofi-theme-selector to try out the different themes, and if there is one you prefer you can set it as default by running the following in a terminal:

gsettings set org.mate.hud rofi-theme <theme name>

MATE HUD uses the new rofi theme engine

Windows & Shadows

I’ve updated the Metacity/Marco (the MATE Window Manager) themes in Yaru to make sure they match GNOME/CSD/Handy windows for a consistent look and feel across all window types 🪟 and 3rd party compositors like picom. I even patched how Marco and picom render shadows so windows look cohesive regardless of the toolkit or compositor being used.

Ubuntu MATE Welcome & Boutique

The Software Boutique has been restocked with software for 22.04, and Firefox 🔥🦊 ESR (.deb) has been added to the Browser Ballot in Ubuntu MATE Welcome.

Comprehensive browser options just a click away

41% less fat 🍩

Ubuntu MATE, like its lead developer, was starting to get a bit large around the mid section 😊 During the development of 22.04, the image 📀 got to 4.1GB 😮

So, we put Ubuntu MATE on a strict diet 🥗 We’ve removed the proprietary NVIDIA drivers from the local apt pool on the install media, migrated fully to Yaru (which now features excellent de-duplication of icons) and removed our legacy themes/icons. And now that the Yaru-MATE themes/icons are completely in upstream Yaru, we were able to remove 3 snaps from the default install. The image is now a much more reasonable 2.7GB; 41% smaller 🗜

This is important to us, because the majority of our users are in countries where Internet bandwidth is not always plentiful. Those of you with NVIDIA GPUs, don’t worry. If you tick the 3rd party software and drivers during the install the appropriate driver for your GPU will be downloaded and installed 👍

NVIDIA GPU owners should tick “Install 3rd party software and drivers” during install

While investigating 🕵 a bug in Xorg Server that caused Marco (the MATE window manager) to crash, we discovered that Marco has lower frame time latency ⏱ when using Xrender with the NVIDIA proprietary drivers. We’ve published a PPA where NVIDIA GPU users can install a version of Marco that uses Xpresent for optimal performance.

sudo apt-add-repository ppa:ubuntu-mate-dev/marco
sudo apt upgrade

Should you want to revert this change, install ppa-purge and run the following from a terminal: sudo ppa-purge -o ubuntu-mate-dev -p marco.

But wait! There’s more! 😲

These reductions in size are after we added three new applications to the default install of Ubuntu MATE: GNOME Clocks, Maps and Weather. My family and I 👨‍👩‍👧 have found these applications particularly useful and use them regularly on our laptops without having to reach for a phone or tablet.

GNOME Clocks, Maps & Weather: new additions to the default desktop applications in Ubuntu MATE 22.04 LTS

For those of you who like a minimal base platform, then the minimal install option is still available which delivers just the essential Ubuntu MATE Desktop and Firefox browser. You can then build up from there 👷

Packages, packages, packages 📦

It doesn’t matter how you like to consume your Linux 🐧 packages, Ubuntu MATE has got you covered with PPA, Snap, AppImage and FlatPak support baked in by default. You’ll find flatpak, snapd and xdg-desktop-portal-gtk to support Snap and FlatPak and the (ageing) libfuse2 to support AppImage are all pre-installed.

Although flatpak is installed, FlatHub is not enabled by default. To enable FlatHub run the following in a terminal:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

We’ve also included snapd-desktop-integration which provides a bridge between the user’s session and snapd to integrate theme preferences 🎨 with snapped apps and can also automatically install snapped themes 👔 All the Yaru themes shipped in Ubuntu MATE are fully snap aware.

Ayatana Indicators

Ubuntu MATE 20.10 transitioned to Ayatana Indicators 🚥 As a quick refresher, Ayatana Indicators are a fork of Ubuntu Indicators that aim to be cross-distro compatible and re-usable for any desktop environment 👌

Ubuntu MATE 22.04 LTS comes with Ayatana Indicators 22.2.0 and sees the return of Messages Indicator 📬 to the default install. Ayatana Indicators now provide improved backwards compatibility to Ubuntu Indicators and no longer requires the installation of two sets of libraries, saving RAM, CPU cycles and improving battery endurance 🔋

Ayatana Indicators Settings

To complement the BlueZ 5.64 protocol stack in Ubuntu, Ubuntu MATE ships Blueman 2.2.4, which offers comprehensive management of Bluetooth devices and much improved pairing compatibility 💙🦷

I also patched mate-power-manager, ayatana-indicator-power and Yaru to add support for battery powered gaming input devices, such as controllers 🎮 and joysticks 🕹

Active Directory

And in case you missed it, the Ubuntu Desktop team added the option to enroll your computer into an Active Directory domain 🔑 during install. Ubuntu MATE has supported the same capability since it was first made available in the 20.10 release.

Raspberry Pi image 🥧

  • Should be available very shortly after the release of 22.04.

Major Applications

Accompanying MATE Desktop 1.26.1 and Linux 5.15 are Firefox 99.0, Celluloid 0.20, Evolution 3.44 & LibreOffice

See the Ubuntu 22.04 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 22.04 LTS

This new release will be first available for PC/Mac users.


Upgrading from Ubuntu MATE 20.04 LTS and 21.10

You can upgrade to Ubuntu MATE 22.04 LTS from either Ubuntu MATE 20.04 LTS or 21.10. Ensure that you have all updates installed for your current version of Ubuntu MATE before you upgrade.

  • Open “Software & Updates” from the Control Center.
  • Select the 3rd Tab called “Updates”.
  • Set the “Notify me of a new Ubuntu version” drop down menu to “For long-term support versions” if you are using 20.04 LTS; set it to “For any new version” if you are using 21.10.
  • Press Alt+F2 and type in update-manager -c -d into the command box.
  • Update Manager should open up and tell you: New distribution release ‘XX.XX’ is available.
    • If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click “Upgrade” and follow the on-screen instructions.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.

Known Issues

Here are the known issues.

  • Ubuntu: Ubiquity slide shows are missing for OEM installs of Ubuntu MATE.


Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on April 26, 2022 04:47 PM

April 23, 2022

It is now widely known that Ubuntu 22.04 LTS (Jammy Jellyfish) ships Firefox as a snap, but some people (like me) may prefer installing it from .deb packages to retain control over upgrades or to keep extensions working.

Luckily there is still a PPA maintained by the Mozilla Team serving firefox (and thunderbird) debs. (Thank you!)

You can block the Ubuntu archive’s version that just pulls in the snap by pinning it:

$ cat /etc/apt/preferences.d/firefox-no-snap 
Package: firefox*
Pin: release o=Ubuntu*
Pin-Priority: -1

Now you can remove the transitional package and the Firefox snap itself:

sudo apt purge firefox
sudo snap remove firefox
sudo add-apt-repository ppa:mozillateam/ppa
sudo apt update
sudo apt install firefox

Since the package comes from a PPA, unattended-upgrades will not upgrade it automatically unless you enable this origin:

echo 'Unattended-Upgrade::Allowed-Origins:: "LP-PPA-mozillateam:${distro_codename}";' | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-firefox

Happy browsing!

Update: I have found a few other, similar guides and I’ve updated the pinning configuration based on them.

on April 23, 2022 02:38 PM

April 21, 2022

The Xubuntu team is happy to announce the immediate release of Xubuntu 22.04.

Xubuntu 22.04, codenamed Jammy Jellyfish, is a long-term support (LTS) release and will be supported for 3 years, until 2025.

The Xubuntu and Xfce development teams have made great strides in usability, expanded features, and additional applications in the last two years. Users coming from 20.04 will be delighted with improvements found in Xfce 4.16 and our expanded application set. 21.10 users will appreciate the added stability that comes from the numerous maintenance releases that landed this cycle.

The final release images are available as torrents and direct downloads from

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

Xubuntu Core, our minimal ISO edition, is available to download from [torrent]. Find out more about Xubuntu Core here.

We’d like to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues


  • Mousepad 0.5.8, our text editor, broadens its feature set with the addition of session backup and restore, plugin support, and a new gspell plugin.
  • Ristretto 0.12.2, the versatile image viewer, improves thumbnail support and features numerous performance improvements.
  • Whisker Menu Plugin 2.7.1 expands customization options with several new preferences and CSS classes for theme developers.
  • Firefox is now included as a Snap package.
  • Refreshed user documentation, available on the ISO and online.
  • Six new wallpapers from the 22.04 Community Wallpaper Contest.

Known Issues

  • The shutdown prompt may not be displayed at the end of the installation. Instead you might just see a Xubuntu logo, a black screen with an underscore in the upper left hand corner, or just a black screen. Press Enter and the system will reboot into the installed environment. (LP: #1944519)
  • The Firefox Snap is not currently able to open the locally-installed Xubuntu Docs. (LP: #1967109)

For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

The main Ubuntu Release Notes cover many of the other packages we carry and more generic issues.


For support with the release, navigate to Help & Support for a complete list of methods to get help.

on April 21, 2022 10:44 PM

The Xubuntu team is happy to announce the results of the 22.04 community wallpaper contest!

As always, we’d like to send out a huge thanks to every contestant. The Xubuntu Community Wallpaper Contest gives us a unique chance to interact with the community and get contributions from members who may otherwise not have had the opportunity to join in before. With around 130 submissions, the contest garnered less interest this time around, but we still had a lot of great work to pick from. All of the submissions are browsable on the 22.04 contest page at

Without further ado, here are the winners:

From left to right, top to bottom. Click on the links for full-size image versions.

Congratulations, and thanks for your wonderful contributions!

on April 21, 2022 10:21 PM

E191 Podcast Wacom Portugal

Podcast Ubuntu Portugal

Dali, that is, Diogo, went shopping – for a change – with the goal of equipping himself with creative tools. Carrondo’s square saw a more technical perspective in that act. In a week in which Vodafone returns to the conversation, and so do migrations, this time from WordPress to Hugo, or plain HTML…
You know the drill: listen, subscribe and share!


### Support
You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
And you can get all of this for 15 dollars, or different parts depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want.
If you are interested in other bundles not listed in the notes, use the link and you will also be supporting us.

### Attribution and licences
This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by [Senhor Podcast](
The website is produced by Tiago Carrondo and the [source code]( is licensed under the terms of the [MIT Licence](
The theme music is: “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, licensed under the terms of the [CC0 1.0 Universal License](
This episode and the image used are licensed under the terms of the [Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)]( licence, [the full text of which can be read here](. We are open to licensing for other types of use; [contact us]( for validation and authorisation.

on April 21, 2022 10:03 PM
Thanks to all the hard work from our contributors, Lubuntu 22.04 LTS has been released. With the codename Jammy Jellyfish, Lubuntu 22.04 is the 22nd release of Lubuntu, the eighth release of Lubuntu with LXQt as the default desktop environment. Support lifespan Lubuntu 22.04 LTS will be supported for 3 years until April 2025. Our […]
on April 21, 2022 08:09 PM

Kubuntu 22.04 LTS Released

Kubuntu General News

The Kubuntu Team is happy to announce that Kubuntu 22.04 LTS has been released, featuring the beautiful KDE Plasma 5.24 LTS: simple by default, powerful when needed.

Codenamed “Jammy Jellyfish”, Kubuntu 22.04 continues our tradition of giving you Friendly Computing by integrating the latest and greatest Free Software technologies into a high-quality, easy-to-use Linux distribution.

The team has been hard at work through this cycle, introducing new features and fixing bugs.

Under the hood, there have been updates to many core packages, including a new 5.15-based kernel, KDE Frameworks 5.92, Plasma 5.24 LTS and KDE Gear (formerly Applications) 21.12.3.

Kubuntu has seen many updates for other applications, both in our default install, and installable from the Ubuntu archive.

Elisa, KDE connect, Krita, Kdevelop, Digikam, Latte-dock, and many many more applications are updated.

Applications that are key for day-to-day usage are included and updated, such as Firefox, VLC and Libreoffice.

For this release we provide Thunderbird for email support; however, the KDE PIM suite (including Kontact and KMail) is still available to install from the archive.

For a list of other application updates, upgrading notes and known bugs be sure to read our release notes.

Download 22.04 LTS or read how to upgrade from 21.10 and 20.04.

Note: From 21.10, there may be a delay of a few hours to days between the official release announcement and the Ubuntu Release Team enabling upgrades. From 20.04, upgrades will not be enabled until approximately the date of the first 22.04 point release (22.04.1) at the end of July.

on April 21, 2022 06:24 PM

The Ubuntu OpenStack team at Canonical is pleased to announce the general
availability of OpenStack Yoga on Ubuntu 22.04 LTS (Jammy Jellyfish) and
Ubuntu 20.04 LTS (Focal Fossa) via the Ubuntu Cloud Archive. Details of
the Yoga release can be found at:

To get access to the Ubuntu Yoga packages:

Ubuntu 22.04 LTS

OpenStack Yoga is available by default on Ubuntu 22.04.

Ubuntu 20.04 LTS

The Ubuntu Cloud Archive for OpenStack Yoga can be enabled on Ubuntu
20.04 by running the following command:

sudo add-apt-repository cloud-archive:yoga

The Ubuntu Cloud Archive for Yoga includes updates for:

aodh, barbican, ceilometer, ceph (17.1.0), cinder, designate,
designate-dashboard, dpdk (21.11), glance, gnocchi, heat,
heat-dashboard, horizon, ironic, ironic-ui, keystone, libvirt (8.0.0),
magnum, magnum-ui, manila, manila-ui, masakari, mistral, murano,
murano-dashboard, networking-arista, networking-bagpipe,
networking-baremetal, networking-bgpvpn, networking-hyperv,
networking-l2gw, networking-mlnx, networking-odl, networking-sfc,
neutron, neutron-dynamic-routing, neutron-fwaas, neutron-vpnaas, nova,
octavia, octavia-dashboard, openstack-trove, openvswitch (2.17.0),
ovn (22.03.0), ovn-octavia-provider, placement, sahara,
sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx,
vitrage, watcher, watcher-dashboard, zaqar, and zaqar-ui.

For a full list of packages and versions, please refer to:

Reporting bugs

If you have any issues please report bugs using the ‘ubuntu-bug’ tool to
ensure that bugs get logged in the right place in Launchpad:

sudo ubuntu-bug nova-conductor

Thank you to everyone who contributed to OpenStack Yoga!

(on behalf of the Ubuntu OpenStack Engineering team)

on April 21, 2022 06:03 PM

A jellyfish and a mainframe

Elizabeth K. Joseph

Happy Ubuntu 22.04 LTS (Jammy Jellyfish) release day!

April has been an exciting month. On April 5th, the IBM z16 was released. For those of you who aren’t aware, this is the IBM zSystems class of mainframes that I’ve been working on at IBM for the past three years. As a Developer Advocate, I’ve been able to spend a lot of time digging into the internals, learning about the implementation of DevOps practices and incorporation of Linux into environments, and so much more. I’ve also had the opportunity to work with dozens of open source projects in the Linux world as they get their software to run on the s390x architecture. This includes working with several Linux distributions, and most recently forming the Open Mainframe Project Linux Distributions Working Group with openSUSE’s Sarah Julia Kriesch.

As a result, I’m delighted to continue to spend a little time with Ubuntu!

For the Ubuntu 22.04 release, the team at Canonical has already been working hard to incorporate key features of the IBM z16, which Frank Heimes has covered in detail on a technical level on the Ubuntu on Big Iron blog, in IBM z16 launches with Ubuntu 22.04 (beta) support, and also in IBM z16 is here, and Ubuntu 22.04 LTS beta is ready. Finally, Frank published Ubuntu 22.04 LTS got released.

Indeed, timing was fortuitous, as Frank notes:

“Since the development of the new IBM z16 happened in parallel with the development of the upcoming Ubuntu Server release, Canonical was able to ensure that Ubuntu Server 22.04 LTS (beta) already includes support for new IBM z16 capabilities.

And this is not limited to the support for the core system, but also includes its peripherals and special facilities”

Now that it’s release day, I wanted to celebrate with the community by sharing a few details of the IBM z16 and some highlights from those blog posts.

So first – the IBM z16 is so pretty! It comes in one to four frames, depending on the needs of the client. In its maximum configuration, it has up to 200 Processor Units featuring 5.2 GHz IBM Telum processors, 40 TB of memory, and 85 LPARs.

As for how Ubuntu was able to leverage improvements to 22.04 to take advantage of everything from the AI Accelerator on the IBM Telum processor to new Quantum-Safe technologies, Frank goes on to share:

“Since we constantly improve Ubuntu, 22.04 was updated and modified for IBM z16 and other platforms in the following areas:

  • virtually the entire cryptography stack was updated, due to the switch to openssl 3
  • some Quantum-safe options are available: library for quantum-safe cryptographic algorithms (liboqs), post-quantum encryption and signing tool (codecrypt), implementation of public-key encryption scheme NTRUEncrypt (libntru)
  • Secure Execution got refined and the virtualization stack updated
  • the chacha20 in-kernel stream cipher (RFC 7539) was hardware optimized using SIMD
  • the kernel zcrypt device driver is now able to exploit the new IBM zSystems crypto hardware, especially Crypto Express8S (CEX8S)
  • and finally a brand new protected key crypto library package (libzpc) was added”

This is a really interesting time to be a Linux distribution in this ecosystem. Beyond these fantastic strides made with Ubuntu, the collaboration that’s already taking place across distributions in our new Working Group has been exciting to watch.

Keep up the good work, everyone! And Ubuntu friends, pause a bit today to celebrate, you’ve earned it.

Jellyfish earrings!

Side note: I haven’t mentioned the IBM LinuxONE. As some background, the IBM z16 can have Integrated Facility for Linux (IFL) processors, so you can already run Linux on this generation of mainframes! But the LinuxONE product line only has IFLs, meaning they exclusively run Linux. As a separate product, it can have different release dates, and the current timeline that’s been published is “second half of 2022” for the announcement of the next LinuxONE. Stay tuned, and know that everything I’ve shared about Ubuntu 22.04 for the IBM z16 will also be true of the next LinuxONE.

on April 21, 2022 05:39 PM

The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 22.04, code-named “Jammy Jellyfish”. This marks Ubuntu Studio’s 31st release. This release is a Long-Term Support release and as such, it is supported for 3 years (until April 2025).

Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list of changes and known issues.

You can download Ubuntu Studio 22.04 LTS from our download page.

Due to the change in desktop environment that started after the release of 20.04 LTS, direct upgrades from 20.04 LTS are not supported and may only be attempted at your own risk. As with any system-critical operation, back up your data before attempting any upgrade. The safest upgrade path is a backup of your /home directory and a clean install.

We have had anecdotal reports of successful upgrades from 20.04 LTS (Xfce desktop) to later releases (Plasma desktop), but this will remain at your own risk, and it is highly recommended to wait until 22.04.1 is released in August before attempting such an upgrade.

Instructions for upgrading are included in the release notes.

New This Release

Most of this release is evolutionary on top of 21.10 rather than revolutionary. As such, most of the applications contained are simply upgraded versions. Details on key packages can be found in the release notes.

Dark Theme By Default

For this release, we have a neutral-toned dark theme by default. We could have gone with the Breeze Dark color scheme once we dropped the Materia KDE widget and window theme (it was difficult to maintain and to adapt to new Plasma features), but instead we decided to develop our own, based on GNOME’s Adwaita Dark theme, with a corresponding Light theme. This helps with photography: a neutral tone is necessary because Breeze Dark has a more blueish hue, which can trick the eye into seeing photos as warmer than they actually are.

However, switching from the dark theme to the light theme is a breeze (pun somewhat intended). When opening the System Settings, one only has to look at the home screen to see how to do that.

Support for rEFInd

rEFInd is a bootloader for UEFI-based systems. Our lowlatency kernel settings now create a menu entry that applies those settings and keeps the lowlatency kernel as the default kernel detected by rEFInd. To keep it current, simply run sudo dpkg-reconfigure ubuntustudio-lowlatency-settings in the command line after a kernel update.

For a more complete list of changes, please see the release notes.

Backports PPA

System Settings with Accent Colors (Folder Colors will follow if Backports PPA is added)

There are a few items planned for the Backports PPA once the next release cycle opens. One of those is folder icons that match the accent color set in the System Settings.

We plan on keeping the backports PPA up-to-date for the next two years until the release of 24.04 LTS, at which point you will be encouraged to update.

Instructions for enabling the Ubuntu Studio Backports PPA

  • Automatic method:
    • Open Ubuntu Studio Installer
    • Click “Enable Backports”
  • Manual method:
    • sudo add-apt-repository ppa:ubuntustudio-ppa/backports
    • sudo apt upgrade

Note that at release time, there’s nothing in there yet, so if you add it now (at the time of this writing) you’ll get a 404 (file not found) error.

On a related note, at this time, the Backports PPA is frozen for 21.10 and 20.04 LTS. To receive newer versions of software, you must upgrade.

Plasma Backports

Since we share the desktop environment with Kubuntu, simply adding the Kubuntu Backports PPA will keep the desktop environment and its components up-to-date with the latest versions:

  • sudo add-apt-repository ppa:kubuntu-ppa/backports
  • sudo apt upgrade

More Updates

There are many more updates not covered here that are mentioned in the Release Notes. We highly recommend reading those release notes so you know what has been updated and are aware of any known issues you may encounter.

Get Involved!

A great way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!

Special Thanks

Huge special thanks for this release go to:

  • Len Ovens: Studio Controls, Ubuntu Studio Installer, Coding
  • Thomas Ward: Packaging, Ubuntu Core Developer for Ubuntu Studio
  • Eylul Dogruel: Artwork, Graphics Design, Website Lead
  • Ross Gammon: Upstream Debian Developer, Guidance, Testing
  • Sebastien Ramacher: Upstream Debian Developer
  • Dennis Braun: Debian Package Maintainer
  • Rik Mills: Kubuntu Council Member, help with Plasma desktop
  • Mauro Gaspari: Tutorials, Promotion, and Documentation, Testing
  • Brian Hechinger: Testing and bug reporting
  • Chris Erswell: Testing and bug reporting
  • Robert Van Den Berg: Testing and bug reporting, IRC Support
  • Krytarik Raido: IRC Moderator, Mailing List Moderator
  • Erich Eickmeyer: Project Leader, Packaging, Direction, Treasurer
on April 21, 2022 05:10 PM

April 18, 2022

A while ago, in the Developers Ve en el mundo group, I asked whether anyone had used Obsidian. Considering myself an Evernote power user, I know what I need, and in the years I’ve spent looking for an alternative that works on Linux, the problem has been that very few actually work well, or even have Linux support at all.

I decided to go with Obsidian, after considering Notion and other tools, mainly because of the plugins and because it has Zettelkasten as a feature, along with the following reasons:

  • It’s perfect for Zettelkasten and for building a second brain (Building a Second Brain), and it has a simple but powerful tag system.
  • Markdown support by default.
  • The files are on my devices, not in the cloud, by design.
  • I can choose my own backup, version control, and redundancy alternatives.
  • Support for Kanban boards.
  • Support for day planners.
  • Support for natural language dates, for things like @today and @tomorrow.
  • The interface is exactly what I love about Sublime, which is what I had been using to take notes, alongside the phone’s notes app and notes sent via email.
  • It has a VIM mode :D.
  • The roadmap is promising.


After reviewing several options, and doing a bit of real research, I arrived at the following workflow:

  • I managed to build the workflow that does what I need:
    • Vault in git on my VPS, in my own Gitea instance; the data is mine.
    • On Linux:
      • The different vaults would be managed through git directly.
        • Nextcloud as a redundant backup mechanism on the NAS at home, via SCSI on a Raspberry Pi.
        • In the medium to long term:
          • The NAS solution could be FreeNAS, TrueNAS, or Synology.
          • VPS as a gateway, using a WireGuard VPN to keep everything on a private network.
    • On OSX:
      • Same as on Linux, through git, except that the vaults would live in a folder in iCloud.
      • The SSH key with write access to the repo would be imported into the keystore (ssh-add -K), so that it doesn’t cause trouble with passphrase prompts.
      • Still pending: figuring out how to sign commits with GPG, or maybe using SSH to sign commits.
    • On iOS, the vaults would be opened via iCloud, leaving git management out until ios/mobile support lands in obsidian-git.
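
The Linux side of this reduces to a small amount of git plumbing. A minimal sketch (the function name and vault path are hypothetical examples, and it assumes the vault is already a git clone with a remote configured):

```shell
# sync_vault: commit local Obsidian edits and exchange them with the remote.
# Takes the vault directory as its only argument.
sync_vault() {
    cd "$1" || return 1

    # Bring in edits from other machines first (a no-op if no remote is set).
    git pull --rebase --quiet 2>/dev/null || true

    # Stage everything; only commit (and push) if something actually changed.
    git add -A
    if ! git diff --cached --quiet; then
        git commit --quiet -m "vault sync: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
        git push --quiet 2>/dev/null || true
    fi
}

# Example: sync_vault "$HOME/vaults/notes"
```

Hooked into cron or a systemd timer, something like this keeps the Gitea copy current without manual intervention.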


In a while I’ll revisit this post and most likely update it with my new workflow… or write a retrospective on what could have gone better. That said, I think the first task I’ll take on will be writing a plugin to integrate this with creating posts for this blog, and to make use of the tag graph, which for now… looks like this:

tag cloud

on April 18, 2022 12:00 AM

April 16, 2022

To celebrate the new release of the popular free and open source GNU/Linux distribution Ubuntu 22.04, we are going to hold a release party. The event will take place on the 1st of May. Due to continued COVID uncertainties, the event will be held in a live virtual format with moderated Q&A.  The Call for […]
on April 16, 2022 04:40 AM

April 14, 2022

Ep 190 – Societal

Podcast Ubuntu Portugal

Miguel is depressed! He has just found out that in a few years – 5, maybe 10 – he will have to spend money on a new phone. Carrondo is delighted with his new, libre alarm. Constantino has been chatting on Twitter, for a change… this week it was about Firefox extensions. But the main course of this meal was the European Commission’s relationship with Free Software!

You know the drill: listen, subscribe, and share!



You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
And you can get all of that for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a bit more, since you have the option to pay as much as you want.

If you are interested in other bundles not listed in the show notes, use the link and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo, and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](

This episode and the image used are licensed under Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

on April 14, 2022 07:45 PM

April 10, 2022

Here’s my (thirtieth) monthly but brief update about the activities I’ve done in the F/L/OSS world.


This was my 39th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

I recovered this month and cleared up a bunch of my backlog. So a good month, that way.

I didn’t do any uploads this month, but I still did the following:

Other $things:

  • Volunteering for DC22 Content team.
  • Volunteering for DC22 Bursary team.
  • Being a DC22 Bursary lead along w/ Paulo.
  • Being an AM for Arun Kumar, process #1024.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.


This was my 14th month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

I mostly worked on different things, I guess.

I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from the fall, as I was doing before. :D

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my thirtieth month as a Debian LTS and nineteenth month as a Debian ELTS paid contributor.
I worked for 57.75 out of 59.50 hours for LTS and 42.25 out of 60.00 hours for ELTS.

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

  • Issued ELA 578-1, fixing CVE-2021-0561, for flac.
    For Debian 8 jessie, these problems have been fixed in version 1.3.0-3+deb8u2.
  • Issued ELA 582-1, fixing some vulnerabilities that don’t yet have CVE IDs assigned, for wordpress.
    For Debian 8 jessie, these problems have been fixed in version 4.1.35+dfsg-0+deb8u1.
  • Worked on readying up the python2.7 update. The tests fail with a segfault, but only on jessie; the very same tests work fine on stretch.
    I’ve been trying to work through them, and it looks like a test-only issue, but I’ll double-check to be sure. :)
  • Looked into src:bind9 for Markus. Also coordinated the same with the Ubuntu security team (the ESM one), and reported the findings that Marc and I discussed.
    Markus seemed to work through a way out in the end. \o/
  • Working on src:tiff and src:beep to fix the issues; waiting for more issues to be reported for them is a bit of a PITA, though. :)

Other (E)LTS Work:

Debian LTS Survey

I’ve spent 9 hours on the LTS survey on the following bits:
(but I’ll invoice them next month)

  • Organize questions. Re-order, fix, and add things wherever needed.
  • Finally set the whole thing up.
  • Did a couple of dry-runs.
  • Drafted the mail to be sent.

Until next time.
:wq for today.

on April 10, 2022 05:41 AM

April 07, 2022

 I regularly fly between Belgium and my second home country, Latvia. How much am I sponsoring Vladimir when doing that? About €25. A back-of-the-envelope calculation:

  • CRL - RIX return = 330 kg CO2 (source)
  • 1 l jet fuel a1 = 2.52 kg CO2 (source)
  • 1 l jet fuel = 0.85€ (source, some currency and SI conversion required)
  • refinery and distribution margin ~ 15% (conservative ballpark guesstimate based upon price/barrel for crude and jet a1 fuel)
  • percentage of Russian crude in EU: 27% (source)
  • (330/2.52)*.85*.85*.27 = 25.55€
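
The same arithmetic as a one-off awk sketch, for anyone who wants to plug in their own numbers (the figures are the estimates from the list above):

```shell
# Estimate the Russian-crude share of the fuel cost for one return trip:
# trip CO2 / CO2-per-litre gives litres burned; multiply by price per
# litre, by the crude share of that price (1 - 15% margin), and by the
# Russian share of EU crude imports.
awk 'BEGIN {
    co2_kg   = 330    # CRL - RIX return, kg CO2
    kg_per_l = 2.52   # kg CO2 per litre of jet A1
    eur_l    = 0.85   # EUR per litre of jet fuel
    crude    = 0.85   # crude share of the price (15% margin removed)
    ru       = 0.27   # Russian share of EU crude imports
    printf "%.2f EUR\n", co2_kg / kg_per_l * eur_l * crude * ru
}'
```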

P.S. Other source countries have “interesting” policies too. For example, 8% of EU imports are from Saudi Arabia.

P.P.S. Our upcoming holiday will be by night train. Exciting!

on April 07, 2022 03:21 PM

April 05, 2022

To celebrate the new release of the popular free and open source GNU/Linux distribution, Ubuntu 22.04, we are going to hold a release party. The event will take place on the 1st of May. Due to continued COVID uncertainties, the event will be held in a live virtual format with moderated Q&A.  The Call for […]
on April 05, 2022 09:41 AM

Previously: v5.9

Linux v5.10 was released in December, 2020. Here’s my summary of various security things that I found interesting:

AMD SEV-ES
While guest VM memory encryption with AMD SEV has been supported for a while, Joerg Roedel, Thomas Lendacky, and others added register state encryption (SEV-ES). This means it’s even harder for a VM host to reconstruct a guest VM’s state.

x86 static calls
Josh Poimboeuf and Peter Zijlstra implemented static calls for x86, which operates very similarly to the “static branch” infrastructure in the kernel. With static branches, an if/else choice can be hard-coded, instead of being run-time evaluated every time. Such branches can be updated too (the kernel just rewrites the code to switch around the “branch”). All these principles apply to static calls as well, but they’re for replacing indirect function calls (i.e. a call through a function pointer) with a direct call (i.e. a hard-coded call address). This eliminates the need for Spectre mitigations (e.g. RETPOLINE) for these indirect calls, and avoids a memory lookup for the pointer. For hot-path code (like the scheduler), this has a measurable performance impact. It also serves as a kind of Control Flow Integrity implementation: an indirect call got removed, and the potential destinations have been explicitly identified at compile-time.

network RNG improvements
In an effort to improve the pseudo-random number generator used by the network subsystem (for things like port numbers and packet sequence numbers), Linux’s home-grown pRNG has been replaced by the SipHash round function, and perturbed by (hopefully) hard-to-predict internal kernel states. This should make it very hard to brute force the internal state of the pRNG and make predictions about future random numbers just from examining network traffic. Similarly, ICMP’s global rate limiter was adjusted to avoid leaking details of network state, as a start to fixing recent DNS Cache Poisoning attacks.

SafeSetID handles GID
Thomas Cedeno improved the SafeSetID LSM to handle group IDs (which required teaching the kernel about which syscalls were actually performing setgid.) Like the earlier setuid policy, this lets the system owner define an explicit list of allowed group ID transitions under CAP_SETGID (instead of to just any group), providing a way to keep the power of granting this capability much more limited. (This isn’t complete yet, though, since handling setgroups() is still needed.)

improve kernel’s internal checking of file contents
The kernel provides LSMs (like the Integrity subsystem) with details about files as they’re loaded. (For example, loading modules, new kernel images for kexec, and firmware.) There wasn’t very good coverage for cases where the contents were coming from things that weren’t files. To deal with this, new hooks were added that allow the LSMs to introspect the contents directly, and to do partial reads. This will give the LSMs much finer grain visibility into these kinds of operations.

set_fs removal continues
With the earlier work landed to free the core kernel code from set_fs(), Christoph Hellwig made it possible for set_fs() to be optional for an architecture. He then removed set_fs() entirely for x86, riscv, and powerpc. These architectures will now be free from the entire class of “kernel address limit” attacks that only needed to corrupt a single value in struct thread_info.

sysfs_emit() replaces sprintf() in /sys
Joe Perches tackled one of the most common bug classes with sprintf() and snprintf() in /sys handlers by creating a new helper, sysfs_emit(). This will handle the cases where kernel code was not correctly dealing with the length results from sprintf() calls, which might lead to buffer overflows in the PAGE_SIZE buffer that /sys handlers operate on. With the helper in place, it was possible to start the refactoring of the many sprintf() callers.

nosymfollow mount option
Mattias Nissler and Ross Zwisler implemented the nosymfollow mount option. This entirely disables symlink resolution for the given filesystem, similar to other mount options where noexec disallows execve(), nosuid disallows setid bits, and nodev disallows device files. Quoting the patch, it is “useful as a defensive measure for systems that need to deal with untrusted file systems in privileged contexts.” (i.e. for when /proc/sys/fs/protected_symlinks isn’t a big enough hammer.) Chrome OS uses this option for its stateful filesystem, since symlink traversal has been a common attack-persistence vector.

ARMv8.5 Memory Tagging Extension support
Vincenzo Frascino added support to arm64 for the coming Memory Tagging Extension, which will be available for ARMv8.5 and later chips. It provides 4 bits of tags (covering multiples of 16 byte spans of the address space). This is enough to deterministically eliminate all linear heap buffer overflow flaws (1 tag for “free”, and then rotate even values and odd values for neighboring allocations), which is probably one of the most common bugs being currently exploited. It also makes use-after-free and over/under indexing much more difficult for attackers (but still possible if the target’s tag bits can be exposed). Maybe some day we can switch to 128 bit virtual memory addresses and have fully versioned allocations. But for now, 16 tag values is better than none, though we do still need to wait for anyone to actually be shipping ARMv8.5 hardware.

fixes for flaws found by UBSAN
The work to make UBSAN generally usable under syzkaller continues to bear fruit, with various fixes all over the kernel for stuff like shift-out-of-bounds, divide-by-zero, and integer overflow. Seeing these kinds of patches land reinforces the rationale of shifting the burden of these kinds of checks to the toolchain: these run-time bugs continue to pop up.

flexible array conversions
The work on flexible array conversions continues. Gustavo A. R. Silva and others continued to grind on the conversions, getting the kernel ever closer to being able to enable the -Warray-bounds compiler flag and clear the path for saner bounds checking of array indexes and memcpy() usage.

That’s it for now! Please let me know if you think anything else needs some attention. Next up is Linux v5.11.

© 2022, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

on April 05, 2022 12:01 AM

March 31, 2022

The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 22.04 LTS, codenamed “Jammy Jellyfish”.

While this beta is reasonably free of any showstopper DVD build or installer bugs, you may find some bugs within. This image is, however, reasonably representative of what you will find when Ubuntu Studio 22.04 LTS is released on April 21, 2022.

Ubuntu Studio 22.04 LTS will be Ubuntu Studio’s first Long-Term Support(LTS) release with the KDE Plasma Desktop Environment.

Special notes:

  • Due to the change in desktop environment, directly upgrading to Ubuntu Studio 22.04 LTS from 20.04 LTS is not supported and will not be supported.  However, upgrades from Ubuntu Studio 21.10 will be supported. See the Release Notes for more information. Anecdotally, some people have had success upgrading from 20.04 LTS to a later version, so your mileage may vary.
  • The Ubuntu Studio 22.04 LTS disk image (ISO) exceeds 4.0 GB and cannot be downloaded to some file systems such as FAT32, and may not be readable when burned to a DVD. For this reason, we recommend creating a bootable USB stick with the ISO image.

Images can be obtained from this link:

Full updated information is available in the Release Notes.

New Features

Ubuntu Studio 22.04 LTS includes the new KDE Plasma 5.24 LTS desktop environment. This is a beautiful and functional upgrade to previous versions, and we believe you will like it.

Studio Controls is upgraded to 2.3.0 and includes numerous bug fixes.

OBS Studio is upgraded to version 27.2.3 and works with Wayland sessions. While Wayland is not currently the default, it is available as unsupported and experimental.

There are many other improvements, too numerous to list here. We encourage you to take a look around the freely-downloadable ISO image.

Known Issues

  • At this time, the installer (Calamares) will crash when attempting an installation on a manually-partitioned btrfs file system. (LP: #1966774)
  • MyPaint crashes upon launching (LP: #1967163); this may be resolved after an update.
  • There are a few cosmetic issues that should be resolved before final release.

Official Ubuntu Studio release notes can be found at

Further known issues, mostly pertaining to the desktop environment, can be found at

Additionally, the main Ubuntu release notes contain more generic issues:

Frequently Asked Questions

Q: Does KDE Plasma use more resources than your former desktop environment (Xfce)?
A: In our testing, the increase in resource usage is negligible, and our optimizations were never tied to the desktop environment.

Q: Does Ubuntu Studio contain snaps?
A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu can no longer distribute Firefox as a native .deb package, so it is included as a snap. We have found that, after its initial launch, the Firefox snap performs just as well as the native .deb package did.

Q: If I install this Beta release, will I have to reinstall when the final release comes out?
A: No. If you keep it updated, your installation will automatically become the final release.

Q: Will you make an ISO with {my favorite desktop environment}?
A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio.

Please Test!

For this release we are participating in Ubuntu Testing Week, which begins as soon as this beta is released. During this testing cycle we will work with the amazing folks at the Ubuntu Hideout Discord ( They have created a #testing-cycles channel on Discord and have had a very good response, with users coming forward and helping with testing. They have expressed a desire to have the Ubuntu Hideout Discord community make an even greater impact during this release.

Whether or not you choose to participate on Discord is up to you. If you do find something that is a legitimate bug, please open a terminal and type “ubuntu-bug (package name)” to file the bug report since this collects valuable information we need when debugging. If you get a popup while something is running saying that something has crashed, don’t hesitate to click on the “send bug report” button.

on March 31, 2022 05:43 PM

March 29, 2022

Recently, I needed to check if a regression in Ubuntu 22.04 Beta was triggered by the mesa upgrade. Ok, sounds simple, let me just install the older mesa version.

Let’s take a look.

Oh, wow, there are about 24 binary packages (excluding the packages for debug symbols) included in mesa!

Because it’s no longer published in Ubuntu 22.04, we can’t install those packages the normal apt way. And downloading them one by one and then installing them sounds like too much work.

Step Zero: Prerequisites

If you are an Ubuntu (or Debian!) developer, you might already have ubuntu-dev-tools installed. If not, it has some really useful tools!

$ sudo apt install ubuntu-dev-tools

Step One: Create a Temporary Working Directory

Let’s create a temporary directory to hold our deb packages. We don’t want to get them mixed up with other things.

$ mkdir mesa-downgrade; cd mesa-downgrade

Step Two: Download All the Things

One of the useful tools is pull-lp-debs. The first argument is the source package name. In this case, I next need to specify what version I want; otherwise it will give me the latest version which isn’t helpful. I could specify a series codename like jammy or impish but that won’t give me what I want this time.

$ pull-lp-debs mesa 21.3.5-1ubuntu2

By the way, there are several other variations on pull-lp-debs:

  • pull-lp-source – downloads a source package from Launchpad.
  • pull-lp-debs – downloads deb package(s) from Launchpad.
  • pull-lp-ddebs – downloads dbgsym/ddeb package(s) from Launchpad.
  • pull-lp-udebs – downloads udeb package(s) from Launchpad.
  • pull-debian-* – same as pull-lp-* but for Debian packages.

I use the LP and Debian source versions frequently when I just want to check something in a package but don’t need the full git repo.

Step Three: Install Only What We Need

This command allows us to install just what we need.

$ sudo apt install --only-upgrade --mark-auto ./*.deb

--only-upgrade tells apt to only install packages that are already installed. I don’t actually need all 24 packages installed; I just want to change the versions for the stuff I already have.

--mark-auto tells apt to keep these packages marked in dpkg as automatically installed. This allows any of these packages to be suggested for removal once there isn’t anything else depending on them. That’s useful if you don’t want to have old libraries installed on your system in case you do manual installation like this frequently.

Finally, the apt install syntax has a quirk: it needs a path to a file because it wants an easy way to distinguish a file from a package name. So adding ./ before filenames works.

I guess this is a bug. apt should be taught that libegl-mesa0_21.3.5-1ubuntu2_amd64.deb is a file name, not a package name.
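That heuristic is simple enough to sketch (an illustrative guess at the behaviour, not apt’s actual code; `is_path` is a made-up helper name):

```shell
# Illustrative sketch only: apt treats an argument containing a slash
# as a file path and anything else as a package name.
is_path() {
  case "$1" in
    */*) return 0 ;;  # contains a slash: treated as a file path
    *)   return 1 ;;  # no slash: treated as a package name
  esac
}

is_path ./libegl-mesa0_21.3.5-1ubuntu2_amd64.deb && echo "file path"
is_path libegl-mesa0_21.3.5-1ubuntu2_amd64.deb || echo "package name"
```

This is why the bare filename on its own is looked up as a package, while the ./ prefix forces the file interpretation.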

Step Four: Cleanup

Let’s assume that you installed old versions. To get back to the current package versions, you can just upgrade like normal.

$ sudo apt dist-upgrade

If you do want to stay on this unsupported version a bit longer, you can specify which packages to hold:

$ sudo apt-mark hold <package name>

And you can use apt-mark list and apt-mark unhold to see what packages you have held and release the holds. Remember you won’t get security updates or other bug fixes for held packages!

And when you’re done with the debs we downloaded, you can remove all the files:

$ cd .. ; rm -ri mesa-downgrade

Bonus: Downgrading back to supported

What if you did the opposite and installed newer stuff than is available in your current release? Perhaps you installed from jammy-proposed and you want to get back to jammy? Here’s the syntax for libegl-mesa0.

Note the /jammy suffix on the package name.

$ sudo apt install libegl-mesa0/jammy

But how do you find these packages? Use apt list.

Here’s one suggested way to find them:

$ apt list --installed --all-versions | grep local] --after-context 1

Finally, I should mention that apt is designed to upgrade packages, not downgrade them. You can break things by downgrading. For instance, a database could upgrade its format to a new version, but I wouldn’t expect it to be able to reverse that just because you attempt to install an older version.

on March 29, 2022 09:55 PM

March 19, 2022

Rust is all hot these days, and it is indeed a nice language to work with. In this blog post, I take a look at a small challenge: how to host private crates in the form of Git repositories, making them easily available both to developers and CI/CD systems.


Ferris the crab, unofficial mascot for Rust.

A Rust crate can be hosted in different places: on a public registry, but also in a private Git repo hosted somewhere. In the latter case, there are some challenges in making the crate easily accessible to both engineers and CI/CD systems.

Developers usually authenticate through SSH keys: given that humans are terrible at remembering long passwords, SSH keys let us avoid memorizing credentials while providing an authentication method that just works. Security is quite strong as well: each device has a unique key, and if a device gets compromised, deactivating the related key solves the problem.

On the other hand, it is better to avoid using SSH keys for CI/CD systems: such systems are highly volatile, with dozens, if not hundreds, of instances created and destroyed hourly, and creating and revoking SSH keys on the fly can be tedious and error-prone. For them, a short-lived token is a far better choice.

This raises a challenge with Rust crates: if hosted in a private repository, the dependency can be reached through SSH, such as

my-secret-crate = { git = "ssh://", branch = "main" }

or through HTTPS, such as

my-secret-crate = { git = "", branch = "main" }

In the future, I hope we won’t need to host private crates on Git repositories; GitLab should add a native implementation of a private registry for crates.

The former is really useful and simple for engineers: authentication works the same as always, so there is nothing to worry about. However, it is awful for CI/CD: the lifecycle of SSH keys now has to be managed for automated systems.

The latter is awful for engineers: they need an additional authentication method, slowing them down, and of course there will be authentication problems. On the other hand, it is great for automated systems.

How to reconcile the two worlds?

Well, let’s use them both! In the Cargo.toml file, use the SSH protocol, so developers can simply clone the main repo, and they will be able to clone the dependencies without further hassle.

Then, configure the CI/CD system to clone every dependency through HTTPS, thanks to a neat feature of Git itself: insteadOf.

From the Git-SCM website:

Any URL that starts with this value will be rewritten to start, instead, with . In cases where some site serves a large number of repositories, and serves them with multiple access methods, and some users need to use different access methods, this feature allows people to specify any of the equivalent URLs and have Git automatically rewrite the URL to the best alternative for the particular user, even for a never-before-seen repository on the site. When more than one insteadOf strings match a given URL, the longest match is used.

Basically, it allows rewriting part of the URL automatically. In this way, it is easy to change the protocol used: developers will be happy, and the security team won’t have to scream about long-lived credentials on CI/CD systems.
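The rewrite is easy to see in action locally (a minimal, self-contained sketch: `example.invalid` and the throwaway paths are made up, and a real setup would point at your actual Git host):

```shell
# Sketch: rewrite a fake ssh:// URL to a local repository and clone it.
tmp=$(mktemp -d)
export HOME="$tmp"   # keep the --global git config inside the sandbox

# A stand-in for the private upstream repository.
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=ci@example.invalid -c user.name=CI \
  commit -q --allow-empty -m "init"

# Any URL starting with the ssh:// prefix is rewritten to the local path.
git config --global \
  url."$tmp/upstream".insteadOf "ssh://git@example.invalid/upstream"

# The clone never touches the network: git rewrites the URL first.
git clone -q "ssh://git@example.invalid/upstream" "$tmp/clone"
```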

Do you need an introduction to GitLab CI/CD? I’ve written something about it!

An implementation example using GitLab, but it can be done on any CI/CD system:

    - git config --global credential.helper store
    - echo "https://gitlab-ci-token:${CI_JOB_TOKEN}" > ~/.git-credentials
    - git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}".insteadOf ssh://
    - cargo build

The CI_JOB_TOKEN is a unique token valid only for the duration of the GitLab pipeline. This way, even if a machine is compromised, or logs are leaked, the code is still sound and safe.

What do you think about Rust? If you use it, have you integrated it with your CI/CD systems? Share your thoughts in the comments below, reach me on Twitter (@rpadovani93) or drop me an email at


on March 19, 2022 12:00 AM

March 13, 2022

I invented a solitaire card game. I was thinking about solo roleplaying, and the Carta SRD stuff I did for Borealis, and I was thinking about the idea of the cards forming the board you’re playing on and also being the randomness tool. Then I came up with the central mechanic of the branching tree, and the whole game sorta fell into place, and now I’ve been experimenting with it for a day so I want to write it down.

I’m not sure about the framing device; originally it was a dungeon crawl, but since it’s basically “have a big series of battles” I’m a bit uncomfortable with that story since it’s very murder-y. So maybe it’s a heist where you’re defeating many traps to steal the big diamond from a rich guy’s booby-trapped hoard? Not sure, suggestions welcomed.

The game

Required: a pack of cards. Regular 52-card deck. You don’t need jokers.


Remove the Ace of Spades from the deck and place it in front of you, face up, on the table. This is where you start; the entrance to the dungeon. Deal four cards face down vertically above it in a line. Above that, place the King of Diamonds, face up, so you have six cards in a vertical line, K♦ at the top, 4 face-down cards, A♠ at the bottom. The K♦ is the target, the thing you’re trying to get to (it is treasure, because it is the king, do you see, of diamonds! get it? it’s the biggest diamond). Now return the four face-down cards to the deck and shuffle the deck; this is to ensure that there are exactly four card lengths between A♠ and K♦. Deal one card, face up, over to the left. This is your SKILL score. Its suit is not relevant; its value is the important number. Ace is 1, JQK are 11, 12, 13. Higher is better.

The play: basic

In each round, you confront a monster. Deal one card face up, to the right, which represents the monster, and the monster’s SKILL score (A=1, 2=2, K=13).

Deal three cards face down: these are the monster’s attacks. Deal yourself three cards. Now turn the monster’s cards over. You now pair your cards against the monster’s, in whichever order you please, so there are three pairs. Your score in each pair is your SKILL plus your dealt card; the monster’s score in each pair is its SKILL plus its dealt card.

For each pair where your score is higher than the monster’s, you get a point; for each where the monster wins, you lose a point; a tie scores 0. This means that you will have a score between 3 and -3 for the round.

An example: imagine that you have a SKILL of 7, and you deal the 9♦ as the monster. This means this monster has a SKILL of 9. You then deal three cards for monster attacks; 4♥, J♥, 6♣. You deal yourself three cards: 2♣, 7♦, Q♣. So you elect to pair them as follows:

  • Monster: 4♥ + SKILL 9 = 13 vs. you: 7♦ + SKILL 7 = 14. We win! +1
  • Monster: J♥ (11) + SKILL 9 = 20 vs. you: 2♣ + SKILL 7 = 9. Big loss: -1
  • Monster: 6♣ + SKILL 9 = 15 vs. you: Q♣ (12) + SKILL 7 = 19. We win! +1

So that’s an overall score of +2 this round: you won the round and defeated the monster!
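The round scoring can be sketched as a tiny helper (illustrative only; `pair_score` is a made-up name, fed the example’s values):

```shell
# Score one attack pair: +1 if you beat the monster, -1 if you lose, 0 on a tie.
pair_score() {  # args: your_skill your_card monster_skill monster_card
  you=$(( $1 + $2 ))
  monster=$(( $3 + $4 ))
  if   [ "$you" -gt "$monster" ]; then echo 1
  elif [ "$you" -lt "$monster" ]; then echo -1
  else echo 0
  fi
}

# The example round: your SKILL of 7 against a monster SKILL of 9.
pair_score 7 7  9 4    # 14 vs 13: we win, +1
pair_score 7 2  9 11   # 9 vs 20: big loss, -1
pair_score 7 12 9 6    # 19 vs 15: we win, +1
```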

If your score for the round is positive, then next round when you deal the card for the monster, you can deal this many extra cards and choose the monster you want from them. (So since we won with +2, next round we deal the next three cards out and choose the one we want to be the monster. The other two cards are returned to the pack, which is shuffled.) If your score is negative, then you have to remove that many cards from the dungeon (which will be explained shortly). (The Ace of Spades always stays in place.)

Return the monster attack and your attack cards to the deck and shuffle it ready for the next round. If your round score was negative or zero, or if the monster was a King, then put the monster card in the discard pile. If your score was positive, then add the monster card to the dungeon.

Adding to the dungeon

To add a card to the dungeon, it must be touching the last card that was added to the dungeon (or the Ace of Spades, if no card has yet been added). Rotate the card so that its orientation is the same as the card’s value, on a clock face. So a Queen (value 12) is placed vertically. A 3 is placed horizontally. An Ace is placed pointing diagonally up and to the right. This should be at 30 degrees, but you can eyeball it; don’t get the protractor out. Remember, it must be touching or overlapping the last card that was added. In this way, the path through the dungeon grows. The goal is to have the path reach the King of Diamonds; if there is a continuous path from A♠ to K♦ then you have won!
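The clock-face rule boils down to 30 degrees per hour, which can be sketched as (`card_angle` is a made-up helper, measuring clockwise from vertical, so 12 o’clock is 0 degrees):

```shell
# Map a card value (1-13) to its clock-face angle in degrees from vertical.
# Kings (13) have no clock position, which is why they never join the path.
card_angle() {
  [ "$1" -le 12 ] || return 1  # there is no 13th hour on a clock face
  echo $(( ($1 % 12) * 30 ))
}

card_angle 12   # Queen: 0 degrees, placed vertically
card_angle 3    # 90 degrees, placed horizontally
card_angle 1    # Ace: 30 degrees, up and to the right
```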

  1. The setup
  2. a 3 is placed, touching the Ace of Spades, and pointing in the direction of a 3 on a clock
  3. an 8 is placed, touching the 3 (because the 3 was the last added card). Note that it needs to point “downwards” towards an 8 on a clock face
  4. a Queen is placed, pointing upwards towards the 12

Optional rules, ramblings, and clarifications

That’s the game. Build the dungeon turn by turn, see if you can obtain the treasure, the king of diamonds. Here are some extra thoughts, possible extra rules, and questions that I have which I haven’t yet worked out answers for.

“Special” cards: armour and weapons

It would be nice to add a bit more skill and planning to the game; there’s not a huge amount of agency. So here’s the first optional rule, about armour and weapons. A monster card which is a Club is potentially a weapon. If you deal a monster card that’s a Club, then you can elect to treat it as a weapon instead and put it in your stash. When you have weapons in your stash, you can choose to add the stored weapon to any one attack, and it adds its rank to your score for that attack. (So if you have SKILL of 7, and you play a 3 for an attack, and you have a 4♣ stored as a weapon, then you can play that 4 as well for a total score of 14, not 10.) Similarly, a monster card that’s a Spade can be treated as armour. If you have armour, the next attack you lose will become a tie. So if your score for a round would be -2 (you lost two attacks and tied one) but you have an armour card then you would discard your armour card and that score becomes -1 (one loss, two ties). Clubs and Spades used this way go into the discard pile, not back into the pack.

This is an optional rule because I’m not sure about the balancing of it. In particular, when do you get to add a weapon card? Should you have to add a weapon before the monster attacks are turned over, so it’s a bit of a gamble? Or can you add it when you know whether it’ll win or not? (If yes, then everyone holds weapons until they know they’ll make the difference between a win and a loss, which doesn’t require any skill or judgement to do.)

The length of the dungeon

The distance of 4 cards is based on some rough simulations I ran which suggest that with a 4 card distance a player should win about 5% of the time, which feels about right for a difficult solitaire game; you want to not win that often, but not so infrequently that you doubt that winning is possible. But changing the distance to 3 cards may make a big difference there (it should give a win about one time in 10, in the simulation).

Removing cards

Question: should it be allowed to delete cards in the middle of the path if you lose, thus leaving a gap in the path? You shouldn’t be able to win by reaching the King of Diamonds if there’s a gap, of course, but having gaps mid-game seems OK. However, then you have to be able to add cards to fill the gap, which seems very difficult. This is because we have to require that newly added cards are added to the end of the path; otherwise everyone would make all “negative” cards simply build by touching the Ace of Spades, and so we would never actually go backwards.

Angle of cards

Cards are reversible. So an 8, which should be a negative card, is actually the same as a 2, which is positive. What’s the best way to enforce this? When considering the “clock face” for orientation, does the centre of the clock face have to be in the centre of the most recent card?

Also, kings not being able to add to the path seems a bit arbitrary. Problem is that there aren’t 13 hours on a clock. This can obviously be justified in-universe (maybe kings are boss monsters or something?) but it feels a bit of a wart.

And that’s it

That’s it. Game idea written down, which should hopefully get it out of my head. If anyone else plays it, or has thoughts on the rules, on improvements, or on a theme and setting, I’d love to hear them; @sil on Twitter is probably the easiest way.

on March 13, 2022 10:38 PM

March 09, 2022

We are pleased to announce that Plasma 5.24.3 is now available in our backports PPA for Kubuntu 21.10 (Impish Indri).

The release announcement detailing the new features and improvements in Plasma 5.24.3 can be found here.

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports
or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt full-upgrade


Please note that more bugfix releases are scheduled by KDE for Plasma 5.24, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more rounds of stabilisation/bugfixes ‘baked in’ may find it advisable to stay with Plasma 5.22 as included in the original 21.10 (Impish Indri) release.

The Kubuntu Backports PPA for 21.10 also currently contains newer versions of KDE Gear (formerly Applications) and other KDE software. The PPA will also continue to receive updated versions of KDE packages other than Plasma, for example KDE Frameworks.

Issues with Plasma itself can be reported on the KDE bugtracker [1]. In the case of packaging or other issues, please provide feedback on our mailing list [2], IRC [3], and/or file a bug against our PPA packages [4].

1. KDE bugtracker:
2. Kubuntu-devel mailing list:
3. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on
4. Kubuntu ppa bugs:

on March 09, 2022 01:15 PM

February 27, 2022

Linux Gaming in 2022

Bryan Quigley

Quick thoughts

I'd bet the Steam Deck (and other changes) will have the following impacts on Linux overall by the end of 2022.

  1. The majority of Linux users will run Wayland over X11.
  2. Valve’s Steam Deck is going to double the number of Linux gamers, per Valve's Hardware Survey.
  3. Flatpak/Flathub will ride the Deck wave - usage will double.
  4. Distros will ride the Deck wave - gaming usage will increase by about 20% (1% -> 1.20%).

1 Majority Wayland

Currently we are at less than 10% running Wayland, per Firefox telemetry stats on Phoronix, but there are a lot of movers, namely:

  1. Ubuntu 22.04 LTS will be the first Ubuntu LTS release defaulting to Wayland.
  2. Nvidia drivers explicitly improving Wayland support.
  3. Although the list of remaining issues is still big, KDE Wayland support has been getting a lot of improvements recently and it's the default on some installs.
  4. Steam Deck will be using Wayland.

2 Double Steam Linux users

Right now (January 2022), 1.06% of Steam users are running Linux. I estimate about 340-460k Steam Linux users (Valve put Flatpak installs for November at about 5%; there are approximately 17,000-23,000 users for each update).

We don't have current numbers for Steam Deck reservations, but near launch it was > 100k. Selling 300k-500k seems quite within the realm of possibility. I would also not be surprised if they sold more.

3 Flathub usage doubles


Flatpak is the easiest way to install non-Steam software on a Steam deck - "Yes. You'll be able to install external apps via Flatpak or other software without going into developer mode" - Steam Deck FAQ

The key items I see at first would be Minecraft and the large collection of gaming emulators. It would also be the obvious choice if another studio wanted to bring a game to the Deck.

Valve has avoided picking sides regarding Flatpak vs Snap vs AppImage so far. They still offer a deb from their own download page. Given Steam's user base, just making the Flatpak the default would likely more than double Flatpak usage.

There are more wildcards here:

  • How easy will Flatpak actually be?
  • How hard are other options - snaps (requires dev mode?), AppImage (seems like it might work fine), etc.?

4 Distros ride the wave

I'm expecting at least a 20% increase on the Steam Hardware Survey (so 1% to 1.20%), not including the Steam Deck. Right now, Steam Linux usage is less than half of what you get from other sources. That could mean multiple things:

  • Linux is less likely to be used by gamers
  • Linux users prefer playing awesome open source games (or otherwise don't use Steam)
  • Users switched to gaming on other platforms; many might take a fresh look at Steam on their existing Linux boxes with all the press from the Steam Deck.

All are likely true to some extent.

Linux distros have many options to further ride the wave:

  • Encourage consumer-focused Linux pre-installs with AMD chips similar to what's in the Steam Deck (I know many have asked for more AMD preinstalls for a long while)
  • Enable Flathub/Flatpak by default to make apps easier to find (Steam itself is not discoverable on Ubuntu unless you enable Flatpak; if Flatpak takes off more, this will be quite essential)
  • Gaming on Linux has been discussed more on mainstream tech shows than at any point in my memory. This is a marketing opportunity to not pass up.
  • Help support Game studios with Linux porting and compatibility. (or show other stores they can come!)


Do you think these will all come to pass? Was I way off? Add a comment via Gitlab

on February 27, 2022 09:07 AM

Small EInk Phone

Bryan Quigley

Update 2022-02-26: Only got 12 responses which likely means there isn't that much demand for this product at this time (or it wasn't interesting enough to spread). Here are the results as promised:

What's the most you would be willing to spend on this? 7 - $200, 4 - $400. But that doesn't quite capture it: some wanted even cheaper than $200 (which isn't doable) and others were willing to spend a lot more.

Of the priorities that got at least 2 people agreeing (ignoring rating):

  • 4 - Openness of components, Software Investments
  • 3 - Better Modem, Headphone Jack, Cheaper Price
  • 2 - Convergence Capable, Color eInk, Replaceable Battery

I'd guess about half of the respondents would likely be happy with a PinePhone (Pro) that got better battery life and "Just Works".

End Update.

Would you be interested in crowdfunding a small E Ink Open Phone? If yes, check out the specs and fill out the form below.

If I get 1000 interested people, I'll approach manufacturers. I plan to share the results publicly in either case. I will never share your information with manufacturers, but I will contact you by email if this goes forward.


  • Small sized for 2021 (somewhere between 4.5 - 5.2 inches)
  • E Ink screen (Maybe Color) - battery life over playing videos/games
  • To be shipped with one of the main Linux phone OSes (Manjaro with KDE Plasma, etc).
  • Low to moderate hardware specs
  • Likely >6 months from purchase to getting device

Minimum goal specs (we might be able to do much better than these, but again might not):

  • 4 Core
  • 32 GB Storage
  • USB Type-C (Not necessarily display out capable)
  • ~8 MP Front camera
  • GPS
  • GSM Modem (US)

Software Goals:

  • Only open source apps pre-installed
  • Phone calls
  • View websites / webapps including at least 1 rideshare/taxi service working (may not be official)
  • 2 day battery life (during "normal" usage)

Discussions: Phoronix

on February 27, 2022 08:50 AM

February 24, 2022

Thanks to all the hard work of our contributors, we are pleased to announce that Lubuntu 20.04.4 LTS has been released. What is Lubuntu? Lubuntu is an official Ubuntu flavor that uses the lightweight Qt desktop environment (LXQt). The project’s goal is to provide a lightweight yet functional Linux distribution based on […]
on February 24, 2022 11:29 PM

February 18, 2022

As of 2022-02-16, Launchpad supports a couple of features on its SSH endpoints (,,, and that it previously didn’t: Ed25519 public keys (a well-regarded format, supported by OpenSSH since 6.5 in 2014) and signatures with existing RSA public keys using SHA-2 rather than SHA-1 (supported by OpenSSH since 7.2 in 2016).
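For users, opting in is just a matter of generating an Ed25519 keypair with OpenSSH and registering the public half with Launchpad (a standard sketch; the output path and comment are arbitrary):

```shell
# Generate an Ed25519 keypair into a throwaway directory; normally you'd
# accept the default ~/.ssh/id_ed25519 path and set a passphrase.
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -C "launchpad" -f "$keydir/id_ed25519"

# This is the public half to register in your Launchpad SSH keys page.
cat "$keydir/id_ed25519.pub"
```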

I’m hesitant to call these features “new”, since they’ve been around for a long time elsewhere, and people might quite reasonably ask why it’s taken us so long. The problem has always been that Launchpad can’t really use a normal SSH server such as OpenSSH because it needs features that aren’t practical to implement that way, such as virtual filesystems and dynamic user key authorization against the Launchpad database. Instead, we use Twisted Conch, which is a very extensible Python SSH implementation that has generally served us well. The downside is that, because it’s an independent implementation and one that occupies a relatively small niche, it often lags behind in terms of newer protocol features.

Catching up to this point has been something we’ve been working on for around five years, although it’s taken a painfully long time for a variety of reasons which I thought some people might find interesting to go into, at least people who have the patience for details of the SSH protocol. Many of the delays were my own responsibility, although realistically we probably couldn’t have added Ed25519 support before OpenSSL/cryptography work that landed in 2019.

  • In 2015, we did some similar work on SHA-2 key exchange and MAC algorithms.
  • In 2016, various other contributors were working on ECDSA and Ed25519 support (e.g. #533 and #644). At the time, it seemed best to keep an eye on this but mainly leave them to it. I’m very glad that some people worked on this before me - studying their PRs helped a lot, even parts that didn’t end up being merged directly.
  • In 2017, it became clear that this was likely to need some more attention, but before we could do anything else we had to revamp Launchpad’s build system to use pip rather than buildout, since without that we couldn’t upgrade to any newer versions of Twisted. That proved to be a substantial piece of yak-shaving: first we had to upgrade Launchpad off Ubuntu 12.04, and then the actual build system rewrite was a complicated project of its own.
  • In 2018, I fixed an authentication hang that happened if a client even tried to offer ECDSA or Ed25519 public keys to Launchpad, and we got ECDSA support fully working in Launchpad. We also discovered as a result of automated interoperability tests run as part of the Debian OpenSSH packaging that Twisted needed to gain support for the new openssh-key-v1 private key format, which became a prerequisite for Ed25519 support since OpenSSH only ever writes those keys in the new format, and so I fixed that.
  • In 2019, Python’s cryptography package gained support for X25519 (the Diffie-Hellman key exchange function based on Curve25519) and Ed25519, and it became somewhat practical to add support to Twisted on top of that. However, it required OpenSSL 1.1.1b, and it seemed unlikely that we would be in a position to upgrade all the relevant bits of Launchpad’s infrastructure to use that in the near term. I at least managed to add curve25519-sha256 key exchange support to Twisted based on some previous work by another contributor, and I prepared support for Ed25519 keys in Twisted even though I knew we weren’t going to be able to use it yet.
  • 2020 was … well, everyone knows what 2020 was like, plus we had a new baby. I did some experimentation in spare moments, but I didn’t really have the focus to be able to move this sort of complex problem forward.
  • In 2021, I bit the bullet and started seriously working on fallback mechanisms to allow us to use Ed25519 even on systems lacking a sufficient version of OpenSSL, though found myself blocked on figuring out type-checking issues following a code review. It then became clear on the release of OpenSSH 8.8 that we were going to have to deal with RSA SHA-2 signatures as well, since otherwise OpenSSH in Ubuntu soon wouldn’t be able to authenticate to Launchpad by default (which also caused me to delay uploading 8.8 to Debian unstable for a while). To deal with that, I first had to add SSH extension negotiation to Twisted.
  • Finally, in 2022, I added RSA SHA-2 signature support to Twisted, finally unblocked myself on the type-checking issue with the Ed25519 fallback mechanism, quickly put together a similar fallback mechanism for X25519, backported the whole mess to Twisted 20.3.0 since we currently can’t use anything newer due to the somewhat old version of Python 3 that we’re running, promptly ran into and fixed a regression that affected SFTP uploads to and, and finally added Ed25519 as a permissible key type in Launchpad’s authserver.

Phew! Thanks to everyone who works on Twisted, cryptography, and OpenSSL - it’s been really useful to be able to build on solid lower-level cryptographic primitives - and to those who helped with code review.

on February 18, 2022 01:49 PM

February 16, 2022

New domain names for PPAs

Launchpad News

Since they were introduced in 2007, Launchpad’s Personal Package Archives (PPAs) have always been hosted on This has generally worked well, but one significant snag became clear later on: it was difficult to add HTTPS support for PPAs due to the way that cookies work on the web.

Launchpad uses a cookie for your login session, which is of course security-critical, and because we use multiple domain names for the main web application (,, and so on), the session cookie domain has to be set to allow subdomains of We set the “Secure” flag on session cookies to ensure that browsers only ever send them over HTTPS, as well as the “HttpOnly” flag to prevent direct access to it from JavaScript; but there are still ways in which arbitrary JS on an HTTPS subdomain of might be able to exfiltrate or abuse users’ session cookies. As a result, we can never allow any HTTPS subdomain of to publish completely user-generated HTML that we don’t process first.

We don’t currently know of a way to get to serve arbitrary HTML as Content-Type: text/html, but this is quite a brittle protection as there are certainly ways (used for things like installer uploads) to upload arbitrary files to under a user-controlled directory structure, and we don’t want the webapp’s security to depend on nobody figuring out how to convince a browser to interpret any of that as arbitrary HTML. The librarian is already on a separate domain name for a similar reason.

To resolve this dilemma, we’ve added a new domain name which supports both HTTP and HTTPS (and similarly for private PPAs, which as before is HTTPS-only). add-apt-repository in Ubuntu 22.04 will use the new domain name by default.

The old names will carry on working indefinitely – we know they’re embedded in lots of configuration and scripts, and we have no inclination to break all of those – but we recommend moving to the new names where possible. will remain HTTP-only.

Some systems may need to be updated to support the new domain name, particularly things like HTTP(S) proxy configuration files and no_proxy environment variables.

on February 16, 2022 06:08 PM

February 11, 2022

  • So, you want to remove that pesky file from your last commit?
  • By accident (naturally, as you and me are perfect beings) a file was committed that should not have been?
  • The cat went over the keyboard and now there’s an extra file in your commit?

If the answer to any of the above is yes, here’s how to do it without pain (taking into account that you want to do it on the last commit; if you need to do it in the middle of a rebase, see the previous post, or combine this trick with a rebase (edit a commit with a rebase…)).

git restore --source=HEAD^ pesky.file

You can always check out the file again, or use some witchcraft extracted from man git-restore to do it all at once.
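Here is a worked sketch of the whole flow in a throwaway repository (assumptions: the pesky change existed in the previous commit, and the --staged/--worktree pair plus git commit --amend is one way to do it all at once):

```shell
# Build a tiny repo with an accidental change in the last commit.
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email demo@example.invalid
git config user.name Demo

echo "good" > pesky.file
git add pesky.file && git commit -qm "base"

echo "wanted" > wanted.txt   # the change we meant to commit
echo "oops" > pesky.file     # the cat went over the keyboard
git add -A && git commit -qm "last commit"

# Restore pesky.file from HEAD^ in both the index and the working tree,
# then fold the fix into the last commit.
git restore --source=HEAD^ --staged --worktree pesky.file
git commit -q --amend --no-edit
```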

This is blatantly stolen from: but also man git could get you there too, given enough reading.

on February 11, 2022 12:00 AM

February 07, 2022

KUserFeedback 1.2.0

Jonathan Riddell

KUserFeedback is a library for collecting user feedback for apps via telemetry and surveys.
Version 1.2.0 is now available for packaging.

Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell <>


  • bump version for new release
  • opengl source: Do not crash if we could not make our context current
  • Add Linux Qt6 CI
  • Make UserFeedbackConsole build with Qt6
  • Fix linking libKUserFeedbackCommon.a
  • Build with C++17
  • Build docs on Qt6 too
  • Adapt CMake code to make it build with Qt6
  • Add auto generated files to .gitignore
  • Add FreeBSD CI
  • Add Android CI
  • Enable Linux CI
  • Fix typos found by codespell
  • Qt 6: Replace calls to removed QDateTime(QDate) constructor
  • Qt 6: Fix issues caused by size() returning a qsizetype
  • Qt 6: Remove QNetworkRequest::FollowRedirectsAttribute
  • Replace declarations of QVector, QStringList
  • CMake: Allow building with Qt 6
  • Qt 6: Replace QMap<QVariant …
  • Qt 6: Fix signature of methods for QQmlListProperty
  • Fix cmake warning
  • Port away from ECMSetupVersion's deprecated *_VERSION_STRING CMake variable
  • Make the survey expression variant comparison work with Qt6 as well
  • Use non-deprecated QStandardPaths enum values
  • Make QString to QUuid conversion explicit
  • fix Windows compile, no unistd.h needed
  • Update historical links to
  • Check for invalid JSON first, then for empty objects
  • Don't record telemetry-less survey queries, that just produces empty rows
  • Fix php unit tests
  • Also record the device pixel ratio
  • [server] Convert JSON fetch to stream data
  • Provider: add API to restore default user-visible settings
  • Always show the "View previously submitted data…" link
  • Add appdata file for UserFeedbackConsole
  • KUserFeedback: Convert license headers to SPDX expressions
  • Fix area charts with Qt 5.14
  • Make it compile without deprecated method

on February 07, 2022 12:00 PM

The CMA, the UK’s regulator of business competition and markets, what the USA calls “antitrust”, is conducting a study into mobile platforms and the mobile ecosystem. You may recall that I and others presented to the CMA in September 2021 about Apple’s browser ban. They have invited public comments, and they honestly are eager to hear from people: not solely big players with big legal submissions, but real web developers. But the time is nigh: they need to hear from you by 5pm UK time today, Monday 7th February 2022.

Bruce Lawson, who I presented with, has summarised the CMA’s interim report. What’s important for our perspectives today is how they feel about mobile browsers. In particular, they call out how on Apple’s iOS devices, there is only one browser: Safari. While other browser names do exist — Chrome, Firefox, and the like — they are all Safari dressed up in different clothes. It’s been surprising how many developers didn’t realise this: check out the Twitter hashtag #AppleBrowserBan for more on that. So the CMA are looking for feedback and comments from anyone in the UK or who does any business in the UK, on how you feel about the mobile ecosystem of apps and browsers in general, and how you feel about the browser landscape specifically. Did you decide to use the web, or not use the web, on mobile devices in a way that felt like you had no choice? Do you feel like web apps are a match for native apps or not?

If you’re a web developer, you may have already been part of, or encountered, some parts of this discussion on social media already. And that may mean that you’re a bit tired of it, because it can be quite bad-tempered in places, and because there’s an awful lot of disingenuous argument. People on or closely allied with browser vendors have a very bad habit of wholly ignoring problems in their own camp while loudly calling out problems in others: this is not of course specific to browser developers (everybody does this in arguments!) but it’s pretty annoying. Chrome defenders generally divert the conversation away from privacy, data collection, and Google’s desire to inhale all your data from everywhere: I’ve asked in the past about Chrome integrating with the platform it’s on and have been unembarrassedly told that “Chrome is the platform”, which I’m sure sounds great if you’re Google and maybe not for everyone else. And Safari developers tend to pretend to be deaf when asked questions they don’t like, and then complain that they get no engagement after haughtily refusing any that isn’t on their own terms. Yes, this is all very irritating, I agree with you.

But here’s the thing: you do not have to take a side. This does not have to be tribal. The CMA want your opinions by the end of today, and you don’t have to feel like you’re striking a blow in the browser wars or a blow for privacy or a blow against native by doing so. The thing that every survey and every poll always flags is that they only hear from people with an axe to grind. You don’t have to feel like you’re defending one side or another to have an opinion on how you think that mobile platforms treat the web, and how you think they could do that differently. Be on the side of the web: that’s the right side to be on, because there’s no other side. (Nobody claims to be against the web — nobody with any sense or knowledge of history, anyway — so you know it’s a good place to be.)

Bruce has very excellently provided some sample responses to the CMA along with guidance about points you may want to cover or include, and you should definitely read that. But get your skates on: responses have to be in today.

Send email to And do post your response publicly if you can, so others can see it, learn from it, and see that you’ve done it.

on February 07, 2022 09:00 AM