May 15, 2021

Are you using Kubuntu 21.04 Hirsute Hippo, our current Stable release? Or are you already running our development builds of the upcoming 21.10 Impish Indri?

We currently have Plasma 5.21.90 (Plasma 5.22 Beta) available in our Beta PPA for Kubuntu 21.04 and the 21.10 development series.

However, this is a beta release, and we should reiterate the disclaimer from the upstream release announcement:

DISCLAIMER: This is beta software and is released for testing purposes. You are advised to NOT use Plasma 5.22 Beta in a production environment or as your daily desktop. If you do install Plasma 5.22 Beta, you must be prepared to encounter (and report to the creators) bugs that may interfere with your day-to-day use of your computer.

If you are prepared to test, then…

Add the beta PPA and then upgrade:

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt full-upgrade -y

Then reboot.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.
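Reverting is straightforward; a sketch of the usual steps (assuming an Ubuntu system with sudo, and ppa-purge available in the archive):

```shell
# Install ppa-purge, then disable the beta PPA and downgrade its
# packages back to the versions in the standard Ubuntu archive:
sudo apt install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/beta
```

A reboot afterwards is advisable so that the reverted Plasma session starts cleanly.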

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can post testing feedback to the Kubuntu team as a bug, or give feedback on IRC [1], or mailing lists [2].
  • If you believe you have found a bug in the underlying software, then the upstream KDE bug tracker is the best place to file your bug report.

Please review the release announcement and changelog.

[Test Case]
* General tests:
– Does the Plasma desktop start as normal with no apparent regressions over 5.21?
– General workflow – testers should carry out their normal tasks, using the Plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.
* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing.
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing may involve some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.


Please stop by the Kubuntu-devel IRC channel if you need clarification of any of the steps to follow.

[1] – irc://
[2] –

on May 15, 2021 09:59 AM

May 14, 2021

If you are in the USA - Please use my new site to write to your congresspeople asking for summer time all year long.

The USA has an active bill in Congress to stop the clock changes and keep the country on summer time year round (also called permanent DST). Changing the clocks has not been shown to have substantial benefits, and its harms have been well documented.

For global communities - like FLOSS -

  • It makes it that much harder to schedule across the world.
  • The majority of the world does not do clock switching. It's generally EU/US specific.

If you are in the USA - Please use my new site to write to your congresspeople asking for summer time all year long.

If you want to help out

  • the site is all available on GitHub, although the actual contact-Congress bit is from ActionNetwork.
  • I'd be very happy to make this site global in nature for all of us stuck with unstable time. Please get in touch!
on May 14, 2021 07:45 PM

S14E10 – Stars Grew Firmly

Ubuntu Podcast from the UK LoCo

This week we’ve been playing with RISC-V. We discuss the future of Ubuntu releases, bring you some command line love and go over all your wonderful feedback.

It’s Season 14 Episode 10 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send us your comments and suggestions, or Tweet us, Toot us, comment on our Facebook page, or comment on our sub-Reddit.

on May 14, 2021 09:30 AM

May 13, 2021

Ep 142 – Autonomia Digital

Podcast Ubuntu Portugal

In this episode of Podcast Ubuntu Portugal we talked about the census, the Wiki-Lusofonia Festival, audio applications and sounds, mobile image-sharing applications, and the increasingly popular OBS Ninja, whose number one fan is Diogo!

You know the drill: listen, subscribe and share!



You can support the podcast using the Humble Bundle affiliate links; when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of that for 15 dollars, or different parts depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the notes, use the link and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo, and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License.

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International licence (CC BY-NC-ND 4.0), whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on May 13, 2021 09:45 PM

Sometimes, applications may not run well, or they could even crash. When such issues occur, it is useful to have a consistent, reproducible method of triggering the problem, so that developers can have a reliable way and sufficient data to troubleshoot the issues and produce a fix. In the software world, the GNU Debugger (gdb) is a powerful tool that allows developers to do just that.

With snaps, things are slightly more complicated. Snaps run as isolated, self-contained applications, with strong security confinement. They are managed and launched by the snapd service. This means that if you were to invoke gdb to troubleshoot snaps exhibiting startup or runtime issues, the actual application execution will be masked by the snapd processes that wrap it. To work around this phenomenon, and give developers the right tools for the job, the snap daemon also includes gdbserver, which allows users to inspect their applications in a manner that is very similar to the classic Linux system.

Invoke gdbserver

If you run a snap with the --gdb flag, gdb will launch and behave just as it would if called against the same executable outside of the snap environment. Alternatively, you can use --gdbserver, which also allows you to connect to gdb remotely, and offers the ability to run applications as a non-privileged (standard) user.

The syntax is as follows:

snap run --gdbserver <snap name>

This will start the snap, stop the execution at the entry point, instantiate gdb, and allow remote access to it via a random high port that will be printed on the command line.

snap run --gdbserver snapster

Welcome to "snap run --gdbserver".
You are right before your application is run.
Please open a different terminal and run:

gdb -ex="target remote :44626" -ex=continue -ex="signal SIGCONT"
(gdb) continue

or use your favorite gdb frontend and connect to :44626

Practical example

Let’s take a look at an application that crashes when invoked. If you’d like to follow along, please take a look at the following gist for details on the actual source code, the compilation flags, and the segfexample snap snapcraft.yaml contents.

snap run --gdbserver segfexample

Welcome to "snap run --gdbserver".
You are right before your application is run.
Please open a different terminal and run:

In a separate terminal window, running gdb -ex="target remote :34621" shows the following output:

gdb -ex="target remote :34621" -ex=continue -ex="signal SIGCONT"
GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".

To resume execution, you need to type cont until you hit the issue. Mind, like most issues that you can troubleshoot with gdb, these need to be time-independent and reproducible:

Program received signal SIGCONT, Continued.
0x00007f1255a0918b in raise () from target:/lib/x86_64-linux-gnu/

Reading /lib/x86_64-linux-gnu/ from remote target…
Reading /lib/x86_64-linux-gnu/ from remote target…
Reading /lib/x86_64-linux-gnu/.debug/ from remote target…

Program received signal SIGSEGV, Segmentation fault.
0x0000561bbcd5e6be in main () at source.c:11
11            pointer[i]=i;

Once you know where the issue occurs, you can troubleshoot like you normally would. You can set conditions, breakpoints, disassemble the code, and more. You will also need debug symbols for the specific versions of the libraries that your application uses to be able to decipher the functions where the bug occurs. If you use libraries available from your system archives, you can install the matching debug packages. If you’re developing your own application, either compile it with symbols, or load the symbols into the debugger.



Troubleshooting problems is never easy. Sometimes, there can be complex issues in software, and it can take time and effort to resolve them. To that end, development ecosystems should be designed with as much foresight and flexibility as possible, to provide developers with a friendly and efficient setup that can help them fix code errors. With gdbserver, snapd offers snap publishers (and users) a convenient way of investigating and resolving repeatable bugs and crashes without compromising on the benefits of the snap security confinement.

If you have any suggestions on how to make application troubleshooting even more useful and resilient, please join our forum and share your thoughts.

Photo by christian buehner on Unsplash.

on May 13, 2021 11:34 AM

The Ubuntu in the wild blog post rounds up the latest highlights about Ubuntu and Canonical around the world on a bi-weekly basis. It is a summary of all the things that made us feel proud to be part of this journey. What do you think of it?

Ubuntu in the wild

Open source and software complexity

Open source is the backbone of the software industry as we know it today. It has its benefits and its drawbacks. The most notable one is complexity: software management can be a real challenge, and companies need to understand how to handle it properly to be successful. Alex Chalkias, Product Manager at Canonical, explores the implications of open source complexity and what can be done to overcome it.

Read more on that here!

AMD Ryzen 9 5900X: Windows 10 vs Ubuntu 21.04

Phoronix benchmarked the latest Microsoft Windows 10 against Ubuntu 21.04 on the same AMD Ryzen 9 5900X system. Among other things, they found out that Ubuntu 21.04 was about 8% faster than the latest Windows 10 build.

Read the full benchmarks report here! 

Open source AI stack: is it the right time?

After the LAMP stack (Linux, Apache, MySQL, PHP), it is time for an open source software stack for AI development. Innovative companies are coming together to create a set of tools that data scientists and data engineers can use to build an end-to-end AI/ML platform. This article goes over the genesis of the AI Infrastructure Alliance (AIIA).

Read more on that here! 

Making Edge AI accessible

Xilinx launched a new family of adaptive system-on-modules (SOM) designed to facilitate the adoption of edge AI. They will come with Ubuntu Linux support to allow developers to use their preferred environment with pre-built software infrastructure and helpful utilities. 

Read more on that here!

on May 13, 2021 08:15 AM

May 11, 2021

Do you know a person, project or organisation doing great work in open tech in the UK? We want to hear about it. We are looking for nominations for people and projects working on open source software, hardware and data. We are looking for companies or organisations working in fintech with open source, helping achieve the objectives of any of the United Nations Sustainable Development Goals. Nominations are open for projects, organisations and individuals that demonstrate outstanding contribution and impact for the diversity and inclusion ecosystem. This includes solving unique challenges, emphasising transparency of opportunities, mentorship, coaching and nurturing the creation of diverse, inclusive and neurodiverse communities. And individuals who you admire, either under 25 or of any age.

Self nominations are welcome and encouraged. You can also nominate in more than one category.

Nominations may be submitted until 11.59pm on 13 June 2021.

Awards Event 11 November 2021.

Those categories again:

Hardware – sponsored by The Stack
Software – sponsored by GitLab
Financial Services – sponsored by FINOS
Sustainability – sponsored by Centre for Net Zero
Belonging Network – sponsored by Osmii
Young Person (under 25) – sponsored by JetStack
Individual – sponsored by Open Source Connections

Read more and find the nomination form on the OpenUK website.

Winners of Awards 2020, First edition

Young Person • Josh Lowe
Individual • Liz Rice
Financial Services and Fintech in Open Source • Parity
Open Data • National Library of Wales
Open Hardware • lowRISC
Open Source Software • HospitalRun

on May 11, 2021 03:53 PM

Keeping It Brief

Stephen Michael Kellat

In no particular order:

  • I’ve been doing some junkbox surgery to resuscitate an amd64 desktop. It will be running Xubuntu and should be running Impish Indri if I ever get the USB stick written to do the install. Dr. Frankenstein would be proud of this monster I created, I think.
  • It was snowing here on Saturday. I had a bounceback function in one application show me some pictures I took a year ago and apparently it was snowing here then too.
  • I do need to get set up to be able to receive and decode radiofax transmissions made on the published schedules.
  • There has been some monumental scrollback that I have been encountering in Telegram. I do eventually get through reading it. There’s just a bunch of change happening and I can’t be all that present online. Some of what is happening is quite unexpected. All I can do is improvise, adapt, and overcome.
  • The next step in typesetting is trying to figure out what I am not getting right in usage of the bookcover package from CTAN.

Tags: Change

on May 11, 2021 04:31 AM

May 10, 2021

Welcome to the Ubuntu Weekly Newsletter, Issue 682 for the week of May 2 – 8, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on May 10, 2021 10:43 PM

The Big Iron Hippo

Elizabeth K. Joseph

It’s been about a year since I last wrote about an Ubuntu release on IBM Z (colloquially known as “mainframes” and nicknamed “Big Iron”). In my first year at IBM my focus really was Linux on Z, along with other open source software like KVM and how that provides support for common tools via libvirt to make management of VMs on IBM Z almost trivial for most Linux folks. Last year I was able to start digging a little into the more traditional systems for IBM Z: z/OS and z/VM. While I’m no expert, by far, I have obtained a glimpse into just how powerful these operating systems are, and it’s impressive.

This year, with this extra background, I’m coming back with a hyper focus on Linux, and that’s making me appreciate the advancements with every Linux kernel and distribution release. Engineers at IBM, SUSE, Red Hat, and Canonical have invested in IBM Z, backing that investment with kernel and other support for IBM Z hardware.

So it’s always exciting to see the Ubuntu release blog post from Frank Heimes over at Canonical! And the one for Hirsute Hippo is no exception: The ‘Hippo’ is out in the wild – Ubuntu 21.04 got released!

Several updates to the kernel! A great, continued focus on virtualization and containers! I can already see that the next LTS, coming out in the spring of 2022, is going to be a really impressive one for Ubuntu on IBM Z and LinuxONE.

on May 10, 2021 08:07 PM

Here are some uploads for April.

2021-04-06: Upload package bundlewrap (4.7.1-1) to Debian unstable.

2021-04-06: Upload package calamares ( to Debian experimental.

2021-04-06: Upload package flask-caching (1.10.1-1) to Debian unstable.

2021-04-06: Upload package xabacus (8.3.5-1) to Debian unstable.

2021-04-06: Upload package python-aniso8601 (9.0.1-1) to Debian experimental.

2021-04-07: Upload package gdisk (1.0.7-1) to Debian unstable.

2021-04-07: Upload package gnome-shell-extension-disconnect-wifi (28-1) to Debian unstable.

2021-04-07: Upload package gnome-shell-extension-draw-on-your-screen (11-1) to Debian unstable.

2021-04-12: Upload package s-tui (1.1.1-1) to Debian experimental.

2021-04-12: Upload package speedtest-cli (2.1.3-1) to Debian unstable.

2021-04-19: Sponsor package bitwise (0.42-1) to Debian unstable (e-mail request).

2021-04-19: Upload package speedtest-cli (2.1.3-2) to Debian unstable.

2021-04-23: Upload package speedtest-cli (2.0.2-1+deb10u2) to Debian buster (Closes: #986637)

on May 10, 2021 03:01 PM

Writing software is similar to translating from one language to another. Specifically, it is similar to translating from your native language to some other language. You are translating to that other language so that you can help those others do some task for you. You might not understand this other language very well, and some concepts might be difficult to express in the other language. You are doing your best though when translating, but as we know, some things can get lost in translation.

On software testing

When writing software, some things do get lost in translation. You know what your software should do, but you need to express your needs in the particular programming language that you are using. Even small pieces of software will have some sort of problems, which are called software defects. There is a whole field in computer science called software testing, whose goal is to find such software defects early, so that they get fixed before the software is released and reaches the market. When you buy a software package, it has gone through intensive software testing. Because if a customer uses the software package and it then crashes or malfunctions, it reflects really poorly on the vendor. They might even return the software and demand their money back!

In the field of software testing, you try to identify actions that a typical customer will likely perform and that may crash the software. If you could, you would find all possible software defects and have them fixed. But in reality, identifying all software defects is not possible. This is a hard fact and a known issue in software testing; no matter how hard you try, there will still be some more software defects.

This post is about security though, not about software testing. What gives? Well, a software defect can make the software malfunction. This malfunctioning can make the software perform an action that was not intended by the software developers. It can make the software do what some attacker wants it to do. Far-fetched? Not at all. This is what a big part of computer security works on.

Security fuzzing

When security researchers perform software testing with an aim of finding software defects, we say that they are performing security fuzzing, or just fuzzing. Therefore, fuzzing is similar to software testing, but with the focus on identifying ways to make the software malfunction in a really bad way.

Security researchers find security vulnerabilities, ways to break into a computer system. This means that fuzzing is the first half of the job to find security vulnerabilities. The second part is to analyse each software defect and try to figure out, if possible, a way to break into the system. In this post we are only focusing on the first part of the job.

Defects and vulnerabilities

Are all software defects potential candidates for a security vulnerability? Consider the example of a text editor. If you are using the text editor only to edit your own documents, and never to open downloaded text documents, then there is no chance of a security vulnerability, because an attacker would have no way to influence the text editor: no input of the text editor would be exposed to the attacker.

A text editor.

However, most computers are connected to the Internet. And most operating systems, whether Windows, macOS or a Linux distribution, are pre-configured to open text documents with a text editor. If you are browsing the Internet, you may find an interesting text document and decide to download and open it on your computer. Or you may receive an email with a text document attached. In both cases, the document file is fully in the control of an attacker, which means that the attacker can modify any aspect of that file. A Word document is a ZIP file that contains several individual files. There are opportunities to modify any of the individual files, ZIP it back into a Word document and try to open it. If you get a crash, you have successfully fuzzed the application, in a manual way. If you manage to crash the application simply by editing a document in the course of your own work, then you are a natural at security fuzzing. Just keep a copy of that exact crashing document, because it could be gold to a security researcher.

If you rename a Word document and change the extension to .zip, you can open it with a ZIP file manager and see the individual files inside it.

Artificial intelligence

If there is a complex task that a person could do but it is tedious and expensive, then you can either use a computer and make it work just like a person would, or break the task down into a simpler but repetitive form that is suitable for a computer. The latter is quite enticing, because computing power is way cheaper and more abundant than employing an expert.

Suppose you want to recognize apples in digital images. You can either employ an apple expert to identify whether there is an apple in a photograph (any variety of apple); or get an expert to share their domain knowledge of apples and have them help create software that understands all shapes and colors of apples; or obtain several thousand photos of different apples and train an AI system to detect apples in new images.

Employing a domain expert to manually identify the apples does not scale. Developing software using domain knowledge does not scale easily to, let’s say, other fruits. And developing this domain-specific software is also expensive compared to training an AI system to detect the specific objects.

Similarly with security fuzzing. A security expert working manually does not scale, and the process is expensive to perform repeatedly. Developing software that acts exactly like a security expert is also expensive, as the software would have to capture the whole domain knowledge of software security. The next best option is to break the problem into smaller tasks, and use primarily cheap computer power.

Advanced Fuzzing League++

And that leads us to the Advanced Fuzzing League++ (afl++). It is security fuzzing software that requires lots of computer power; it runs the software that we are testing many times, with slightly different inputs each time, and looks at whether any of the attempts have managed to lead to a software crash.

afl++ does security fuzzing, and this is just the first part of the security work. A security researcher will take the results of the fuzzing (i.e. the list of crash reports) and manually look at whether these can be exploited so that an attacker can make the software let them in.


Up to now, afl++ has been developed so that it can use as much computer power as possible. There are many ways to parallelise the work across multiple computers.

afl++ uses software instrumentation. When you have access to the source code, you can recompile it in a special way so that, when afl++ does the fuzzing, afl++ will know whether a new input causes the execution to reach new, unexplored areas of the executable. This helps afl++ expand its coverage across the whole executable.

afl++ does not automatically recognize the different inputs to a software. You have to guide it whether the input is from the command-line, from the network, or elsewhere.

afl++ can be fine-tuned to perform even better. Running an executable repeatedly from scratch is not as performant as just running the same main function of the executable repeatedly.

afl++ can be used whether you have the source code of the software or whether you do not have it.

afl++ can fuzz binaries from a different architecture than your fuzzing server. It uses QEMU for hardware virtualization and can also use CPU emulation through Unicorn.

afl++ has captured the mindshare on security fuzzing, and there are more and more new efforts to expand support to different things. For example, there is support for Frida (dynamic instrumentation).

afl++ has a steep learning curve. Good introductory tutorials are hard to find.
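As a rough orientation, a typical source-instrumented workflow looks like the following sketch. It assumes afl++ is installed and that the target builds with make; the directory and target names are invented for illustration:

```shell
# Build the target with afl++ compile-time instrumentation:
make CC=afl-clang-fast

# Fuzz it: -i is the seed corpus, -o the findings directory; @@
# marks where afl++ substitutes each mutated input file on the
# target's command line:
afl-fuzz -i seeds/ -o findings/ -- ./target @@

# Crashing inputs accumulate under findings/default/crashes/
# for later manual analysis.
```

This is only the skeleton; real campaigns add dictionaries, parallel instances, and persistent-mode harnesses on top of it.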

on May 10, 2021 01:09 PM

May 09, 2021


This post explores some of the darker corners of command-line parsing that some may be unaware of.

You might want to grab a coffee.


No, I’m not questioning your debating skills, I’m referring to parsing command-lines!

Parsing command-line options is something most programmers need to deal with at some point. Every language of note provides some sort of facility for handling command-line options. All a programmer needs to do is skim the docs or grab the sample code, tweak to taste, et voilà!

But is it that simple? Do you really understand what is going on? I would suggest that most programmers really don’t think that much about it. Handling the parsing of command-line options is just something you bolt on to your codebase. And then you move onto the more interesting stuff. Yes, it really does tend to be that easy and everything just works… most of the time.

Most? I hit an interesting issue recently which expanded in scope somewhat. It might raise an eyebrow for some or be a minor bombshell for others.



Back in the mists of time (~2012), I wrote a simple CLI utility in C called utfout. utfout is a simple tool that basically produces output. It’s like echo(1) or printf(3), but maybe slightly better ;)

Unsurprisingly, utfout uses the ubiquitous and venerable getopt(3) library function to parse the command-line. Specifically, utfout relies on getopt(3) to:

  • Parse the command-line arguments in strict order.
  • Handle multiple identical options as and when they occur.
  • Handle positional (non-option) arguments.

(Note: We’re going to come back to the term “in strict order” later. But, for now, let’s move on).

One interesting aspect of utfout is that it allows the specification of a repeat value so you can do something like this to display “hello” three times:

$ utfout "hello" -r 2

That -r repeat option takes an integer as the repeat value. But the integer can be specified as -1 meaning “repeat forever”. Looking at such a command-line we have:

$ utfout "hello\n" -r -1

We’re going to come back to examples like this later. For now, just remember that this option can accept a numeric (negative) value.


Recently, I decided to rewrite utfout in rust. Hence, rout was born (well, it’s almost been born: I’m currently writing a test suite for it, but I should be releasing it soon).

When I started working on rout, I looked for a rust command-line argument parsing crate that had the semantics of getopt(3). Although there are getopt() clones, I wanted something a little more “rust-like”. The main contenders didn’t work out for various reasons (I’ll come back to this a little later), and since I was looking for an excuse to write some more rust, I decided, in the best tradition of basically every programmer ever, to reinvent the wheel and write my own. This was fun. But more than that, I uncovered some interesting behavioural points that may be unknown to many. More on this later.

I soon had some command-line argument parsing code, and since it ended up being useful to me, I published it as my first rust crate. It’s called ap for “argument parser”. Not a very creative name maybe, but succinct and simple, like the crate itself.

By this stage, the rout codebase was coming along nicely and it was time to add the CLI parsing. But when I added ap to rout and tried running the repeat command (-r -1), it failed. The problem? ap was assuming that -1 was a command-line option, not an option argument. Silly bug right? Err, yes and no. Read on for an explanation!

getopt minutiae

It may not be common knowledge, but getopt(3), and in fact most argument parsing packages, provide support for numeric option names. If you haven’t read the back-story, this means it supports options like -7 which might be a short-hand for the long option --enable-lucky-seven-mode (whatever that means ;) And so to our first revelation:

Revelation #1:

getopt(3) supports any ASCII option name that is not -, ; or :.

In fact, it’s a little more subtle than that: although you can create an option called +, it cannot be the first character in optstring.

If you didn’t realise this, don’t feel too bad! You need to squint a bit when reading the man page to grasp this point, since it is almost painfully opaque on the topic of what constitutes a valid option character. Quoting verbatim from getopt(3):

optstring is a string containing the legitimate option characters.

Aside from the fact that the options are specified as a const char *, yep, that is your only clue! The FreeBSD man page is slightly clearer, but I would still say not clear enough personally. Yes, you could read the source, but I’ll warn you now, it’s not pretty or easy to grok!

But let this sink in: you can use numeric option names

The more astute reader may be hearing faint alarm bells ringing at this point. Not to worry if that’s not you as I’ll explain all later.

An easy way to test getopt behaviour

I’ve created a simple C program called test_getopt.c that allows you to play with getopt(3) without having to create lots of test programs, or recompile a single program constantly as you tweak it.

The program allows you to specify the optstring as the first command-line argument with all subsequent arguments being passed to getopt(3).

See the README for some examples.

Real-world evidence

If you’ve ever run the ss(1) or socat(1) commands, you may have encountered numeric options as both commands accept -4 and -6 options to denote IPv4 and IPv6 respectively. I’m reasonably sure I’ve also seen a command use -# as an option but cannot remember which.

The ap bug

The real bug in ap was that it was prioritising options over argument order: it was not parsing “in strict order”.

Parsing arguments in strict order

Remember we mentioned parsing arguments “in strict order” earlier? Well, “in strict order” doesn’t just mean that arguments are parsed sequentially in the order presented (first, second, third, etc.); it also means that an option’s argument will be consumed by the “parent” (aka previous) option, regardless of whether the option argument starts with a dash or not. It’s beautifully simple and logical, and crucially results in zero ambiguity for getopt(3).

To explain this, imagine your program calls getopt() like this:

getopt(argc, argv, "12:");

The program could then be invoked in any of the following ways:

$ prog -1
$ prog -2 foo

But: it could also be called like this:

$ prog -2 -1

getopt(3) parses this happily: there is no error and no ambiguity. As far as getopt(3) is concerned, the user specified the -2 option, passing it the value -1. To be clear, as far as getopt(3) is concerned, the -1 option was never specified!

Revelation #2:

In argument parsing, “in strict order” means the parser considers each argument in sequential order, and if a command-line argument is an option and that option requires an argument, the next command-line argument will become that option’s argument, regardless of whether it starts with a dash or not!

Going back to the revelation: consuming the next argument after an option requiring a value is a brilliantly simple design. It’s also easy to implement. And since getopt(3) is part of the POSIX standard, it’s actually the behaviour you should expect from a command-line parser, at least if you started out as a systems programmer. But since the details of this parsing behaviour have been somewhat shrouded in mystery, you may not be aware that you should expect such behaviour from other parsers!

But, alas, POSIX or not, this behaviour isn’t necessarily intuitive (see above) and indeed this is not how all command-line parsers work.

Summary of command-line argument parsers

As my curiosity was now piqued, I decided to do a quick survey of command-line parsing packages for a variety of languages. This is in no way complete and I’ve missed out many languages and packages. But it’s an interesting sample nonetheless.

The table below summarises the behaviour for various languages and parsing libraries:

language   library/package   strict ordering?
bash       getopts           Yes (uses getopt(3))
C/C++      getopt(3)         Yes (POSIX standard! ;)
rust       clap (v2+v3)      No
zsh        getopts           Yes (uses getopt(3))


The libraries that do not use strict ordering (aka “the getopt way”) are not wrong or broken, they just work slightly differently! As long as you are aware of the difference, there is no problem ;)

Why are some libraries different?

It comes down to how the command-line arguments are parsed by the package.

Assume the library has just read an argument and determined definitively that it is an option and that the option requires a value. It then reads the next argument:

  • If the library is like getopt(3), it will just consume the argument as the value for the just-seen option (regardless of whether the argument starts with a dash or not).

  • Alternatively, if this new argument starts with a dash, the library will consider it an option and then error since the previous argument (the option) was expecting a value.

    The subtlety here is that “getopt()-like” implementations allow option values to look like options, which may surprise you.
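The second, option-prioritising behaviour is easy to sketch by hand. This toy parser (my own illustration, not modelled on any particular library) handles a single -v option that expects a value, but unlike getopt(3) it refuses a value that itself looks like an option:

```c
#include <stdio.h>
#include <string.h>

/* Toy "option-prioritising" parser: -v expects a value, but we refuse
 * to accept a value that starts with a dash (getopt(3) would consume
 * it silently). Returns 0 on success, -1 on error; stores -v's value
 * in *value (NULL if -v was never given). */
int parse_option_prioritising(int argc, char *argv[], const char **value)
{
    *value = NULL;
    for (int i = 1; i < argc; i++) {
        if (strcmp(argv[i], "-v") == 0) {
            if (i + 1 >= argc || argv[i + 1][0] == '-') {
                fprintf(stderr, "option -v requires a value\n");
                return -1;
            }
            *value = argv[++i];
        }
    }
    return 0;
}
```

With "-v foo" it succeeds; with "-v -1" it errors out where a getopt()-like parser would happily set the value to "-1".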

So what?

We’ve had two revelations:

  1. Most argument parsers support numeric option names.
  2. Strict argument parsing means consuming the next argument, even if it starts with a dash.

You may be envisaging some of the potential problems now:

“What if my program accepts a numeric option and also has an option that accepts a numeric argument?”

There is also the slightly more subtle issue:

"What if my program has a flag option and also has an option that can accept a free-form string value?

Indeed! Here be dragons! To make these problems clearer, we’re going to look at some examples.

Example 1: Missile control

Imagine an evil and powerful tech-savvy despot asks his minions to write a CLI program for him to launch missiles. The program uses getopt(3) with an optstring of “12n:” so that he can launch a single missile (-1), two missiles (-2), or lots (-n <count>):

Here’s how he could do his evil work:

$ fire-missiles -1
Firing 1 missile!
$ fire-missiles -2
Firing 2 missiles!

Unfortunately, the poor programmer who wrote this program didn’t check the inputs correctly. Here’s what happens when the despot decides to fire a single missile but, maybe in a drunken stupor or after a tab-complete fail, runs the following by mistake:

$ fire-missiles -n -1
Firing 4294967295 missiles!

He meant to run fire-missiles -1 (or indeed fire-missiles -n 1), but got confused and appears to have started Armageddon by mistake, since the program parsed the -n option value as a signed integer.
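The wraparound is easy to reproduce. This hypothetical helper mimics what the buggy program presumably did: parse the -n value as a signed number and store it in an unsigned 32-bit count with no range check (the despot's program, and this function, are of course imaginary):

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch of the bug: the -n option value is converted
 * with no validation and stored in an unsigned 32-bit count. */
uint32_t missiles_to_fire(const char *optarg_value)
{
    long n = strtol(optarg_value, NULL, 10);

    /* BUG: no check that n >= 1, so "-1" wraps to 4294967295. */
    return (uint32_t)n;
}
```

missiles_to_fire("-1") returns 4294967295, the Armageddon from the example; a real program should reject non-positive or out-of-range values before converting.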

Example 2: Get rich quick or get fired?

Another example. Imagine a program used to transfer money between banks by allowing the admin to specify two IBAN (International Bank Account Number) numbers, an amount and a transaction summary field. Here are the arguments the program will accept:

  • -f <IBAN>: Source account.
  • -t <IBAN>: Destination account.
  • -a <amount>: Amount of money to transfer (let’s ignore things like different currencies and exchange rates to keep it simple).
  • -s <text>: Human readable summary of the transaction.
  • -d: Dry-run mode - don’t actually send the money, just show what would be done.

We could use it to send 100 units of currency like this:

$ prog -f me -t you -a 100 -s test

For this program we specify a getopt(3) optstring of “a:df:s:t:”. Fine. But using strict ordering, if I run the program as follows, I’ll probably get fired!

$ prog -f me -t you -a 10000000000 -s -d

Oops! I meant to specify a summary, but I forgot. But hey, that’s fine as I specified to run this in dry-run mode using -d. Oh. Wait a second…

Yes, I’m in trouble, because the money was sent: in fact I didn’t specify dry-run mode at all, I specified a summary of “-d”, due to the strict argument parsing semantics of getopt(3).
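One possible defence (my suggestion; nothing getopt(3) does for you) is to sanity-check every optarg before acting on it: a value that looks like one of the program’s own options is almost certainly a forgotten argument. A minimal sketch of such a check:

```c
#include <string.h>

/* Sketch: flag an option value that looks like another option.
 * 'known_opts' is the set of option characters the program accepts.
 * Returns 1 if the value is suspicious, 0 if it looks safe. */
int optarg_looks_like_option(const char *value, const char *known_opts)
{
    /* NULL, empty, a bare "-", or anything not starting with a dash
     * is fine as an option value. */
    if (value == NULL || value[0] != '-' || value[1] == '\0')
        return 0;
    return strchr(known_opts, value[1]) != NULL;
}
```

For the transfer program, known_opts would be "adfst", so optarg_looks_like_option("-d", "adfst") returns 1 and the program could refuse the summary or ask for confirmation. Note the heuristic would also reject a legitimate summary that genuinely starts with “-d”; that is the price of safety.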

Example 3: Something to give you nightmares

Using the knowledge of the revelations, you can easily contrive some real horrors. Take, for example, the following abomination:

$ prog -12 3 -4 -5 -67 8 -9

How is that parsed? Is that first -12 argument a simple negative number? Or is it actually a -1 option with the option argument value 2? Or is it a -1 option and a -2 option “bundled” together?

The answer of course depends on how you’ve defined the optstring value to getopt(3). But please, please never write programs with interfaces like this! ;)

You can use the test_getopt.c program to test out various ways of parsing that horrid command-line. For example, one way to handle them might be like this:

$ test_getopt "1::45:9" -12 3 -4 -5 -67 8 -9
INFO: getopt option: '1' (optarg: '2', optind: 2, opterr: 1, optopt: 0)
INFO: getopt option: '4' (optarg: '', optind: 4, opterr: 1, optopt: 0)
INFO: getopt option: '5' (optarg: '-67', optind: 6, opterr: 1, optopt: 0)
INFO: getopt option: '9' (optarg: '', optind: 8, opterr: 1, optopt: 0)

But alternatively, it could be parsed like this:

$ test_getopt "12:4567:9" -12 3 -4 -5 -67 8 -9
INFO: getopt option: '1' (optarg: '', optind: 1, opterr: 1, optopt: 0)
INFO: getopt option: '2' (optarg: '3', optind: 3, opterr: 1, optopt: 0)
INFO: getopt option: '4' (optarg: '', optind: 4, opterr: 1, optopt: 0)
INFO: getopt option: '5' (optarg: '', optind: 5, opterr: 1, optopt: 0)
INFO: getopt option: '6' (optarg: '', optind: 5, opterr: 1, optopt: 0)
INFO: getopt option: '7' (optarg: '8', optind: 7, opterr: 1, optopt: 0)
INFO: getopt option: '9' (optarg: '', optind: 8, opterr: 1, optopt: 0)


Coincidentally, by combining test_getopt with utfout, you can prove Revelation #1 rather simply:

$ (utfout -a "\n" "\{\x21..\x7e}"; echo) |\
while read char
do
    test_getopt "x$char" -"$char"
done

Note: The leading “x” in the specified optstring argument is to avoid having to special case the string since the first character is “special” to getopt(3). See the man page for further details.


Admittedly, these are very contrived (and hopefully unrealistic!) examples. The missile control example is also a very poor use of getopt(3) since in this scenario, a simple check on argv[1] would be sufficient to determine how many missiles to fire. However, you can now see the potential pitfalls of numeric options and strict argument parsing.

To test a parser

If you want to establish if your chosen command-line parsing library accepts numeric options and if it parses in strict order, create a program that:

  • Accepts a -1 flag option (an option that does not require an argument).

  • Accepts a -2 argument option (that does accept an argument).

  • Run the program as follows:

    $ prog -2 -1
  • If the program succeeds (and sets the value for your -2 option to -1), your parser is “getopt()-like” (is parsing in strict order) and implicitly also supports numeric options.


Here’s what we’ve unearthed:

  • The getopt(3) man page on Linux is currently ambiguous.

    I wrote a patch to resolve this, and the patch has been accepted. Hopefully it will land in the next release of the man pages project.

  • All command-line parsing packages should document precisely how they consume arguments.

    Unfortunately, most don’t say anything about it! However, ap does. Specifically, see the documentation here.

  • getopt(3) doesn’t just support alphabetic option names: a name can be almost any ASCII character (-3, -%, -^, -+, etc).

  • Numeric options should be used with caution as they can lead to ambiguity; not for getopt(3) et al, but for the end user running the program. Worst case, there could be security implications.

  • Permitting negative numeric option values should also be considered carefully. Rather than supporting -r -1, it would be safer if utfout and rout required the repeat count to be >= 1 and if the user wants to repeat forever, support -r max or -r forever rather than -r -1.

  • Some modern command-line parsers prioritise options over argument ordering (meaning they are not “getopt()-like”).

  • You should understand how your chosen parser works before using it.

  • Parsing arguments “in strict order” does not only mean “in sequential order”: it means the parser prioritises command-line arguments over option values.

  • If your chosen parsing package prioritises arguments over options (like getopt(3)), you need to take care if you use numeric options, since arguments will be consumed “greedily” (and silently).

  • If your chosen parsing package prioritises options over arguments, you will probably be safer (since an incorrect command-line will generate an error), but you need to be aware that the package is not “getopt()-like”.

  • A CLI program must validate all command-line option values; command-line argument parsers provide a way for users to inject data into a program, so a wise programmer will always be paranoid!

  • The devil is in the detail ;)

on May 09, 2021 08:53 AM

May 06, 2021

Ep 141 – Soberania Digital

Podcast Ubuntu Portugal

Some tips on programming training courses, real-time Q&A, a word about patrons, and a recap of the latest Ubuntu Portugal LoCo meetup were the topics of these 57 minutes of pleasant chatter on your favourite podcast.

You know the drill: listen, subscribe and share!



You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to the Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the notes, use the link and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo, and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, and is licensed under the terms of the CC0 1.0 Universal License.

This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing to permit other kinds of use; contact us for validation and authorisation.

on May 06, 2021 09:45 PM

(Lots of) new procenv release(s)


procenv is now at version 0.55.

Significant changes since version 0.46:

  • FreeBSD and Hurd fixes.
  • Show if running in a VM.
  • --memory now shows more memory details.
  • More capabilities.
  • PROCENV_EXEC fixes.

Further details on the release page.

on May 06, 2021 06:58 PM

S14E09 – Mint Badge Twist

Ubuntu Podcast from the UK LoCo

This week we’ve been debugging DNS and making passively cooled computers. We round up the community news, including the highlights of the 21.04 releases from the Ubuntu flavours, an event and our favourite picks from the tech news.

It’s Season 14 Episode 09 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on May 06, 2021 02:00 PM

May 04, 2021


Benjamin Mako Hill

In exciting professional news, it was recently announced that I got a National Science Foundation CAREER award! The CAREER is the US NSF’s most prestigious award for early-career faculty. In addition to the recognition, the award involves a bunch of money for me to put toward my research over the next 5 years. The Department of Communication at the University of Washington has put up a very nice web page announcing the thing. It’s all very exciting and a huge honor. I’m very humbled.

The grant will support a bunch of new research to develop and test a theory about the relationship between governance and online community lifecycles. If you’ve been reading this blog for a while, you’ll know that I’ve been involved in a bunch of research to describe how peer production communities tend to follow common patterns of growth and decline, as well as studies that show that many open communities become increasingly closed in ways that deter many of the kinds of contributions that made the communities successful in the first place.

Over the last few years, I’ve worked with Aaron Shaw to develop the outlines of an explanation for why many communities become increasingly closed over time in ways that hurt their ability to integrate contributions from newcomers. Over the course of the work on the CAREER, I’ll be continuing that project with Aaron, and I’ll also be working to test that explanation empirically and to develop new strategies about what online communities can do as a result.

In addition to supporting research, the grant will support a bunch of new outreach and community building within the Community Data Science Collective. In particular, I’m planning to use the grant to do a better job of building relationships with community participants, community managers, and others in the platforms we study. I’m also hoping to use the resources to help the CDSC do a better job of sharing our stuff out in ways that are useful as well doing a better job of listening and learning from the communities that our research seeks to inform.

There are many to thank. The proposed work is the direct result of the work I did at the Center for Advanced Studies in the Behavioral Sciences at Stanford, where I got to spend the 2018-2019 academic year in Claude Shannon’s old office, talking through these ideas with an incredible range of other scholars over lunch every day. It’s also the product of years of conversations with Aaron Shaw and Yochai Benkler. The proposal itself reflects the excellent work of the whole CDSC, who did the work that made the award possible and provided me with detailed feedback on the proposal itself.

on May 04, 2021 02:29 AM

So, you’re in the middle of a review, and have a couple of commits, but one of the comments is asking you to modify a line that belongs to the second-to-last, or even the first, commit in your list, and you’re not willing to do:

git commit -m "Add fixes from the review" $file

Or you simply don’t know, and have no idea what squash or rebase mean? Well, I won’t explain squash today, but I will explain rebase.


See how I do it, and also how I screw up!


It all boils down to making sure that you trust git, and hope that things are small enough so that if you lose the stash, you can always rewrite it.

So in the end, for me it was:

git fetch origin
git rebase -i origin/master # if your branch is not clean, git will complain and stop
git stash # because my branch was not clean, and my desired change was already done
git rebase -i origin/master # now, let's do a rebase
# edit desired commits, for this add the edit (or an e) before the commit
# save and quit vim ([esc]+[:][x] or [esc]+[:][w][q]), or your editor, if you're using something else
git stash pop # because I already had my change
$HACK # if you get conflicts or if you want to modify more
      # be careful here, if you rewrite history too much
      # you will end up in Back to the Future II
      # Luckily you can do git rebase --abort
git commit --amend $files # (alternatively, git add $files first, then git commit --amend)
git rebase --continue
git push -f # I think you will need to add remote+branch, git will remind you
# go on with your life

Note: A squash is gonna put all of the commits together; just make sure that there’s an order:

I lied, here’s a very quick and dirty squash guide

  • pick COMMIT1
  • pick COMMIT2
  • squash COMMIT3 # (Git will combine this commit with the one above, iirc, so COMMIT2+COMMIT3, and git will ask you for a new commit message)

I lied

on May 04, 2021 12:00 AM

May 03, 2021

Welcome to the Ubuntu Weekly Newsletter, Issue 681 for the week of April 25 – May 1, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on May 03, 2021 09:56 PM

May 02, 2021

As the recently released Kubuntu 21.04 with beautiful Plasma 5.21 makes its way into the world, inevitably other things come to their end.

Kubuntu 18.04 LTS was released in April 2018, and reached ‘End of Life’ for its 3 years of flavour support on 1st May 2021. All Kubuntu users should therefore switch to a newer supported release.

Download a supported release or upgrade from 18.04 to 20.04 LTS.

The Kubuntu team would like to thank users of all releases, especially for the amazing additional community support on IRC, forums, mailing lists, and elsewhere.

on May 02, 2021 04:39 PM

Starting May 2021

Stephen Michael Kellat

In no particular order:

  • I will be working the primary/special election on Tuesday.

    • Staffing this election with enough pollworkers is apparently quite difficult. We’re not even running questions in all the precincts in the county this round. This makes me wonder how the off-year municipal general election in November may go as to pollworker staffing in a few months.

    • We’re engaging in a test this election by not utilizing electronic pollbooks. We will be returning to pen and paper. This is billed as a cost-saving experiment. It may prove interesting.

  • I’ve got my Raspberry Pi 4 upgraded to 21.04. So far I am liking what I am encountering. Since this is a production machine I am not willing to shift it to testing Impish Indri. My laptop provides Ubuntu via WSL so that remains 20.04.

  • The CDC/ATSDR Social Vulnerability Index is something I have been spending time looking at lately. It provides a county by county comparison of how rough life can be. With respect to the county where I reside here in Ohio the Covid Act Now site provides a further break down of CDC’s metrics that show my area as very highly vulnerable socially. Compared to our neighboring counties we might as well be on another planet.

  • That I am looking at Greenstone Digital Library again might explain why I am trying to explore some options for resuscitating non-armhf desktop hardware to be run headless. Having an in-house server would be easier to handle than trying to slap it on a virtual private server somewhere.

  • Quite a bit of time is being spent off-line visiting our local parks to take the family chihuahua on adventures. You wouldn’t expect to find so many homeless encampments in the parks, though. With the statewide burn ban in effect I’ve reported encampments to park management but I haven’t seen any impact from reporting. Apparently there are some very nice, very cozy camping sites in the Ashtabula River gulf.

  • I am quite thankful for the work of the release team and all the devs who brought out yet another great release.

Tags: Miscellany

on May 02, 2021 04:54 AM

April 29, 2021

Can you smell 👃 that? That’s the smell of fresh paint 🖌 with just a hint of cucumber 🥒

Ubuntu MATE 21.04 is here and it has a new look thanks to the collaboration with the Yaru team. This release marks the start of a new visual direction for Ubuntu MATE, while retaining the features you’ve come to love 💚 Read on to learn 🎓 what we’ve been working on over the last 6 months and get some insight to what we’ll be working on next.

We would like to take this opportunity to extend our thanks 🙏 to everyone who contributed to this release, including:

  • The Yaru team for welcoming us so warmly 🤗 into the Yaru project and for all their hard work during this development cycle
  • The Ayatana Indicator team who helped add new features and fix bugs that improved the indicator experience
  • Everyone who participated in the QA/testing and bug filing 🐛
  • Those of you who have been contributing to documentation and translations

Thank you! Thank you all very much indeed 🥰

Ubuntu MATE 21.04 Ubuntu MATE 21.04 (Hirsute Hippo)

What changed since the Ubuntu MATE 20.10?

Here are the highlights of what’s changed since the release of Groovy 🕶 Gorilla 🦍

MATE Desktop 🧉

The MATE Desktop team released maintenance 🔧 updates for the current stable 1.24 release of MATE. We’ve updated the MATE packaging in Debian to incorporate all these bug 🐛 fixes and translation updates and synced those packages to Ubuntu so they all feature in this 21.04 release. There are no new features, just fixes 🩹

Ayatana Indicators 🚥

A highlight of the Ubuntu MATE 20.10 release was the transition to Ayatana Indicators. You can read 👓 the 20.10 release notes to learn what Ayatana Indicators are and why this transition will be beneficial in the long term.

We’ve added new versions of Ayatana Indicators including ‘Indicators’ settings to the Control Center, which can be used to configure the installed indicators.

Ayatana Indicator Settings Ayatana Indicators Settings

Other indicator changes include:

Expect to see more Ayatana Indicators included in future releases of Ubuntu MATE. Top candidates are:

  • Display Indicator - needs uploading to Debian and Ubuntu
  • Messages Indicator - needs uploading to Debian and Ubuntu
    • ayatana-webmail is available for install in Ubuntu MATE 21.04
  • Keyboard Indicator - requires feature parity with MATE keyboard applet
  • Bluetooth Indicator - requires integration work with Blueman

Yaru MATE 🎨

This is where most of the work was invested 💦

A new derivative of the Yaru theme, called Yaru MATE, has been created in collaboration with the Yaru team. During our discussions with the Yaru team we decided to make one significant departure from how Yaru MATE is delivered; Yaru MATE is only providing a light and dark theme, with the light theme being default. This differs from Yaru in Ubuntu which features a mixed light/dark by default.

We’ve decided to offer only light and dark variants of Yaru MATE as it makes maintaining the themes much easier, the mixed light/dark Yaru theme does require extra work to maintain due to the edge cases it surfaces. Offering just light and dark variants also ensures better application compatibility.

This work touched on a number of projects, here’s what Ubuntu MATE now enjoys as a result of Yaru MATE:

  • GTK 2.x, 3.x and 4.x Yaru MATE light and dark themes
  • Suru icons along with a number of new icons specifically made for MATE Desktop and Ubuntu MATE
  • LibreOffice Yaru MATE icon theme, which are enabled by default on new installs
  • Font contrast is much improved throughout the desktop and applications
  • Websites honour dark mode at the operating system level
    • If you enable the Yaru MATE Dark theme, websites that provide a dark mode will automatically use their dark theme to match your preferences.

In return for the excellent theme and icons from the Yaru team, the Ubuntu MATE team worked on the following which are now features of Yaru and Yaru MATE:

As a result of our window manager and GTKSourceView contributions it is now possible to use all three upstream Yaru themes from Ubuntu in Ubuntu MATE 💪

Yaru MATE GTKSourceView Yaru MATE GTKSourceView, Tiled Windows and Plank theme

Going the extra mile 🎽

In order to make Yaru MATE shine we’ve also created:

Yaru MATE Snaps

snapd will soon be able to automatically install snaps of themes that match your currently active theme. The snaps we’ve created are ready to integrate with that capability when it is available.

The gtk-theme-yaru-mate and icon-theme-yaru-mate snaps are pre-installed in Ubuntu MATE, but are not automatically connected to snapped applications. Running the following commands in a terminal periodically, or after you install a snapped GUI application, will connect the themes to compatible snaps until such time as snapd supports doing this automatically.

for PLUG in $(snap connections | grep gtk-common-themes:gtk-3-themes | awk '{print $2}'); do sudo snap connect ${PLUG} gtk-theme-yaru-mate:gtk-3-themes; done
for PLUG in $(snap connections | grep gtk-common-themes:gtk-2-themes | awk '{print $2}'); do sudo snap connect ${PLUG} gtk-theme-yaru-mate:gtk-2-themes; done
for PLUG in $(snap connections | grep gtk-common-themes:icon-themes | awk '{print $2}'); do sudo snap connect ${PLUG} icon-theme-yaru-mate:icon-themes; done

What’s next? 🔮

While we made lots of progress with Yaru MATE for 21.04, the work is on going. Here’s what we’ll be working on next:

  • Some symbolic icons are being provided by a fallback to the Ambiant MATE and Radiant MATE icon themes, something we are keen to address for Ubuntu MATE 21.10.
  • Ubuntu MATE doesn’t have a full complement of Suru icons for MATE Desktop, yet.
  • Plymouth boot theme will be aligned with the EFI respecting theme shipped in Ubuntu.

Mutiny 🏴‍☠️

The Mutiny layout, which provides a desktop layout that somewhat mimics Unity, has been a source of bug reports and user frustration 😤 for some time now. Switching to/from Mutiny has often crashed, resulting in a broken desktop session 😭

We have removed MATE Dock Applet from Ubuntu MATE and refactored the Mutiny layout to use Plank instead.

Mutiny layout with Yaru MATE Dark Mutiny layout with Yaru MATE Dark

  • Switching to the Mutiny layout via MATE Tweak will automatically theme Plank
    • Light and dark Yaru themes for Plank are included
  • Mutiny no longer enables Global Menus and also doesn’t undecorate maximised windows by default
    • If you like those features you can enable them via MATE Tweak
  • Window Buttons Applet is no longer integrated in the Mutiny top panel by default.
    • You can manually add it to your custom panel configuration should you want it.
    • Window Buttons Applet has been updated to automatically use window control buttons from the active theme. HiDPI support is also improved.

As a result of these changes Mutiny is more reliable and retains much of the Unity look and feel that many people like.

Command line love 🧑‍💻

We’ve included a few popular utilities requested by command line warriors. neofetch, htop and inxi are all included in the default Ubuntu MATE install. neofetch also features an Ubuntu MATE ASCII logo.

Raspberry Pi images

We will release Ubuntu MATE 21.04 images for the Raspberry Pi in the days following the release for PC 🙂

Major Applications

Accompanying MATE Desktop 1.24.1 and Linux 5.11 are Firefox 87, LibreOffice, Evolution 3.40 & Celluloid 0.20.

Major Applications

See the Ubuntu 21.04 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 21.04

This new release will be first available for PC/Mac users.


Upgrading from Ubuntu MATE 20.10

You can upgrade to Ubuntu MATE 21.04 from Ubuntu MATE 20.10. Ensure that you have all updates installed for your current version of Ubuntu MATE before you upgrade.

  • Open the “Software & Updates” from the Control Center.
  • Select the 3rd Tab called “Updates”.
  • Set the “Notify me of a new Ubuntu version” dropdown menu to “For any new version”.
  • Press Alt+F2 and type in update-manager -c into the command box.
  • Update Manager should open up and tell you: New distribution release ‘21.04’ is available.
    • If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click “Upgrade” and follow the on-screen instructions.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.

Known Issues

Here are the known issues.

Component     Problem
Plank         When snaps update, they disappear from Plank.
Ubuntu MATE   Screen reader installs using Orca are currently not working.
Ubuntu        Ubiquity slide shows are missing for OEM installs of Ubuntu MATE.
Ubuntu        Shim-signed causes the system not to boot on certain older EFI systems.

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on April 29, 2021 10:51 PM

Lubuntu 18.04 (Bionic Beaver) was released April 27, 2018 and will reach End of Life on Friday, April 30, 2021. This means that after that date there will be no further security updates or bugfixes released. We highly recommend that you re-install with 20.04 as soon as possible if you are still running 18.04. After […]

The post Lubuntu 18.04 LTS End of Life and Current Support Statuses first appeared on Lubuntu.

on April 29, 2021 10:31 AM

April 28, 2021

Yes, you read it right. timg is a text mode image viewer and can also play videos. But, but, how is that possible?

timg uses suitable Unicode characters and also the colour support that is available in many terminal emulators.

The timg application can show images and videos in a terminal emulator. Create your own teddy bear in Blender by following this Blender tutorial by tutor4u.

timg is developed by Henner Zeller, and in 2017 I wrote a blog post about creating a snap package for timg. A snap package was created and published on the Snap Store. I even registered the name timg, although some time later it became much stricter to register a package name if you are not the maintainer. In addition, those were such early days for snap packages that I think you could not set up the license of the software in the package, and it always came up as Proprietary.

Fast forward from 2017 to a couple of weeks ago: a user posted an issue that the snap package of timg does not have the proper license. I was pinged through that GitHub issue and decided to update the snapcraft.yaml to whatever is now supported in snap packages. Apparently, you can now set the license in snap packages. Moreover, timg has been updated and can play many more image and video formats. I figured out the latter because timg now has a lot more dependencies than before.

What use would you have of a text mode image viewer and video player?

  1. Security. The snap package at least does not have access to the X11 server, the network, or the audio server.
  2. Convenience. You are on a remote server (like a VPS) and do not want to use ssh -X after installing an X11 application with all its dependencies.
  3. Workflow. The image you view is part of your text session. No popup windows that open and disappear.

on April 28, 2021 05:48 PM

April 27, 2021

As of a few days ago, a new feature in clang-query allows introspecting the source locations for a given clang AST node. The feature is also available for experimentation in Compiler Explorer. I previously delivered a talk at EuroLLVM 2019 and blogged in 2018 about this feature and others to assist in discovery of AST matchers and source locations. This is a major step in getting the Tooling API discovery features upstream into LLVM/Clang.


When creating clang-tidy checks to perform source to source transformation, there are generally two steps common to all checks:

  • Matching on the AST
  • Replacing particular source ranges in source files with new text

To complete the latter, you will need to become familiar with the source locations clang provides for the AST. A diagnostic is then issued with zero or more “fix it hints” which indicate changes to the code. Almost all clang-tidy checks are implemented in this way.

Some of the source locations which might be interesting for a FunctionDecl are illustrated here:

Pick Your Name

A common use case for this kind of tooling is to port a large codebase from a deprecated API to a new API.

A tool might replace a member call pushBack with push_back on a custom container, for the purpose of making the API more like standard containers. It might be the case that you have multiple classes with a pushBack method and you only want to change uses of it on a particular class, so you cannot simply find and replace across the entire repository.

Given test code like:

    struct MyContainer {
        // deprecated:
        void pushBack(int t);

        // new:
        void push_back(int t);
    };

    void calls() {
        MyContainer mc;
        mc.pushBack(42);
    }


A matcher could look something like:

    match cxxMemberCallExpr(
        on(hasType(cxxRecordDecl(hasName("MyContainer")))),
        callee(cxxMethodDecl(hasName("pushBack")))
        )

Try experimenting with it on Compiler Explorer.

An explanation of how to discover how to write this AST matcher expression is out of scope for this blog post, but you can see blogs passim for that too.

Know Your Goal

Having matched a call to pushBack the next step is to replace the source text of the call with push_back. The call to mc.pushBack() is represented by an instance of CXXMemberCallExpr. Given the instance, we need to identify the location in the source of the first character after the “.” and the location of the opening paren. Given those locations, we create a diagnostic with a FixItHint to replace that source range with the new method name:

    diag(MethodCallLocation, "Use push_back instead of pushBack")
        << FixItHint::CreateReplacement(
            sourceRangeForCall, "push_back");

When we run our porting tool in clang-tidy, we get output similar to:

warning: Use push_back instead of pushBack [misc-update-pushBack]

Running clang-tidy with -fix then causes the tooling to apply the suggested fix. Once we have tested it, we can run the tool to apply the change to all of our code at once.

Find Your Place

So, how do we identify the sourceRangeForCall?

One way is to study the documentation of the Clang AST to try to identify what API calls might be useful to access that particular source range. That is quite difficult to determine for newcomers to the Clang AST API.

The new clang-query feature allows users to introspect all available locations for a given AST node instance.

    note: source locations here
     * "getExprLoc()"

    note: source locations here
     * "getEndLoc()"
     * "getRParenLoc()"
With this output, we can see that the location of the member call is retrievable by calling getExprLoc() on the CXXMemberCallExpr, which happens to be defined on the Expr base class. Because clang replacements can operate on token ranges, the location for the start of the member call is actually all we need to complete the replacement.

One of the design choices of the srcloc output of clang-query is that only locations on the “current” AST node are part of the output. That’s why, for example, the arguments of a function call are not part of the locations output for a CXXMemberCallExpr. Instead, it is necessary to traverse to the argument and introspect the locations of the node which represents the argument.

By traversing to the MemberExpr of the CXXMemberCallExpr we can see more locations. In particular, we can see that getOperatorLoc() can be used to get the location of the operator (a “.” in this case, but it could be a “->” for example) and getMemberNameInfo().getSourceRange() can be used to get a source range for the name of the member being called.

The Best Location

Given the choice of using getExprLoc() or getMemberNameInfo().getSourceRange(), the latter is preferable because it is more semantically related to what we want to replace. Aside from the hint that we want the “source range” of the “member name”, the getExprLoc() should be disfavored as that API is usually only used to choose a position to indicate in a diagnostic. That is not specifically what we wish to use the location for.

Additionally, by experimenting with slightly more complex code, we can see that getExprLoc() on a template-dependent call expression does not give the desired source location (At time of publishing! – This is likely undesirable in this case). At any rate, getMemberNameInfo().getSourceRange() gives the correct source range in all cases.

In the end, our diagnostic can look something like:

    diag(MethodCallLocation, "Use push_back instead of pushBack")
        << FixItHint::CreateReplacement(
            theMember->getMemberNameInfo().getSourceRange(), "push_back");

This feature is a powerful way to discover source locations and source ranges while creating and maintaining clang-tidy checks. Let me know if you find it useful!

on April 27, 2021 09:08 AM

April 25, 2021

Full Circle Weekly News #207

Full Circle Magazine

EndeavourOS 2021.04.17

OpenSSH 8.6 Release with Vulnerability fix

Nginx 1.20.0 released

Node.js 16.0 JavaScript Server Platform Released

Tetris-OS - you guessed it

University of Minnesota Suspended from Linux Kernel Development after Submitting Questionable Patches

OpenVPN 2.5.2 and 2.4.11 update

Microsoft begins testing support for running Linux GUI applications on Windows

Ubuntu 21.04 Distribution Release

Chrome OS 90 released

OpenBSD adds initial support for RISC-V architecture

First version of InfiniTime, firmware for open PineTime smartwatches

ToaruOS 1.14

Kuroko 1.1 programming language

Full Circle Magazine
Host: @bardictriad,
Bumper: Canonical
Theme Music: From The Dust - Stardust
on April 25, 2021 04:06 PM

Six months ago I was elected to the Ubuntu Community Council. After the first month, I wrote a text about how I experienced the first month. Time flies and now six months have already passed.

In the first few months we have been able to fill some of the councils and boards that needed to be refilled in the community. But even where this has not been possible, we have initiated new ways to ensure that we move forward on the issues. One example is the LoCo Council, which could not be filled again, but we found people who were given the task of rethinking this council and proposing new structures. This process of evaluating and rethinking this area will take some time.

There are some issues that we have on the agenda at the moment. Some of these are general issues related to the community, but some affect individual members of the community or areas where there are problems.

For some topics, we quickly realised that it makes sense to have contact persons at Canonical who can advance these topics. We were very pleased to find Monica Ayhens-Madon and Rhys Davies, two employees at Canonical, who support us in bringing topics into the organisation and also implement tasks. One consequence of this has been the reactivation of the Community Team at Canonical.

One topic that we have also come across, through the staffing of the boards and the update of the benefits that you get as a member, is Ubuntu Membership. At this point I would like to promote the membership: it is a way to show your connection with the Ubuntu community. If you want to do this and want to know what benefits you are entitled to, you can read about it in the Ubuntu Wiki.

There are still plenty of open issues, but structurally we are already on the right track again. Since the topics are dealt with in our free time and everyone on the Community Council has other things to do, topics sometimes drag on a bit. Sometimes I’m a bit impatient, but I’m getting better at it.

After six months, I can see that we as a Community Council have laid many building blocks and have already had some discussions where we have different approaches and thus also very different ideas. This is good for the community and leads to the different positions and opinions finding their way into the community.

You can read about our public meetings in the Community Hub. There is also the possibility, when we call for topics for our meetings, to bring in topics that we should look at in the Council, because this is important for the cooperation of the community.

If you want to get involved in discussions about the community, you can do so at the Community Hub. You can also send us an email to the mailing list community-council at if you have a community topic on your mind. If you want to contact me: you can do so by commenting, via Twitter or sending a mail to me: torsten.franz at

on April 25, 2021 03:30 PM

April 24, 2021

What's New? April 2021

This month has been one of my busiest in quite some time. With Xubuntu 21.04 arriving earlier this week, we've been pushing to test and land fixes and translations. Since there weren't any respins of the final image, we didn't have to repeat tests, which saved a tremendous amount of time. I'm also continuing to use and hack on elementary, which has been a great deal of fun. But before all that...

Ghost 4.0

Oh hey, a brand new Ghost release. I was going to write a couple days sooner, but when I see an update notification, I go for it! Ghost 4.0 is a massive release with tons of new features. The standout features to me are the new dashboard, email newsletters, and performance improvements. Here are my thoughts...

The new dashboard seems to prioritize membership and revenue, so I'm not sure how much use I'll get out of it. I currently have sponsorship platforms on GitHub, Patreon, and PayPal. Would you prefer being able to sponsor me on my website instead? Chime in in the comments!

The new Ghost dashboard. If you're not monetizing your content, there's not much here for now.

Email newsletters are now built into Ghost. You may have noticed a few new "Subscribe" buttons and sections on the website... these are that. If you subscribe, you'll automatically get an email of each new post. With how infrequently I write, I can guarantee that your inbox won't become cluttered because of me, so go ahead and subscribe. If I start writing more... well, I'm sorry. 😏️

Performance improvements including automatically responsive and lazy-loading images, improved requests-per-second, and reduced latency sound like a nice overall improvement. I already worked hard at speeding up my site with optimized images and caching, so I'm excited to see just how much faster it is going forward.

There's some work to do, but I'll take it! Source:

Upgrading to Ghost 4.0 was trivially easy. It's still not quite as easy as upgrading a WordPress instance–you have to use the CLI. But the commands are simple and fast. I upgraded to the latest 3.x release, and then on to the 4.x release with no issues whatsoever. For the latest theme features, I pulled the latest Casper theme changes into the Mouser theme and made a few minor adjustments. You can pick up Mouser 4.0.4 on GitHub.

As an aside, has Ghost always had integration with Unsplash? I wasn't quite sure what to do for the feature image this time around, but then saw the Unsplash button and... success. "green grass field sunset scenery photo" by Aniket Bhattacharya captured April... at least in my mind and in the allergies I suffer from.

Xubuntu 21.04 "Hirsute Hippo"

Xubuntu 21.04 is here! This release was one of our largest in quite some time. The latest release includes Xfce 4.16, numerous UX improvements, some packageset updates, a brand new minimal install option on the main image, and many translation updates. Lots of individual contributors came together to make this release a reality, and it really shows. I'm working on an overly descriptive and enthusiastic post for this; you can expect it sometime in the next day or so.

Xubuntu 21.04 "Hirsute Hippo" includes a clean and attractive desktop.

elementary Development Update

I've mainly been hacking on the Sound Indicator recently, adding and expanding on the input and output device selectors and improving MPRIS support. Meanwhile, I've been picking up some useful knowledge with Vala and more obscure Gtk/GLib components.

Early this month I finished adding support for selecting audio devices in the Sound Indicator. This was a continuation of a previous attempt by another contributor who laid a solid foundation to build on. After a few adjustments, this was merged in! I've been iterating and improving on this integration since, and I think you're going to enjoy it in elementary 6!

Input and output device selection in the Sound Indicator.

I've been working on a few other improvements as well. Temporary MPRIS players, such as those created by Chromium and Firefox when playing YouTube content, will no longer stick around once playback is over. Once merged, the Sound Indicator will remember previously-used devices while also keeping the list of options tidy. I've been chasing an interesting issue where the Sound Indicator doesn't quite reconnect after it's updated, and I think I'm close to having it fixed.


I'll be finishing out the month with some writing about Xubuntu 21.04, planning for Xubuntu 21.10 "Impish Indri", reviewing some merge requests for Catfish, and doing some additional elementary development! I'm having fun and feeling pretty motivated these days, so look forward to some more updates soon. Thanks for reading!

on April 24, 2021 01:34 PM

April 23, 2021

The C ternary operator expr1 ? expr2 : expr3 has a subtle side note described in K&R 2nd edition, page 52, section 2.11:

"If expr2 and expr3 are of different types, the type of the result is determined by the conversion rules discussed earlier in the chapter".

This refers to page 44, section 2.7 "Type Conversions". It's worth reading a few times along with section A6.5 "Arithmetic Conversions".

Here is an example of a type conversion gotcha:


At a glance one would think the program would print -1 as the output. Note that expr2 is actually type converted to unsigned int, so the result is 4294967295 if int types are 32 bits wide.

One solution is to type convert expr2 and/or expr3 to a long int; because that is wider than the unsigned int x, it takes precedence in the arithmetic conversions.

References: soc: aspeed: fix a ternary sign expansion bug

on April 23, 2021 11:55 AM

April 22, 2021

Thanks to all the hard work from our contributors, Lubuntu 21.04 has been released. With the codename Hirsute Hippo, Lubuntu 21.04 is the 20th release of Lubuntu, the sixth release of Lubuntu with LXQt as the default desktop environment. Support lifespan Lubuntu 21.04 will be supported until January 2022. Our main focus will be on […]

The post Lubuntu 21.04 (Hirsute Hippo) Released! first appeared on Lubuntu.

on April 22, 2021 06:09 PM

The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 21.04, code-named “Hirsute Hippo”. This marks Ubuntu Studio’s 29th release. This release is a regular release, and as such it is supported for nine months until January 2022.

Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list of changes and known issues.

You can download Ubuntu Studio 21.04 from our download page.

If you find Ubuntu Studio useful, please consider making a contribution.


Due to the change in desktop environment this release, direct upgrades from releases prior to 20.10 are not supported.

In the coming weeks, you should see a prompt to upgrade from 20.10 during your regular updates. If you wish to update at that time, click “Install Upgrade”.

New This Release

This release includes Plasma 5.21.4, the full-featured desktop environment made by KDE. Theming uses the Materia theme with Papirus icons.


Studio Controls has seen further development as its own independent project and has been updated to version 2.1.4.

Ardour 6.6+ (Future 6.7 Snapshot)

Ardour has been updated to version 6.6+, meaning this is a git snapshot of what will eventually be Ardour 6.7. This had to be done because Ardour 6.5 started to fail to build with a newer library introduced into the Ubuntu archives, and the issue could only be resolved with this snapshot. We hope to bring in Ardour 6.7 via official updates once it is released.

New Application: Agordejo

Agordejo is new to Ubuntu Studio this release. It was brought in for those who are unsatisfied with RaySession’s audio session management but find New Session Manager’s interface too old and clunky. Agordejo provides the best of both worlds: legacy NSM compatibility and advanced session management for your audio sessions.

Other Notable Updates

Carla has been upgraded to version 2.3. Full release announcement at


OBS Studio

Included this cycle is OBS Studio 26.1.2, which includes the ability to use OBS as a virtual webcam in another application! (This requires administrative access to the machine to create a loopback device.)

For those that would like to use the advanced audio processing power of JACK with OBS Studio, OBS Studio is JACK-aware!

More Updates

There are many more updates not covered here but mentioned in the Release Notes. We highly recommend reading the release notes so you know what has been updated and are aware of any known issues you may encounter.

Get Involved!

A great way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!

Special Thanks

Huge special thanks for this release go to:

  • Len Ovens: Studio Controls, Ubuntu Studio Installer, Coding
  • Thomas Ward: Packaging, Ubuntu Core Developer for Ubuntu Studio
  • Eylul Dogruel: Artwork, Graphics Design, Website Lead
  • Ross Gammon: Upstream Debian Developer, Guidance, Testing
  • Dennis Braun: Debian Package Maintainer
  • Rik Mills: Kubuntu Council Member, help with Plasma desktop
  • Mauro Gaspari: Tutorials, Promotion, and Documentation, Testing
  • Krytarik Raido: IRC Moderator, Mailing List Moderator
  • Erich Eickmeyer: Project Leader, Packaging, Direction, Treasurer
on April 22, 2021 05:11 PM

The Xubuntu team is happy to announce the immediate release of Xubuntu 21.04.

Xubuntu 21.04, codenamed Hirsute Hippo, is a regular release and will be supported for 9 months, until January 2022. If you need a stable environment with longer support time we recommend that you use Xubuntu 20.04 LTS instead.

The final release images are available as torrents and direct downloads from

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

Xubuntu Core, our minimal ISO edition, is available to download from [torrentmagnet]. Find out more about Xubuntu Core here.

We’d like to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues


  • Xfce 4.16: This is Xubuntu’s first release with the new Gtk3-only Xfce 4.16, which features a year’s worth of updates and fixes.
  • New Software: Xubuntu now comes pre-installed with HexChat and Synaptic to provide easy IRC communication and advanced package management.
  • Minimal Install: You can now install a minimal version of the Xubuntu desktop through the ubiquity installer.
  • UX Tweaks: A number of User Experience (UX) tweaks were made on the desktop, application menu, panel, keyboard shortcuts and file manager.

Known Issues

  • The boot decryption password prompt is sometimes not displayed. Press Escape twice to reveal the prompt (1917062).

For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

The main Ubuntu Release Notes cover many of the other packages we carry as well as more generic issues.


For support with the release, navigate to Help & Support for a complete list of methods to get help.

on April 22, 2021 10:56 AM

April 15, 2021

The French-speaking Ubuntu community really likes making Ubuntu t-shirts. Every six months, release after release, they make new ones. Here is the Hirsute Hippo. You can buy it before the 26th of April for €15 (+ shipping costs) and receive it at the end of May 2021. You can buy it later, but it will be more expensive and there is no guarantee of stock.

The designer, an Ubuntu-fr member, is Ocelot. Thank you, Ocelot!

on April 15, 2021 02:10 PM

April 12, 2021

One year ago I joined GitLab as a Solution Architect. In this blog post I do not want to focus on my role, my daily work, or anything pandemic-related. There also won’t be a huge focus on all-remote working. Instead, I want to focus on my personal experiences with the work culture: things I certainly did not think about before I joined GitLab (or any other company before).

Before joining GitLab I worked for four German companies. As a German with Sri Lankan Tamil heritage, I was always a minority at work. Most of the time it wasn’t an issue. At least that’s what I thought. At all those previous companies the staff was mostly white and male, with very few (or even no) non-male colleagues, especially in technical and leading roles. Nowadays, I realize what a huge difference a globally distributed company makes, with people from different countries, cultures, backgrounds, and genders.

There were so many small things which make a difference and which opened my eyes.

People are pronouncing my name correctly

Some of you might (hopefully) think:

Wait that was an issue!?

Yes, yes it was. And it was super annoying. Working in a globally distributed company means that the default behavior is: people ask how to correctly (!) pronounce the full (!) name. It’s a simple question, and it directly shows respect, even if you struggle to pronounce the name correctly the first time. My name is transcribed from Tamil to English, so the average colleague simply tries to pronounce it in English, and it’s perfect; that includes the German GitLab colleagues. In previous jobs there were a lot of colleagues who didn’t ask, and I was “the person with the complicated name”, “you know who”, or some even called me “Sushiwahn”. One former colleague referred to me in a phone call with a customer as “the other colleague”. That was not cool. If you wonder how to pronounce my name: I uploaded a recording to my profile website. I should’ve done that way earlier.

The meaning/origin of my name

I never really cared about the meaning of my name. So many people have asked me if my name has a meaning or what its origin is. I didn’t know, and I also didn’t really care. My mum always simply told me: “Your name has style.” My teammate Sri one day randomly dropped a message in our team channel:

If you break down your name into the root words, it basically translates to “Good Life (Sujeevan), Prince Of Victory (Vijayakumaran)”.

That blew my mind 🤯.

#BlackLivesMatter and #StopAsianHate

So many terrible things happened in the world last year. When these two movements appeared, it was a big topic in the company, even with messages in our #company-fyi channel, which is normally used for company-related announcements. While #BlackLivesMatter was covered in the German media, #StopAsianHate was not really covered there at all.

Around the time of #BlackLivesMatter my manager asked in our team meeting how, or if, it affects us, even though we – in our EMEA team – are far away from the US. I had the chance to share stories from my past which wouldn’t have happened to the average white person in a white-majority country. This never happened in any other company I worked for before. When the Berlin truck attack at a Christmas market happened (back in 2016), it was a big topic at lunchtime with the colleagues. When a racist shot German migrants in Hanau in February 2020, it was not really a topic at work. In one of the attacks the victims were Germans; in the other, migrants. Both shootings happened before I joined GitLab. When there was a shooting in Vienna in November 2020, colleagues directly jumped into the local Vienna channel and asked if everyone was okay. See the difference!


We have a #lang-de channel for German (the language, not the country!) related content. There are obviously many other channels for other languages. What surprised me? There were way more people outside of Germany, without a German background, who are learning or trying to learn German. It’s a small thing, but it’s cool! There were many discussions about word meanings and how to handle the German language. Personally, that got me thinking about whether I should pick up learning French again.

Meanings of Emojis

There are a lot of emojis, especially in Slack. At the beginning it was somewhat overwhelming, but I got used to it. One thing which confused me right after I joined was the usage of the 🙏🏻 emoji. Interestingly, the name of the emoji is simply “folded hands”. But what is its meaning? When I first saw it used, I was somewhat confused. For me as a Hindu it’s clearly “praying”. The second meaning which comes to my mind is its use as a form of greeting – see Namaste. However, there are so many colleagues who use it for some sort of “thanks”. Or even “sorry”. Emojis have different meanings in different cultures!

Different People – Different Mindsets

Since GitLab is an all-remote company, our informal communication happens in Slack channels and in coffee chats in Zoom calls. In the first weeks I scheduled a lot of coffee chats to get to know my teammates and some colleagues in other teams. The most useful bot (for me) in Slack is the Donut Bot, which randomly connects two people every two weeks. I don’t have to take care to randomly select people from different teams and departments. And honestly, I would most likely be somewhat biased if I cherry-picked people to schedule coffee chats with.

So every two weeks I get matched with a “random” person. This lowers the bar to talking to someone from another department where I (shamefully) thought: “Oh, that role sounds boring to me.” But if it sounds boring to me, that’s the first sign that I should talk to them. Without the Donut Bot I most likely would not have talked to someone from the Legal department, just to give one small example. And there were also a lot of engineers who didn’t really talk to anyone from Sales – and I am part of the Sales team. Even though we do not need to talk about work-related topics, I have generally learned something new by the time I leave the conversation.

However, the more interesting part is getting to know all the different people in different countries and on different continents, with different cultures. There are many colleagues who left their home country and live somewhere else. The majority of these people are either in the group “I moved because of my (previous) work” or “I moved because of my partner”. The most surprising sentence came from a Canadian colleague, though:

I’m thinking of relocating to Germany for a couple of years since it’s easily possible with GitLab. All my friends here are migrants and I really want to experience how it is to learn a new language and live in another country.

That was by far the most interesting reason I have heard so far! Besides that, my favorite question to ask people who moved away from their home country is what they are missing and what they would miss if they moved back. This also leads to some fascinating stories. Most of them are related to food, some to specific medicine, and some reasons are even “I like the $specific_mentality over here, which I would miss”.

I left out the more obvious parts of a globally distributed team, like getting to know what life is like in the not-so-rich countries of the world. Also, I finally understood the difference between the average German and the average Silicon Valley person: the latter is way more open to a visionary goal, while the average German wants to keep their safe job for a long time (yes, even in IT).

Mental Health Awareness

We have a lot of content related to Mental Health which I still need to check out. It’s a super important topic on so many different levels. At all my previous employers this was not a topic at all; I might even say it’s generally a taboo topic. One thing which I definitely did not expect was the introduction of the Family and Friends Day, which was introduced in May 2020, shortly after I joined the company, because of the COVID lockdowns, and it has happened nearly every month since then. On that day (nearly) the whole company has a day off to spend time with family and friends. My German friends’ reaction to that was something like:

Wait didn’t you join a hyper-growth startup-ish company? That doesn’t sound like late-stage capitalism what I would have expected!

In addition to that, there’s also a #mental-health-aware Slack channel where everyone can talk about their problems. I was really surprised to see so many team members share their problems and what they are currently struggling with. I couldn’t have imagined that people would share very personal stories within the company, including their experience of getting help from a therapist.

As someone who is somewhat of an introvert and struggles to talk to a lot of people in big groups in real life, this past year (and a few more months) has been relatively easy to handle in this regard, as I have only met four team members in person so far. However, the first in-person company event is coming up, and I’m pretty confident that getting in touch with a lot of mostly unknown people will be easier than at other companies I’ve worked for so far.

Things which I totally expected

There are still things which I expected to work as intended. Here’s a short list:

  • All-remote and working async work pretty damn well, and I really don’t want to go back to an office
  • Spending company money is easy and definitely not a hassle
  • Not having to justify how and when I exactly work is a huge relief
  • Not being forced to request paid time off is an unfamiliar feeling at the beginning, but I got used to it pretty quickly
  • Working with people with a vision who can additionally identify with the company is great
  • No real barriers between teams and departments
  • Values matter

For me personally, GitLab has set the bar pretty high for companies I might work for in the future. That’s good and bad at the same time ;-). If you want to read another “1 year at GitLab” story, I can highly recommend dnsmichi’s blog post from a month ago.

on April 12, 2021 07:15 PM

April 06, 2021

Full Circle Weekly News #204

Full Circle Magazine

Please welcome new host, Moss Bliss


DigiKam 7.2 released

4MLinux 36.0 released

Malicious changes detected in the PHP project Git repository

New version of Cygwin 3.2.0, the GNU environment for Windows

SeaMonkey 2.53.7 Released

Nitrux 1.3.9 with NX Desktop is Released

Parrot 4.11 Released with Security Checker Toolkit

Systemd 248 system manager released

GIMP 2.10.24 released

Deepin 20.2 ready for download

Installer added to Arch Linux installation images

Ubuntu 21.04 beta released

Full Circle Magazine
Host: @bardictriad
Bumper: Canonical
Theme Music: From The Dust - Stardust
on April 06, 2021 05:02 PM

Reuse Licensing Helper

Harald Sitter

It’s boring but important! Stay with me! Please! 😘

For the past couple of years Andreas Cord-Landwehr has done excellent work on moving KDE in a more structured licensing direction. Free software licensing is an often overlooked topic that is collectively understood to be important, but also incredibly annoying, bureaucratic, and complex. We all like to ignore it more than we should.

If you are working on KDE software you really should check out KDE’s licenses howto and maybe also glance over the comprehensive policy. In particular when you start a new repo!

I’d like to shine some light on a simple but incredibly useful tool: reuse. reuse helps you check licensing compliance with some incredibly easy commands.

Say you start a new project. You create your prototype source, maybe add a readme – after a while it’s good enough to make public and maybe propose for inclusion as mature KDE software by going through KDE Review. You submit it for review and if you are particularly unlucky you’ll have me come around the corner and lament how your beautiful piece of software isn’t completely free software because some files lack any sort of licensing information. Alas!

See, you had better use reuse…

pip3 install --user reuse

reuse lint: lints the source and tells you which files aren’t licensed

reuse download --all: downloads the complete license files needed for compliance based on the licenses used in your source (unfortunately you’ll still need to manually create the KDEeV variants)
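For reference, this is roughly what reuse lint looks for in each file: an SPDX copyright tag and an SPDX license identifier. The file name and copyright holder below are made up for illustration:

```shell
# Write a minimal source file carrying the two SPDX tags reuse checks for
cat > hello.c <<'EOF'
/*
    SPDX-FileCopyrightText: 2021 Jane Doe <jane@example.com>
    SPDX-License-Identifier: GPL-2.0-or-later
*/
int main(void) { return 0; }
EOF
grep -c 'SPDX-' hello.c   # 2 (both tags present)
```

With headers like this in every file and the license texts downloaded into LICENSES/, reuse lint comes back clean.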

If you are unsure how to license a given file, consult the licensing guide or the policy, or send a mail to one of the devel mailing lists. There’s help aplenty.

Now that you know about the reuse tool there’s even less reason to start projects without 100% compliance so I can shut up about it 🙂

on April 06, 2021 01:03 PM

April 05, 2021

Previously: v5.8

Linux v5.9 was released in October, 2020. Here’s my summary of various security things that I found interesting:

seccomp user_notif file descriptor injection
Sargun Dhillon added the ability for SECCOMP_RET_USER_NOTIF filters to inject file descriptors into the target process using SECCOMP_IOCTL_NOTIF_ADDFD. This lets container managers fully emulate syscalls like open() and connect(), where an actual file descriptor is expected to be available after a successful syscall. In the process I fixed a couple bugs and refactored the file descriptor receiving code.

zero-initialize stack variables with Clang
When Alexander Potapenko landed support for Clang’s automatic variable initialization, it did so with a byte pattern designed to really stand out in kernel crashes. Now he’s added support for doing zero initialization via CONFIG_INIT_STACK_ALL_ZERO, which besides actually being faster, has a few behavior benefits as well. “Unlike pattern initialization, which has a higher chance of triggering existing bugs, zero initialization provides safe defaults for strings, pointers, indexes, and sizes.” Like the pattern initialization, this feature stops entire classes of uninitialized stack variable flaws.

common syscall entry/exit routines
Thomas Gleixner created architecture-independent code to do syscall entry/exit, since much of the kernel’s work during a syscall entry and exit is the same. There was no need to repeat this in each architecture, and having it implemented separately meant bugs (or features) might only get fixed (or implemented) in a handful of architectures. It means that features like seccomp become much easier to build since it wouldn’t need per-architecture implementations any more. Presently only x86 has switched over to the common routines.

SLAB kfree() hardening
To reach CONFIG_SLAB_FREELIST_HARDENED feature-parity with the SLUB heap allocator, I added naive double-free detection and the ability to detect cross-cache freeing in the SLAB allocator. This should keep a class of type-confusion bugs from biting kernels using SLAB. (Most distro kernels use SLUB, but some smaller devices prefer the slightly more compact SLAB, so this hardening is mostly aimed at those systems.)

CAP_CHECKPOINT_RESTORE
Adrian Reber added the new CAP_CHECKPOINT_RESTORE capability, splitting this functionality off of CAP_SYS_ADMIN. The need for the kernel to correctly checkpoint and restore a process (e.g. used to move processes between containers) continues to grow, and it became clear that the security implications were lower than those of CAP_SYS_ADMIN yet distinct from other capabilities. Using this capability is now the preferred method for doing things like changing /proc/self/exe.

debugfs boot-time visibility restriction
Peter Enderborg added the debugfs boot parameter to control the visibility of the kernel’s debug filesystem. The contents of debugfs continue to be a common area of sensitive information being exposed to attackers. While this was effectively possible by unsetting CONFIG_DEBUG_FS, that wasn’t a great approach for system builders needing a single set of kernel configs (e.g. a distro kernel), so now it can be disabled at boot time.
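For illustration, the parameter is set on the kernel command line; the exact spellings below are my recollection of the admin-guide documentation, so treat them as an assumption and verify against your kernel:

```
debugfs=on        # default: debugfs behaves as usual
debugfs=no-mount  # the API stays available to the kernel, but it cannot be mounted
debugfs=off       # debugfs is not registered at all
```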

more seccomp architecture support
Michael Karcher implemented the SuperH seccomp hooks, Guo Ren implemented the C-SKY seccomp hooks, and Max Filippov implemented the xtensa seccomp hooks. Each of these included the ever-important updates to the seccomp regression testing suite in the kernel selftests.

stack protector support for RISC-V
Guo Ren implemented -fstack-protector (and -fstack-protector-strong) support for RISC-V. This is the initial global-canary support while the patches to GCC to support per-task canaries are getting finished (similar to the per-task canaries done for arm64). This will mean nearly all stack frame write overflows are no longer useful to attackers on this architecture. It’s nice to see this finally land for RISC-V, which is quickly approaching architecture feature parity with the other major architectures in the kernel.

new tasklet API
Romain Perier and Allen Pais introduced a new tasklet API to make their use safer. Much like the timer_list refactoring work done earlier, the tasklet API is also a potential source of simple function-pointer-and-first-argument controlled exploits via linear heap overwrites. It’s a smaller attack surface since it’s used much less in the kernel, but it is the same weak design, making it a sensible thing to replace. While use of the tasklet API is considered deprecated (replaced by threaded IRQs), converting isn’t always a simple mechanical refactoring, so the old API still needed fixing (since that CAN be done mechanically in most cases).

x86 FSGSBASE implementation
Sasha Levin, Andy Lutomirski, Chang S. Bae, Andi Kleen, Tony Luck, Thomas Gleixner, and others landed the long-awaited FSGSBASE series. This provides task switching performance improvements while keeping the kernel safe from modules accidentally (or maliciously) trying to use the features directly (which exposed an unprivileged direct kernel access hole).

filter x86 MSR writes
While it’s been long understood that writing to CPU Model-Specific Registers (MSRs) from userspace was a bad idea, it has been left enabled for things like MSR_IA32_ENERGY_PERF_BIAS. Boris Petkov has decided enough is enough and has now enabled logging and kernel tainting (TAINT_CPU_OUT_OF_SPEC) by default and a way to disable MSR writes at runtime. (However, since this is controlled by a normal module parameter and the root user can just turn writes back on, I continue to recommend that people build with CONFIG_X86_MSR=n.) The expectation is that userspace MSR writes will be entirely removed in future kernels.
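For completeness, the runtime switch mentioned above is an ordinary module parameter; the parameter name here is an assumption from memory, so verify it against your kernel’s msr module before relying on it:

```
# deny (or re-allow) userspace MSR writes at runtime
echo off > /sys/module/msr/parameters/allow_writes
```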

uninitialized_var() macro removed
I made treewide changes to remove the uninitialized_var() macro, which had been used to silence compiler warnings. The rationale for this macro was weak to begin with (“the compiler is reporting an uninitialized variable that is clearly initialized”) since it was mainly papering over compiler bugs. However, it creates a much more fragile situation in the kernel since now such uses can actually disable automatic stack variable initialization, as well as mask legitimate “unused variable” warnings. The proper solution is to just initialize variables the compiler warns about.

function pointer cast removals
Oscar Carter has started removing function pointer casts from the kernel, in an effort to allow the kernel to build with -Wcast-function-type. The future use of Control Flow Integrity checking (which does validation of function prototypes matching between the caller and the target) tends not to work well with function casts, so it’d be nice to get rid of these before CFI lands.

flexible array conversions
As part of Gustavo A. R. Silva’s on-going work to replace zero-length and one-element arrays with flexible arrays, he has documented the details of the flexible array conversions, and the various helpers to be used in kernel code. Every commit gets the kernel closer to building with -Warray-bounds, which catches a lot of potential buffer overflows at compile time.

That’s it for now! Please let me know if you think anything else needs some attention. Next up is Linux v5.10.

© 2021, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.

on April 05, 2021 11:24 PM

April 01, 2021

The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 21.04, codenamed Hirsute Hippo.

While this beta is reasonably free of any showstopper DVD build or installer bugs, you may find some bugs within. This image is, however, reasonably representative of what you will find when Ubuntu Studio 21.04 is released on April 22, 2021.

Please note: Due to the change in desktop environment, directly upgrading to Ubuntu Studio 21.04 from 20.04 LTS is not supported and will not be supported.  However, upgrades from Ubuntu Studio 20.10 will be supported. See the Release Notes for more information.

Images can be obtained from this link:

Full updated information is available in the Release Notes.

New Features

Ubuntu Studio 21.04 includes the new KDE Plasma 5.21 desktop environment. This is a beautiful and functional upgrade to previous versions, and we believe you will like it.

Agordejo, a refined GUI frontend to New Session Manager, is now included by default. This uses the standardized session manager calls throughout the Linux Audio community to work with various audio tools.

Studio Controls is upgraded to 2.1.4 and includes a host of improvements and bug fixes.

BSEQuencer, Bshapr, Bslizr, and BChoppr are included as new plugins, among others.

QJackCtl has been upgraded to 0.9.1 and is a huge improvement. However, we still maintain that JACK should be started with Studio Controls for its features; QJackCtl remains a good patchbay and JACK system monitor.

There are many other improvements, too numerous to list here. We encourage you to take a look around the freely-downloadable ISO image.

Known Issues

Official Ubuntu Studio release notes can be found at

Further known issues, mostly pertaining to the desktop environment, can be found at

Additionally, the main Ubuntu release notes contain more generic issues:

Please Test!

If you have some time, we’d love for you to join us in testing. Testing begins…. NOW!

on April 01, 2021 09:58 PM

March 31, 2021

Google Pixel phones support what they call ”Motion Photo” which is essentially a photo with a short video clip attached to it. They are quite nice since they bring the moment alive, especially as the capturing of the video starts a small moment before the shutter button is pressed. For most viewing programs they simply show as static JPEG photos, but there is more to the files.

I’d really love proper Shotwell support for these file formats, so I posted a longish explanation with many of the details in this blog post to a ticket there too. Examples of the newer format are linked there too.

Info posted to Shotwell ticket

There are actually two different formats, an old one that is already obsolete, and a newer current format. The older ones are those that your Pixel phone recorded as ”MVIMG_[datetime].jpg”, and they have the following meta-data:

Xmp.GCamera.MicroVideo                        XmpText     1  1
Xmp.GCamera.MicroVideoVersion                 XmpText     1  1
Xmp.GCamera.MicroVideoOffset                  XmpText     7  4022143
Xmp.GCamera.MicroVideoPresentationTimestampUs XmpText     7  1331607

The offset is actually from the end of the file, so one needs to calculate accordingly. But it is exact otherwise, so one can simply extract the video using that meta-data:

#!/bin/bash
# Extracts the microvideo from a MVIMG_*.jpg file

# The offset is from the end of the file, so calculate accordingly
offset=$(exiv2 -p X "$1" | grep MicroVideoOffset | sed 's/.*\"\(.*\)"/\1/')
filesize=$(du --apparent-size --block-size=1 "$1" | sed 's/^\([0-9]*\).*/\1/')
extractposition=$(expr $filesize - $offset)
echo offset: $offset
echo filesize: $filesize
echo extractposition=$extractposition
# copy everything after the extract position, i.e. the embedded MP4
dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg "$1").mp4"
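The offset-from-the-end arithmetic can be sanity-checked on a synthetic file (the file names and contents here are made up for the demonstration):

```shell
# A 9-byte "video" appended to a "photo"; recover it knowing only its length
printf 'PHOTODATA' >  combined.bin
printf 'VIDEOPART' >> combined.bin
offset=9                              # what MicroVideoOffset would report
filesize=$(stat -c %s combined.bin)   # 18 bytes total
extractposition=$((filesize - offset))
dd if=combined.bin skip=1 bs=$extractposition of=video.bin 2>/dev/null
cat video.bin                         # VIDEOPART
```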

The newer format is recorded in filenames called ”PXL_[datetime].MP.jpg”, and they have a _lot_ of additional metadata:

Xmp.GCamera.MotionPhoto                                  XmpText     1  1
Xmp.GCamera.MotionPhotoVersion                           XmpText     1  1
Xmp.GCamera.MotionPhotoPresentationTimestampUs           XmpText     6  233320
Xmp.xmpNote.HasExtendedXMP                               XmpText    32  E1F7505D2DD64EA6948D2047449F0FFA
Xmp.Container.Directory                                  XmpText     0  type="Seq"
Xmp.Container.Directory[1]                               XmpText     0  type="Struct"
Xmp.Container.Directory[1]/Container:Item                XmpText     0  type="Struct"
Xmp.Container.Directory[1]/Container:Item/Item:Mime      XmpText    10  image/jpeg
Xmp.Container.Directory[1]/Container:Item/Item:Semantic  XmpText     7  Primary
Xmp.Container.Directory[1]/Container:Item/Item:Length    XmpText     1  0
Xmp.Container.Directory[1]/Container:Item/Item:Padding   XmpText     1  0
Xmp.Container.Directory[2]                               XmpText     0  type="Struct"
Xmp.Container.Directory[2]/Container:Item                XmpText     0  type="Struct"
Xmp.Container.Directory[2]/Container:Item/Item:Mime      XmpText     9  video/mp4
Xmp.Container.Directory[2]/Container:Item/Item:Semantic  XmpText    11  MotionPhoto
Xmp.Container.Directory[2]/Container:Item/Item:Length    XmpText     7  1679555
Xmp.Container.Directory[2]/Container:Item/Item:Padding   XmpText     1  0

Sounds like fun and lots of information. However, I didn’t see why the “Length” in the first item is 0, and I didn’t see how to use the latter Length info. But I can use the mp4 headers to extract it:

#!/bin/bash
# Extracts the motion part of a MotionPhoto file PXL_*.MP.jpg

# find the byte offset at which the embedded MP4 (its ftyp mp42 box) begins
extractposition=$(grep --binary --byte-offset --only-matching --text \
-P "\x00\x00\x00\x18\x66\x74\x79\x70\x6d\x70\x34\x32" "$1" | sed 's/^\([0-9]*\).*/\1/')

dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg "$1").mp4"
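The grep trick can likewise be checked against a synthetic file (again with made-up contents); the reported byte offset is exactly where the MP4 container begins:

```shell
# Build a fake file: 12 bytes of "JPEG" data followed by an mp42 ftyp box
printf 'JPEGJPEGJPEG' > fake.mp.jpg
printf '\000\000\000\030ftypmp42REST' >> fake.mp.jpg
pos=$(grep --binary --byte-offset --only-matching --text \
  -P "\x00\x00\x00\x18\x66\x74\x79\x70\x6d\x70\x34\x32" fake.mp.jpg \
  | sed 's/^\([0-9]*\).*/\1/')
echo "$pos"   # 12
```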

UPDATE: I wrote most of this blog post earlier. Now, actually getting to publish it a week later, I see the obvious: the ”Length” is again simply the offset from the end of the file, so one could use the same, less brute-force approach as for MVIMG. I’ll leave the above as is, however, for the ❤️ of binary grepping.

(cross-posted to my other blog)

on March 31, 2021 11:06 AM

On a little bit of a tangent from my typical security posting, I thought I’d include some of my “making” efforts.

Due to working from home for an extended period of time, I wanted to improve my video-conferencing setup somewhat. I have my back to windows, so the lighting is pretty bad, and I wanted to get some lights. I didn’t want to spend big money, so I got this set of Neewer USB-powered lights. It came with tripod bases, monopod-style stands, and ball heads to mount the lights.

The lights work well and are a great value for the money, but the stands are not as great. The tripods are sufficiently light that they’re easy to knock over, and they take more desk space than I’d really like. I have a lot of stuff on my desk and appreciate desk real estate, so I go to great lengths to minimize permanent fixtures on the desk. I have my monitors on monitor arms, my desk lamp on a mount, etc. I really wanted to minimize the space used by these lights.

I looked for an option to clamp to the desk and support the existing monopods with the light. I found a couple of options on Amazon, but they either weren’t ideal, or I was going to end up spending as much on the clamps as I did on the lamps. I wanted to see if I could do an alternative.

I have a 3D printer, so almost every real-world problem looks like a use case for 3D printing, and this was no exception. I wasn’t sure if a 3D-printed clamp would have the strength and capability to support the lights, and didn’t think the printer could make threads small enough to fit into the base of the lamp monopods (which accept a 1/4x20 thread, just like those used on cameras and other photography equipment).

I decided to see if I could incorporate a metal thread into a 3D printed part in some way. There are threaded inserts you can implant into a 3D print, but I was concerned about the strength of that connection, and would still need a threaded adapter to connect the two (since both ends would now be a “female” connector). Instead, I realized I could incorporate a 1/4x20 bolt into the print. I settled on 3/8” length so it wouldn’t stick too far through the print and a hex head so it wouldn’t rotate in the print, making screwing/unscrewing the item easier.

I designed a basic clamp shape with a 2” opening for the desk, and then used this excellent thread library to make a large screw in the device to clamp it to the desk from the bottom. I put an inset for the hex head in the top and a hole for the screw to fit through. When I printed my first test, I was pretty concerned that things wouldn’t fit or would break at the slightest torquing.

Clamp Sideview

Much to my own surprise, it just worked! The screw threads on the clamp side were a little bit tight at first, but they work quite well, and certainly don’t come undone over time. I’ve now had my light mounted on one of these clamps for a few months and no problems, but I would definitely not recommend a 3D printed clamp for something heavy or very valuable. (If I’m going to hold up a several thousand dollar camera, I’m going to mount it on proper mounts.)

Clamp On Table

Note on printing: If you want to 3D print this yourself, lay the clamp on its side on the print bed. Not only do you avoid needing support, you also ensure that the layer lines run along the “spine” of the clamp, so stress doesn’t separate the layers.

Clamp Model

on March 31, 2021 07:00 AM