January 28, 2020

Welcome to the Ubuntu Weekly Newsletter, Issue 615 for the week of January 19 – 25, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License


January 27, 2020

It has been a while since the last AppStream-related post (or any post for that matter) on this blog, but of course development didn’t stand still all this time. Quite the opposite – it was just me writing less about it, which actually is a problem as some of the new features are much less visible. People don’t seem to re-read the specification constantly for some reason 😉. As a consequence, we have pretty good adoption of features I blogged about (like fonts support), but much of the new stuff is still not widely used. Also, I had to make a promise to several people to blog about the new changes more often, and I am definitely planning to do so. So, expect posts about AppStream stuff a bit more often now.

What actually was AppStream again? The AppStream Freedesktop Specification describes two XML metadata formats to describe software components: One for software developers to describe their software, and one for distributors and software repositories to describe (possibly curated) collections of software. The format written by upstream projects is called Metainfo and encompasses any data installed in /usr/share/metainfo/, while the distribution format is just called Collection Metadata. A reference implementation of the format and related features written in C/GLib exists as well as Qt bindings for it, so the data can be easily accessed by projects which need it.

The software metadata contains a unique ID for the respective software so it can be identified across software repositories. For example, the VLC media player is known by the ID org.videolan.vlc in every software repository, no matter whether it’s the package archives of Debian, Fedora, Ubuntu or a Flatpak repository. The metadata also contains translatable names, summaries, descriptions, release information etc. as well as a type for the software. In general, any information about a software component that is in some form relevant to displaying it in software centers is or can be present in AppStream. The newest revisions of the specification also provide a lot of technical data for systems to make the right choices on behalf of the user, e.g. Fwupd uses AppStream data to describe compatible devices for a certain firmware, and the mediatype information in AppStream metadata can be used to more easily install applications for an unknown filetype. Information AppStream does not contain is data the software bundling systems are responsible for. So mechanistic data on how to build a software component, or how exactly to install it, is out of scope.

So, now let’s finally get to the new AppStream features since last time I talked about it – which was almost two years ago, so quite a lot of stuff has accumulated!

Specification Changes/Additions

Web Application component type

(Since v0.11.7) A new component type web-application has been introduced to describe web applications. A web application can for example be Gmail, YouTube or Twitter, launched by the browser in a special mode with less chrome. Fundamentally though, it is a simple web link. Therefore, web apps need a launchable tag of type url to specify a URL used to launch them. Refer to the specification for details. Here is a (shortened) example metainfo file for the Riot Matrix client web app:

<component type="web-application">
  <id>im.riot.webapp</id>
  <name>Riot</name>
  <summary>A glossy Matrix collaboration client for the web</summary>
  <description>
    <p>Communicate with your team[...]</p>
  </description>
  <icon type="stock">im.riot.webapp</icon>
  <url type="homepage">https://riot.im/</url>
  <launchable type="url">https://riot.im/app</launchable>
</component>

Repository component type

(Since v0.12.1) The repository component type describes a repository of downloadable content (usually other software) to be added to the system. Once a component of this type is installed, the user has access to the new content. In case the repository contains proprietary software, this component type pairs well with the agreements section.

This component type can be used to provide easy installation of e.g. trusted Debian or Fedora repositories, but also can be used for other downloadable content. Refer to the specification entry for more information.
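A minimal Metainfo sketch for such a repository component could look like the following; the ID, name and URL are invented for illustration, so check the specification for the exact set of required tags:

```xml
<component type="repository">
  <id>com.example.AddonRepo</id>
  <name>Example Add-on Repository</name>
  <summary>Additional applications and themes for Example OS</summary>
  <url type="homepage">https://repo.example.com/</url>
</component>
```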

Operating System component type

(Since v0.12.5) It makes sense for the operating system itself to be represented in the AppStream metadata catalog. Information about it can be used by software centers to display details about the current OS release and to notify about possible system upgrades. It also serves as a component to which software centers can attribute package updates that do not have their own AppStream metadata. The operating-system component type was designed for this, and you can find more information about it in the specification documentation.

Icon Theme component type

(Since v0.12.8) While styles, themes, desktop widgets etc. are already covered in AppStream via the addon component type, as they are specific to the toolkit and desktop environment, there is one exception: Icon themes are described by a Freedesktop specification and (usually) work independently of the desktop environment. Because of that, and on request of desktop environment developers, a new icon-theme component type was introduced to describe icon themes specifically. From the data I see in the wild, and in Debian specifically, this component type appears to be very underutilized. So if you are an icon theme developer, consider adding a metainfo file to make the theme show up in software centers! You can find a full description of this component type in the specification.
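As a rough illustration, a metainfo file for a hypothetical icon theme might start like this (the names and ID are invented; see the specification for the full set of tags):

```xml
<component type="icon-theme">
  <id>org.example.BrightIcons</id>
  <metadata_license>FSFAP</metadata_license>
  <name>Bright Icons</name>
  <summary>A flat, colorful icon theme</summary>
</component>
```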

Runtime component type

(Since v0.12.10) A runtime is mainly known in the context of Flatpak bundles, but it actually is a more universal concept. A runtime describes a defined collection of software components used to run other applications. To represent runtimes in the software catalog, the new runtime component type was introduced in the specification; Flatpak had already been using it for a while as a nonstandard extension.
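A runtime component sketch could look like the following (the ID and name are hypothetical):

```xml
<component type="runtime">
  <id>org.example.Platform</id>
  <name>Example Platform</name>
  <summary>Shared libraries and services used to run Example applications</summary>
</component>
```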

Release types

(Since v0.12.0) Not all software releases are created equal. Some may be for general use, others may be development releases on the way to becoming an actual final release. In order to reflect that, AppStream introduced a type property on the release tag in a releases block, which can be set to either stable or development. Software centers can then decide to hide or show development releases.
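For illustration, a releases block using the type property might look like this (version numbers and dates are invented):

```xml
<releases>
  <release type="development" version="1.1~rc1" date="2020-01-10"/>
  <release type="stable" version="1.0" date="2019-12-04"/>
</releases>
```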

End-of-life date for releases

(Since v0.12.5) Some software releases have an end-of-life date from which onward they will no longer be supported by the developers. This is especially true for Linux distributions, which are described in an operating-system component. To define an end-of-life date, a release in AppStream can now have a date_eol property, using the same syntax as the date property but defining the date when the release will no longer be supported (refer to the releases tag definition).
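A sketch of a release carrying an end-of-life date (the version and dates are invented):

```xml
<releases>
  <release version="2.0" date="2020-01-15" date_eol="2022-01-15"/>
</releases>
```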

Details URL for releases

(Since v0.12.5) The release descriptions are short, text-only summaries of a release, usually only consisting of a few bullet points. They are intended to give users a quick-to-read overview of a new release that can be displayed directly in the software updater. But sometimes you want more than that. Maybe you are an application like Blender or Krita and have prepared an extensive website with an in-depth overview, images and videos describing the new release. For these cases, AppStream now permits a url tag in a release tag, pointing to a website that contains more information about a particular release.
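For example, a release entry could point to a hypothetical release page like this:

```xml
<releases>
  <release version="3.1" date="2020-01-20">
    <url>https://example.org/releases/3.1.html</url>
  </release>
</releases>
```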

Release artifacts

(Since v0.12.6) For a while, AppStream release entries were limited to version numbers and release notes, without links to the actual released artifacts. This was intentional, as any information on how to get or install software should come from the bundling/packaging system the Collection Metadata was generated for.

But AppStream metadata has outgrown this narrowly defined purpose and has since been used for a lot more things, like generating HTML download pages for software, making it the canonical source for all software metadata in some projects. From Richard Hughes’ awesome Fwupd project also came the need to link to firmware binaries from an AppStream metadata file, as the LVFS/Fwupd use AppStream metadata exclusively to provide metadata for firmware. Therefore, the specification was extended with an artifacts tag for releases, to link to the actual release binaries and tarballs. This replaced the previous makeshift “release location” tag.

Release artifacts always have to link to the released files directly, so the releases can be acquired by machines immediately and without human intervention. An artifact can have a type of source or binary, indicating whether a source tarball or a binary artifact is linked. Each binary artifact can also have an associated platform, which may be a triplet for Linux systems, an identifier for firmware, or any other platform identifier. Furthermore, we permit sha256 and blake2 checksums for the release artifacts, as well as specifying download and installed sizes. Take a look at the example below, or read the specification for details.

  <release version="1.2" date="2014-04-12" urgency="high">
    [...]
    <artifacts>
      <artifact type="binary" platform="x86_64-linux-gnu">
        <location>https://example.com/mytarball.bin.tar.xz</location>
        <checksum type="blake2">852ed4aff45e1a9437fe4774b8997e4edfd31b7db2e79b8866832c4ba0ac1ebb7ca96cd7f95da92d8299da8b2b96ba480f661c614efd1069cf13a35191a8ebf1</checksum>
        <size type="download">12345678</size>
        <size type="installed">42424242</size>
      </artifact>
      <artifact type="source">
        <location>https://example.com/mytarball.tar.xz</location>
        [...]
      </artifact>
    </artifacts>
  </release>

Issue listings for releases

(Since v0.12.9) Software releases often fix issues, sometimes security-relevant ones that have a CVE ID. AppStream provides a machine-readable way to figure out which components on your system are currently vulnerable to which CVE-registered issues. Additionally, a release can also simply reference normal resolved bugs via bugtracker URLs. Refer to the specification for details. Example for the issues tag in AppStream Metainfo files:

  <issues>
    <issue url="https://example.com/bugzilla/12345">bz#12345</issue>
    <issue type="cve">CVE-2019-123456</issue>
  </issues>

Requires and Recommends relations

(Since v0.12.0) Sometimes software has hard requirements that only some systems can satisfy, and sometimes it recommends specific features of the system it will run on in order to perform at its best.

I was against adding relations to AppStream for quite a while, as doing so would add a more “functional” dimension to it, impacting how and when software is installed, rather than keeping the format purely descriptive and nonessential for installing software correctly. However, AppStream has pretty much outgrown its initial narrow scope, and adding relation information to Metainfo files was a natural step to take. For Fwupd it was an essential step, as firmware might have certain hard requirements on the system in order to be installed properly. And AppStream requirements and recommendations go way beyond what regular package dependencies could do in Linux distributions so far.

Requirements and recommendations can be on other software components via their id, on a modalias, specific kernel version, existing firmware version or for making system memory recommendations. See the specification for details on how to use this. Example:

  <requires>
    <id version="1.0" compare="ge">org.example.MySoftware</id>
    <kernel version="5.6" compare="ge">Linux</kernel>
  </requires>
  <recommends>
    <memory>2048</memory> <!-- recommend at least 2 GiB of memory -->
  </recommends>

This means that AppStream currently supports provides, suggests, recommends and requires relations to refer to other software components or system specifications.

Agreements

(Since v0.12.1) The new agreement section in AppStream Metainfo files was added to make it easier for software to be compliant to the EU GDPR. It has since been expanded to be used for EULAs as well, which was a request coming (to no surprise) from people having to deal with corporate and proprietary software components. An agreement consists of individual sections with headers and descriptive texts and should – depending on the type – be shown to the user upon installation or first use of a software component. It can also be very useful in case the software component is a firmware or driver (which often is proprietary – and companies really love their legal documents and EULAs).
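As a rough sketch, an EULA agreement could look like the following; the exact tag names and attributes here reflect my reading of the specification, so double-check it before copying:

```xml
<agreement type="eula" version_id="1.0">
  <agreement_section type="intro">
    <name>Terms of Use</name>
    <description>
      <p>This software is provided under the following terms[...]</p>
    </description>
  </agreement_section>
</agreement>
```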

Contact URL type

(Since v0.12.4) The contact URL type can be used to simply set a link back to the developer of the software component. This may be a URL to a contact form, their website or even a mailto: link. See the specification for all URL types AppStream supports.
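For instance (with hypothetical addresses):

```xml
<url type="homepage">https://example.com/myapp</url>
<url type="contact">mailto:developer@example.com</url>
```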

Videos as software screenshots

(Since v0.12.8) This one was quite long in the making – the feature request for videos as screenshots had been filed in early 2018. I was a bit wary about adding video, as it can lead into codec and container hell, requires software centers to support video playback, and could potentially have forced appstream-generator into video transcoding, which I really wanted to avoid. Alternatively, we would have had to make AppStream support multiple, likely proprietary, video hosting platforms, which certainly would have been a bad idea on every level. Additionally, I didn’t want people to add really long introductory videos to their applications.

Ultimately, the problem was solved by simplification and reduction: People can add a video as “screenshot” to their software components, as long as it isn’t the first screenshot in the list. We only permit the vp9 and av1 codecs and the webm and matroska container formats. Developers should expect the audio of their videos to be muted, but if audio is present, the opus codec must be used. Videos will be size-limited, for example Debian imposes a 14MiB limit on video filesize. The appstream-generator will check for all of these requirements and reject a video in case it doesn’t pass one of the checks. This should make implementing videos in software centers easy, and also provide the safety guarantees and flexibility we want.

So far we have not seen many videos used for application screenshots. As always, check the specification for details on videos in AppStream. Example use in a screenshots tag:

  <screenshots>
    <screenshot type="default">
      <image type="source" width="1600" height="900">https://example.com/foobar/screenshot-1.png</image>
    </screenshot>
    <screenshot>
      <video codec="av1" width="1600" height="900">https://example.com/foobar/screencast.mkv</video>
    </screenshot>
  </screenshots>

Emphasis and code markup in descriptions

(Since v0.12.8) It has long been requested to have a bit more expressive markup in AppStream descriptions, at least more than just lists and paragraphs. That did not happen for a while, as it would be a breaking change for all existing AppStream parsers. Additionally, I didn’t want AppStream descriptions to become long, general-purpose “how to use this software” documents. They are intended to give a quick overview of the software, not comprehensive information. However, ultimately we decided to add support for at least two more elements to format text: inline code elements as well as em emphases. There may be more to come, but that’s it for now. This change was made about half a year ago, and people are currently advised to use the new styling tags sparingly, as otherwise their software descriptions may look odd when parsed with older AppStream implementations.
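A small, hypothetical description snippet using the new elements:

```xml
<description>
  <p>Use the <code>frobnicate</code> subcommand to clean up the database.
  Note that this operation can <em>not</em> be undone!</p>
</description>
```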

Remove-component merge mode

(Since v0.12.4) This addition is specified for the Collection Metadata only, as it affects curation. Since AppStream metadata is in one big pool for Linux distributions, and distributions like Debian freeze their repositories, it sometimes is required to merge metadata from different sources on the client system instead of generating it in the right format on the server. This can also be used for curation by vendors of software centers. In order to edit preexisting metadata, special merge components are created. These can permit appending data, replacing data etc. in existing components in the metadata pool. The one thing that was missing was a mode that permitted the complete removal of a component. This was added via a special remove-component merge mode. This mode can be used to pull metadata from a software center’s catalog immediately even if the original metadata was frozen in place in a package repository. This can be very useful in case an inappropriate software component is found in the repository of a Linux distribution post-release. Refer to the specification for details.
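A sketch of such a merge component, assuming the merge attribute used to mark merge components in Collection Metadata (the component ID is invented):

```xml
<component type="desktop-application" merge="remove-component">
  <id>org.example.InappropriateApp</id>
</component>
```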

Custom metadata

(Since v0.12.1) The AppStream specification is extensive, but it can not fit every single special usecase. Sometimes requests come up that can’t be generalized easily, and occasionally it is useful to prototype a feature first to see if it is actually used before adding it to the specification properly. For that purpose, the custom tag exists. The tag defines a simple key-value structure that people can use to inject arbitrary metadata into an AppStream metainfo file. The libappstream library will read this tag by default, providing easy access to the underlying data. Thereby, the data can easily be used by custom applications designed to parse it. It is important to note that the appstream-generator tool will by default strip the custom data from files unless it has been whitelisted explicitly. That way, the creator of a metadata collection for a (package) repository has some control over what data ends up in the resulting Collection Metadata file. See the specification for more details on this tag.
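For illustration, a custom block with invented keys might look like this:

```xml
<custom>
  <value key="myproject::color-scheme">dark</value>
  <value key="myproject::priority">10</value>
</custom>
```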

Miscellaneous additions

(Since v0.12.9) In addition to JPEG and PNG, WebP images are now permitted for screenshots in Metainfo files. These images will – like every image – be converted to PNG by the tool generating Collection Metadata for a repository, though.

(Since v0.12.10) The specification now contains a new name_variant_suffix tag, which is a translatable string that software lists may append to the name of a component in case there are multiple components with the same name. This is intended to be primarily used for firmware in Fwupd, where firmware may have the same name but actually be slightly different (e.g. region-specific). In these cases, the additional name suffix is shown to make it easier to distinguish the different components in case multiple are present.
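A hypothetical firmware example:

```xml
<name>ExampleDevice Firmware</name>
<name_variant_suffix>EU region</name_variant_suffix>
```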

(Since v0.12.10) AppStream has a URI format to install applications directly from webpages via the appstream: scheme. This URI scheme now permits alternative IDs for the same component, in case it switched its ID in the past. Take a look at the specification for details about the URI format.

(Since v0.12.10) AppStream now supports version 1.1 of the Open Age Rating Service (OARS), so applications (especially games) can voluntarily age-rate themselves. AppStream does not replace parental guidance here, and all data is purely informational.
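A content_rating sketch using OARS 1.1 attribute IDs (the chosen attributes are just examples):

```xml
<content_rating type="oars-1.1">
  <content_attribute id="violence-cartoon">mild</content_attribute>
  <content_attribute id="social-chat">intense</content_attribute>
</content_rating>
```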

Library & Implementation Changes

Of course, besides changes to the specification, the reference implementation also received a lot of improvements. There are too many to list them all, but a few are worth mentioning here.

No more automatic desktop-entry file loading

(Since v0.12.3) By default, libappstream was loading information from local .desktop files into the metadata pool of installed applications. This was done to ensure installed apps were represented in software centers, so they could be uninstalled. This caused much more pain than it was worth, though, with metadata appearing two to three times in software centers because people didn’t set the X-AppStream-Ignore=true key in their desktop-entry files. Also, the generated data was of pretty poor quality. So, newer versions of AppStream will only load data of installed software that doesn’t have an equivalent in the repository metadata if it ships a metainfo file. One more good reason to ship a metainfo file!

Software centers can override this new default behavior by setting the AS_POOL_FLAG_READ_DESKTOP_FILES flag for AsPool instances (which many already did anyway).

LMDB caches and other caching improvements

(Since v0.12.7) One of the biggest pain points in adding new AppStream features was always adjusting the (de)serialization of the new markup: AppStream exists as a YAML version for Debian-based distributions for Collection Metadata, an XML version based on the Metainfo format as default, and a GVariant binary serialization for on-disk caching. The latter was used to drastically reduce memory consumption and increase speed of software centers: Instead of loading all languages, only the one we currently needed was loaded. The expensive icon-finding logic, building of the token cache for searches and other operations were performed and the result was saved as a binary cache on-disk, so it was instantly ready when the software center was loaded next.

Adjusting three serialization formats was pretty laborious and a very boring task. At one point I benchmarked the (de)serialization performance of the different formats and found that the XML reading/writing was actually massively outperforming that of the GVariant cache. Since the XML parser received much more attention, that was only natural (but there were also other issues with GVariant deserializing large dictionary structures).

Ultimately, I removed the GVariant serialization and replaced it with a memory-mapped XML-based cache that reuses 99.9% of the existing XML serialization code. The cache uses LMDB, a small embeddable key-value store. This makes maintaining AppStream much easier, and we are using the same well-tested codepaths for caching now that we also use for normal XML reading/writing. With this change, AppStream also uses even less memory, as we only keep the software components in memory that the software center currently displays. Everything that isn’t directly needed also isn’t in memory. But if we do need the data, it can be pulled from the memory-mapped store very quickly.

While refactoring the caching code, I also decided to give people using libappstream in their own projects a lot more control over the caching behavior. Previously, libappstream was magically handling the cache behind the back of the application that was using it, guessing which behavior was best for the given usecase. But actually, the application using libappstream knows best how caching should be handled, especially when it creates more than one AsPool instance to hold and search metadata. Therefore, libappstream will still pick the best defaults it can, but give the application that uses it all control it needs, down to where to place a cache file, to permit more efficient and more explicit management of caches.

Validator improvements

(Since v0.12.8) The AppStream metadata validator, run via appstreamcli validate <file>, is the tool that each Metainfo file should be run through to ensure it conforms to the AppStream specification and to get useful hints for improving the metadata quality. It knows four issue severities:

  • Pedantic issues are hidden by default (show them with the --pedantic flag) and affect upcoming features or “nice to have” things that are completely nonessential.
  • Info issues are not directly a problem, but are hints to improve the metadata and get better overall data. Things the specification recommends but doesn’t mandate also fall into this category.
  • Warnings result in degraded metadata but don’t make the file invalid in its entirety. Yet, they are severe enough that we fail the validation. An example is a screenshot URL that no longer resolves: most of the data is still valid, but the result may not look as intended. Invalid email addresses, invalid tag properties etc. fall into this category as well: they all reduce the amount of metadata systems have available. So the metadata should definitely be warning-free in order to be valid.
  • Errors are outright violations of the specification that will likely result in the data being ignored in its entirety or large chunks of it being invalid. Malformed XML or invalid SPDX license expressions fall into this group.

Previously, the validator would always show very long explanations for all the issues it found, giving detailed information on each one. While this was nice if there were few issues, it produced very noisy output and made it harder to quickly spot the actual error. So, the whole validator output was changed to be based on issue tags, a concept also known from other lint tools such as Debian’s Lintian: Each issue has its own tag string identifying it. By default, we only show the tag string, the line of the issue, its severity and the component name it affects, as well as a short excerpt of the invalid value (in case that’s applicable to the issue). If people want detailed information, they can get it by passing --explain to the validation command. This solution has many advantages:

  • It makes the output concise and easy to read by humans and is mostly already self-explanatory
  • Machines can parse the tags easily and identify which issue was emitted, which is very helpful for AppStream’s own testsuite but also for any tool wanting to parse the output
  • We can now have translators translate the explanatory texts

Initially, I didn’t want to have the validator return translated output, as that may be less helpful and harder to search the web for. But now, with the untranslated issue tags and much longer and better explanatory texts, it makes sense to trust the translators to translate the technical explanations well.

Of course, this change broke any tool that was parsing the old output. I had an old request from people to have appstreamcli return machine-readable validator output, so they could integrate it better with preexisting CI pipelines and issue-reporting software. Therefore, the tool can now return structured, machine-readable output in YAML format if you pass --format=yaml to it. That output is guaranteed to be stable and can be parsed by whatever CI machinery a project already has running. If needed, other output formats could be added in the future, but for now YAML is the only one, and people generally seem to be happy with it.

Create desktop-entry files from Metainfo

(Since v0.12.9) As you may have noticed, an AppStream Metainfo file contains some information that a desktop-entry file also contains. Yet, the two file formats serve very different purposes: A desktop file is basically launch instructions for an application, with some information about how it is displayed. A Metainfo file is mostly display information and few to no launch instructions. Admittedly though, there is quite a bit of overlap, which may make it useful for some projects to simply generate a desktop-entry file from a Metainfo file. This may not work for all projects, most notably ones where multiple desktop-entry files exist for just one AppStream component. But for the simplest and most common of cases, a direct mapping between Metainfo and desktop-entry file, this option is viable.

The appstreamcli tool permits this now, using the appstreamcli make-desktop-file subcommand. It just needs a Metainfo file as first parameter and a desktop-entry output file as second parameter. If the desktop-entry file already exists, it will be extended with the new data from the Metainfo file. For the Exec field in the desktop-entry file, appstreamcli will read the first binary entry in a provides tag, or use an explicitly provided line passed via the --exec parameter.

Please take a look at the appstreamcli(1) manual page for more information on how to use this useful feature.

Convert NEWS files to Metainfo and vice versa

(Since v0.12.9) Writing the XML for release entries in Metainfo files can sometimes be a bit tedious. To make this easier and to integrate better with existing workflows, two new subcommands for appstreamcli are now available: news-to-metainfo and metainfo-to-news. They permit converting a NEWS textfile to Metainfo XML and vice versa, and can be integrated with an application’s build process. Take a look at AppStream itself on how it uses that feature.

In addition to generating the NEWS output or reading it, there is also a second YAML-based option available. Since YAML is a structured format, more of the features of AppStream release metadata are available in the format, such as marking development releases as such. You can use the --format flag to switch the output (or input) format to YAML.

Please take a look at the appstreamcli(1) manual page for a bit more information on how to use this feature in your project.

Support for recent SPDX syntax

(Since v0.12.10) This has been a pain point for quite a while: SPDX is a project supported by the Linux Foundation to (mainly) provide a unified syntax to identify licenses of open source projects. However, they changed the license syntax twice in incompatible ways, and AppStream had already implemented a previous version, so we could not simply jump to the latest version without supporting the old one.

With the latest release of AppStream though, the software should transparently convert between the different version identifiers and also support the most recent SPDX license expressions, including the WITH operator for license exceptions. Please report any issues if you see them!

Future Plans?

First of all, congratulations for reading this far into the blog post! I hope you liked the new features! In case you skipped here, welcome to one of the most interesting sections of this blog post! 😉

So, what is next for AppStream? The 1.0 release, of course! The project is certainly mature enough to warrant that, and originally I wanted to get the 1.0 release out of the door this February, but it doesn’t look like that date is still realistic. But what does “1.0” actually mean for AppStream? Well, here is a list of the intended changes:

  • Removal of almost all deprecated parts of the specification. Some things will remain supported forever though: For example the desktop component type is technically deprecated for desktop-application but is so widely used that we will support it forever. Things like the old application node will certainly go though, and so will the /usr/share/appdata path as metainfo location, the appcategory node that nobody uses anymore and all other legacy cruft. I will be mindful about this though: If a feature still has a lot of users, it will stay supported, potentially forever. I am closely monitoring what is used mainly via the information available via the Debian archive. As a general rule of thumb though: A file for which appstreamcli validate passes today is guaranteed to work and be fine with AppStream 1.0 as well.
  • Removal of all deprecated API in libappstream. If your application still uses API that is flagged as deprecated, consider migrating to the supported functions and you should be good to go! There are a few bigger refactorings planned for some of the API around releases and data serialization, but in general I don’t expect this to be hard to port.
  • The 1.0 specification will be covered by an extended stability promise. When a feature is deprecated, there will be no risk that it is removed or becomes unsupported (so the removal of deprecated stuff in the specification should only happen once). What is in the 1.0 specification will quite likely be supported forever.

So, what is holding up the 1.0 release besides the API cleanup work? Well, there are a few more points I want to resolve before releasing the 1.0 release:

  • Resolve hosting release information at a remote location, not in the Metainfo file (#240): This will be a disruptive change that will need API adjustments in libappstream for sure, and certainly will – if it happens – need the 1.0 release. Fetching release data from remote locations as opposed to having it installed with software makes a lot of sense, and I either want to have this implemented and specified properly for the 1.0 release, or have it explicitly dismissed.
  • Mobile friendliness / controls metadata (#192 & #55): We need some way to identify applications as “works well on mobile”. I also work for a company called Purism which happens to make a Linux-based smartphone, so this is obviously important for us. But it also is very relevant for users and other Linux mobile projects. The main issue here is to define what “mobile” actually means and what information makes sense to have in the Metainfo file to be future-proof. At the moment, I think we should definitely have data on supported input controls for a GUI application (touch vs mouse), but for this the discussion is still not done.
  • Resolving addon component type complexity (lots of issue reports): At the moment, an addon component can be created to extend an existing application with almost anything: a plugin, a theme, a wallpaper, extra content, etc. All of these live in the addon supergroup of components. This makes it difficult for applications and software centers to group addons into useful categories – a plugin is functionally very different from a theme. Therefore I intend to possibly allow components to name “addon classes” they support, so that addons can sort themselves into them, allowing easy grouping and sorting of addons. This would of course add extra complexity, so this feature will either go into the 1.0 release, or be rejected.
  • Zero pending feature requests for the specification: Any remaining open feature request for the specification itself in AppStream’s issue tracker should either be accepted & implemented, or explicitly deferred or rejected.

I am not sure yet when the todo list will be completed, but I am certain that the 1.0 release of AppStream will happen this year, most likely before summer. Any input, especially from users of the format, is highly appreciated.

Thanks a lot to everyone who contributed or is contributing to the AppStream implementation or specification, you are great! Also, thanks to you, the reader, for using AppStream in your project 😉. I definitely will give a bit more frequent and certainly shorter updates on the project’s progress from now on. Enjoy your rich software metadata, firmware updates and screenshot videos meanwhile! 😀

on January 27, 2020 02:48 PM

An intro to MicroK8s

Ubuntu Blog

MicroK8s is the smallest, fastest multi-node Kubernetes. Single-package fully conformant lightweight Kubernetes that works on 42 flavours of Linux as well as Mac and Windows using Multipass. Perfect for: Developer workstations, IoT, Edge, CI/CD.

Anyone who’s tried to work with Kubernetes knows the pain of getting set up and running with the deployment. There are minimalist solutions on the market that reduce time-to-deployment and complexity, but these lightweight solutions come at the expense of critical extensibility and missing add-ons.

If you don’t want to spend time jumping through hoops to get Kubernetes up and running, MicroK8s gets you started in under 60 seconds.

“Canonical might have assembled the easiest way to provision a single node Kubernetes cluster”

Kelsey Hightower, Google.

Join our webinar to learn why developers choose to work with MicroK8s as a reliable, fast, small and upstream version of Kubernetes and how you can get started. The webinar will also feature the add-ons available including Kubeflow for AI/ML work, Grafana and Prometheus for monitoring, service mesh tools and more.

Watch the webinar

on January 27, 2020 02:44 PM

January 25, 2020

Write more

Stuart Langridge

I’ve written a couple of things here recently and I’d forgotten how much I enjoy doing that. I should do more of it.

Most of my creative writing energy goes into D&D, or stuff for work, or talks at conferences, or #sundayroastclub, but I think quite a lot of it is bled away by Twitter; an idea happens, and then while it’s still just an idea I tweet it and then it’s used up. There’s a certain amount of instant gratification involved in this, of course, but I think it’s like a pressure valve; because a tweet is so short, so immediate, it’s easy to release the steam in a hundred tiny bursts rather than one long exhalation. I’m not good at metaphors, but in my head this seems like one of those thermometers for charities: my creative wellspring builds up to the overflow point — call it the value of 50 — and so I tweet something which drops it back down to 48. Then it builds up again to 50 and another tweet drops it back to 48, and so on. In the old days, it’d run up to fifty and then keep going while I was consumed with the desire to write but also consumed with the time required to actually write something, and then there’d be something long and detailed and interesting which would knock me back down to thirty, or ten, or nought.

I kinda miss that. I’m not sure what to do about it, though. Swearing off Twitter isn’t really an option; even ignoring the catastrophic tsunami of FOMO that would ensue, I’d be hugely worried that if I’m not part of the conversation, part of the zeitgeist, I’d just vanish from the public discourse. Not sure my ego could cope with that.

So I’m between the devil and the deep blue sea. Neither of those are nice (which, obviously, is the point) but, like so many people before me, and I suspect me included, I think I’m going to make an effort to turn more thoughts into writing rather than into snide asides or half-finished thoughts where maybe a hundred likes will finish them.

Of course I don’t have comments, so your thoughts on this should be communicated to me via Twitter. The irony hurricane proceeds apace. (Or on your own weblog which then sends me a webmention via the form below, of course, but that’s not all that likely yet.) Check in a month whether I’ve even remotely stuck to this or if I’ve just taken the easy option.

on January 25, 2020 12:45 AM

January 24, 2020

This week, as part of my work on the Ubuntu Robotics team, I headed up to Slippery Rock University in northwestern PA to meet with Dr. Sam Thangiah and to introduce students to the Robot Operating System (ROS).  New semester, lots of new opportunities for learning!

We started with a really simple robot environment.  Check out this build! This Raspberry Pi runs an Ubuntu 18.04 image which gives it all the built-in LTS security advantages. It’s mounted on a piece of plexiglass with two motors and a motor controller board from the PiHut.  We worked through about 75 lines of sample python code which hooked the RPi.GPIO library to control the general purpose I/O pins, and we created an abstract Motor class.  This got our two-wheeled robot up and running…running right off the table. Oops.
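To give a flavour of what we built, here is a hedged sketch of such a Motor abstraction. The pin numbers and the GPIO wrapper are illustrative, not the actual classroom code; on the Pi itself the calls would go to the RPi.GPIO library.

```python
class Motor:
    """One DC motor behind a two-pin motor controller channel."""

    def __init__(self, gpio, forward_pin, backward_pin):
        # The GPIO interface is injected so the class can be exercised
        # off-robot; on the Pi you would pass a thin RPi.GPIO wrapper.
        self.gpio = gpio
        self.forward_pin = forward_pin
        self.backward_pin = backward_pin
        for pin in (forward_pin, backward_pin):
            gpio.setup(pin, "out")
            gpio.output(pin, False)

    def forward(self):
        self.gpio.output(self.forward_pin, True)
        self.gpio.output(self.backward_pin, False)

    def backward(self):
        self.gpio.output(self.forward_pin, False)
        self.gpio.output(self.backward_pin, True)

    def stop(self):
        for pin in (self.forward_pin, self.backward_pin):
            self.gpio.output(pin, False)


class FakeGPIO:
    """Records pin state so the Motor class can be tried off-hardware."""

    def __init__(self):
        self.state = {}

    def setup(self, pin, mode):
        self.state[pin] = False

    def output(self, pin, value):
        self.state[pin] = value


gpio = FakeGPIO()
left = Motor(gpio, forward_pin=17, backward_pin=18)  # hypothetical pins
left.forward()
print(gpio.state)  # {17: True, 18: False}
```

Injecting the GPIO object also let us talk about testability: the same Motor class drives real pins on the robot and a fake recorder on a laptop.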

Getting moving was just the beginning.  With a robot active in the physical world, we identified plenty of new problems to solve.  One motor ran a bit faster than the other and the robot drifted right. Sometimes one of the wheels lost traction so the robot didn’t go where we sent it.  But probably the most important problem yet to solve was to keep it from running into things… and from running off the table.

Many of these problems are solved by the Robot Operating System (ROS), the evolving product of a very active and innovative open source robotics community.  With ROS installed on the Pi, another 25 lines of python code created a ROS node listening for commands on the “/move” topic. Devices on the network were able to send motion commands directly to the robot, and we opened the door on the immense library of tools available within ROS.

Robotics can be an outstanding learning tool where the digital realm meets the physical realm.  It’s a place where a student’s code makes real, observable actions and where they can experiment with their environment.  In just over an hour, our conversations wandered over everything from basic electrical theory to mechanical engineering, including a touch of kinematics, some mathematics, and a few lines of python code to solve our problems.  If you’d like to learn more about building your own two-wheeled robot, see the “Your first robot” blog and video series by Kyle Fazzari, Canonical’s lead engineer in robotics.

Now that they’ve been given the basic building blocks, it’ll be exciting to see what a room full of motivated students can produce this semester!

on January 24, 2020 10:11 PM

Traditionally, LXD is used to create system containers: lightweight alternatives to virtual machines that use Linux container features rather than hardware virtualization.

However, starting from LXD 3.19, it is possible to create virtual machines as well. That is, now with LXD you can create both system containers and virtual machines.

In the following we see how to setup LXD for virtual machines, then start a virtual machine and use it. Finally, we go through some troubleshooting.

How to setup LXD for virtual machines

Launching LXD virtual machines requires some preparation. We need to pass some information to the virtual machine so that we can connect to it as soon as it boots up. We pass this information to the virtual machine using a LXD profile, through cloud-init.

Creating a LXD profile for virtual machines

Here is such a profile. There is a cloud-init configuration that has all the information that is passed to the virtual machine. Then, there is a config disk device that is made available to the virtual machine; from there, the VM can set up the VM-specific LXD agent.

config:
  user.user-data: |
    #cloud-config
    ssh_pwauth: yes
    users:
      - name: ubuntu
        passwd: "$6$iBF0eT1/6UPE2u$V66Rk2BMkR09pHTzW2F.4GHYp3Mb8eu81Sy9srZf5sVzHRNpHP99JhdXEVeN0nvjxXVmoA6lcVEhOOqWEd3Wm0"
        lock_passwd: false
        groups: lxd
        shell: /bin/bash
        sudo: ALL=(ALL) NOPASSWD:ALL
description: LXD profile for virtual machines
devices:
  config:
    source: cloud-init:config
    type: disk
name: vm

This profile

  • Enables password authentication in SSH (ssh_pwauth: yes)
  • Adds a non-root user ubuntu with password ubuntu. See Troubleshooting below on how to change this.
  • The password is not in a locked state.
  • The user account belongs to the lxd group, in case we want to run LXD inside the LXD virtual machine.
  • The shell is /bin/bash.
  • Can sudo to all without requiring a password.
  • Some extra configuration will be passed to the virtual machine through a config drive. Once you get a shell in the virtual machine, you can install the rest of the support by mounting this config device and running the installer.

We now need to create a profile with the above content. Here is how we do this. First, create an empty profile called vm. Then, run cat | lxc profile edit vm, paste the above profile configuration and finally hit Ctrl+D to have it saved. Alternatively, you can run lxc profile edit vm and paste the text into the editor that appears. The profile was adapted from the LXD 3.19 announcement page.

$ lxc profile create vm
$ cat | lxc profile edit vm
config:
  user.user-data: |
    #cloud-config
    ssh_pwauth: yes
    users:
      - name: ubuntu
        passwd: "$6$iBF0eT1/6UPE2u$V66Rk2BMkR09pHTzW2F.4GHYp3Mb8eu81Sy9srZf5sVzHRNpHP99JhdXEVeN0nvjxXVmoA6lcVEhOOqWEd3Wm0"
        lock_passwd: false
        groups: lxd
        shell: /bin/bash
        sudo: ALL=(ALL) NOPASSWD:ALL
description: LXD profile for virtual machines
devices:
  config:
    source: cloud-init:config
    type: disk
name: vm

$ lxc profile show vm

We have created the profile with the virtual machine-specific configuration. We now have the pieces in place to launch a LXD virtual machine.

Launching a LXD virtual machine

We launch a LXD virtual machine with the following command. It is the standard lxc launch command, with the addition of the --vm option to create a virtual machine (instead of a system container). We specify the default profile (whichever base configuration you use in your LXD installation) and on top of that we add our VM-specific configuration with --profile vm. Depending on your computer’s specifications, it takes a few seconds to launch the virtual machine, and then less than 10 seconds for it to boot up and receive an IP address from your network.

$ lxc launch ubuntu:18.04 vm1 --vm --profile default --profile vm
Creating vm1
Starting vm1
$ lxc list vm1
| NAME |  STATE  | IPV4 | IPV6 |      TYPE       | SNAPSHOTS |
| vm1  | RUNNING |      |      | VIRTUAL-MACHINE | 0         |
$ lxc list vm1
| NAME |  STATE  |        IPV4        | IPV6 |      TYPE       | SNAPSHOTS |
| vm1  | RUNNING | (eth0) |      | VIRTUAL-MACHINE | 0         |

We have enabled password authentication for SSH, which means that we can connect to the VM straight away with the following command.

$ ssh ubuntu@
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-74-generic x86_64)

* Documentation:  https://help.ubuntu.com
* Management:     https://landscape.canonical.com
* Support:        https://ubuntu.com/advantage 

System information as of Fri Jan 24 09:22:19 UTC 2020 
 System load:  0.03              Processes:             100
 Usage of /:   10.9% of 8.68GB   Users logged in:       0
 Memory usage: 15%               IP address for enp3s5:
 Swap usage:   0%

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.


Using the console in a LXD VM

LXD has the lxc console command to give you a console to a running system container and virtual machine. You can use the console to view the boot messages as they appear, and also log in using a username and password. In the LXD profile we set up a password primarily to be able to connect through the lxc console. Let’s get a shell through the console.

$ lxc console vm1
To detach from the console, press: ctrl+a q                      [NOTE: Press Enter at this point]

Ubuntu 18.04.3 LTS vm1 ttyS0

vm1 login: ubuntu
Password: **********
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-74-generic x86_64)

* Documentation:  https://help.ubuntu.com
* Management:     https://landscape.canonical.com
* Support:        https://ubuntu.com/advantage 

System information as of Fri Jan 24 09:22:19 UTC 2020 
 System load:  0.03              Processes:             100
 Usage of /:   10.9% of 8.68GB   Users logged in:       0
 Memory usage: 15%               IP address for enp3s5:
 Swap usage:   0%

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.


To exit from the console, logout from the shell first, then press Ctrl+A q.

ubuntu@vm1:~$ logout

Ubuntu 18.04.3 LTS vm1 ttyS0

vm1 login:                                               [Press Ctrl+A q]

Bonus tip: When you launch a LXD VM, you can run straight away lxc console vm1 and you get the chance to view the boot up messages of the Linux kernel in the VM as they appear.

Setting up the LXD agent inside the VM

In any VM environment the VM is separated from the host. For usability, we often add a service inside the VM that makes it easier to access the VM’s resources from the host. For LXD, this is the LXD agent, whose files are shipped in the config device that was made available to the VM. At some point in the future, the LXD virtual machine images will be adapted to set this up automatically from the config device, but for now we do it manually. First, get a shell into the virtual machine, either through SSH or lxc console. We become root and mount the config device, where we can see its files. We run ./install.sh, which makes the LXD agent service start automatically in the VM. Finally, we reboot the VM so that the changes take effect.

ubuntu@vm1:~$ sudo -i
root@vm1:~# mount -t 9p config /mnt/
root@vm1:~# cd /mnt/
root@vm1:/mnt# ls -l
total 6390
-r-------- 1 999 root      745 Jan 24 09:18 agent.crt
-r-------- 1 999 root      288 Jan 24 09:18 agent.key
dr-x------ 2 999 root        5 Jan 24 09:18 cloud-init
-rwx------ 1 999 root      595 Jan 24 09:18 install.sh
-r-x------ 1 999 root 11495360 Jan 24 09:18 lxd-agent
-r-------- 1 999 root      713 Jan 24 09:18 server.crt
dr-x------ 2 999 root        4 Jan 24 09:18 systemd
root@vm1:/mnt# ./install.sh 
Created symlink /etc/systemd/system/multi-user.target.wants/lxd-agent.service → /lib/systemd/system/lxd-agent.service.
Created symlink /etc/systemd/system/multi-user.target.wants/lxd-agent-9p.service → /lib/systemd/system/lxd-agent-9p.service.

LXD agent has been installed, reboot to confirm setup.
To start it now, unmount this filesystem and run: systemctl start lxd-agent-9p lxd-agent
root@vm1:/mnt# reboot

Now the LXD Agent service is running in the VM. We are ready to use the LXD VM just like a LXD system container.

Using a LXD virtual machine

By installing the LXD agent inside the LXD VM, we can run the usual LXD commands such as lxc exec, lxc file, etc. Here is how to get a shell, either using the built-in alias lxc shell, or lxc exec to get a shell with the non-root account of the Ubuntu container images (from the repository ubuntu:).

$ lxc shell vm1
root@vm1:~# logout
$ lxc exec vm1 -- sudo --user ubuntu --login

We can transfer files between the host and the LXD virtual machine. We create a file mytest.txt on the host. We push that file to the virtual machine vm1. The destination of the push is vm1/home/ubuntu/, where vm1 is the name of the virtual machine (or system container). It is a bit weird that we do not use : to separate the name from the path, just like in SSH and elsewhere. The reason is that : is used to specify a remote LXD server, so it cannot be used to separate the name from the path. We then perform a recursive pull of the ubuntu home directory and place it in /tmp. Finally, we have a look at the retrieved directory.

$ echo "This is a test" > mytest.txt
$ lxc file push mytest.txt vm1/home/ubuntu/
$ lxc file pull --recursive vm1/home/ubuntu/ /tmp/
$ ls -ld /tmp/ubuntu/
drwxr-xr-x 4 myusername myusername 4096 Jan  28 01:00 /tmp/ubuntu/

We can view the lxc info of the virtual machine.

$ lxc info vm1 
Name: vm1
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/01/27 20:20 UTC
Status: Stopped
Type: virtual-machine
Profiles: default, vm

Other functionality that is available to system containers should also become available to virtual machines in the following months.


Error: unknown flag: --vm

You will get this error message when you try to launch a virtual machine while your version of LXD is 3.18 or older. VM support was added in LXD 3.19, so your version should be 3.19 or newer.

Error: Failed to connect to lxd-agent

You have launched a LXD VM and are trying to connect to it using lxc exec to get a shell (or run other commands). The LXD VM needs a service running inside it to receive the lxc exec commands. This service has not been installed into the LXD VM yet, or for some reason it is not running.

Error: The LXD VM does not automatically get an IP address

The LXD virtual machine should be able to get an IP address from LXD’s dnsmasq without issues.

macvlan works as well, but the address would not show up in lxc list vm1 until you set up the LXD Agent.

$ lxc list vm1
| NAME |  STATE  |         IPV4         | IPV6 |      TYPE       | SNAPSHOTS |
| vm1  | RUNNING | (enp3s5) |      | VIRTUAL-MACHINE | 0         |

I created a LXD VM and did not have to do any preparation at all!

When you lxc launch or lxc init with the aim to create a LXD VM, you need to remember to pass the --vm option in order to create a virtual machine instead of a container. To verify whether your newly created machine is a system container or a virtual machine, run lxc list; it shows the type under the TYPE column.

How do I change the VM password in the LXD profile?

You can generate a new password hash using the following command. We are not required to echo -n in this case because mkpasswd will take care of the newline for us. We use the SHA-512 method because it has been the default password-hashing algorithm since Ubuntu 16.04.

$  echo "mynewpassword" | mkpasswd --method=SHA-512 --stdin

Then, run lxc profile edit vm and replace the old password field with your new one.

How do I set my public key instead of a password?

Instead of passwd, use ssh-authorized-keys. See the cloud-init example on ssh-authorized-keys.
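For illustration, the users entry in the profile’s cloud-init data could look like the following sketch (the key shown is a placeholder; substitute your own public key):

```yaml
users:
  - name: ubuntu
    ssh-authorized-keys:
      - ssh-ed25519 AAAA... user@host
```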


In LXD 3.19 there is initial support for virtual machines. As new versions of LXD are developed, more features from system containers will be implemented for virtual machines as well. In April 2020 we will be getting LXD 4.0, with long-term support for five to ten years. There is ongoing work to add as much virtual machine functionality as possible before the LXD 4.0 feature freeze. If you are affected, it makes sense to follow the development of virtual machine support in LXD closely as it heads towards that freeze.

on January 24, 2020 10:08 AM

January 23, 2020

Episode 74 – WSL by Nuno do Carmo (part 1). And here comes the continuation of the story: two Ubuntus and one Windows walk into a bar and… You know the drill: listen, comment and share!

  • https://ulsoy.org/blog/experiencing-wsl-as-a-linux-veteran-part-1/
  • https://meta.wikimedia.org/wiki/WikiCon_Portugal
  • https://www.humblebundle.com/books/python-machine-learning-packt-books?partner=PUP
  • https://www.humblebundle.com/books/holiday-by-makecation-family-projects-books?partner=PUP
  • https://stackoverflow.com/questions/56979849/dbeaver-ssh-tunnel-invalid-private-key
  • https://fosdem.org
  • https://github.com/PixelsCamp/talks
  • https://pixels.camp/


This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – recording, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–at–gmail.com.

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think it is worth well over 15 dollars, so if you can, pay a little extra, since you have the option of paying as much as you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, and is licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on January 23, 2020 10:45 PM

This is a follow-up to the End of Life warning sent earlier this month to confirm that as of today (Jan 23, 2020), Ubuntu 19.04 is no longer supported. No more package updates will be accepted to 19.04, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The original End of Life warning follows, with upgrade instructions:

Ubuntu announced its 19.04 (Disco Dingo) release almost 9 months ago, on April 18, 2019. As a non-LTS release, 19.04 has a 9-month support cycle and, as such, the support period is now nearing its end and Ubuntu 19.04 will reach end of life on Thursday, Jan 23rd.

At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 19.04.

The supported upgrade path from Ubuntu 19.04 is via Ubuntu 19.10. Instructions and caveats for the upgrade may be found at:


Ubuntu 19.10 continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:


Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Thu Jan 23 21:13:01 UTC 2020 by Adam Conrad, on behalf of the Ubuntu Release Team

on January 23, 2020 10:19 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, 208 work hours were dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

Though December was as quiet as was to be expected due to the holiday season, the usual amount of security updates were still released by our contributors.
We currently have 59 LTS sponsors, together sponsoring 219 hours each month. Still, as always, we are welcoming new LTS sponsors!

The security tracker currently lists 34 packages with a known CVE and the dla-needed.txt file has 33 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


on January 23, 2020 06:19 PM
Our favorite Disco Dingo, Ubuntu Studio 19.04, has reached end-of-life and will no longer receive any updates. If you have not yet upgraded, please do so now or forever lose the ability to upgrade! Ubuntu Studio 20.04 LTS is scheduled for April of 2020. The transition from 19.10 to 20.04... Continue reading
on January 23, 2020 12:00 AM

January 21, 2020

New Website!

Ubuntu Studio

Ubuntu Studio has had the same website design for nearly 9 years. Today, that changed. We were approached by Shinta from Playmain, asking if they could contribute to the project by designing a new website theme for us. Today, after months of correspondence and collaboration, we are proud to unveil... Continue reading
on January 21, 2020 06:48 PM

January 19, 2020

Number word sequences

Stuart Langridge

I was idly musing about number sequences, and the Lychrel algorithm. If you don’t know about this, there’s a good Numberphile video on it: basically, take any number, reverse it, add the two, and if you get a palindrome stop, and if you don’t, keep doing it. So start with, say, 57, reverse to get 75, add them to get 57+75=132, which isn’t a palindrome, so do it again; reverse 132 to get 231, add to get 132+231=363, and that’s a palindrome, so stop. There are a bunch of interesting questions that can be asked about this process (which James Grime goes into in the video), among which are: does this always terminate? What’s the longest chain before termination? And so on. 196 famously hasn’t terminated so far and it’s been tried for several billion iterations.
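The reverse-and-add step is easy to sketch in code. This is my own illustrative Python, not anything from the video:

```python
def reverse_and_add(n):
    """One Lychrel step: add a number to its digit reversal."""
    return n + int(str(n)[::-1])

def chain_to_palindrome(n, max_steps=1000):
    """Iterate reverse-and-add, returning the chain of sums up to the
    first palindrome, or None if none appears within max_steps."""
    chain = []
    for _ in range(max_steps):
        n = reverse_and_add(n)
        chain.append(n)
        if str(n) == str(n)[::-1]:
            return chain
    return None

print(chain_to_palindrome(57))                  # [132, 363]
print(chain_to_palindrome(196, max_steps=100))  # None, so far as anyone knows
```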

Anyway, I was thinking about another such iterative process. Take a number, express it in words, then add up the values of all the letters in the words, and do it again. So 1 becomes ONE, and ONE is 15, 14, 5 (O is the fifteenth letter of the alphabet, N the fourteenth, and so on), so we add 15+14+5 to get 34, which becomes THIRTY FOUR, and so on. (We skip spaces and dashes; just the letters.)

Take a complete example: let’s start with 4.

  • 4 -> FOUR -> 6+15+21+18 = 60
  • 60 -> SIXTY -> 19+9+24+20+25 = 97
  • 97 -> NINETY-SEVEN -> 14+9+14+5+20+25+19+5+22+5+14 = 152
  • 152 -> ONE HUNDRED AND FIFTY-TWO -> 15+14+5+8+21+14+4+18+5+4+1+14+4+6+9+6+20+25+20+23+15 = 251
  • 251 -> TWO HUNDRED AND FIFTY-ONE -> 20+23+15+8+21+14+4+18+5+4+1+14+4+6+9+6+20+25+15+14+5 = 251

and 251 is a fixed point: it becomes itself. So we stop there, because we’re now in an infinite loop.
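The iteration is simple to sketch in Python. The number-to-words helper below only covers 0 to 999, which is enough for the chain starting at 4; the real experiment used the num2words library.

```python
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def to_words(n):
    """British English words for 0 <= n <= 999."""
    if n < 20:
        return ONES[n]
    if n < 100:
        return TENS[n // 10] + ("-" + ONES[n % 10] if n % 10 else "")
    head = ONES[n // 100] + " hundred"
    return head + (" and " + to_words(n % 100) if n % 100 else "")

def letter_sum(n):
    """Sum of letter values (a=1 ... z=26), skipping spaces and dashes."""
    return sum(ord(c) - ord("a") + 1 for c in to_words(n) if c.isalpha())

def sequence(n):
    """Follow the iteration until a number repeats."""
    seen, seq = set(), [n]
    while n not in seen:
        seen.add(n)
        n = letter_sum(n)
        seq.append(n)
    return seq

print(sequence(4))  # [4, 60, 97, 152, 251, 251]
```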

A graph of this iterative process, starting at 4

Do all numbers eventually go into a loop? Do all numbers go into the same loop — that is, do they all end up at 251?

It’s hard to tell. (Well, it’s hard to tell for me. Some of you may see some easy way to prove this, in which case do let me know.) Me being me, I wrote a little Python programme to test this out (helped immeasurably by the Python 3 num2words library). As I discovered before, if you’re trying to pick out patterns in a big graph of numbers which all link to one another, it’s a lot easier to have graphviz draw you pretty pictures, so that’s what I did.

I’ve run numbers up to 5000 or so (after that I got a bit bored waiting for answers; it’s not recreational mathematics if I have to wait around, it’s a job for which I’m not getting paid). And it looks like numbers settle out into a tiny island which ends up at 251, a little island which ends up at 285, and a massive island which ends up at 259, all of which become themselves. (You can see an image of the first 500 numbers and how they end up; extending that up to 5000 just makes the islands larger, it doesn’t create new islands… and the diagrams either get rather unwieldy or they get really big and they’re hard to display.)

A graph of the first 500 numbers and their connections

I have a theory that (a) yes all numbers end up in a fixed point and (b) there probably aren’t any more fixed points. Warning: dubious mathematical assertions lie ahead.

There can’t be that many numbers that encode to themselves. This is both because I’ve run it up to 5000 and there aren’t, and because it just seems kinda unlikely and coincidental. So, we assume that the fixed points we have are most or all of the fixed points available. Now, every number has to end up somewhere; the process can’t just keep going forever. So, if you keep generating numbers, you’re pretty likely at some point to hit a number you’ve already hit, which ends up at one of the fixed points. And finally, the numbers-to-words process doesn’t grow as fast as actual numbers do. Once you’ve got over a certain limit, you’ll pretty much always end up generating a number smaller than oneself in the next iteration. The reason I think this is that adding more to numbers doesn’t make their word lengths all that much longer. Take, for example, the longest number (in words) up to 100,000, which is (among others) 73,373, or seventy-three thousand, three hundred and seventy-three. This is 47 characters long. Even if they were all Z, which they aren’t, it’d generate 47×26=1222, which is way less than 73,373. And adding lots more doesn’t help much: if we add a million to that number, we put one million on the front of it, which is only another 10 characters, or a maximum added value of 260. There’s no actual ceiling — numbers in words still grow without limit as the number itself grows — but it doesn’t grow anywhere near as fast as the number itself does. So the numbers generally get smaller as they iterate, until they get down below four hundred or so… and all of those numbers terminate in one of the three fixed points already outlined. So I think that all numbers will terminate thus.
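The growth bound can be checked concretely for the 73,373 example above:

```python
# The spelled-out form of 73,373 used in the text above.
words = "seventy-three thousand, three hundred and seventy-three"
letters = [c for c in words if c.isalpha()]

print(len(letters))        # 47 letters
print(len(letters) * 26)   # 1222: the all-Z upper bound, far below 73373

# The actual letter-sum (a=1 ... z=26) is smaller still.
print(sum(ord(c) - ord("a") + 1 for c in letters))  # 583
```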

The obvious flaw with this argument is that it ought to apply to the reverse-and-add process above too and it doesn’t for 196 (and some others). So it’s possible that my approach will also make a Lychrel-ish number that may not terminate, but I don’t think it will; the argument above seems compelling.

You might be thinking: bloody English imperialist! What about les nombres, eh? Or die Zahlen? Did you check those? Mais oui, I checked (nice one, num2words, for supporting a zillion languages!). Same thing. There are different fixed points (French has one big island until 177, a very small island to 232, a 258, 436 pair, and 222 which encodes to itself and nothing else encodes to it, for example; well, not quite: see the update at the end, though nothing changes about the maths). Images of French and German are available, and you can of course use the Python 3 script to make your own; run it as python3 numwords.py no for Norwegian, etc. You may also be thinking “what about American English, eh? 101 is ONE HUNDRED ONE, not ONE HUNDRED AND ONE.” I have not tested this, partially because I think the above argument should still hold for it, partially because num2words doesn’t support it, and partially because that’s what you get for throwing a bunch of perfectly good tea into the ocean, but I don’t think it’d be hard to verify if someone wants to try it.

No earth-shattering revelations here, not that it matters anyway because I’m 43 and you can only win a Fields Medal if you’re under forty, but this was a fun little diversion.

Update: Minirop pointed out on Twitter that my code wasn’t correctly highlighting the “end” of a chain, which indeed it was not. I’ve poked the code, and the diagrams, to do this better; it’s apparent that both French and German have most numbers end up in a fairly large loop, rather than at one specific number. I don’t think this alters my argument for why this is likely to happen for all numbers (because a loop of numbers which all encode to one another is about as rare as a single number which encodes to itself, I’d guess), but maybe I haven’t thought about it enough!

  1. Well, 285 is part of a 285, 267, 313, 248, 284, 285 loop.
  2. This is also why the graphs use neato, which is much less pleasing a layout for this than the “tree”-style layout of dot, because the dot images end up being 32,767 pixels across and all is a disaster.
on January 19, 2020 10:02 PM

With our constant aim of innovating, today, the day we record episode 74 of our favourite podcast, we are letting everyone who reads this post in time (and is available) watch the recording of the PUP.

In the future this will be a patrons-only privilege (it's $1, come on!), but for now everyone can take part.

With this initiative we want to achieve 3 goals:

  • Give more love to our patrons;
  • Increase the number of followers we have on YouTube;
  • Increase the number of patrons.

If, at this point, you still feel like watching, just open this link a few minutes before 22.00:

on January 19, 2020 04:54 PM

January 17, 2020

Are you using Kubuntu 19.10 Eoan Ermine, our current Stable release? Or are you already running our development builds of the upcoming 20.04 LTS Focal Fossa?

We currently have Plasma 5.17.90 (Plasma 5.18 Beta) available in our Beta PPA for Kubuntu 19.10.

The 5.18 beta is also available in the main Ubuntu archive for the 20.04 development release, and can be found on our daily ISO images.

This is a Beta Plasma release, so testers should be aware that bugs and issues may exist.

If you are prepared to test, then…

For 19.10, add the PPA and then upgrade:

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt update && sudo apt full-upgrade -y

Then reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.
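A sketch of that recovery path, assuming the beta PPA was added as above (ppa-purge is available from the Ubuntu universe repository):

```shell
# Downgrade all packages from the beta PPA back to the Ubuntu archive versions
sudo apt install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/beta
```

These commands require root and an active PPA, so treat them as an administrative recipe rather than something to run blindly.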

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on IRC [1], Telegram [2] or mailing lists [3].
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the release announcement and changelog.

[Test Case]

* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.16 or 5.17?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.

* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing. e.g. “clock combobox instead of tri-state checkbox for 12/24 hour display.”
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing involves some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.


Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

[1] – irc://irc.freenode.net/kubuntu-devel
[2] – https://t.me/kubuntu_support
[3] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

on January 17, 2020 09:48 AM

January 15, 2020

KUserFeedback is a framework for collecting user feedback for applications via telemetry and surveys.

The library comes with an accompanying control and result UI tool.


Signed by Jonathan Riddell <jr@jriddell.org> 2D1D5B0588357787DE9EE225EC94D18F7F05997E

KUserFeedback as it will be used in Plasma 5.18 LTS

on January 15, 2020 04:15 PM

Some time ago, there was a thread on debian-devel where we discussed how to make Qt packages work on hardware that supports OpenGL ES but not desktop OpenGL.

My first proposal was to switch to OpenGL ES by default on ARM64, as that is the main affected architecture. After a lengthy discussion, it was decided to ship two versions of Qt packages instead, to support more (OpenGL variant, architecture) configurations.

So now I am announcing that we finally have the versions of Qt GUI and Qt Quick libraries that are built against OpenGL ES, and the release team helped us to rebuild the archive for compatibility with them. These packages are not co-installable together with the regular (desktop OpenGL) Qt packages, as they provide the same set of shared libraries. So most packages now have an alternative dependency like libqt5gui5 (>= 5.x) | libqt5gui5-gles (>= 5.x). Packages get such a dependency automatically if they are using ${shlibs:Depends}.
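For illustration, a hypothetical debian/control stanza (the package name is made up): with ${shlibs:Depends} in the Depends field, dpkg-shlibdeps fills in the alternative dependency at build time.

```
Package: myapp
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: example consumer of Qt GUI libraries
```

At build time the substvar expands to the alternative quoted above, e.g. libqt5gui5 (>= 5.x) | libqt5gui5-gles (>= 5.x), for packages linked against Qt GUI.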

These Qt packages will be mostly needed by ARM64 users; however, they may also be useful on other architectures. Note that armel and armhf are not affected, because Qt was built against OpenGL ES there from the very beginning. So far there are no plans to make two versions of Qt on these architectures; however, we are open to bug reports.

To try that on your system (running Bullseye or Sid), just run this command:

# apt install libqt5gui5-gles libqt5quick5-gles

The other Qt submodule packages do not need a second variant, because they do not use any OpenGL API directly. Most Qt applications are installable with these packages. At the moment, Plasma is not installable because plasma-desktop currently fails to build from source, but that will be fixed sooner or later.

One major missing thing is PyQt5. It is linking against some Qt helper functions that only exist for desktop OpenGL build, so we will probably need to build a special version of PyQt5 for OpenGL ES.

If you want to use any OpenGL ES specific API in your package, build it against qtbase5-gles-dev package instead of qtbase5-dev. There is no qtdeclarative5-gles-dev so far, however if you need it, please let us know.

In case you have any questions, please feel free to file a bug against one of the new packages, or contact us at the pkg-kde-talk mailing list.

on January 15, 2020 02:55 PM

January 14, 2020

Zanshin 0.5.71

Jonathan Riddell


We are happy and proud to announce the immediate availability of Zanshin 0.5.71.

This updates the code to work with current libraries and apps from Kontact.

The GPG signing key for the tar is
Jonathan Riddell with 0xEC94D18F7F05997E


on January 14, 2020 03:37 PM

January 12, 2020

Kubuntu 19.04 reaches end of life

Kubuntu General News

Kubuntu 19.04 Disco Dingo was released on April 18, 2019 with 9 months support. As of January 23, 2020, 19.04 reaches ‘end of life’. No more package updates will be accepted to 19.04, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The official end of life announcement for Ubuntu as a whole can be found here [1].

Kubuntu 19.10 Eoan Ermine continues to be supported, receiving security and high-impact bugfix updates until July 2020.

Users of 19.04 can follow the Kubuntu 19.04 to 19.10 Upgrade [2] instructions.

Should for some reason your upgrade be delayed, and you find that the 19.04 repositories have been archived to old-releases.ubuntu.com, instructions to perform an EOL Upgrade can be found on the Ubuntu wiki [3].

Thank you for using Kubuntu 19.04 Disco Dingo.

The Kubuntu team.

[1] – https://lists.ubuntu.com/archives/ubuntu-announce/2020-January/000252.html
[2] – https://help.ubuntu.com/community/EoanUpgrades/Kubuntu
[3] – https://help.ubuntu.com/community/EOLUpgrades

on January 12, 2020 11:23 PM
Lubuntu 19.04 (Disco Dingo) will reach End of Life on Thursday, January 23, 2020. This means that after that date there will be no further security updates or bugfixes released. We highly recommend that you update to 19.10 as soon as possible if you are still running 19.04. After January 23rd, the only supported releases […]
on January 12, 2020 06:50 PM

January 11, 2020

Fernando Lanero, Paco Molinero and Marcos Costales analyse privacy on the network of networks: the Internet. We also interview Paco Molinero in his role as leader of the group translating Ubuntu into Spanish on Launchpad (translation URL).

Listen to us on:

on January 11, 2020 03:56 PM

January 08, 2020

The Cybersecurity and Infrastructure Security Agency (CISA) at the United States Department of Homeland Security has been filling my inbox with updates in the past 36 hours. From an alert concerning possible cyber response to a United States military strike in Baghdad to a press release about a new analytical report concerning geopolitical tensions (which is actually a 2-page PDF file), they certainly have been issuing reports. It may be prudent in these tense geopolitical times to ensure your package updates are in fact up to date, that you’re not running unnecessary services on public-facing servers, and that your firewall definitions meet current needs.

You don’t want to wind up with an intrusion like the one the United States Government Publishing Office suffered on the website of the Federal Depository Library Program.

on January 08, 2020 01:32 AM

January 06, 2020

Amazon recently announced the AWS IAM Access Analyzer, a useful tool to help discover if you have granted unintended access to specific types of resources in your AWS account.

At the moment, an Access Analyzer needs to be created in each region of each account where you want to run it.

This manual requirement can be a lot of work, and it is a common complaint from customers. Given that Amazon listens to customer feedback, and since we currently have to specify a “type” of “ACCOUNT”, I expect that at some point Amazon will make it easier to run Access Analyzer across all regions, and maybe across all accounts in an AWS Organization. Until then…

This article shows how I created an AWS IAM Access Analyzer in all regions of all accounts in my AWS Organization using the aws-cli.


To make this easy, I use the bash helper functions that I defined in last week’s blog post here:

Running AWS CLI Commands Across All Accounts In An AWS Organization

Please read the blog post to see what assumptions I make about the AWS Organization and account setup. You may need to tweak things if your setup differs from mine.

Here is my GitHub repo that makes it more convenient for me to install the bash functions. If your AWS account structure matches mine sufficiently, it might work for you, too:


IAM Access Analyzer In All Regions Of Single Account

To start, let’s show how to create an IAM Access Analyzer in all regions of a single account.

Here’s a simple command to get all the regions in the current AWS account:

aws ec2 describe-regions \
  --output text \
  --query 'Regions[][RegionName]'

This command creates an IAM Access Analyzer in a specific region. We’ll tack on a UUID because that’s what Amazon does, though I suspect it’s not really necessary.

uuid=$(uuid -v4 -FSIV || echo "1") # may need to install "uuid" command
analyzer="accessanalyzer-$uuid"    # the name pattern is arbitrary
aws accessanalyzer create-analyzer \
   --region "$region" \
   --analyzer-name "$analyzer" \
   --type ACCOUNT

By default, there is a limit of a single IAM Access Analyzer per account region. The fact that this is a “default limit” implies that it may be increased by request, but for this guide, we’ll just not create an IAM Access Analyzer if one already exists.

This command lists the name of any IAM Access Analyzers that might already have been created in a region:

aws accessanalyzer list-analyzers \
  --region "$region" \
  --output text \
  --query 'analyzers[][name]'

We can put the above together, iterating over the regions, checking to see if an IAM Access Analyzer already exists, and creating one if it doesn’t:

regions=$(aws ec2 describe-regions \
  --output text \
  --query 'Regions[][RegionName]' |
  sort)

for region in $regions; do
  analyzer=$(aws accessanalyzer list-analyzers \
    --region "$region" \
    --output text \
    --query 'analyzers[][name]')
  if [ -n "$analyzer" ]; then
    echo "$region: EXISTING: $analyzer"
  else
    uuid=$(uuid -v4 -FSIV || echo "1") # may need to install "uuid" command
    analyzer="accessanalyzer-$uuid"
    echo "$region: CREATING: $analyzer"
    aws accessanalyzer create-analyzer \
       --region "$region" \
       --analyzer-name "$analyzer" \
       --type ACCOUNT \
       > /dev/null # only show errors
  fi
done

Creating IAM Access Analyzers In All Regions Of All Accounts

Now let’s prepare to run the above in multiple accounts using the aws-cli-multi-account-sessions bash helper functions from last week’s article:

git clone git@github.com:alestic/aws-cli-multi-account-sessions.git
source aws-cli-multi-account-sessions/functions.sh

Specify the values for source_profile and mfa_serial from your aws-cli config file. You can leave the mfa_serial empty if you aren’t using MFA:

source_profile=default # The "source_profile" in your aws-cli config
mfa_serial=arn:aws:iam::YOUR_ACCOUNTID:mfa/YOUR_USER # Your "mfa_serial"

Specify the role you can assume in all accounts:

role="admin" # Yours might be called "OrganizationAccountAccessRole"

Get a list of all accounts in the AWS Organization, and a list of all regions:

accounts=$(aws organizations list-accounts \
             --output text \
             --query 'Accounts[].[JoinedTimestamp,Status,Id,Email,Name]' |
           grep ACTIVE |
           sort |
           cut -f3) # just the ids

regions=$(aws ec2 describe-regions \
            --output text \
            --query 'Regions[][RegionName]' |
          sort)

Run this once to create temporary session credentials with MFA:

aws-session-init $source_profile $mfa_serial

Iterate through AWS accounts, running the necessary AWS CLI commands to create an IAM Access Analyzer in each account/role and each region:

for account in $accounts; do
  echo "Visiting account: $account"
  aws-session-set $account $role || continue

  for region in $regions; do
    # Run the aws-cli commands using the assume role credentials
    analyzers=$(aws-session-run \
                  aws accessanalyzer list-analyzers \
                    --region "$region" \
                    --output text \
                    --query 'analyzers[][name]')
    if [ -n "$analyzers" ]; then
      echo "$account/$region: EXISTING: $analyzers"
    else
      uuid=$(uuid -v4 -FSIV || echo "1")
      analyzer="accessanalyzer-$uuid"
      echo "$account/$region: CREATING: $analyzer"
      aws-session-run \
        aws accessanalyzer create-analyzer \
          --region "$region" \
          --analyzer-name "$analyzer" \
          --type ACCOUNT \
          > /dev/null # only show errors
    fi
  done
done

Clear out bash variables holding temporary AWS credentials:

aws-session-cleanup

In a bit, you can go to the AWS IAM Console and view what the Access Analyzers found.

Yep, you have to look at the Access Analyzer findings in each account and each region. Wouldn’t it be nice if we had some way to collect all this centrally? I think so, too, so I’m looking into what can be done there. Thoughts welcome in the comments below or on Twitter.


The following deletes all IAM Access Analyzers in all regions in the current account. You don’t need to do this if you want to leave the IAM Access Analyzers running, especially since there is no additional cost for keeping them.


source_profile=[as above]
mfa_serial=[as above]
role=[as above]

accounts=$(aws organizations list-accounts \
             --output text \
             --query 'Accounts[].[JoinedTimestamp,Status,Id,Email,Name]' |
           grep ACTIVE |
           sort |
           cut -f3) # just the ids

regions=$(aws ec2 describe-regions \
            --profile "$source_profile" \
            --output text \
            --query 'Regions[][RegionName]' |
          sort)

aws-session-init $source_profile $mfa_serial

for account in $accounts; do
  echo "Visiting account: $account"
  aws-session-set $account $role || continue

  for region in $regions; do
    # Run the aws-cli commands using the assume role credentials
    analyzers=$(aws-session-run \
                  aws accessanalyzer list-analyzers \
                    --region "$region" \
                    --output text \
                    --query 'analyzers[][name]')
    for analyzer in $analyzers; do
      echo "$account/$region: DELETING: $analyzer"
      aws-session-run \
        aws accessanalyzer delete-analyzer \
          --region "$region" \
          --analyzer-name "$analyzer"
    done
  done
done

aws-session-cleanup

Original article and comments: https://alestic.com/2020/01/aws-iam-access-analyzer/

on January 06, 2020 08:01 AM

January 04, 2020

Watching people windsurf at Blouberg beach

A lot has happened in Debian recently. I wrote separate blog entries about that but haven’t had the focus to finish them up; maybe I’ll do that later this month. In the meantime, here are some uploads I’ve done during the month of December…

Debian packaging work

2019-12-02: Upload package calamares (3.2.17-1) to Debian unstable.

2019-12-03: Upload package calamares to Debian unstable.

2019-12-04: Upload package python3-flask-caching to Debian unstable.

2019-12-04: File removal request for python3-flask-cache (BTS: #946139).

2019-12-04: Upload package gamemode (1.5~git20190812-107d469-3) to Debian unstable.

2019-12-11: Upload package gnome-shell-extension-draw-on-your-screen (5-1) to Debian unstable.

2019-12-11: Upload package xabacus (8.2.3-1) to Debian unstable.

2019-12-11: Upload package gnome-shell-extension-gamemode (4-1) to Debian unstable.

2019-12-11: Upload package gamemode (1.5~git20190812-107d469-4) to Debian unstable.

Debian package sponsoring/reviewing

2019-12-02: Sponsor package scrcpy (1.11+ds-1) for Debian unstable (mentors.debian.net request).

2019-12-03: Sponsor package python3-portend (2.6-1) for Debian unstable (Python team request).

2019-12-04: Merge MR#1 for py-postgresql (DPMT).

2019-12-04: Merge MR#1 for pyphen (DPMT).

2019-12-04: Merge MR#1 for recommonmark (DPMT).

2019-12-04: Merge MR#1 for python-simpy3 (DPMT).

2019-12-04: Merge MR#1 for gpxpy (DPMT).

2019-12-04: Sponsor package gpxpy (1.3.5-2) (Python team request).

2019-12-04: Merge MR#1 for trac-subcomponents (DPMT).

2019-12-04: Merge MR#1 for debomatic (PAPT).

2019-12-04: Merge MR#1 for archmage (PAPT).

2019-12-04: Merge MR#1 for ocrfeeder (PAPT).

2019-12-04: Sponsor package python3-tempura (1.14.1-2) for Debian unstable (Python team request).

2019-12-04: Sponsor package python-sabyenc (4.0.1-1) for Debian experimental (Python team request).

2019-12-04: Sponsor package python-yenc (0.4.0-7) for Debian unstable (Python team request).

2019-12-05: Sponsor package python-gntp (1.0.3-1) for Debian unstable (Python team request).

2019-12-05: Sponsor package python-cytoolz (0.10.1-1) for Debian unstable (Python team request).

2019-12-22: Sponsor package mwclient (0.10.0-2) for Debian unstable (Python team request).

2019-12-22: Sponsor package hyperlink (19.0.0-1) for Debian unstable (Python team request).

2019-12-22: Sponsor package drf-generators (0.4.0-1) for Debian unstable (Python team request).

2019-12-22: Sponsor package python-mongoengine (0.18.2-1) for Debian unstable (Python team request).

2019-12-22: Sponsor package libcloud (2.7.0-1) for Debian unstable (Python team request).

2019-12-22: Sponsor package pep8-naming (0.9.1-1) for Debian unstable (Python team request).

2019-12-23: Sponsor package python-django-braces (1.13.0-2) for Debian unstable (Python team request).

on January 04, 2020 11:34 AM

January 01, 2020

Catfish 1.4.12 Released

Welcome to 2020! Let's ring in the new year with a brand new Catfish release.

What's New

Wayland Support

Catfish 1.4.12 adds support for running on Wayland. Before now, there were some X-specific dependencies related to handling display sizes. These have now been resolved, and Catfish should run smoothly and consistently everywhere.

Catfish 1.4.12 on Wayland on Ubuntu 19.10

Dialog Improvements

All dialogs now utilize client-side decorations (CSD) and are modal. The main window will continue to respect the window layout setting introduced in the 1.4.10 release.

I also applied a number of fixes to the new Preferences and Search Index dialogs, so they should behave more consistently and work well with keyboard navigation.

Release Process Updates

I've improved the release process to make it easier for maintainers and to ensure builds are free of temporary files. This helps ensure a faster delivery to package maintainers, and therefore to distributions.

Translation Updates

Albanian, Catalan, Chinese (China), Chinese (Taiwan), Czech, Danish, Dutch, French, Galician, German, Italian, Japanese, Norwegian Bokmål, Russian, Serbian, Spanish, Turkish


Source tarball

$ md5sum catfish-1.4.12.tar.bz2 

$ sha1sum catfish-1.4.12.tar.bz2 

$ sha256sum catfish-1.4.12.tar.bz2 

Catfish 1.4.12 will be included in Xubuntu 20.04 "Focal Fossa", available in April.

on January 01, 2020 09:57 PM

December 30, 2019

by generating a temporary IAM STS session with MFA then assuming cross-account IAM roles

I recently had the need to run some AWS commands across all AWS accounts in my AWS Organization. This was a bit more difficult to accomplish cleanly than I had assumed it might be, so I present the steps here for me to find when I search the Internet for it in the future.

You are also welcome to try out this approach, though if your account structure doesn’t match mine, it might require some tweaking.

Assumptions And Background

(Almost) all of my AWS accounts are in a single AWS Organization. This allows me to ask the Organization for the list of account ids.

I have a role named “admin” in each of my AWS accounts. It has a lot of power to do things. The default cross-account admin role name for accounts created in AWS Organizations is “OrganizationAccountAccessRole”.

I start with an IAM principal (IAM user or IAM role) that the aws-cli can access through a “source profile”. This principal has the power to assume the “admin” role in other AWS accounts. In fact, that principal has almost no other permissions.

I require MFA whenever a cross-account IAM role is assumed.

You can read about how I set up AWS accounts here, including the above configuration:

Creating AWS Accounts From The Command Line With AWS Organizations

I use and love the aws-cli and bash. You should, too, especially if you want to use the instructions in this guide.

I jump through some hoops in this article to make sure that AWS credentials never appear in command lines, in the shell history, or in files, and are not passed as environment variables to processes that don’t need them (no export).


For convenience, we can define some bash functions that will improve clarity when we want to run commands in AWS accounts. These freely use bash variables to pass information between functions.

The aws-session-init function obtains temporary session credentials using MFA (optional). These are used to generate temporary assume-role credentials for each account without having to re-enter an MFA token for each account. This function accepts an optional source profile name and MFA serial number. This is run once.

aws-session-init() {
  # Sets: source_access_key_id source_secret_access_key source_session_token
  local source_profile=${1:-${AWS_SESSION_SOURCE_PROFILE:?source profile must be specified}}
  local mfa_serial=${2:-$AWS_SESSION_MFA_SERIAL}
  local token_code=
  local mfa_options=
  if [ -n "$mfa_serial" ]; then
    read -s -p "Enter MFA code for $mfa_serial: " token_code
    mfa_options="--serial-number $mfa_serial --token-code $token_code"
  fi
  read -r source_access_key_id \
          source_secret_access_key \
          source_session_token \
    <<<$(aws sts get-session-token \
           --profile $source_profile \
           $mfa_options \
           --output text \
           --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]')
  test -n "$source_access_key_id" && return 0 || return 1
}

The aws-session-set function obtains temporary assume-role credentials for the specified AWS account and IAM role. This is run once for each account before commands are run in that account.

aws-session-set() {
  # Sets: aws_access_key_id aws_secret_access_key aws_session_token
  local account=$1
  local role=${2:-$AWS_SESSION_ROLE}
  local name=${3:-aws-session-access}
  read -r aws_access_key_id \
          aws_secret_access_key \
          aws_session_token \
    <<<$(AWS_ACCESS_KEY_ID=$source_access_key_id \
         AWS_SECRET_ACCESS_KEY=$source_secret_access_key \
         AWS_SESSION_TOKEN=$source_session_token \
         aws sts assume-role \
           --role-arn arn:aws:iam::$account:role/$role \
           --role-session-name "$name" \
           --output text \
           --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]')
  test -n "$aws_access_key_id" && return 0 || return 1
}

The aws-session-run function runs a provided command, passing in AWS credentials in environment variables for that process to use. Use this function to prefix each command that needs to run in the currently set AWS account/role.

aws-session-run() {
  AWS_ACCESS_KEY_ID=$aws_access_key_id \
  AWS_SECRET_ACCESS_KEY=$aws_secret_access_key \
  AWS_SESSION_TOKEN=$aws_session_token \
  "$@"
}

The aws-session-cleanup function should be run once at the end, to make sure that no AWS credentials are left lying around in bash variables.

aws-session-cleanup() {
  unset source_access_key_id source_secret_access_key source_session_token
  unset    aws_access_key_id    aws_secret_access_key    aws_session_token
}

Running aws-cli Commands In Multiple AWS Accounts

After you have defined the above bash functions in your current shell, here’s an example for how to use them to run aws-cli commands across AWS accounts.

As mentioned in the assumptions, I have a role named “admin” in each account. If your role names are less consistent, you’ll need to do extra work to automate commands.

role="admin" # Yours might be called "OrganizationAccountAccessRole"

This command gets all of the account ids in the AWS Organization. You can use whatever accounts and roles you wish, as long as you are allowed to assume-role into them from the source profile.

accounts=$(aws organizations list-accounts \
             --output text \
             --query 'Accounts[].[JoinedTimestamp,Status,Id,Email,Name]' |
           grep ACTIVE |
           sort |
           cut -f3) # just the ids
echo "$accounts"

Run the initialization function, specifying the aws-cli source profile for assuming roles, and the MFA device serial number or ARN. These are the same values as you would use for source_profile and mfa_serial in the aws-cli config file for a profile that assumes an IAM role. Your “source_profile” is probably “default”. If you don’t use MFA for assuming a cross-account IAM role, then you may leave MFA serial empty.

source_profile=default # The "source_profile" in your aws-cli config
mfa_serial=arn:aws:iam::YOUR_ACCOUNTID:mfa/YOUR_USER # Your "mfa_serial"

aws-session-init $source_profile $mfa_serial

Now, let’s iterate through the AWS accounts, running simple AWS CLI commands in each account. This example will output each AWS account id followed by the list of S3 buckets in that account.

for account in $accounts; do
  # Set up temporary assume-role credentials for an account/role
  # Skip to next account if there was an error.
  aws-session-set $account $role || continue

  # Sample command 1: Get the current account id (should match)
  this_account=$(aws-session-run \
                   aws sts get-caller-identity \
                     --output text \
                     --query 'Account')
  echo "Account: $account ($this_account)"

  # Sample command 2: List the S3 buckets in the account
  aws-session-run aws s3 ls
done

Wrap up by clearing out the bash variables holding temporary credentials.

aws-session-cleanup

Note: The credentials used by this approach are all temporary and use the default expiration. If any expire before you complete your tasks, you may need to adjust some of the commands and limits in your accounts.


Thanks to my role model, Jennine Townsend, the above code uses a special bash syntax to set the AWS environment variables for the aws-cli commands without an export, which would have made the sensitive environment variables available to other commands we might need to run. I guess nothing makes you as (justifiably) paranoid as deep sysadmin experience.
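The trick is standard POSIX shell behaviour: a VAR=value prefix on a command line sets the variable only in that one command's environment, without exporting it to the current shell or to later commands. A tiny illustration (the variable name is arbitrary):

```shell
unset FOO
# The child process sees FOO, but the current shell never does.
FOO=bar sh -c 'echo "FOO is $FOO"'   # prints: FOO is bar
echo "FOO is ${FOO:-unset}"          # prints: FOO is unset
```

This is why the aws-session-run function above can hand credentials to a single aws-cli invocation while keeping them out of the environment of everything else.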

Jennine also wrote code that demonstrates the same approach of STS get-session-token with MFA followed by STS assume-role for multiple roles, but I never quite understood what she was trying to explain to me until I tried to accomplish the same result. Now I see the light.

GitHub Repo

For my convenience, I’ve added the above functions into a GitHub repo, so I can easily add them to my $HOME/.bashrc and use them in my regular work.


Perhaps you may find it convenient as well. The README provides instructions for how I set it up, but again, your environment may need tailoring.

Original article and comments: https://alestic.com/2019/12/aws-cli-across-organization-accounts/

on December 30, 2019 09:00 AM

December 29, 2019

Full Circle Weekly News #160

Full Circle Magazine

Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust

on December 29, 2019 12:30 PM

December 28, 2019


Rhonda D'Vine

I was musing about writing about this publicly. For the first time in all these years of writing pretty personal stuff about my feelings, my way of becoming more honest with myself and a more authentic person through that, I had to think about whether letting you in on this is a good idea.

You see, people have used information from my personal blog in the past, and tried to use it against me. Needless to say they failed with it, and it only showed their true face. So why does it feel different this time?

Thing is, I'm in the midst of my second puberty, and the hormones are kicking in in complete hardcore mode. And it doesn't help at all that there is trans antagonist crap from the past and also from the present popping up left and right at a pace and a concentrated amount that is hard to swallow on its own without the puberty.

Yes, I used to be able to take those things with a much more stable state. But every. Single. Of. These. Issues is draining all the energy out of myself. And even though I'm aware that I'm not the only one trying to fix all of those, even though for some spots I'm the only one doing the work, it's easier said than done that I don't have to fix the world, when the areas involved mean the world to me. Are areas that support me in so many ways. Are places that I need. And on top of that, the hormones are multiplying the energy drain of those.

So ... I know it's not that common. I know you are not used to a grown-up person going through puberty. But for god's sake. Don't make it harder than it has to be. I know it's hard to deal with a 46 year old teenager, so to say; I'm just trying to survive in this world of systematic oppression of trans people.

It would be nice to go for a week without having to cry your eyes out because another hostile event happened that directly affects your existence. The existence of trans lives isn't a matter of different opinions or different points of view, so don't treat it like that if you want me to believe that you are a person capable of empathy and basic respect.

Sidenote: Finishing writing this at this year's #36c3 is quite interesting because of the conference title: Resource Exhaustion. Oh the irony.

/personal | permanent link | Comments: 14 | Flattr this

on December 28, 2019 10:22 PM

My year on HackerOne

Riccardo Padovani

Last year, totally by chance, I found a security issue on Facebook - I reported it, and it was fixed quite fast. In 2018, I also found a security issue on Gitlab, so I signed up to HackerOne and reported it as well. That first experience with Gitlab was far from ideal, but after that first report I started reporting more, and Gitlab has improved its program a lot.


Since June 2019, when I opened my first report of the year, I have reported 27 security vulnerabilities: 4 have been marked as duplicates, 3 as informative, 2 as not applicable, 9 have been resolved, and 9 are currently confirmed with a fix in progress. All 27 vulnerabilities were reported to Gitlab.

Especially in October and November I had a lot of fun testing the implementation of ElasticSearch over Gitlab. Two of the issues I have found on this topic have already been disclosed:

Why just Gitlab?

I have an amazing daily job as Solutions Architect at Nextbit that I love. I am not interested in becoming a full-time security researcher, but I am having fun dedicating some hours every month to looking for security vulnerabilities.

However, since I don’t want it to be a job, I focus on a product I know very well, also because sometimes I contribute to it and I use it daily.

I also tried to target some programs I didn't know anything about, but I got bored quite fast: to find an interesting vulnerability you need to spend quite some time learning how the system works and how to exploit it.

Last but not least, Gitlab nowadays manages its HackerOne program in a very cool way: they are very responsive, kind, and I like that they are very transparent! You can read a lot about how their security team works in their handbook.

Can you teach me?

Since I have shared a lot of the disclosed reports on Twitter, some people have asked me to teach them how to get started in the bug bounties world. Unfortunately, I don't have any useful suggestions: I haven't studied any specific resource, and all the issues I reported this year come from a deep knowledge of Gitlab, and from what I know thanks to my daily job. There are definitely more interesting people to follow on Twitter; just check over some common hashtags, such as TogetherWeHitHarder.

Gitlab’s Contest

I am writing this blog post from my new keyboard: a custom-made WASD VP3, generously donated by Gitlab after I won a contest for their first year of public program on HackerOne. I won the best written report category, and it was a complete surprise; I am not a native English speaker, 5 years ago my English was a monstrosity (if you want to have some fun, just go read my old blog posts), and to this day I think it is still quite poor, as you can read here.

Indeed, if you have any suggestion on how to improve this text, please write me!

custom keyboard

Congratulations to Gitlab for their first year on HackerOne, and keep up the good work! Your program rocks, and in the last months you improved a lot!

HackerOne Clear

HackerOne started a new program, called HackerOne Clear, which is invitation-only and in which all researchers are vetted. I was invited and thought about accepting the invitation. However, the scope of the data that has to be shared to be vetted is definitely too wide, and to be honest I am surprised so many people accepted the invitation. HackerOne doesn't perform the check itself, but delegates it to a 3rd party, and this 3rd-party company asks for a lot of information.

I totally understand the need of background checks, and I’d be more than happy to provide my criminal record. It wouldn’t be the first time I am vetted, and I am quite sure it wouldn’t be the last.

More than the criminal record, I am puzzled about these requirements:

  • Financial history, including credit history, bankruptcy and financial judgments;
  • Employment or volunteering history, including fiduciary or directorship responsibilities;
  • Gap activities, including travel;
  • Health information, including drug tests;
  • Identity, including identifying numbers and identity documents;

Not only is the scope definitely too wide, but all these data will be stored and processed outside the EU! Personal information will be stored in the United States, Canada and Ireland. Personal information will be processed in the United States, Canada, the United Kingdom, India and the Philippines.

As a European citizen who wants to protect his privacy, I cannot accept such conditions. I've written to HackerOne asking why they need such a wide scope of data, and they replied that since it's their partner that actually collects the information, there is nothing they can do. I really hope HackerOne will require less data in the future, preserving the privacy of their researchers.


These days I've thought a lot about what I want to do with bug bounties in the future, and in 2020 I will continue as I've done in the last months: assessing Gitlab, dedicating no more than a few hours a month. I don't feel ready to step up my game at the moment. I have a lot of other interests I want to pursue in 2020 (travelling, learning German, improving my cooking skills), so I will not prioritize bug bounties for the time being.

That’s all for today, and also for 2019! It has been a lot of fun, and I wish you all a great 2020! For any comment, feedback, or criticism, write to me on Twitter (@rpadovani93) or drop an email at riccardo@rpadovani.com.



  • 29th December 2019: added a paragraph about having asked HackerOne for more information on why they need such a wide scope of personal data.
on December 28, 2019 07:00 PM

December 26, 2019

Full Circle Weekly News #159

Full Circle Magazine

Linux Mint 19.3 Beta Available
Canonical Introduces Ubuntu AWS Rolling Kernel
Purism Announces the Librem 5 USA
Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust

on December 26, 2019 01:32 PM

I had an opportunity to listen to a year-end wrap up in the most recent episode of Late Night Linux while at the gym. I encourage you, the reader, to listen to their summation. Mr. Ressington noted that an upcoming episode would deal with reviewing predictions for the year.

In light of that, I took a look back at the blog. In retrospect, I apparently did not make any predictions for 2019. The year for me was noted in When Standoffs Happen as starting with the longest shutdown of the federal government in the history of the USA. I wrote the linked post on Christmas Eve last year and had no clue I would end up working without pay for part of that 35 day crisis. After that crisis ended we wound up effectively moving from one further crisis to another at work until my sudden resignation at the start of October. The job was eating me alive. Blog posts would reflect that as time went by and a former Mastodon account would show my decline over time.

I’m not sure how to feel that my old slot apparently was not filled and people followed me in departing. Will the last one out of the Post of Duty turn out the lights?

As to what happened during my year a significant chunk is a story that can’t be told. That was the job. Significant bits of life in the past year for me are scattered across case files that I hope to never, ever see again that are held by multiple agencies. It most simply can be explained in this piece of Inspirobot.me output: “Believe in chaos. Prepare for tears.”

Frankly, I don’t think anybody could have seen the events of my 2019 coming. For three quarters of the year I was not acting but rather reacting as were most people around me. I’ve been trying to turn things around but that has been slow going. I’ve thankfully gotten back to doing some contributions in different Ubuntu community areas. I had missed doing that and the job-related restrictions I had been working under kept me away for too long. Apparently I’ve been present on AskUbuntu longer than I had thought, for example.

In short, depending on your perspective 2019 was either a great year of growth or a nightmare you’re thankful to escape. Normally you don’t want to live life on the nightmare side all that much. I look forward to 2020 as a time to rebuild and make things better.

All that being said I should roll onwards to predictions. My predictions for 2020 include:

  • There will be a “scorched earth” presidential campaign in the United States without a clear winner.
  • The 20.04 LTS will reach new records for downloads and installations across all flavours.
  • Ubuntu Core will become flight-qualified to run a lunar lander robot. It won’t be an American robot, though.
  • One of the flavours will have a proof of concept installable desktop image where everything is a snap. Redditors will not rejoice, though.
  • The Ubuntu Podcast goes on a brief hiatus in favor of further episodes of 8 Bit Versus.
  • I will finish the hard sci-fi story I am working on and get it in order to submit somewhere.
  • Erie Looking Productions will pick up an additional paying client.
  • There will be a safe design for a Raspberry Pi 4 laptop and I will switch to that as a “daily driver”.

And now for non sequitur theater…for those seeking a movie to watch between Christmas and New Year’s Eve due to the paucity of good television programming I recommend Invasion of the Neptune Men which can be found on the Internet Archive. The version there is not the version covered by Mystery Science Theater 3000, though. Other films to watch from the archives, especially if you’re still reeling in shock from the horror that is the film version of Cats, can be found by visiting https://archive.org/details/moviesandfilms.

on December 26, 2019 04:21 AM

December 24, 2019

In May 2019, my research group was invited to give short remarks on the impact of Janet Fulk and Peter Monge at the International Communication Association’s annual meeting as part of a session called “Igniting a TON (Technology, Organizing, and Networks) of Insights: Recognizing the Contributions of Janet Fulk and Peter Monge in Shaping the Future of Communication Research.”

Youtube: Mako Hill @ Janet Fulk and Peter Monge Celebration at ICA 2019

I gave a five-minute talk on Janet and Peter’s impact to the work of the Community Data Science Collective by unpacking some of the cryptic acronyms on the CDSC-UW lab’s whiteboard as well as explaining that our group has a home in the academic field of communication, in no small part, because of the pioneering scholarship of Janet and Peter. You can view the talk in WebM or on Youtube.

[This blog post was first published on the Community Data Science Collective blog.]

on December 24, 2019 05:04 PM

People logging in to Ubuntu systems via SSH or on the virtual terminals are familiar with the Message Of The Day greeter which contains useful URLs and important system information including the number of updates that need to be installed manually.

However, when starting an Ubuntu container or an Ubuntu terminal on WSL, you are dropped directly into a shell, which is far less welcoming and also hides whether there are software updates waiting to be installed:

user@host:~$ lxc shell bionic-container

To make containers and the WSL shell friendlier to new users and more informative to experts, it would be nice to show the MOTD there too, and this is exactly what the show-motd package does. The message is printed only once a day, in the first interactive shell started, to provide up-to-date information without becoming annoying. The package is now present in Ubuntu 19.10, and WSL users already get it installed when running apt upgrade.
Please give it a try and tell us what you think!
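The "only once every day" behaviour can be pictured with a small sketch. This is not the actual show-motd implementation, just the idea: remember the last day the message was shown in a per-user stamp file and compare it against today's date.

```shell
# Toy sketch of a "print only once per day" check, NOT show-motd's real
# code. The stamp file path is a hypothetical example.
show_motd_once_a_day() {
    stamp="$1"
    today=$(date +%Y-%m-%d)
    # Print only if the stamp is missing or from an earlier day.
    if [ "$(cat "$stamp" 2>/dev/null)" != "$today" ]; then
        echo "$today" > "$stamp"
        echo "(MOTD would be printed here)"
    fi
}

dir=$(mktemp -d)
show_motd_once_a_day "$dir/stamp"    # first interactive shell: prints
show_motd_once_a_day "$dir/stamp"    # later shells the same day: silent
```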

Bug reports and feature requests are welcome, and if the package proves to be useful it will be backported to current LTS releases!

on December 24, 2019 12:27 AM

December 23, 2019

When I started using Ubuntu more than a decade ago I was impressed by how well it worked out of the box. It detected most of my machine’s hardware and I could start working on it immediately. The Windows Subsystem for Linux is a much newer Ubuntu platform, and there, too, graphical applications can run without any extra configuration inside Ubuntu.

If you set up and start an X server or a PulseAudio server on Windows and start Ubuntu in WSL, the starting Ubuntu instance detects the presence of the servers and caches the configuration while the instance is running. The detection takes a few hundred milliseconds at the first start in the worst case, but using a cached configuration is fast and the delay is unnoticeable in subsequently started shells.

The detection is performed by /etc/profile.d/wsl-integration.sh from the wslu package and the configuration is cached in $HOME/.cache/wslu/integration.

If you would like to use a different X or sound configuration, like redirecting graphical applications to a remote X server you can prepopulate the cached configuration with a script running before /etc/profile.d/wsl-integration.sh or you can disable the detection logic with a similar script by making $HOME/.cache/wslu/integration an empty file.
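For the disable case, the script is tiny. The contents of a populated cache file are a wslu implementation detail, so only the empty-file case is sketched here:

```shell
# Disable wslu's X/PulseAudio detection by leaving the cache file empty.
# Run this before /etc/profile.d/wsl-integration.sh sources the cache,
# e.g. from an earlier-sorting profile.d snippet.
mkdir -p "$HOME/.cache/wslu"
: > "$HOME/.cache/wslu/integration"
```

To redirect to a remote X server instead, you would prepopulate the same file with the configuration you want cached; check the wsl-integration.sh script in your wslu version for the exact format it expects.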

on December 23, 2019 10:42 PM

Strange Creatures

Benjamin Mako Hill

I found what appears to be a “turtile” on the whiteboard in the Community Data Science lab at the University of Washington.

[See previous discussion for context.]

on December 23, 2019 08:25 PM

December 22, 2019

On Group Building

Rhonda D'Vine

Recently I have thought a lot about group building. There have been some dynamics going on in way too many communities that I am involved with, and it always came down to the same structural thing:

  • Is the group welcoming participation of everyone?
  • Is the group actively excluding people?

When put this way, I guess most people will quite directly swing towards the first option and rule out the second. The thing is, though, it's not that easy. And I'd like to explain why.

Passive vs. Active Exclusion

The story about passive exclusion

Exclusion always happens; it even has to happen, regardless of how you try to form a group. Let me explain it with an example that recently happened at an event in Germany, a conference for inter, non-binary and trans folks called "Hanse inter nichtbinär trans Tagung (HINT)". A doctor was invited who performs genital "corrective" operations on babies, something that inter people suffer from a lot and that unfortunately is still legal and a huge practice around the globe, even though there is no medical need for any of it. In turn, inter people no longer felt safe attending a conference that was specifically set out for them as part of the target audience.

And that's just one example. I could come up with a fair amount of others, like having sexually abusive people at polyamory meetups; and there is a fair amount of free-software-related discussion going on too, about abusive people in the community actively invalidating others, ridiculing them, belittling them, or about software on free software portals that specifically enables access to hate-speech sites, all with the reasoning that it's about free software after all.

All these things lead to passive exclusion. They lead to an environment that suddenly doesn't feel safe for a fair amount of people to get involved in in the first place. People who are claimed to be wanted within the community. People who are told to grow a thicker skin. People who are criticized for pointing out the discrimination and for being rightfully emotionally wound up, also about the silent bystanders, and who have to do the emotional labour themselves. And for organizers and group leads, looking away is definitely the less energy-consuming approach.

Some further fruit for thoughts:

The story with active exclusion

When you understand this and start to engage with abusive people who make others feel unsafe, you might realize: that's actually a hell of a lot of work! And an unthankful one on top of that. You suddenly have to justify your actions. You will receive abusive messages about how you could exclude that person, because they have never been abusive towards the sender so it can't be true, you are splitting the community, and whatnot. It's an unthankful job to stand up for the mistreated, because in the end it will always feel like mistreating someone else. Holding people accountable for their actions never feels good, and I totally get that. That's also likely the reason why most communities don't do it (or do it only in over-the-top extreme cases, way too late), and this is a recurring pattern because of that.

But there are these questions you always have to ask yourself when you want to create a community:

"Whom do I want to create a community for, whom do I want to have in there - and what kind of behavior works against that? What am I willing to do to create the space?"

When you have a clear view on those questions, it still might be necessary to revisit them from time to time when things pop up that you haven't thought about before. And if you mean it honestly, for a change, start to listen to the oppressed and don't add to the hurt by calling them out for their reaction of fighting for their sheer existence and survival. Being able to talk calmly about an issue is a huge privilege and in general shows that you aren't affected by it at all. And it doesn't contribute to solving the discrimination; it rather just distracts from it.

Middle ground?

One last note: active exclusion doesn't necessarily have to happen all the time. Please check in with the abused about what their needs are. Sometimes they can deal with it in a different way. Sometimes the abusers start to realize their mistake and healing can happen. Sometimes discussions are needed, or mediation, with or without the abused.

But ultimately, if you want to build any inclusive environment, you have to face the fact that you very likely will have to exclude people and be ready to do so. Because as Paula said in her toot above:

"If you give oppressors a platform, then guess what, marginalized people will leave your platform and you'll soon have a platform of dicks!"

/debian | permanent link | Comments: 0 | Flattr this

on December 22, 2019 04:12 PM

December 21, 2019

This is one of those “for my own reference on my next (pine) phone” posts, but anyone using ubuntu phone (ubports.com) may find it useful.

I use mutt (in a libertine ‘container’) on the ubuntu phone for sending email. The terminal keyboard is not bad, but one annoying thing I’ve found is that the auxiliary keyboard rows were not optimal for use in vi. The top row, for those who haven’t seen it, is a single row of extra buttons. There are several top rows to choose from, i.e. Fn keys, control keys, scroll keys, and there’s even one where buttons type out full commands like ‘clear’, ‘rm’ and ‘chmod’.

The main buttons I use are the up arrow, tab, and escape. But escape is in the ‘fn’ row, while tab and up arrow are in the scroll list. So I kept having to switch between different rows. To switch rows, you hold down a button until a popup appears allowing you to choose. This is suboptimal.

To fix this, I went into /opt/click.ubuntu.com/com.ubuntu.terminal/0.9.4/qml/KeyboardRows/Layouts and edited ScrollKeys.json. I removed the _key suffix for all the labels, which just take up space so that fewer buttons show up in one line. I copied the escape key entry from FunctionKeys.json as the first entry in ScrollKeys.json. Then I moved all other entries which preceded the tab key to the end of the file (adjusting the trailing ‘,’ as needed). Finally, I copied ScrollKeys.json to AScrollkeys.json, to make this the first keyboard row whenever I fire up the terminal. (The file ~/.config/ubuntu-terminal-app/ubuntu-terminal-app.conf supposedly orders these, but it is re-written every time the terminal starts!)
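The label cleanup part of that edit can be scripted. This is a hypothetical sketch: the `shorten_labels` helper is mine, and the click package path and version from the post may differ on your device.

```shell
# On the phone, the layout files live under (version may differ):
#   /opt/click.ubuntu.com/com.ubuntu.terminal/0.9.4/qml/KeyboardRows/Layouts

# Strip the "_key" suffix from quoted label values so the shorter labels
# let more buttons fit on one row (hypothetical helper, sketch only).
shorten_labels() {
    sed 's/_key"/"/g' "$1"
}

f=$(mktemp)
printf '"label": "tab_key"\n' > "$f"
shorten_labels "$f"    # prints: "label": "tab"
```

On the device you would redirect the output back over ScrollKeys.json (keeping a backup first), then paste in the escape entry from FunctionKeys.json by hand as described above.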

Perhaps I should add a row for ‘|’, ‘!’, and a few others which I’m always going to the second number screen for. But for now, this should speed things up.

on December 21, 2019 09:49 PM

Catfish 1.4.11 Released

What's New?

  • Radio indicators are now displayed on the layout options, making your selection clearer on all themes

Bug Fixes

  • Startup crash when GdkDisplay or GdkScreen calls return None (LP #1822914)
  • Configuration of preferred window layout (Xfce #16085)
  • Finding files in the target directory (Xfce #15985, #16233)
  • Symbolic links looping, causing search to go on forever (Xfce #16272)
  • Home (~) expansion for the start path, simplifying commandline usage:
catfish --path=~/Desktop

Translation Updates

Albanian, Belarusian, Catalan, Chinese (China), Chinese (Taiwan), Croatian, Czech, Dutch, French, German, Interlingue, Italian, Korean, Lithuanian, Malay, Norwegian Bokmål, Portuguese, Portuguese (Brazil), Russian, Serbian, Slovak, Spanish, Thai, Turkish

Source tarball

$ md5sum catfish-1.4.11.tar.bz2
$ sha1sum catfish-1.4.11.tar.bz2
$ sha256sum catfish-1.4.11.tar.bz2

Checksums are provided here since the Xfce Release Manager is not currently publishing them.
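To check a downloaded tarball against one of those digests, the usual pattern is to feed a `DIGEST  FILENAME` line to the checker's `-c` mode. Here is a self-contained demonstration on a throwaway file; for the release you would use catfish-1.4.11.tar.bz2 and the digest printed above.

```shell
# Generic verification pattern: "DIGEST  FILENAME" piped to sha256sum -c.
# Demonstrated on a temp file so the example is runnable anywhere.
dir=$(mktemp -d)
printf 'example data' > "$dir/file.tar.bz2"
digest=$(sha256sum "$dir/file.tar.bz2" | awk '{print $1}')
( cd "$dir" && echo "$digest  file.tar.bz2" | sha256sum -c - )
# prints: file.tar.bz2: OK
```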

Catfish 1.4.11 will be included in Xubuntu 20.04 "Focal Fossa", available in April.

on December 21, 2019 01:00 PM

December 18, 2019

The Grantlee community is pleased to announce the release of Grantlee version 5.2.0.

For the benefit of the uninitiated, Grantlee is a set of Qt based libraries including an advanced string template system in the style of the Django template system.

{# This is a simple template #}
{% for item in list %}
    {% if item.quantity == 0 %}
    We're out of {{ item.name }}!
    {% endif %}
{% endfor %}

This release contains a major update to the script bindings used in Grantlee to provide JavaScript implementations of custom features. Allan Jensen provided a port from the old QtScript bindings to new bindings based on the QtQml engine. This is a significant future-proofing of the library. Another feature which keeps pace with Qt is the ability to introspect classes decorated with Q_GADGET, provided by Volker Krause. Various cleanups and bug fixes make up the rest of the release. I made some effort to modernize it, as this is the last release I intend to make of Grantlee.

This release comes over 3 and a half years after the previous one, because I have difficulty coming up with new codenames for releases. Just joking, of course; I haven’t been making releases as frequently as I should have, and it is having an impact on the users of Grantlee, largely in KDE applications. To remedy that, I am submitting Grantlee for inclusion in KDE Frameworks. This will mean releases happen monthly and in an automated fashion. There is some infrastructure work to finish in order to complete that transition, so hopefully it will be done early in the new year.

on December 18, 2019 08:41 PM

December 17, 2019

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, 248.50 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

November was a quieter month again, most notably ‘#922246: www/lts: if DLA-1234-1 and DLA-1234-2 exist, only that last one shows up in indexes‘ was fixed, so that finally all DLAs show up on www.debian.org as they should.

We currently have 58 LTS sponsors each month sponsoring 215h. This month we are pleased to welcome the University of Oxford, TouchWeb and Dinahosting among our sponsors. It’s particularly interesting to see hosting providers that are creating financial incentives to migrate to newer versions: customers that don’t upgrade have to pay an extra amount which is then partly given back to Debian LTS.

The security tracker currently lists 35 packages with a known CVE and the dla-needed.txt file has 30 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

on December 17, 2019 02:47 PM

December 15, 2019

Javier Teruelo, Paco Molinero and Marcos Costales analyze the end of net neutrality and the planned obsolescence of the digital whiteboards used in schools.

Listen to us on:

on December 15, 2019 04:01 PM