November 30, 2020

MAAS provides a state-of-the-art User Interface (UI), which simplifies usage. But you may not know that MAAS also has a robust Command-line Interface (CLI), which actually provides more functionality than the UI.  Everything you can do from the UI, you can do from the CLI, but not the other way round. Let’s walk through MAAS operations using only the CLI, and look at a few jq tricks to produce human-readable CLI output.

Installing MAAS

First, installation: note that our MAAS host is named “wintermute,” which is the hostname you’ll see in the shell prompts that follow. Let’s be naive and start cold with MAAS 2.8. Step one is to install (but not initialise) the MAAS snap:

stormrider@wintermute:~$ sudo snap install maas --channel=2.8
maas (2.8/stable) 2.8.1-8567-g.c4825ca06 from Canonical installed

Looking over the MAAS initialisation modes, it looks like region+rack mode will do just fine for this install. There’s no need for the complexity of separate rack controllers just yet. First, though, there’s a decision about whether to use the POC mode (with a test DB) or do a full-up production install. The latter is the more involved of the two, so let’s go with production mode.

Production PostgreSQL

Running MAAS in production mode means a local PostgreSQL install, from packages. Like all package installs, this process begins with a quick package update:

stormrider@wintermute:~$ sudo apt update -y
...list of updates follows...

This will grab any packages that might be needed for the install to succeed. Install the latest PostgreSQL package, which happens to be version 12 at this writing:

stormrider@wintermute:~$ sudo apt install -y postgresql
...install messages follow...

Set up a PostgreSQL user and a suitable MAAS database, which the steps that follow will need (substitute a password of your own for the <password> placeholder):

stormrider@wintermute:~$ sudo -u postgres psql -c \
"CREATE USER \"maascli\" WITH ENCRYPTED PASSWORD '<password>'"
stormrider@wintermute:~$ sudo -u postgres createdb -O \
"maascli" "maasclidb"

Note that there’s no system response to the database creation command — the old UNIX rule of “no news is good news.” Not to worry, more than likely you would see an error message if something didn’t work.
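The shell-wide version of that rule: commands signal success with a zero exit status, which you can always check explicitly when silence makes you nervous. A generic sketch (plain shell, not MAAS-specific):

```shell
# Silent success: `true` prints nothing, but $? records its exit status.
true
echo "exit status: $?"      # prints: exit status: 0

# A failing command exits non-zero, so you can react to it.
false || echo "that failed with status $?"    # prints: that failed with status 1
```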

Next, we need to add the new database to the PostgreSQL HBA configuration by editing /etc/postgresql/12/main/pg_hba.conf and adding a line to the bottom of the file for the new maasclidb database (reload PostgreSQL afterwards, for example with sudo systemctl reload postgresql, so the change takes effect):

host     maasclidb      maascli     0/0                   md5

Finally, we can initialise MAAS, pointing it at the database we just created (again, substitute your actual password):

stormrider@wintermute:~$ sudo maas init region+rack \
--database-uri \
"postgres://maascli:<password>@localhost/maasclidb"
MAAS URL [default=]: 
...MAAS setup notifications...

There’s an important bit of feedback there, the MAAS URL, which will be needed for the CLI login. That’s followed by a running commentary on the steps MAAS is taking to initialise, ending with the following message:

MAAS has been set up.             

If you want to configure external authentication or use
MAAS with Canonical RBAC, please run

sudo maas configauth

To create admins when not using external authentication, run

sudo maas createadmin

Obviously, the next step is an easy call: run createadmin to set up an admin user:

stormrider@wintermute:~$ sudo maas createadmin
[sudo] password for stormrider:
Username: admin
Email: <anything can go here, it's not used>
Import SSH keys [] (lp:user-id or gh:user-id): xxxxxxxxxxxx

This makes for an easy install of production MAAS, all via command line (since the installation is always at the CLI).

Configuring MAAS (CLI-only)

Now that MAAS is up and running, it’s time to configure it. You can see documentation on these steps in the CLI configuration journey, part of the new RAD documentation set. Since we’re covering the full range of CLI operations, we’ll go ahead and recap the journey here.

Logging in

The first step for any new CLI operations is logging in, which requires two steps in the CLI. First, we need to get the MAAS apikey, which permits the CLI to access the MAAS API. Note that the MAAS API is actually the entry point for all MAAS actions through all access methods.

Here’s how we can retrieve and store the MAAS apikey:

stormrider@wintermute:~$ sudo maas apikey --username=admin > api-key-file

You can make sure you got a valid API key by displaying the contents of api-key-file:

stormrider@wintermute:~$ cat api-key-file

Note that whatever key you see is a secret unique to your installation; never publish a real API key. Anyway, we can now log in to MAAS. But first, let’s try maas --help, because there’s an important distinction here that often gets skipped over, causing some grief.
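For reference, a MAAS API key is three colon-separated OAuth strings (consumer key, token key, token secret). A quick sanity-check sketch with an invented key, showing how to confirm the shape of what you saved to api-key-file:

```shell
# An invented key, for illustration only; real keys are long random strings.
key='AAAAAAAAAAAAAAAAAA:BBBBBBBBBBBBBBBBBB:CCCCCCCCCCCCCCCCCC'

# A well-formed key splits into exactly three colon-separated fields.
echo "$key" | awk -F: '{print "fields:", NF}'    # prints: fields: 3
```

Against a real install, you would run the same awk check on `cat api-key-file` instead of the made-up variable.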

Getting help

In the MAAS CLI, you always get help by typing some variant of the basic command:

stormrider@wintermute:~$ maas --help

If you’re not logged in, or if you don’t type a “logged-in username” (referred to as a valid profile) after maas, you get the following, very generic help output:

usage: maas [-h] COMMAND ...

optional arguments:
  -h, --help      show this help message and exit

drill down:
    login         Log in to a remote API, and remember its 
                  description and credentials.
    logout        Log out of a remote API, purging any stored 
                  credentials.
    list          List remote APIs that have been logged-in to.
    refresh       Refresh the API descriptions of all profiles.
    init          Initialise MAAS in the specified run mode.
    config        View or change controller configuration.
    status        Status of controller services.
    migrate       Perform migrations on connected database.
    apikey        Used to manage a user's API keys. Shows 
                  existing keys unless --generate or --delete 
                  is passed.
    configauth    Configure external authentication.
    createadmin   Create a MAAS administrator account.
    changepassword
                  Change a MAAS user's password.

What you see above isn’t even half of what the MAAS CLI will do, but it’s all you get as an unrecognized user.

So now, let’s log in and try that help again, substituting the MAAS URL that was reported during init:

stormrider@wintermute:~$ maas login admin <MAAS-URL> < api-key-file

You are now logged in to the MAAS server with the profile name admin.

For help with the available commands, try:

  maas admin --help

Having logged in, you get much more detailed help:

stormrider@wintermute:~$ maas admin --help

usage: maas admin [-h] COMMAND ...

Issue commands to the MAAS region controller at

optional arguments:
  -h, --help            show this help message and exit

drill down:
    account             Manage the current logged-in user.
    bcache-cache-set    Manage bcache cache set on a machine.
    bcache-cache-sets   Manage bcache cache sets on a machine.
    bcache              Manage bcache device on a machine.
    bcaches             Manage bcache devices on a machine.
    block-device        Manage a block device on a machine.
    block-devices       Manage block devices on a machine.
    boot-resource       Manage a boot resource.
    boot-resources      Manage the boot resources.
    boot-source         Manage a boot source.
    boot-source-selection
                        Manage a boot source selection.
    boot-source-selections
                        Manage the collection of boot source 
                        selections.
    boot-sources        Manage the collection of boot sources.
    commissioning-script
                        Manage a custom commissioning script.
    commissioning-scripts
                        Manage custom commissioning scripts.
    dhcpsnippet         Manage an individual DHCP snippet.
    dhcpsnippets        Manage the collection of all DHCP 
                        snippets in MAAS.
    dnsresource         Manage dnsresource.
    dnsresource-record  Manage dnsresourcerecord.
    dnsresource-records
                        Manage DNS resource records (e.g. CNAME, 
                        MX, NS, SRV, TXT)
    dnsresources        Manage dnsresources.
    device              Manage an individual device.
    devices             Manage the collection of all the devices 
                        in the MAAS.
    discoveries         Query observed discoveries.
    discovery           Read or delete an observed discovery.
    domain              Manage domain.
    domains             Manage domains.
    events              Retrieve filtered node events.
    fabric              Manage fabric.
    fabrics             Manage fabrics.
    fan-network         Manage Fan Network.
    fan-networks        Manage Fan Networks.
    file                Manage a FileStorage object.
    files               Manage the collection of all the files in 
                        this MAAS.
    ipaddresses         Manage IP addresses allocated by MAAS.
    iprange             Manage IP range.
    ipranges            Manage IP ranges.
    interface           Manage a node's or device's interface.
    interfaces          Manage interfaces on a node.
    license-key         Manage a license key.
    license-keys        Manage the license keys.
    maas                Manage the MAAS server.
    machine             Manage an individual machine.
    machines            Manage the collection of all the machines 
                        in the MAAS.
    network             Manage a network.
    networks            Manage the networks.
    node                Manage an individual Node.
    node-results        Read the collection of commissioning
                        script results.
    node-script         Manage or view a custom script.
    node-script-result  Manage node script results.
    node-script-results
                        Manage node script results.
    node-scripts        Manage custom scripts.
    nodes               Manage the collection of all the nodes in
                        the MAAS.
    notification        Manage an individual notification.
    notifications       Manage the collection of all the 
                        notifications in MAAS.
    package-repositories
                        Manage the collection of all Package 
                        Repositories in MAAS.
    package-repository  Manage an individual package repository.
    partition           Manage partition on a block device.
    partitions          Manage partitions on a block device.
    pod                 Manage an individual pod.
    pods                Manage the collection of all the pods in 
                        the MAAS.
    rack-controller     Manage an individual rack controller.
    rack-controllers    Manage the collection of all rack 
                        controllers in MAAS.
    raid                Manage a specific RAID (Redundant Array 
                        of Independent Disks) on a machine.
    raids               Manage all RAIDs (Redundant Array of 
                        Independent Disks) on a machine.
    region-controller   Manage an individual region controller.
    region-controllers  Manage the collection of all region 
                        controllers in MAAS.
    resource-pool       Manage a resource pool.
    resource-pools      Manage resource pools.
    sshkey              Manage an SSH key.
    sshkeys             Manage the collection of all the SSH keys 
                        in this MAAS.
    sslkey              Manage an SSL key.
    sslkeys             Operations on multiple keys.
    space               Manage space.
    spaces              Manage spaces.
    static-route        Manage static route.
    static-routes       Manage static routes.
    subnet              Manage subnet.
    subnets             Manage subnets.
    tag                 Tags are properties that can be 
                        associated with a Node and serve as 
                        criteria for selecting and allocating
                        nodes.
    tags                Manage all tags known to MAAS.
    user                Manage a user account.
    users               Manage the user accounts of this MAAS.
    version             Information about this MAAS instance.
    vlan                Manage a VLAN on a fabric.
    vlans               Manage VLANs on a fabric.
    vm-host             Manage an individual vm-host.
    vm-hosts            Manage the collection of all the vm-hosts 
                        in the MAAS.
    vmfs-datastore      Manage VMFS datastore on a machine.
    vmfs-datastores     Manage VMFS datastores on a machine.
    volume-group        Manage volume group on a machine.
    volume-groups       Manage volume groups on a machine.
    zone                Manage a physical zone.
    zones               Manage physical zones.

This is a profile.  Any commands you issue on this profile will
operate on the MAAS region server.

The command information you see here comes from the region 
server's API; it may differ for different profiles.  If you 
believe the API may have changed, use the command's 'refresh' 
sub-command to fetch the latest version of this help information 
from the server.

You can see that the help is considerably more detailed when you log in and apply a profile name to the help request.

Setting DNS

The very first blank field you encounter in the MAAS UI is the DNS server IP address. In the UI, most people just type “8.8.8.8” (Google’s DNS server) and forget about it. But the CLI has no box to fill in, so how do you get there? Well, setting MAAS DNS is part of the set-config command:

stormrider@wintermute:~$ maas admin maas set-config \
name=upstream_dns value="8.8.8.8"
Machine-readable output follows:

The value does not strictly need quotes here, since an IP address is continuous text with no spaces, but it seems like a good habit to just type values in quotes.
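The quoting habit pays off as soon as a value contains spaces, because the shell would otherwise split it into separate arguments. A quick illustration in plain shell (the space-separated DNS list is only an assumption about how you might pass multiple servers, not something shown by the MAAS output above):

```shell
# Quoted, a value with a space stays one argument; unquoted, it splits in two.
set -- name=upstream_dns value="8.8.8.8 8.8.4.4"
echo "argument count: $#"    # prints: argument count: 2

set -- name=upstream_dns value=8.8.8.8 8.8.4.4
echo "argument count: $#"    # prints: argument count: 3
```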

Importing images

The next thing would be to import images. Looking at the dashboard, Ubuntu 18.04 has already been imported. We can bring in some other image (like Ubuntu 16.04 LTS) just to see how that works, and also confirm that the 18.04 (default) image is actually imported. We can check 18.04 with the following command:

stormrider@wintermute:~$ maas admin boot-resources read

Machine-readable output follows:
        "id": 7,
        "type": "Synced",
        "name": "grub-efi-signed/uefi",
        "architecture": "amd64/generic",
        "resource_uri": "/MAAS/api/2.0/boot-resources/7/"
        "id": 8,
        "type": "Synced",
        "name": "grub-efi/uefi",
        "architecture": "arm64/generic",
        "resource_uri": "/MAAS/api/2.0/boot-resources/8/"
        "id": 9,
        "type": "Synced",
        "name": "grub-ieee1275/open-firmware",
        "architecture": "ppc64el/generic",
        "resource_uri": "/MAAS/api/2.0/boot-resources/9/"
        "id": 10,
        "type": "Synced",
        "name": "pxelinux/pxe",
        "architecture": "i386/generic",
        "resource_uri": "/MAAS/api/2.0/boot-resources/10/"
        "id": 1,
        "type": "Synced",
        "name": "ubuntu/bionic",
        "architecture": "amd64/ga-18.04",
        "resource_uri": "/MAAS/api/2.0/boot-resources/1/",
        "subarches": "generic,hwe-p,hwe-q,hwe-r,hwe-s,hwe-t,hwe-u,hwe-v,hwe-w,ga-16.04,ga-16.10,ga-17.04,ga-17.10,ga-18.04"
        "id": 2,
        "type": "Synced",
        "name": "ubuntu/bionic",
        "architecture": "amd64/ga-18.04-lowlatency",
        "resource_uri": "/MAAS/api/2.0/boot-resources/2/",
        "subarches": "generic,hwe-p,hwe-q,hwe-r,hwe-s,hwe-t,hwe-u,hwe-v,hwe-w,ga-16.04,ga-16.10,ga-17.04,ga-17.10,ga-18.04"
        "id": 3,
        "type": "Synced",
        "name": "ubuntu/bionic",
        "architecture": "amd64/hwe-18.04",
        "resource_uri": "/MAAS/api/2.0/boot-resources/3/",
        "subarches": "generic,hwe-p,hwe-q,hwe-r,hwe-s,hwe-t,hwe-u,hwe-v,hwe-w,ga-16.04,ga-16.10,ga-17.04,ga-17.10,ga-18.04"
        "id": 4,
        "type": "Synced",
        "name": "ubuntu/bionic",
        "architecture": "amd64/hwe-18.04-edge",
        "resource_uri": "/MAAS/api/2.0/boot-resources/4/",
        "subarches": "generic,hwe-p,hwe-q,hwe-r,hwe-s,hwe-t,hwe-u,hwe-v,hwe-w,ga-16.04,ga-16.10,ga-17.04,ga-17.10,ga-18.04,hwe-18.10,hwe-19.04"
        "id": 5,
        "type": "Synced",
        "name": "ubuntu/bionic",
        "architecture": "amd64/hwe-18.04-lowlatency",
        "resource_uri": "/MAAS/api/2.0/boot-resources/5/",
        "subarches": "generic,hwe-p,hwe-q,hwe-r,hwe-s,hwe-t,hwe-u,hwe-v,hwe-w,ga-16.04,ga-16.10,ga-17.04,ga-17.10,ga-18.04"
        "id": 6,
        "type": "Synced",
        "name": "ubuntu/bionic",
        "architecture": "amd64/hwe-18.04-lowlatency-edge",
        "resource_uri": "/MAAS/api/2.0/boot-resources/6/",
        "subarches": "generic,hwe-p,hwe-q,hwe-r,hwe-s,hwe-t,hwe-u,hwe-v,hwe-w,ga-16.04,ga-16.10,ga-17.04,ga-17.10,ga-18.04,hwe-18.10,hwe-19.04"

That’s a lot of information, but it looks like several 18.04 images have downloaded and synced. You can use grep to simplify that output:

stormrider@wintermute:~$ maas admin boot-resources read \
| grep architecture 

 "architecture": "amd64/generic",
 "architecture": "arm64/generic",
 "architecture": "ppc64el/generic",
 "architecture": "i386/generic",
 "architecture": "amd64/ga-18.04",
 "architecture": "amd64/ga-18.04-lowlatency",
 "architecture": "amd64/hwe-18.04",
 "architecture": "amd64/hwe-18.04-edge",
 "architecture": "amd64/hwe-18.04-lowlatency",
 "architecture": "amd64/hwe-18.04-lowlatency-edge",

That definitely confirms 18.04. But what are those four entries at the top? Looking at the massive JSON output, we can see that they have names like “open-firmware,” “uefi,” and “pxe.” Okay, so those are the images that PXE-boot machines, basically. But how could we sort this information out in a neat way?

Enter jq

If you’re going to use the MAAS CLI — or anything with JSON-based output — you’ll want to consider learning the command line tool jq. It’s quite handy for parsing the JSON output of the MAAS CLI. So, for example, if we want a formatted table of names and architectures, we can run the last command through jq like this:

stormrider@wintermute:~$ maas admin boot-resources read \
| jq -r '.[] | "\(.name)\t\(.architecture)"'

grub-efi-signed/uefi            amd64/generic
grub-efi/uefi                   arm64/generic
grub-ieee1275/open-firmware     ppc64el/generic
pxelinux/pxe                    i386/generic
ubuntu/bionic                   amd64/ga-18.04
ubuntu/bionic                   amd64/ga-18.04-lowlatency
ubuntu/bionic                   amd64/hwe-18.04
ubuntu/bionic                   amd64/hwe-18.04-edge
ubuntu/bionic                   amd64/hwe-18.04-lowlatency
ubuntu/bionic                   amd64/hwe-18.04-lowlatency-edge
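Tab-separated columns like these can drift out of alignment when the first field varies in length. One portable fix is to pipe the output through awk and pad the first column to a fixed width; here’s a sketch that uses printf-generated sample lines in place of the live MAAS output, so you can try it anywhere:

```shell
# Pad the first column to a fixed width so the second column lines up.
# The printf stands in for `maas admin boot-resources read | jq -r ...`.
printf 'ubuntu/bionic\tamd64/ga-18.04\ngrub-ieee1275/open-firmware\tppc64el/generic\n' \
  | awk -F'\t' '{printf "%-30s%s\n", $1, $2}'
```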

So you can see that we basically have (a) the images we need to boot machines, and (b) an 18.04 image (set) to deploy. That’s a good start, but let’s see if we can pull down another image with the CLI. We can select images with the boot-source-selections command, so let’s try that with “xenial” (Xenial Xerus, aka Ubuntu 16.04 LTS):

stormrider@wintermute:~$ maas admin boot-source-selections \
create 1 os="ubuntu" release="xenial" arches="amd64" \
subarches="*" labels="*"

Machine-readable output follows:
{
    "os": "ubuntu",
    "release": "xenial",
    "arches": [
        "amd64"
    ],
    "subarches": [
        "*"
    ],
    "labels": [
        "*"
    ],
    "boot_source_id": 1,
    "id": 2,
    "resource_uri": "/MAAS/api/2.0/boot-sources/1/selections/2/"
}

Repeat the maas admin boot-resources read command from above to confirm that you’ve captured the 16.04 versions. Importing them is now a fairly simple command:

stormrider@wintermute:~$ maas admin boot-resources import
Machine-readable output follows:
Import of boot resources started

This blog post is fairly long, so let’s pause here and continue the MAAS CLI operations process in the next post.

on November 30, 2020 10:58 PM

November 27, 2020

The KDE release service will make another bundle of releases next month, on December 10th. If you have an app in KDE released as part of this, please add any new features to this wiki page so we can include them in the announcement.

on November 27, 2020 12:01 PM

November 26, 2020

Ep 118 – Passeio

Podcast Ubuntu Portugal

We turn dull stories into fantastic adventures and grey events into true fairy tales, or else we just talk about Ubuntu and other stuff… Here is one more episode of your favourite podcast.

You know the drill: listen, subscribe and share!



You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying as much as you like.

If you are interested in other bundles not listed in the notes, use the link and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License.

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International licence (CC BY-NC-ND 4.0), the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on November 26, 2020 10:45 PM

S13E36 – Singing at the dinner table

Ubuntu Podcast from the UK LoCo

This week we have been playing DRAG. We discuss what we’ve been doing during lock down, bring you an extension of love, go over all your wonderful feedback and take a trip to ThinkPad corner.

It’s Season 13 Episode 36 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on November 26, 2020 03:00 PM

A Linux kernel for each developer team, which uses it to bring up target boards. Bespoke, built, issued, and maintained over years by the vendor. Teams that focus on building great apps, rather than figuring out hardware dependencies. Happy developers that bootstrap smart devices in no time.

This is what highly productive embedded systems development should look like. Let’s unpack that vision.


Embedded systems developers can be as productive as web, desktop or mobile developers are. Most developers don’t have to worry about hardware dependencies like kernels and BSPs; in embedded software, they still do.

Developers are in the business of building applications, not building and maintaining kernels and BSPs. Maintaining hardware-dependent software artifacts is not the developers’ job to be done. Linux vendors should take care of that burden, so developers can focus on building great embedded applications.

As devices are becoming increasingly software-defined, new development experiences become possible. Build your embedded app, generate an OS image to deliver the app, burn and boot. Congratulations, you’ve built a smart appliance.

App focus

Developer-friendly embedded Linux should just deliver apps to devices. Satellite companies don’t build their own rockets; they focus on building satellites and lease a rocket to deliver them as payloads. Yet many developer teams still have to “build the rocket” just to deliver embedded applications.

Developers would be more successful if Linux vendors made it their job to provide and maintain the scaffolding that teams need to deliver embedded apps. In such a world, teams would focus on creating apps.

The resulting app-centric development cycle boils down to booting, building and deploying. Building on top of vendor-provided scaffolding, developers would create a bootable image for their target boards. Teams would then develop apps. After testing, they would build a system image that delivers all those apps. Then burn, deploy, done.

App-centric embedded development cycle


Embedded software development practices pre-date the cloud-native and devops era. These practices come from a past when infinite compute capabilities of any architecture weren’t available on demand. They stem from a time when software was hosted locally, rather than on shared online repositories. They were made for a time when automated builds, and CI/CD were non-existent.

Building embedded software with modern CI/CD tools

Integrating embedded development toolchains with modern CI/CD tools unlocks devops-style collaboration. This means providing tools to mediate collaboration between security, product engineering, and operations focals. The security expert will push patches, product engineering will release features, and operations admins will manage device fleets. All in sync, using the same tools.

Ubuntu Core

Ubuntu Core puts developer teams first. Canonical engineers Ubuntu Core to unlock productive embedded software development workflows. Ubuntu Core comes with tools like Snapcraft, and infrastructure like IoT app stores. These capabilities enable developer teams to collaborate effectively around projects. We’ve made these things our job, delivering more than what embedded Linux vendors typically do. Our job is to free you up to do yours: inventing.

on November 26, 2020 02:07 PM

Welcome to the 2020 edition of my Hacker Holiday Gift Guide! This has been a trying year for all of us, but I sincerely hope you and your family are happy and healthy as this year comes to an end.

Table of Contents

General Security

ProtonMail Subscription

ProtonMail is a great encrypted mail provider for those with an interest in privacy or cryptography. They offer gift cards for subscriptions to both ProtonMail and ProtonVPN, their VPN service.

Encrypted Flash Drive

Datashur Pro

I know cloud storage is all the rage, but sometimes you need a local copy. Sometimes, you even need that local copy to be protected – maybe it’s user data, maybe it’s financial data, maybe it’s medical data – and hardware encryption allows you to go from one system to another without needing any special software. Additionally, it can’t be keylogged or easily compromised from software. This Datashur Pro is my choice of encrypted flash drive, but there are a number of options out there.

Cryptographic Security Key

Yubikey 5C

These devices act as a second factor for authentication, but some of them can do so much more. The Yubikey 5 can also function as a hardware security token for encryption keys and provide one-time-password functionality. Keys from Feitian Technologies support Bluetooth Low Energy in addition to NFC and USB, allowing them to work with a variety of devices. If you or your hacker are into open source, the SoloKey keys are open source hardware implementations of the specification.

Linux Basics for Hackers

Linux Basics For Hackers

I’ve been using Linux for more than two decades, so I honestly initially just bought Linux Basics for Hackers because of the awesome hacker penguin on the cover. If you’re not already familiar with Linux, but need it to grow your skillset, this is an excellent book with a focus on the Linux you need to know as an information security professional or hacker. It has a particular focus on Kali Linux, the Linux distribution popular for penetration testing, but the lessons are more broadly applicable across different security domains.

Penetration Testers & Red Teamers

These gifts are for your pentesters, red teamers, and those learning the field.

The Pentester Blueprint

The Pentester Blueprint

The Pentester Blueprint is a guide to getting started as a professional penetration tester. It’s not very technical, and it’s not going to teach your recipient how to “hack”, but it’s great career advice for those getting started in penetration testing or looking to make a career transition. It basically just came out, so it’s up-to-date (which is, of course, a perpetual issue in technical books these days). It’s written in a very easy-reading style, so it’s great for those considering the switch to pentesting.

Online Learning Labs

I can recommend several online labs, some of which offer gift cards:

Penetration Testing: A Hands-On Introduction to Hacking

Penetration Testing

Georgia Weidman’s book, “Penetration Testing: A Hands-On Introduction to Hacking” is one of the best introductory guides to penetration testing that I have seen. Even though it’s been a few years since it was released, it remains high-quality content and a great introductory guide to the space. Available via Amazon or No Starch Press. Georgia is a great speaker and teacher and well-known for her efforts to spread knowledge within the security community.

WiFi Pineapple Mark VII

WiFi Pineapple

The WiFi Pineapple is probably the best known piece of “hacking hardware”. Now in its seventh generation, it’s used for conducting WiFi security audits, on-site penetration tests, or even as a remote implant for remote penetration tests. I’ve owned several versions of the WiFi Pineapple and found that it only gets better with each generation. Especially with dual radios, it can do things like act as a client on one radio while providing an access point on the other radio.

The WiFi Pineapple does have a bit of a learning curve, but it’s a great option for those getting into the field or learning about the various types of WiFi audits and attacks. The USB ports also allow expansion if you need to add a capability not already built-in.



PoC||GTFO is an online journal for offensive security and exploitation. No Starch Press has published a pair of physical journals in a beautiful biblical style. The content is very high quality, but they’re also presented in a striking style that would go well on the bookshelf of even the most discerning hacker. Check out both Volume I and Volume II, with Volume III available for pre-order to be delivered in January.

Hardware Hackers



Tigard is a pretty cool little hardware hacker’s universal interface that I’m super excited about. Similar to my open source project, TIMEP, it’s a universal interface for SPI, I2C, JTAG, SWD, UART, and more. It’s great for examining embedded devices and IoT, and is a really well-thought-out implementation of such a board. It supports a variety of voltages and options and is even really well documented on the back of the board so you never have to figure out how to hook it up. This is great both for those new to hardware hacking as well as those experienced looking for an addition to the toolkit.

Hardware Hacker: Adventures in Making and Breaking Hardware

Hardware Hacker

Andrew “Bunnie” Huang is a well-known hardware hacker with both experience in making and breaking hardware, and Hardware Hacker: Adventures in Making and Breaking Hardware is a great guide to his experiences in those fields. It’s not a super technical read, but it’s an excellent and interesting resource on the topics.

RTL-SDR Starter Kit


Software-Defined Radio allows you to examine wireless signals between devices. This is useful if you want to take a look at how wireless doorbells, toys, and other devices work. This Nooelec kit is a great starting SDR, as is this kit from

iFixit Pro Tech Toolkit

The iFixit Pro Tech Toolkit is probably the tool I use the most during security assessments of IoT/embedded devices. This kit can get into almost anything, and the driver set in it has bits for almost anything. It has torx, security torx, hex, Phillips and slotted bits, in addition to many more esoteric bits. The kit also contains other opening tools for prying and pulling apart snap-together enclosures and devices. I will admit, I don’t think I’ve ever used the anti-static wrist strap, even if it would make sense to do so.

Young Hackers



imagiCharm by imagiLabs is a small hardware device that allows young programmers to get their first bite into programming embedded devices – or even programming in general. While I haven’t tried it myself, it looks like a great concept, and providing something hands-on looks like a clear win for encouraging students and helping them find their interest.

Mechanical Puzzles

PuzzleMaster offers a bunch of really cool mechanical puzzles and games. These include things like puzzle locks, twisty puzzles, and more. When we’re all stuck inside, why not give something hands on a try?

Friends and Family of Hackers

Bring a touch of hacking to your friends and family!

Hardware Security Keys

Yubico Security Key

A Security Key is a physical two-factor security token that makes web logins much more secure. Users touch the gold disc when signing in to verify their signin request, so even if a password gets stolen, the account won’t be compromised. These tokens are supported by sites like Google, GitHub, Vanguard, Dropbox, GitLab, Facebook, and more.

Unlike text-message-based second factors, these tokens are impossible to phish, can’t be stolen via phone-number porting attacks, and don’t depend on your phone having a charge.



Control-Alt-Hack is a hacking-themed card game. Don’t expect technical accuracy, but it’s a lot of fun to play. Featuring terms like “Entropy” and “Mission”, it brings the theme of hacking to the whole family. It’s an interesting take on things, and a really cool concept. If you’re a fan of independent board/card games and a fan of hacking, this would be a fun addition to your collection.

VPN Subscription

If your friends or family use open wireless networks (I know, maybe not as much this year), they should consider using a VPN. I currently use Private Internet Access when I need a commercial provider, but I have also used Ivacy before, as well as ProtonVPN.

Non-Security Tech

These are tech items that are not specific to the security industry/area. Great for hackers, friends of hackers, and more.

Raspberry Pi 4

Raspberry Pi 4

Okay, I probably could’ve put the Raspberry Pi 4 in almost any of these categories because it’s such a versatile tool. It can be a young hacker’s first Linux computer, it can be a penetration testing dropbox, it can be a great tool for hardware hackers, and it can be a project unto itself. The user can use it to run a home media server, a network-level ad blocker, or just get familiar with another operating system. While I’ve been a fan of the Raspberry Pi in various forms for years, the Pi 4 has a quad core processor and can come with enough memory for some powerful uses. There’s a bunch of configurations, like:



The Keysy is a small RFID duplicator. While it can be used for physical penetration testing, it’s also just super convenient if you have multiple RFID keyfobs you need to deal with (e.g., apartment, work, garage). Note that it only handles certain types of RFID cards, but most of the common standards are available and workable.

Home Automation Learning Kit

This is a really cool kit for learning about home automation with Arduino. It has sensors and inputs for learning about how home automation systems work – controlling things with relays, measuring light, temperature, etc. I love the implementation as a fake laser-cut house for the purpose of learning – it’s really clever, and makes me think it would be great for anyone into tech and automation: teens and adults wanting to learn about Arduino, security practitioners who want to examine how things could go wrong (perhaps augmented with consumer-grade products), and more.

Boogie Board Writing Tablet

Sometimes you just want to hand write something. While I’m also a fan of Field Notes Notebooks in my pocket, this Boogie Board tablet strikes me as a pretty cool option. It allows the user to write on its surface overlaid on anything of your choice (it’s transparent) and then capture the written content into iOS or Android. I love to hand write for brainstorming, some forms of note taking, and more. System diagrams are so much easier in writing than in digital format, even today.

General Offers

This is my attempt to collect special offers for the holiday season that are relevant to the hacking community. These are all subject to change, but I believe them correct at the time of writing.

No Starch Press

No Starch Press is possibly the highest quality tech book publisher. Rather than focusing on the quantity of books published, they only accept books that will be high quality. I own at least a couple of dozen of their books, and they have been consistently well written, with high-quality coverage of their topics. They are currently offering 33.7% off their entire catalog for Black Friday (through 11/29/20).

Hooligan Keys

Hooligan Keys is offering 10% off from Thanksgiving to Christmas with offer code HAPPYDAY2020.

on November 26, 2020 08:00 AM

November 24, 2020

In the last few weeks I have been asked by many people what topics we have in the Community Council and what we are doing. After a month in the Council, I want to give a first insight into what happened in the early days and what has been on my mind. Of course, these are all subjective impressions and I am not speaking here from the perspective of the Community Council, but from my own perspective.

In the beginning, of course, we had to deal with organisational issues. These include ensuring that everyone is included in the Community Council’s communication channels. There are two main channels that we use. On the one hand, we have a team channel on IRC on Freenode to exchange ideas. The channel has the advantage that you can ask the others small questions and have a relaxed chat. To reach everyone in the Council, we have set up the mailing list: community-council at

No, I haven’t yet managed to read through all the documents and threads that deal with the Community Council or how to make the community more active again. But I have already read a lot in the first month on the Community Hub and on mailing lists to get different impressions. I can only encourage everyone to get involved with constructive ideas and help us to improve the community of Ubuntu.

I haven’t worked on an international board since 2017 and had completely forgotten one topic that is more complex than in national teams: the different timezones. But after a short time we managed to find a slot that basically works for all of us, and we held the public meeting of the Council. This has taken place twice, and the second time we all managed to attend. The minutes of the meetings are publicly available: 1st Meeting and 2nd Meeting. We have decided that we will hold the meeting twice a month.

The Community Council had not been active for a year, so there were further problems with filling positions that depend on the Community Council. We had to tackle these issues as soon as possible. In the case of the Membership Board, we extended the existing memberships for a period of time, after consultation with the members concerned, so that the board’s ability to work would not be affected. After that, we launched a call for new candidates to join the board. The result of this call was that sufficient candidates were found and we can fill this board again. The new members will soon be selected and announced by us.

A somewhat more difficult issue proved to be the Local Community (LoCo) Council. Like the Community Council, it had not been staffed for some time, and as a result some local communities have fallen out of approved status, even though they applied for it. Here we also launched a call for a new LoCo Council. But even though the pain there seemed to be great, not enough candidates were found for us to fill this council and bring it to life. After a discussion on how to deal with the situation, we decided to take a step back and look at why we got into this situation and what the needs of the existing local communities are (see the log of our second meeting). This will be the subject of a committee that we will set up. In this way we will discuss a basic framework for the Ubuntu community and see what new paths we as a community can take.

As further topics, we have started the discussion about our understanding of the work of the Community Council and how we want to work. One of the results was that we want to use Launchpad in the future to manage our tasks. As another board, the Technical Board, was threatened with expiring memberships, we extended the membership of its members until the end of the year. This will allow the process for a new election to start there.

All in all, there are more exciting topics to come in the Ubuntu community in the near future. Are the current structures suitable for the community? There is no community team at Canonical at the moment, and the future cooperation between Canonical and the community has not yet been clarified. These all seem to me to be very exciting topics, and I’m happy that we are able to work on them together.

If you want to get involved in discussions about the community, you can do so at the Community Hub. You can also send us an email to the mailing list above if you have a community topic on your mind. If you want to contact me: you can do so by commenting, via Twitter or sending a mail to me: torsten.franz at

on November 24, 2020 09:30 PM

November 23, 2020

Welcome to the Ubuntu Weekly Newsletter, Issue 658 for the week of November 15 – 21, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on November 23, 2020 09:35 PM

Sliding Toward A Holiday

Stephen Michael Kellat

On a couple Telegram channels I have been radio silent for a couple days. I’ve gone silent elsewhere too. There has been a reason for this.

The health departments of Ashtabula County issued a stay at home advisory on Friday late in the day. Residents are advised to stay home as much as possible. There is also a statewide curfew in effect running from 10 PM to 5 AM each day.

This has required some hurried changes in operational circumstances. Twelve counties in Ohio have issued advisories, and they generally expire in mid-December. While compliance is notionally voluntary, there is discussion of switching to enforced orders if this does not work out.

I now have to get a sermon written as well as a service planned so that “stay at home” services can be filmed. Considering that I am having to make hand-me-down analog tech do what I want this is not simple. That I am working out of my garage also compounds the level of difficulty.

Writing continues although I am definitely not meeting the NaNoWriMo usual daily word count goals. My target is to have a new novelette to go up on Kindle Direct Publishing at the start of December. Eventually I will get a Minimum Working Example posted as to how I am putting this together in LaTeX.

There was a project to launch a linear streaming video channel for the arts in my local area. Right now we have many multiple uncoordinated efforts strewn across YouTube. With YouTube’s upcoming Terms of Service changes it does seem like the risk of uncontrolled inappropriate mid-roll advertising is increasing. This matter is on hold at the moment as other players in the project are having to handle more pressing matters due to the rapidly worsening coronavirus situation locally.

A few other plates are spinning at the moment too. In the end that’s why I’ve been a bit quiet. A holiday or two may be coming up but it is not shaping up to be a vacation for me.

on November 23, 2020 12:28 AM

November 22, 2020

Kubuntu is not Free, it is Free

Kubuntu General News

Human perception has never ceased to amaze me, and in the context of this article, it is the perception of value, and the value of contribution that I want us to think about.

Photo by Andrea Piacquadio from Pexels

It is yours in title, deed and asset

A common misperception about Open Source software is the notion of free. Many people associate free in its simplest form, that of no monetary cost, and unfortunately this ultimately leads to the second conclusion of ‘cheap’ and low quality. Proprietary commercial vendors, and their corporate marketing departments, know this and use that knowledge to focus their audience on ‘perceived value’. In some ways, being free of cost is a significant disadvantage in the open source software world, because it means there are no funds available to pay for a marketing machine to generate ‘perceived value’.

Think, for a moment, how much of a disadvantage that is when trying to develop a customer/user base.

Kubuntu is completely and wholly contribution driven. It is forged from passion and enthusiasm, built with joy and above all love. Throughout our community, users use it because they love it, supporters help users and each other, maintainers fix issues and package improvements, developers extend functionality and add features, bloggers write articles and documentation, youtubers make videos and tutorials. All these people do this because they love what they’re doing and it brings them joy doing it.

Photo by Tima Miroshnichenko from Pexels

Today Linux is cloud native, ubiquitous and dominates wholesale in the internet space, it is both general purpose, and highly specialised, robust and extensive, yet focused and detailed.

Kubuntu is a general purpose operating system designed and developed by our community to be practical and intuitive for a wide audience. It is simple and non-intrusive to install, and every day it continues to grow a larger user base of people who download it, install it and, for some, love it! Furthermore, some of those users will find their way into our community; they will see the contributions given so freely by others and be inspired to contribute themselves.

Image from Wikipedia

Anyone who has installed Windows 10 recently will attest to the extent of personal information that Microsoft asks users of its operating system to ‘contribute’. This enables the Microsoft marketing teams to further refine their messaging to resonate with your personal ‘perceived value‘ and indeed to enable that across the Microsoft portfolio of ‘partners‘!
The story is identical with Apple: the recently announced Silicon M1 seeks not only to lock Apple users into the Apple software ecosystem and their ‘partners‘, but also to lock down and tie the software to the hardware.

With this background understanding, we are able to return full circle to the subject of this article, ‘Kubuntu is not Free, it is Free‘, and furthermore, Kubuntu users are free.
Free from intrusion, profiling, targeting, and marketing; Kubuntu users are free to share, modify and improve their beloved software however they choose.

Photo by from Pexels

Let us revisit that last sentence and add some clarity. Kubuntu users are free to share and improve ‘their’ beloved software however they choose.
The critical word here is ‘their’, and that is because Kubuntu is YOUR software: not Microsoft’s, not Apple’s, and not even Canonical’s or Ubuntu’s. It is yours in title, deed and asset, and that is the value that the GNU GPL license bequeaths to you.

This ownership also empowers you, and indeed puts you as an individual in a greater place of power than the marketeers from Microsoft or Apple. You can share, distribute, promote, highlight or low-light, Kubuntu wherever, and whenever you like. Blog about it, make YouTube videos about it, share it, change it, give it away and even sell it.

How about that for perceived value?

About the Author:

Rick Timmis is a Kubuntu Councillor, and advocate. Rick has been a user and open contributor to Kubuntu for over 10 years, and a KDE user and contributor for 20

on November 22, 2020 04:32 PM

Full Circle Weekly News #191

Full Circle Magazine

on November 22, 2020 01:09 PM

November 19, 2020

Ep 117 – Influenciadores

Podcast Ubuntu Portugal

The Internet stars you most admire once again treat their vast audience to details of dubious interest from their personal lives and related matters. Here is another episode of your favourite podcast.

You know the drill: listen, subscribe and share!



You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the notes, use the link and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The intro music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, and is licensed under the terms of the [CC0 1.0 Universal License](

This episode and the image used are licensed under: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on November 19, 2020 10:45 PM

S13E35 – Opposing mirrors

Ubuntu Podcast from the UK LoCo

This week we’ve been sharpening knives and revisiting Morrowind. We round up news from the Ubuntu community and discuss our favourite picks from the tech news.

It’s Season 13 Episode 35 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on November 19, 2020 03:00 PM

November 18, 2020

Automatic and continuous testing is a fundamental part of today’s development cycle. Given a Gitlab pipeline that runs for each commit, we should enforce not only that all tests pass, but also that a sufficient number of them are present.


Photo by Pankaj Patel on Unsplash

Aren’t you convinced yet? Read 4 Benefits of CI/CD! If you don’t have a proper Gitlab pipeline to lint your code, run your test, and manage all that other annoying small tasks, you should definitely create one! I’ve written an introductory guide to Gitlab CI, and many more are available on the documentation website.

While there isn’t (unfortunately!) a magic wand to highlight whether the code is covered by enough tests and, in particular, whether those tests are of sufficiently good quality, we can nonetheless find some valuable KPIs to act on. Today, we will look at code coverage: what it indicates, what it does not, and how it can be helpful.

Code coverage

In computer science, test coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high test coverage, measured as a percentage, has had more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage.


Basically, code coverage indicates how much of your code was executed while your tests were running. Personally, I don’t find high code coverage a significant measure on its own: if tests are fallacious, or they exercise only the happy path, the coverage percentage will be high, but the tests will not actually guarantee high code quality.

On the other hand, low code coverage is definitely worrisome, because it means some parts of the code aren’t tested at all. Thus, code coverage has to be taken, like every other KPI based exclusively on lines of code, with a grain of salt.
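As a toy illustration of the metric itself (the numbers below are made up): coverage is simply the share of lines executed during the test run, expressed as a percentage.

```shell
# Toy illustration of the coverage metric (made-up numbers):
# coverage = lines executed during tests / total lines, as a percentage.
TOTAL_LINES=200
EXECUTED_LINES=150

# Use awk for the floating-point arithmetic, since plain sh only does integers.
COVERAGE=$(awk "BEGIN {printf \"%.1f\", $EXECUTED_LINES / $TOTAL_LINES * 100}")
echo "Coverage: ${COVERAGE}%"   # prints: Coverage: 75.0%
```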

Code coverage and Gitlab

Gitlab allows collecting code coverage from test suites directly from pipelines. Major information on the setup can be found in the pipelines guide and in the Gitlab CI reference guide. Since there are lots of different test suites out there, I cannot include how to configure them here. However, if you need any help, feel free to reach out to me at the contacts reported below.

With Gitlab 13.5 there is also a Test Coverage Visualization tool; check it out! Gitlab will also report code coverage statistics for pipelines over time in nice graphs under Project Analytics > Repository. Data can also be exported as CSV! We will use such data to check whether, in the commit, the code coverage decreased compared to the main branch.

This means that any new code has to be tested at least as much as the rest of the code. Of course, this strategy can easily be changed: the check is only one line of bash, and can easily be replaced with a fixed threshold, or any other logic.
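For instance, a fixed-threshold variant of the check could be sketched as follows; the 80% minimum and the hard-coded coverage value are made-up examples (in the real job the value is read from the Gitlab API):

```shell
# Sketch of a fixed-threshold variant (assumed minimum of 80%).
# CURRENT_COVERAGE would normally be fetched from the Gitlab API with jq;
# a sample value is hard-coded here for illustration.
MIN_COVERAGE=80
CURRENT_COVERAGE=85.3

# Coverage values are floats, so compare with awk instead of "[ -lt ]".
if awk "BEGIN {exit !($CURRENT_COVERAGE < $MIN_COVERAGE)}"; then
  echo "Coverage ${CURRENT_COVERAGE}% is below the required ${MIN_COVERAGE}%"
  exit 1
fi
echo "Coverage ${CURRENT_COVERAGE}% meets the ${MIN_COVERAGE}% minimum"
```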

The Gitlab Pipeline Job

The job that checks the coverage runs in a stage after the testing stage. It uses alpine as base, and curl and jq to query the APIs and read the code coverage.

On self-hosted instances, or on Bronze or above, you should use a project access token to give access to the APIs. On Free, use a personal access token. If the project is public, the APIs are accessible without any token. The job needs three variables: the name of the job that generates the code coverage percentage (JOB_NAME), the target branch to compare the coverage with (TARGET_BRANCH), and a private token to read the APIs (PRIVATE_TOKEN). The job will not run when the pipeline is running on the target branch, since it would compare the code coverage with itself, wasting runner minutes for nothing.

The last line is the one providing the logic to compare the coverages.

    checkCoverage:   # job name assumed; it was not shown in the original snippet
      image: alpine:latest
      stage: postTest
      variables:
        JOB_NAME: testCoverage
        TARGET_BRANCH: main
      rules:
        # skip on the target branch: comparing coverage with itself is pointless
        - if: '$CI_COMMIT_BRANCH != "main"'
      script:
        - apk add --update --no-cache curl jq
        - TARGET_PIPELINE_ID=`curl -s "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/pipelines?ref=${TARGET_BRANCH}&status=success&private_token=${PRIVATE_TOKEN}" | jq ".[0].id"`
        - TARGET_COVERAGE=`curl -s "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/pipelines/${TARGET_PIPELINE_ID}/jobs?private_token=${PRIVATE_TOKEN}" | jq --arg JOB_NAME "$JOB_NAME" '.[] | select(.name==$JOB_NAME) | .coverage'`
        - CURRENT_COVERAGE=`curl -s "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/pipelines/${CI_PIPELINE_ID}/jobs?private_token=${PRIVATE_TOKEN}" | jq --arg JOB_NAME "$JOB_NAME" '.[] | select(.name==$JOB_NAME) | .coverage'`
        # coverage values are floats, so compare with awk rather than "[ -lt ]"
        - if awk "BEGIN {exit !($CURRENT_COVERAGE < $TARGET_COVERAGE)}"; then echo "Coverage decreased from ${TARGET_COVERAGE} to ${CURRENT_COVERAGE}"; exit 1; fi

This simple job works both on GitLab.com and on private Gitlab instances, for it doesn’t hard-code any URL.
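The jq filter used in the job can be tried locally against a saved API response; the JSON below is a made-up sample of what the jobs endpoint returns:

```shell
# Demonstrate the jq filter from the job on a made-up API response:
# select the job named "testCoverage" and print its coverage field.
JOBS_JSON='[{"name":"lint","coverage":null},{"name":"testCoverage","coverage":91.4}]'
echo "$JOBS_JSON" | jq --arg JOB_NAME "testCoverage" '.[] | select(.name==$JOB_NAME) | .coverage'
# prints: 91.4
```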

Gitlab will now block merge requests without enough tests from being merged! Again, code coverage is not a magic bullet, and you shouldn’t strive for 100% coverage: better fewer tests of high quality than more tests added just to increase the coverage. In the end, a human is always the best reviewer. However, a small reminder to write just one more test is, in my opinion, quite useful ;-)

Questions, comments, feedback, critics, suggestions on how to improve my English? Reach me on Twitter (@rpadovani93) or drop me an email at


on November 18, 2020 09:00 AM

November 17, 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, 221.50 work hours have been dispatched among 13 paid contributors. Their reports are available:
  • Abhijith PA did 16.0h (out of 14h assigned and 2h from September).
  • Adrian Bunk did 7h (out of 20.75h assigned and 5.75h from September), thus carrying over 19.5h to November.
  • Ben Hutchings did 11.5h (out of 6.25h assigned and 9.75h from September), thus carrying over 4.5h to November.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 20.75h (out of 20.75h assigned).
  • Holger Levsen did 7.0h coordinating/managing the LTS team.
  • Markus Koschany did 20.75h (out of 20.75h assigned).
  • Mike Gabriel gave back the 8h he was assigned. See below 🙂
  • Ola Lundqvist did 10.5h (out of 8h assigned and 2.5h from September).
  • Roberto C. Sánchez did 13.5h (out of 20.75h assigned) and gave back 7.25h to the pool.
  • Sylvain Beucler did 20.75h (out of 20.75h assigned).
  • Thorsten Alteholz did 20.75h (out of 20.75h assigned).
  • Utkarsh Gupta did 20.75h (out of 20.75h assigned).

Evolution of the situation

October was a regular LTS month, with an LTS team meeting held via video chat, so there’s no log to be shared. After more than five years of contributing to LTS (and ELTS), Mike Gabriel announced that he has founded a new company called Frei(e) Software GmbH and will thus leave us to concentrate on this new endeavor. Best of luck with that, Mike! So, once again, this is a good moment to remind you that we are constantly looking for new contributors. Please contact Holger if you are interested!

The security tracker currently lists 42 packages with a known CVE and the dla-needed.txt file has 39 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.


on November 17, 2020 09:06 AM

Welcome to the Ubuntu Weekly Newsletter, Issue 657 for the week of November 8 – 14, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on November 17, 2020 12:49 AM

November 15, 2020

Linux on the Desktop

Kubuntu General News

2020 has been a fascinating year, and an exciting one for Kubuntu. There seems to be a change in the market, driven by the growth in momentum of cloud native computing.

As markets shift towards creative intelligence, more users are finding themselves hampered by the daily Windows or MacOS desktop experience. Cloud native means Linux, and to interoperate seamlessly in the cloud space you need Linux.

Kubuntu Focus Linux Laptop

Here at Kubuntu we were approached in late 2019 by Mindshare Management Ltd (MSM), who wanted to work with us to bring a cloud native Kubuntu Linux laptop to market, aimed directly at competing with the MacBook Pro. As 2020 has progressed, the company has continued to grow and develop the market, releasing its second model, the Kubuntu Focus M2, in October. Their machines are not just being bought by hobbyists and tech enthusiasts; the Kubuntu Focus team have sold several high-spec machines to NASA via their Jet Propulsion Laboratory.

Lenovo launches Linux range

Lenovo also has a vision for Linux on the Desktop, and as an enterprise class vendor they know where the market is heading. The Lenovo Press Release of 20th September announced 13 machines with Ubuntu Linux installed by default.

These include 13 ThinkStation™ and ThinkPad™ P Series Workstations and an additional 14 ThinkPad T, X, X1 and L series laptops, all with the 20.04 LTS version of Ubuntu, with the exception of the L series which will have version 18.04.

When it comes to desktops, at Kubuntu, we believe the KDE desktop experience is unbeatable. In October KDE announced the release of Plasma-Desktop 5.20 as “New and improved inside and out”. Shortly after the release, the Kubuntu team set to work on building out Kubuntu with this new version of the KDE Plasma desktop.

KDE Plasma Desktop on Linux

Our open build process means that you can easily get your hands on the current developer build of Kubuntu Linux ‘Hirsute Hippo’ from our Nightly Builds Repo.

It’s been an exciting year, and 2021 looks even more promising, as we fully anticipate more vendors to bring machines to the market with Linux on the Desktop.

Even more inspiring is the fact that Kubuntu Linux is built by enthusiastic volunteers who devote their time, energy and effort. Those volunteers are just like you, they contribute what they can, when they can, and the results are awesome!

About the Author:

Rick Timmis is a Kubuntu Councillor, and advocate. Rick has been a user and open contributor to Kubuntu for over 10 years, and a KDE user and contributor for 20

on November 15, 2020 04:05 PM

November 14, 2020

I was lucky to support GitOps Days 2020 EMEA last week. The community of GitOps practitioners came together again for round two, and we saw lots of very engaged discussion and new ideas. It was a great pleasure to play at the event and bring some playfulness and silliness to the breaks in between. As last time, I found it a bit hard to read the crowd (you can’t see anyone), so I tried to pick from a variety of styles of danceable music.
on November 14, 2020 06:15 AM

November 13, 2020

Linux App Summit 2020

Jonathan Riddell

For those who don’t follow KDE’s instagram feed, get with the programme chicos!

Here’s some pics from the Linux App Summit 2020 you’ll find there

Conference opening by Aleix

Rohan had the first talk, about graphics on Linux. Rohan is an elite Linux graphics dev.

This is a reel. I’m not sure what a reel is but it’s a moving image on the Instagram.

Greg K-H gave a keynote in which he said KDE does it right: keeping libraries stable is hard (KF5 is now 6 years in). Evolve your app.

Alexis (not to be confused with Aleix) talking about AppImage Builder, which magically works out how to make an AppImage package from your running app.

MyGNUHealth is a useful system for health records with an app using Kirigami.

There’s also talks on Saturday. You can watch the live stream on Youtube. Or register and join in directly.

Just now we are enjoying a tour of Amalfi, a comune in the province of Salerno, in the region of Campania, Italy. We are learning about the lemons of Salerno and, because it’s 2020, making limoncello.

on November 13, 2020 09:16 PM

LXD gives you system containers and virtual machines, usable from the same user interface. You would rather use system containers as they are more lightweight than VMs.

Previously we have seen how to use the Kali LXD containers (including how to use a USB network adapter). There is documentation on using graphics applications (X11) in the Kali LXD containers on the Kali website. In this post we look again at how to use graphics applications (X11) in the Kali LXD containers. The aim is to simplify the instructions and make them more robust.

The following assume that you have configured LXD on your system.

Overview of the Kali LXD containers

Let’s have a look at the available Kali images. Currently, there are only container images (no VM images), for the x86_64, armel, arm64, armhf and i386 architectures. They all follow Kali current, which is very fresh. There are plain and cloud images; the latter support cloud-init and are more user-friendly. The cloud images create a non-root account for us (the username is debian).

$ lxc image list images:kali
|           ALIAS           |             DESCRIPTION             |
| kali (5 more)             | Kali current amd64 (20201112_17:14) |
| kali/arm64 (2 more)       | Kali current arm64 (20201112_17:14) |
| kali/armel (2 more)       | Kali current armel (20201112_17:14) |
| kali/armhf (2 more)       | Kali current armhf (20201112_17:14) |
| kali/cloud (3 more)       | Kali current amd64 (20201112_17:14) |
| kali/cloud/arm64 (1 more) | Kali current arm64 (20201112_17:14) |
| kali/cloud/armel (1 more) | Kali current armel (20201112_17:14) |
| kali/cloud/armhf (1 more) | Kali current armhf (20201112_17:14) |
| kali/cloud/i386 (1 more)  | Kali current i386 (20201112_17:14)  |
| kali/i386 (2 more)        | Kali current i386 (20201112_17:14)  |

From the above list, you would install either kali or kali/cloud. Since these are container images, LXD will automatically pick the one matching your host’s architecture. In our case, we are interested in the cloud variant because we want to configure the container while it starts, so that it gets X11 support in an easy way.

LXD profile for Kali containers, X11/graphics support

We are using the following LXD profile to add X11 (graphics) support for applications running in the Kali containers. Download the file, save it as x11kali.txt, and then import the profile into your LXD installation.

$ wget
$ lxc profile create x11kali
$ cat x11kali.txt | lxc profile edit x11kali
$ lxc profile show x11kali
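The downloaded x11kali.txt is not reproduced in this copy of the post. As a rough, hedged sketch, an X11 profile for LXD typically combines cloud-init package installs with an LXD proxy device for the host’s X socket, something like the following (the device names, the UID/GID of 1000, and the package list are assumptions; use the actual downloaded file):

```yaml
config:
  # DISPLAY inside the container; matched by the X0 proxy device below
  environment.DISPLAY: :0
  user.user-data: |
    #cloud-config
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
devices:
  X0:
    # Proxy the host's abstract X11 socket into the container
    type: proxy
    bind: container
    connect: unix:@/tmp/.X11-unix/X0
    listen: unix:@/tmp/.X11-unix/X0
    security.uid: "1000"
    security.gid: "1000"
  mygpu:
    # Share the host GPU so OpenGL applications work
    type: gpu
```

The proxy device is what makes X11 applications in the container appear on the host’s display; the gpu device enables hardware-accelerated OpenGL.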

Launching a Kali container with X11 support

We have the LXD profile and we are ready to launch a Kali container with X11 support. We select the Kali container image with cloud-init support and name the container xkali. We apply first the default profile, then the x11kali profile. Then we run the cloud-init status --wait command in the container so that it shows progress until the container is fully ready to be used. It should take a bit because it is updating the package list and installing a few X11 support packages.

$ lxc launch images:kali/cloud xkali --profile default --profile x11kali
Creating xkali
Starting xkali
$ lxc exec xkali -- cloud-init status --wait
status: done

Here is the container.

$ lxc list xkali
| NAME  |  STATE  |        IPV4        |   TYPE    |
| xkali | RUNNING | (eth0) | CONTAINER |

Using the Kali LXD container with X11 support

There are several ways to get a shell into a container. We are using a particular one here and creating an LXD alias for it. The following commands create a kalishell alias that gives you a non-root shell in the Kali container.

$ lxc alias add kalishell 'exec @ARGS@ --user 1000 --group 1000 --env HOME=/home/debian -- /bin/bash --login'
$ lxc alias list
| ALIAS     | TARGET     |
| kalishell | exec @ARGS@ --user 1000 --group 1000 --env HOME=/home/debian -- /bin/bash --login |

We get a shell into the Kali container and run a few commands to test X11 applications, OpenGL applications, and finally audio playback. Note that you can also run CUDA applications (not shown below).

$ lxc kalishell xkali
debian@xkali:~$ xclock
debian@xkali:~$ glxgears 
Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
debian@xkali:~$ sudo apt install -y wget
debian@xkali:~$ wget
debian@xkali:~$ paplay Demo_chorus.ogg

You can now install all sorts of packages. Note that to get the whole lot of Kali packages, you can install the kali-linux-default metapackage. If you are only interested in a specific package, you can install just that one instead.

debian@xkali:~$ sudo apt install -y kali-linux-default


You managed to get a Kali LXD container to work with X11 applications, so that they appear on the host. It is as if you were running Kali on the host, but with the filesystem and the networking properly separated from it. Still, in the setup described in this post, the X server is the host’s: there is no separation there, and you should not use this setup if you are dealing with malicious code samples or malicious hosts. You would need a separate X server for proper separation between the container and the host.

In this post we did not mention any networking configuration, and we have been using the default private bridge that LXD provides. Depending on your requirements, you might use something else, such as setting up a USB network adapter for exclusive use by the Kali container, or using bridge, macvlan, ipvlan or routed networking.

on November 13, 2020 08:02 PM

November 10, 2020

st, xft and ubuntu 20.04.1

Sebastian Schauenburg

Some time ago I switched to AwesomeWM and with that came another change, my default terminal emulator. Having used GNOME terminal for years, I soon switched to Terminator back in the day. Leaving GNOME behind, in search for a more lean desktop with less frills and more keyboard centric features, I also had to ditch that terminal emulator (it has too many dependencies for my use case). Eventually I stumbled upon st, which fit the bill.

st still seems almost perfect for me and I'm sticking with it, for now. There is one annoying bug though, which came to light when I started receiving e-mails with emoticons. Those emoticons crashed my 'st' instance!

This is actually caused by an upstream Xft bug. When emoticons are displayed, they crash st. I had to resort to using xterm sometimes, which is, well, not a great experience nowadays. I set out on a journey to fix my desktop.


So I checked the FAQ of st and found an answer to my issue:

 ## BadLength X error in Xft when trying to render emoji

 Xft makes st crash when rendering color emojis with the following error:

 "X Error of failed request:  BadLength (poly request too large or internal Xlib length error)"
   Major opcode of failed request:  139 (RENDER)
   Minor opcode of failed request:  20 (RenderAddGlyphs)
   Serial number of failed request: 1595
   Current serial number in output stream:  1818"

 This is a known bug in Xft (not st) which happens on some platforms and
 combination of particular fonts and fontconfig settings.

 See also:

 The solution is to remove color emoji fonts or disable this in the fontconfig
 XML configuration.  As an ugly workaround (which may work only on newer
 fontconfig versions (FC_COLOR)), the following code can be used to mask color
 glyphs:

     FcPatternAddBool(fcpattern, FC_COLOR, FcFalse);

 Please don't bother reporting this bug to st, but notify the upstream Xft
 developers about fixing this bug.

The solution

Checking issue 6 at xft shows me that this is an active issue. Reading the posts I found this merge request which solves the issue in xft, but it is still being worked on by Maxime Coste.

Waiting for the patch to be finalized in xft, then released and then used in my desktop distribution of choice (currently Ubuntu 20.04) will take too long (yes, I am impatient). So, I decided to patch libxft2 manually on my system, using the patch by Maxime (thank you Maxime!). I also created my own patch file, since I had merge errors. Here are the instructions:

apt-get source libxft2
patch -p0 < modified_for_ubuntu_20.04.1.patch
debuild -b -uc -us
sudo dpkg -i ../libxft2_2.3.3-0ubuntu1_amd64.deb


on November 10, 2020 09:00 PM

November 08, 2020

Full Circle Weekly News #189

Full Circle Magazine

Linux Mint Now Maintains Their Own Chromium
on November 08, 2020 12:03 PM

November 03, 2020

Given a PyTorch model, how should we put it in a Docker image, with all the related dependencies, ready to be deployed?


Photo by Michael Dziedzic on Unsplash

You know the drill: your Data Science team has created an amazing PyTorch model, and now they want you to put it in production. They give you a .pt file and some preprocessing script. What now?

Luckily, AWS and Facebook have created a project, called Torch Serve, to put PyTorch models in production, similarly to Tensorflow Serving. It is a well crafted Docker image, where you can upload your models. In this tutorial we will see how to customize the Docker image to include your model, how to install other dependencies inside it, and which configuration options are available.

We include the PyTorch model directly inside the Docker image, instead of loading it at runtime; while loading it at runtime has some advantages and makes sense in some scenarios (as in testing labs where you want to try a lot of different models), I don’t think it is suitable for production. Including the model directly in the Docker image has several advantages:

  • if you use CI/CD you can achieve reproducible builds;
  • to spawn a new instance serving your model, you only need access to your Docker registry, and not also a storage solution for the model;
  • you need to authenticate only to your Docker registry, and not to a storage solution;
  • it makes it easier to keep track of what has been deployed, because you have to check only the Docker image version, and not the model version. This is especially important if you have a cluster of instances serving your model;

Let’s now get our hands dirty and dive into what is necessary to get the Docker image running!

Building the model archive

The Torch Serve Docker image needs a model archive to work: it’s a file that bundles the model together with some configuration files. To create it, first install Torch Serve, and have a PyTorch model available somewhere on your PC.

To create this model archive, we need only one command:

torch-model-archiver --model-name <MODEL_NAME> --version <MODEL_VERSION>  --serialized-file <MODEL> --export-path <WHERE_TO_SAVE_THE_MODEL_ARCHIVE>

There are four options we need to specify in this command:

  • MODEL_NAME is an identifier to recognize the model, we can use whatever we want here: it’s useful when we include multiple models inside the same Docker image, a nice feature of Torch Serve that we won’t cover for now;
  • MODEL_VERSION is used to identify, as the name implies, the version of the model;
  • MODEL is the path, on the local PC, to the .pt file acting as the model;
  • WHERE_TO_SAVE_THE_MODEL_ARCHIVE is a local directory where Torch Serve will put the model archive it generates;

Putting it all together, the command should be something similar to:

torch-model-archiver --model-name predict_the_future --version 1.0 --serialized-file ~/models/predict_the_future --export-path model-store/

After having run it, we now have a file with .mar extension, the first step to putting our PyTorch model in production! .mar files are actually just .zip files with a different extension, so feel free to open one and analyze it to see how it works behind the scenes.

Probably some pre-processing is necessary before invoking the model. If so, we can create a handler file containing all the necessary instructions. This file can have external dependencies, so we can code an entire application in front of our model.

To include the handler file in the model archive, we need only to add the --handler flag to the command above, like this:

torch-model-archiver --model-name predict_the_future --version 1.0 --serialized-file ~/models/predict_the_future --export-path model-store/ --handler
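The handler file name after --handler is elided above; whatever it is called, a TorchServe handler can be as small as a plain Python module exposing a handle(data, context) function. Here is a hedged sketch of what such a file might contain (the JSON payload shape and the echoed "prediction" output are illustrative assumptions, not the post’s actual handler):

```python
import json

def preprocess(data):
    # TorchServe hands the handler a list of requests; each item keeps
    # its payload under "body" or "data", often as raw bytes.
    rows = []
    for item in data:
        payload = item.get("body") or item.get("data")
        if isinstance(payload, (bytes, bytearray)):
            payload = json.loads(payload)
        rows.append(payload)
    return rows

def handle(data, context):
    # Entry point TorchServe calls for every batch of requests.
    if data is None:
        return None  # warm-up call made while the model loads
    inputs = preprocess(data)
    # A real handler would run the model here; we echo the parsed
    # inputs as a stand-in prediction.
    return [{"prediction": row} for row in inputs]
```

The returned list must have one entry per request in the batch, which is what TorchServe sends back to each client.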

Create the Docker image

Now that we have the model archive, we can include it in the Torch Serve Docker image. Besides the model archive, we need to create a configuration file as well, to tell Torch Serve which model to load automatically at startup.

We need a small configuration file. Later in this tutorial we will see what its lines mean, and what other options are available.
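The configuration file embedded in the original post does not survive in this copy. As a hedged sketch, a minimal for this setup might look like the following (the bind addresses and ports are assumptions; the archive name matches the one built earlier):

```properties
model_store=/home/model-server/model-store
load_models=predict_the_future.mar
```

load_models is the line that makes Torch Serve load our model archive at startup, and model_store points at the directory the Dockerfile copies the .mar file into.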


Docker image with just the model

If we need to include just the model archive and the config file, the Dockerfile is quite straightforward: we just need to copy the files, and everything else is managed by TorchServe itself. Our Dockerfile will thus be:

FROM pytorch/torchserve as production

COPY /home/model-server/
COPY predict_the_future.mar /home/model-server/model-store

TorchServe already includes torch, torchvision, torchtext, and torchaudio, so there is no need to add them. To see the current versions of these libraries, see the requirements file of TorchServe on GitHub.

Docker image with the model and external dependencies

What if we need different Python Dependencies for our Python handler?

In this case, we want to use a two-step Docker image: in the first step we build our dependencies, and then we copy them over to the final image. We list our dependencies in a file called requirements.txt, and we use pip to install them. Pip is the package installer for Python; its documentation about the format of the requirements file is very complete.

The Dockerfile is now something like this:

ARG BASE_IMAGE=ubuntu:18.04

# Compile image loosely based on pytorch compile image
FROM ${BASE_IMAGE} AS compile-image

# Install Python and pip, and build-essentials if some requirements need to be compiled
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \
    python3-dev \
    python3-distutils \
    python3-venv \
    curl \
    build-essential \
    && rm -rf /var/lib/apt/lists/* \
    && cd /tmp \
    && curl -O \
    && python3

RUN python3 -m venv /home/venv

ENV PATH="/home/venv/bin:$PATH"

RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 1
RUN update-alternatives --install /usr/local/bin/pip pip /usr/local/bin/pip3 1

# The part above is cached by Docker for future builds
# We can now copy the requirements file from the local system
# and install the dependencies
COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

FROM pytorch/torchserve as production

# Copy dependencies after having built them
COPY --from=compile-image /home/venv /home/venv

# We use curl for health checks on AWS Fargate
USER root
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \
    curl \
    && rm -rf /var/lib/apt/lists/*

USER model-server

COPY /home/model-server/
COPY predict_the_future.mar /home/model-server/model-store

If PyTorch is among the dependencies, we should change the line that installs the requirements from

RUN pip install --no-cache-dir -r requirements.txt

to

RUN pip install --no-cache-dir -r requirements.txt -f

In this way we use the pre-built Python packages for PyTorch instead of building them from scratch: it is faster and requires fewer resources, making it suitable also for small CI/CD systems.

Configuring the Docker image

We created a configuration file above, but what does it do? Going through all the possible configurations here would be impossible, so I leave the link to the documentation. Among the other things explained there, there is a way to configure Cross-Origin Resource Sharing (necessary to use the model as an API over the web), a guide on how to enable SSL, and much more.

There is one set of configuration parameters in particular I’d like to focus on: the ones related to logging. First, for production environments, I suggest setting async_logging to true: it can delay the output a bit, but allows higher throughput. Then, it’s important to notice that by default Torch Serve captures every message, including ones with severity DEBUG. In production we probably don’t want this, especially because it can become quite verbose.

To override the default behavior, we need to create a new log4j configuration file. For more information on every possible option I suggest familiarizing yourself with the official guide. To start, copy the default Torch Serve configuration and increase the severity of the printed messages. In particular, change

 = DEBUG, ts_log
log4j.logger.ACCESS_LOG = INFO, access_log

to = WARNING, ts_log
log4j.logger.ACCESS_LOG = WARNING, access_log

We also need to copy this new file into the Docker image, so copy the logging config just after the config file:

COPY /home/model-server/
COPY /home/model-server/

We need to inform Torch Serve about this new config file, and we do so by adding a line to


We now have a fully functional Torch Serve Docker image, with our custom model, ready to be deployed!

For any question, comment, feedback, criticism, or suggestion on how to improve my English, reach me on Twitter (@rpadovani93) or drop an email at


on November 03, 2020 09:00 AM

November 02, 2020

A Suggestion: NaNoWriMo

Stephen Michael Kellat

November is the time when there is the creative writing project known as NaNoWriMo or National Novel Writing Month. The goal is to write a fifty thousand word creative fiction piece in one month. In many ways it gives people a chance to let their imaginations unfurl stories that are bunched up in confined spaces.

In England and elsewhere November is also a time of lockdown presently. My morning weekday newspaper USA TODAY provides plenty of updates from across my country and across the planet in the matter with one report after another. The fatigue is quite real.

Alongside what we are doing during the Hirsute Hippo cycle I do suggest folks take the time to create works of fiction. Sticking to routine can be good in many ways but it can also drive you nuts. It is okay to feel like this is the darkest timeline. Pulling yourself out of that and taking positive steps forward is something we can all attempt to do, though.

I will suggest writing in CommonMark and keeping your formatting to a minimum in your original manuscript. Why? After some editing of the text, you could then easily format it for Kindle Direct Publishing, LeanPub, Smashwords, or another outlet. With some careful tinkering with the CommonMark and some nice use of LuaLaTeX you can even produce a ready copy for a print version for the various marketplaces.

Every development cycle we tell the story of Ubuntu and its various flavours. With this year being such a challenged year on so many fronts perhaps we all have additional stories to tell. Who will write the novel, novella, or novelette (said lengths being defined by the Nebula Awards in word counts) that will speak to our Ubuntu realm? I imagine those stories exist and with as exciting as this year has been some of us could have creative releases to start 2021 with perhaps.

No special tools are required. As to an editor I will say that I used Visual Studio Code to write the last novelette. Nothing fancy or sophisticated is required if you stick to CommonMark.

Good Luck & Good Hunting.

on November 02, 2020 02:17 AM

October 31, 2020

My dad’s got a Brother DCP-7055W printer/scanner, and he wanted to be able to set it up as a network scanner to his Ubuntu machine. This was more fiddly than it should be, and involved a bunch of annoying terminal work, so I’m documenting it here so I don’t lose track of how to do it should I have to do it again. It would be nice if Brother made this easier, but I suppose that it working at all under Ubuntu is an improvement on nothing.

Anyway. First, go off to the Brother website and download the scanner software. At time of writing, has the software, but if that’s not there when you read this, search the Brother site for DCP-7055 and choose Downloads, then Linux and Linux (deb), and get the Driver Installer Tool. That’ll get you a shell script; run it. This should give you two new commands in the Terminal: brsaneconfig4 and brscan-skey.

Next, teach the computer about the scanner. This is what brsaneconfig4 is for, and is all done in the Terminal. You need to know the scanner’s IP address; you can find this out from the scanner itself, or you can use avahi-resolve -v -a -r to search your network for it. This will dump out a whole load of stuff, some of which should look like this:

=  wlan0 IPv4 Brother DCP-7055W                             UNIX Printer         local
   hostname = [BRN008092CCEE10.local]
   address = []
   port = [515]
   txt = ["TBCP=F" "Transparent=T" "Binary=T" "PaperCustom=T" "Duplex=F" "Copies=T" "Color=F" "usb_MDL=DCP-7055W" "usb_MFG=Brother" "priority=75" "adminurl=http://BRN008092CCEE10.local./" "product=(Brother DCP-7055W)" "ty=Brother DCP-7055W" "rp=duerqxesz5090" "pdl=application/" "qtotal=1" "txtvers=1"]

That’s your Brother scanner. The thing you want from that is address, which in this case is

Run brsaneconfig4 -a name="My7055WScanner" model="DCP-7055" ip= This should teach the computer about the scanner. You can test this with brsaneconfig4 -p which will ping the scanner, and brsaneconfig4 -q which will list all the scanner types it knows about and then list your added scanner at the end under Devices on network. (If your Brother scanner isn’t a DCP-7055W, you can find the other codenames for types it knows about with brsaneconfig4 -q and then use one of those as the model with brsaneconfig4 -a.)

You only need to add the scanner once, but you also need to have brscan-skey running always, because that’s what listens for network scan requests from the scanner itself. The easiest way to do this is to run it as a Startup Application; open Startup Applications from your launcher by searching for it, add a new application which runs the command brscan-skey, and restart the machine so that it’s running.

If you don’t have the GIMP1 installed, you’ll need to install it.

On the scanner, you should now be able to press the Scan button and choose Scan to PC and then Scan Image, and it should work. What will happen is that your machine will pop up the GIMP with the image, which you will then need to export to a format of your choice.

This is quite annoying if you need to scan more than one thing, though, so there’s an optional extra step, which is to change things so that it doesn’t pop up the GIMP and instead just saves the scanned photo which is much nicer. To do this, first install imagemagick, and then edit the file /opt/brother/scanner/brscan-skey/script/ with sudo. Change the last line from

echo gimp -n $output_file 2>/dev/null \;rm -f $output_file | sh &

to

echo convert $output_file $output_file.jpg 2>/dev/null \;rm -f $output_file | sh &

Now, when you hit the Scan button on the scanner, it will quietly create a file named something like brscan.Hd83Kd.ppm.jpg in the brscan folder in your home folder and not show anything on screen, and this means that it’s a lot easier to scan a bunch of photos one after the other.

  1. I hate this name. It makes us look like sniggering schoolboys. GNU Imp, maybe, or the new Glimpse fork, but the upstream developers don’t want to change it.
on October 31, 2020 11:20 AM

Another month, another bunch of uploads. The freeze for Debian 11 (bullseye) is edging closer, so I’ve been trying to get my package list in better shape ahead of that. Thanks to those who worked on fixing and the lintian reports on the QA pages, those are immensely useful and it’s great to have that back!

2020-10-04: Upload package gnome-shell-extension-draw-on-your-screen (8-1) to Debian unstable.

2020-10-05: Sponsor package flask-restful (0.3.8-4) for Debian unstable (Python Team request).

2020-10-05: Sponsor package python-potr (1.0.2-3) for Debian unstable (Python Team request).

2020-10-06: Sponsor package python-pyld (2.0.3-1) for Debian unstable (Python Team request).

2020-10-06: Sponsor package flask-openid (1.2.5+dfsg-4) for Debian unstable (Python Team request).

2020-10-06: Sponsor package qosmic (1.6.0-4) for Debian unstable (E-mail request).

2020-10-07: File removal for gnome-shell-extension-workspace-to-dock (RC Buggy, no longer maintained: #971803).

2020-10-07: Upload package gnome-shell-extension-pixelsaver (1.20-2) to Debian unstable (Closes:  #971689).

2020-10-07: Upload package calamares (3.2.31-1) to Debian unstable.

2020-10-07: Upload package gnome-shell-extension-dashtodock (69-1) to Debian unstable (Closes: #971654).

2020-10-08: Sponsor package python3-libcloud (3.020-1) for Debian unstable.

2020-10-09: Upload package gnome-shell-extension-dashtopanel (40-1) to Debian unstable (Closes: #971087).

2020-10-09: Upload package gnome-shell-extension-draw-on-your-screen (8.1-1) to Debian unstable.

2020-10-12: Upload package gnome-shell-extension-pixelsaver (1.24-1) to Debian unstable.

2020-10-14: Sponsor package python3-onewire (0.2-1) for Debian unstable (Python Team request).

2020-10-15: Sponsor package cheetah (3.2.5-1) for Debian unstable (Python Team request).

2020-10-15: Sponsor package xmodem (0.4.6+dfsg-1) for Debian unstable (Python Team request).

2020-10-15: Sponsor package ansi (0.1.5-1) for Debian unstable (Python Team request).

2020-10-15: Sponsor package cbor2 (5.2.0-1) for Debian unstable (Python Team request).

2020-10-16: Upload package calamares (3.2.32-1) to Debian unstable.

2020-10-17: Upload package calamares ( to Debian unstable.

2020-10-18: Upload package kpmcore (4.2.0-1) to Debian unstable.

2020-10-18: Upload package gnome-shell-extension-draw-on-your-screen (9-1) to Debian unstable.

2020-10-18: Upload package bundlewrap (4.2.1-1) to Debian unstable.

2020-10-18: Upload package bcachefs-tools (0.1+git20201017.8a4408-1~exp1) to Debian experimental.

2020-10-18: Upload package calamares ( to Debian unstable.

2020-10-18: Upload package partitionmanager (4.1.0-2) to Debian unstable.

2020-10-19: Upload package kpmcore (4.2.0-2) to Debian unstable.

2020-10-21: Upload package calamares ( to Debian unstable.

2020-10-21: Upload package calamares-settings-debian (11.0.3-1) to Debian unstable (Closes: #969930, #941301).

2020-10-21: Upload package partitionmanager (4.2.0-1) to Debian unstable.

2020-10-21: Upload package gnome-shell-extension-hard-disk-led (22-1) to Debian unstable (Closes: #971041).

2020-10-21: Merge MR!1 for catimg (Janitor improvements).

2020-10-21: Sponsor package r4d (1.7-1) for Debian unstable (Python Team request).

2020-10-22: Upload package aalib (1.4rc5-47) to Debian unstable.

2020-10-22: Upload package fabulous (0.3.0+dfsg1-8) to Debian unstable.

2020-10-22: Merge MR!1 for gdisk (Janitor improvements).

2020-10-22: Merge MR!1 for gnome-shell-extension-arc-menu (New upstream URLs, thanks Edward Betts).

2020-10-22: Upload package gnome-shell-extension-arc-menu (49-1) to Debian unstable.

2020-10-22: Upload package gnome-shell-extension-draw-on-your-screen (10-1) to Debian unstable.

2020-10-22: Merge MR!1 for vim-airline (Janitor improvements).

2020-10-22: Merge MR!1 for vim-airline-themes (Janitor improvements).

2020-10-22: Merge MR!1 for preload (Janitor improvements).

2020-10-22: Upload package aalib (1.4rc5-48) to Debian unstable.

2020-10-22: Upload package gnome-shell-extension-trash (0.2.0-git20200326.3425fcf1-1).

2020-10-26: Upload package bcachefs-tools (0.1+git20201025.742dbbdb-1) to Debian unstable.

2020-10-26: Sponsor package dunst (1.5.0-1) for Debian unstable ( request).

on October 31, 2020 10:13 AM

October 30, 2020

I frequently see a pattern in image build/refresh scripts where a set of packages is installed, and then all packages are updated:

apt update
apt install -y pkg1 pkg2 pkg3
apt dist-upgrade -y

While it’s not much, this results in redundant work: for example, reading/writing the package database and potentially running triggers (man-page refresh, ldconfig, etc.). The internal package dependency resolution isn’t actually different: “install” will also upgrade needed packages, etc. Combining them should be entirely possible, but I haven’t found a clean way to do this yet.

The best I’ve got so far is:

apt update
apt-cache dumpavail | dpkg --merge-avail -
(for i in pkg1 pkg2 pkg3; do echo "$i install"; done) | dpkg --set-selections
apt-get dselect-upgrade

This gets me the effect of running “install” and “upgrade” at the same time, but not “dist-upgrade” (which has slightly different resolution logic that I’d prefer to use). Also, it includes the overhead of what should be an unnecessary update of dpkg’s database. Anyone know a better way to do this?

Update: Julian Andres Klode pointed out that dist-upgrade actually takes package arguments too just like install. *face palm* I didn’t even try it — I believed the man-page and the -h output. It works perfectly!

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

on October 30, 2020 07:07 PM

October 23, 2020

Thanks to all the hard work from our contributors, Lubuntu 20.10 has been released! With the codename Groovy Gorilla, Lubuntu 20.10 is the 19th release of Lubuntu, the fifth release of Lubuntu with LXQt as the default desktop environment. Support lifespan Lubuntu 20.10 will be supported until July 2021. Our main focus will be on […]
on October 23, 2020 01:55 AM

October 22, 2020

The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Victoria on Ubuntu 20.10 (Groovy Gorilla) and Ubuntu 20.04 LTS (Focal Fossa) via the Ubuntu Cloud Archive. Details of the Victoria release can be found at:

To get access to the Ubuntu Victoria packages:

Ubuntu 20.10

OpenStack Victoria is available by default for installation on Ubuntu 20.10.

Ubuntu 20.04 LTS

The Ubuntu Cloud Archive for OpenStack Victoria can be enabled on Ubuntu 20.04 by running the following command:

sudo add-apt-repository cloud-archive:victoria

The Ubuntu Cloud Archive for Victoria includes updates for:

aodh, barbican, ceilometer, cinder, designate, designate-dashboard, glance, gnocchi, heat, heat-dashboard, horizon, ironic, keystone, magnum, manila, manila-ui, masakari, mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, networking-baremetal, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-mlnx, networking-odl, networking-sfc, neutron, neutron-dynamic-routing, neutron-vpnaas, nova, octavia, octavia-dashboard, openstack-trove, trove-dashboard, ovn-octavia-provider, panko, placement, sahara, sahara-dashboard, sahara-plugin-spark, sahara-plugin-vanilla, senlin, swift, vmware-nsx, watcher, watcher-dashboard, and zaqar.

For a full list of packages and versions, please refer to:

Reporting bugs

If you have any issues, please report bugs using the ‘ubuntu-bug’ tool to ensure that bugs get logged in the right place in Launchpad:

sudo ubuntu-bug nova-conductor

Thank you to everyone who contributed to OpenStack Victoria. Enjoy and see you in Wallaby!


(on behalf of the Ubuntu OpenStack Engineering team)

on October 22, 2020 08:11 PM

The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 20.10, code-named “Groovy Gorilla”. This marks Ubuntu Studio’s 28th release. This release is a regular release, and as such it is supported for nine months until July 2021.

Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list of changes and known issues.

You can download Ubuntu Studio 20.10 from our download page.

If you find Ubuntu Studio useful, please consider making a contribution.


Due to the change in desktop environment this release, direct upgrades to Ubuntu Studio 20.10 are not supported. We recommend a clean install for this release:

  1. Backup your home directory (/home/{username})
  2. Install Ubuntu Studio 20.10
  3. Copy the contents of your backed-up home directory to your new home directory.

New This Release

The biggest new feature is the switch of desktop environment to KDE Plasma. We believe this will provide a more cohesive and integrated experience for many of the applications that we include by default. We have previously outlined our reasoning for this switch as part of our 20.04 LTS release announcement.

This release includes Plasma 5.19.5. If you would like a newer version, the Kubuntu Backports PPA may include a newer version of Plasma when ready.

We are excited to be a part of the KDE community with this change, and have embraced the warm welcome we have received.

You will notice that our theming and layout of Plasma looks very much like our Xfce theming. (Spoiler: it’s the same theme and layout!)


Studio Controls replaces Ubuntu Studio Controls

Ubuntu Studio Controls has been spun-off into an independent project called Studio Controls. It contains much of the same functionality but also is available in many more projects than Ubuntu Studio. Studio Controls remains the easiest and most straightforward way to configure the Jack Audio Connection Kit and provide easy access to tools to help you with using it.

Ardour 6.3

We are including the latest version of Ardour, version 6.3. This version has plenty of new features outlined at the Ardour website, but comes with one caveat:

Projects imported from Ardour 5.x are permanently converted to the new format. As such, any plugins that are not installed will not be detected and will be replaced by a “stub” plugin. Additionally, Ardour 6 includes a new digital signal processing engine, meaning projects may not sound the same. If you do not need the new functionality of Ardour 6, do not upgrade to Ubuntu Studio 20.10.

Other Notable Updates

We’ve added several new audio plugins this cycle, most notably:

  • Add64
  • Geonkick
  • Dragonfly Reverb
  • Bsequencer
  • Bslizr
  • Bchoppr

Carla has been upgraded to version 2.2. The full release announcement is available on the project’s website.


OBS Studio

Our inclusion of OBS Studio has been praised by many. Our goal is to become the #1 choice for live streaming and recording, and we hope that including OBS Studio out of the box helps usher this in. Steam runs natively on Ubuntu Studio and is easily installed, and with Steam’s development of Proton for Windows games, we believe game streamers and other streamers on YouTube, Facebook, and Twitch would benefit from an all-inclusive operating system that saves them both money and time.

Included this cycle is OBS Studio 26.0.2, which includes several new features and additions, too numerous to list here.

For those that would like to use the advanced audio processing power of JACK with OBS Studio, OBS Studio is JACK-aware!


We have chosen Kdenlive as our default video editor for several reasons. The largest is that it is the most professional video editor included in the Ubuntu repositories; it also integrates very well with the Plasma desktop.

This release brings version 20.08.1, which includes several new features that have been outlined at their website.

Graphics and Photography


Artists will be glad to see Krita upgraded to version 4.3. While this may not be the latest release, it does include a number of new features over that included with Ubuntu Studio 20.04.

For a full list of new features, check out the Krita website.


This version of the icon seemed appropriate for an October release. :)

Photographers will be glad to see Darktable 3.2.1 included by default. Additionally, Darktable has been chosen as our default RAW image processing platform.

With Darktable 3.2 come some major changes, such as an overhaul of the Lighttable, a new snapshot comparison line, improved tooltips, and more! For a complete list, check out the Darktable website.

Introducing Digikam

For the first time in Ubuntu Studio, we are including the KDE application Digikam by default. Digikam is the most advanced photo editing and cataloging tool in open source and includes a number of major features that integrate well into the Plasma desktop.

The version we have by default is version 6.4.0. For more information about Digikam 6.4.0, read the release announcement.

We realize that the version we include, 6.4.0, is not the most recent version, which is why we include Digikam 7.1.0 in the Ubuntu Studio Backports PPA.

For more information about Digikam 7.1.0, read the release announcement.

More Updates

There are many more updates not covered here but mentioned in the Release Notes. We highly recommend reading those release notes so you know what has been updated and are aware of any known issues you may encounter.

Introducing the Ubuntu Studio Marketplace

Have you ever wanted to buy some gear to show off your love for Ubuntu Studio? Now you can! We just launched the Ubuntu Studio Marketplace. From now until October 27th, you can get our special launch discount of 15% off.

We have items like backpacks, coffee mugs, buttons, and more! Items for men, women, and children, even babies! Get your gear today!

Proceeds from commissions go toward supporting further Ubuntu Studio development.

Now Accepting Donations!

If you find Ubuntu Studio useful, we highly encourage you to donate toward its prolonged development. We would be grateful for any donations given!

Three ways to donate!


Become a Patron!

The official launch date of our Patreon campaign is TODAY! We have many goals, including being able to pay one or more developers at least a part-time wage for their work on Ubuntu Studio. However, we do have some benefits we would like to offer our patrons. We are still hammering out the benefits to patrons, and we would love to hear some feedback about what those benefits might be. Become a patron, and we can have that conversation together!


Liberapay is a great way to donate to Ubuntu Studio. It is built around projects, like ours, that are made of and using free and open source software. Their system is designed to provide stable crowdfunded income to creators.


You can also donate directly via PayPal. You can establish either monthly recurring donations or make one-time donations. Whatever you decide is appreciated!

Get Involved!

Another great way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!

Special Thanks

Huge special thanks for this release go to:

  • Len Ovens: Studio Controls, Ubuntu Studio Installer, Coding
  • Thomas Ward: Packaging, Ubuntu Core Developer for Ubuntu Studio
  • Eylul Dogruel: Artwork, Graphics Design, Website Lead
  • Ross Gammon: Upstream Debian Developer, Guidance
  • Rik Mills: Kubuntu Council Member, help with Plasma desktop
  • Mauro Gaspari: Tutorials, promotion, and documentation
  • Krytarik Raido: IRC Moderator, Mailing List Moderator
  • Erich Eickmeyer: Project Leader, Packaging, Direction, Treasurer, KDE Plasma Transition
on October 22, 2020 06:30 PM

The releases following an LTS are always a good time ⌚ to make changes that set the future direction 🗺️ of the distribution, with an eye on where we want to be for the next LTS release. Therefore, Ubuntu MATE 20.10 ships with the latest MATE Desktop 1.24.1, keeps pace with other developments within Ubuntu (such as Active Directory authentication) and migrates to the Ayatana Indicators project.

If you want bug fixes 🐛, kernel updates 🌽, a new web camera control 🎥, and a new indicator 👉 experience, then 20.10 is for you 🎉. Ubuntu MATE 20.10 will be supported for 9 months until July 2021. If you need Long Term Support, we recommend you use Ubuntu MATE 20.04 LTS.

Read on to learn more… :point_down:

Ubuntu MATE 20.10 (Groovy Gorilla)

What’s changed since Ubuntu MATE 20.04?

MATE Desktop

If you follow the Ubuntu MATE twitter account 🐦 you’ll know that MATE Desktop 1.24.1 was recently released. Naturally Ubuntu MATE 20.10 features that maintenance release of MATE Desktop. In addition, we have prepared updated MATE Desktop 1.24.1 packages for Ubuntu MATE 20.04 that are currently in the SRU process. Given the number of MATE packages being updated in 20.04, it might take some time ⏳ for all the updates to land, but we’re hopeful that the fixes and improvements from MATE Desktop 1.24.1 will soon be available for those of you running 20.04 LTS 👍

Active Directory

The Ubuntu Desktop team added the option to enroll your computer into an Active Directory domain 🔑 during install. We’ve been tracking that work and the same capability is available in Ubuntu MATE too.

Enroll your computer into an Active Directory domain

Ayatana Indicators

There is a significant under the hood change 🔧 in Ubuntu MATE 20.10 that you might not even notice 👀 at a surface level; we’ve replaced Ubuntu Indicators with Ayatana Indicators.

We’ll explain some of the background, why we’ve made this change, the short term impact and the long term benefits.

What are Ayatana Indicators?

In short, Ayatana Indicators is a fork of Ubuntu Indicators that aims to be cross-distro compatible and re-usable for any desktop environment 👌 Indicators were developed by Canonical some years ago, initially for the GNOME2 implementation in Ubuntu and then refined for use in the Unity desktop. Ubuntu MATE has supported the Ubuntu Indicators for some years now and we’ve contributed patches to integrate MATE support into the suite of Ubuntu Indicators. Existing indicators are compatible with Ayatana Indicators.

We have migrated Ubuntu MATE 20.10 to Ayatana Indicators and Arctica Greeter. I live streamed 📡 the development work to switch from Ubuntu Indicators to Ayatana Indicators which you can find below if you’re interested in some of the technical details 🤓

The benefits of Ayatana Indicators

Ubuntu MATE 20.10 is our first release to feature Ayatana Indicators and as such there are a couple of drawbacks; there is no messages indicator and no graphical tool to configure the display manager greeter (login window) 😞

Both will return in a future release and the greeter can be configured using dconf-editor in the meantime.

Configuring Arctica Greeter with dconf-editor

That said, there are significant benefits that result from migrating to Ayatana Indicators:

  • Debian and Ubuntu MATE are now aligned with regards to Indicator support; patches are no longer required in Ubuntu MATE which reduces the maintenance overhead.
  • MATE Tweak is now a cross-distro application, without the need for distro specific patches.
  • We’ve switched from Slick Greeter to Arctica Greeter (both forks of Unity Greeter)
    • Arctica Greeter integrates completely with Ayatana Indicators; so there is now a consistent Indicator experience in the greeter and desktop environment.
  • Multiple projects are now using Ayatana Indicators, including desktop environments, distros and even mobile phone projects such as UBports. With more developers collaborating in one place we are seeing the collection of available indicators grow 📈
  • Through UBports contributions to Ayatana Indicators we will soon have a Bluetooth indicator that can replace Blueman, providing a much simpler way to connect and manage Bluetooth devices. UBports have also been working on a network indicator and we hope to consolidate that to provide improved network management as well.
  • Other indicators that are being worked on include printers, accessibility, keyboard (long absent from Ubuntu MATE), webmail and display.

So, that is the backstory about how developers from different projects come together to collaborate on a shared interest and improve software for their users 💪


We’ve replaced Cheese 🧀 with Webcamoid 🎥 as the default webcam tool for several reasons.

  • Webcamoid is a full webcam/capture configuration tool with recording, overlays and more, unlike Cheese. While there were initial concerns 😔, since Webcamoid is a Qt5 app, nearly all the requirements in the image are pulled in via YouTube-DL 🎉.
  • We’ve disabled notifications 🔔 for Webcamoid updates if installed from the universe pocket as a deb version, since these would cause errors on the user’s system and force them to download a non-deb version. This only affects users who don’t have an existing Webcamoid configuration.

Linux Kernel

Ubuntu MATE 20.10 includes the 5.8 Linux kernel. This includes numerous updates and added support since the 5.4 Linux kernel released in Ubuntu 20.04 LTS. Some notable examples include:

  • Airtime Queue limits for better WiFi connection quality
  • Btrfs RAID1 with 3 and 4 copies and more checksum alternatives
  • USB 4 (Thunderbolt 3 protocol) support added
  • X86 Enable 5-level paging support by default
  • Intel Gen11 (Ice Lake) and Gen12 (Tiger Lake) graphics support
  • Initial support for AMD Family 19h (Zen 3)
  • Thermal pressure tracking for better task placement with respect to CPU capacity
  • XFS online repair
  • OverlayFS pairing with VirtIO-FS
  • General Notification Queue for key/keyring notification, mount changes, etc.
  • Active State Power Management (ASPM) for improved power savings of PCIe-to-PCI devices
  • Initial support for POWER10

Raspberry Pi images

We have been preparing Ubuntu MATE 20.04 images for the Raspberry Pi, and we will be releasing final images for 20.04 and 20.10 in the coming days 🙂

Major Applications

Accompanying MATE Desktop 1.24.1 and Linux 5.8 are Firefox 81, LibreOffice 7.0.2, Evolution 3.38 & Celluloid 0.18.


See the Ubuntu 20.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 20.10

This new release will be first available for PC/Mac users.


Upgrading from Ubuntu MATE 20.04 LTS

You can upgrade to Ubuntu MATE 20.10 from Ubuntu MATE 20.04 LTS. Ensure that you have all updates installed for your current version of Ubuntu MATE before you upgrade.

  • Open the “Software & Updates” from the Control Center.
  • Select the 3rd Tab called “Updates”.
  • Set the “Notify me of a new Ubuntu version” drop down menu to “For any new version”.
  • Press Alt+F2 and type in update-manager -c -d into the command box.
  • Update Manager should open up and tell you: New distribution release ‘XX.XX’ is available.
    • If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click “Upgrade” and follow the on-screen instructions.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.

Known Issues

Here are the known issues.

Component          | Problem                                       | Workarounds | Upstream Links
Ayatana Indicators | Clock missing on panel upon upgrade to 20.10 |             |


Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on October 22, 2020 12:00 AM

October 21, 2020

Canonical is the publisher of official Ubuntu images on Microsoft Azure. Users can find the latest Ubuntu images in the Azure Marketplace when using the web interface. For a programmatic interface, users can use Microsoft’s Azure CLI. All images published by Canonical are discoverable using the following command:

$ az vm image list --all --publisher Canonical

The output is JSON with the following information for each image:

{
    "offer": "0001-com-ubuntu-server-focal",
    "publisher": "Canonical",
    "sku": "20_04-lts-gen2",
    "urn": "Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:20.
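To pull individual fields out of that JSON, the output can be piped through jq (assuming jq is installed; the az CLI also offers server-side filtering via its --query JMESPath option). The sample data below is a hypothetical, abbreviated stand-in for the real az output, so the step is reproducible without Azure credentials:

```shell
# Hypothetical sample of `az vm image list` output, saved to a file;
# in practice, pipe the az command straight into jq.
cat > /tmp/images.json <<'EOF'
[
  {
    "offer": "0001-com-ubuntu-server-focal",
    "publisher": "Canonical",
    "sku": "20_04-lts-gen2",
    "urn": "Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest"
  }
]
EOF

# Extract just the URN of each image, one per line
jq -r '.[].urn' /tmp/images.json
# -> Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest
```

The URN is the value you pass to `az vm create --image` when launching an instance from a specific image.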
on October 21, 2020 12:00 AM

October 20, 2020

I work on the Canonical Public Cloud team and we publish all of the Ubuntu server images used in the cloud.

We often get asked what the differences are between two released images. For example what is the difference between the Ubuntu 20.04 LTS image kvm optimised image from 20200921 and the Ubuntu 20.04 LTS image kvm optimised image from 20201014, specifically what packages changed and what was included in those changes?

For each of the download images we publish, we also publish a package version manifest which lists all the packages installed and the versions installed at that time. It also lists any installed snaps and the revision of each snap currently installed. This is very useful for checking whether an image you are about to use has the expected package version for your requirements, or the expected package version that addresses a vulnerability.

Example snippet from a package version manifest:

python3-apport	2.20.11-0ubuntu27.9
python3-distutils	3.8.5-1~20.04.1

This manifest is also useful for determining the differences between two images. A simple diff of the manifests will show you the version changes, but with the help of a new ubuntu-cloud-image-changelog command-line utility I have published to the Snap Store, you can also determine what changed in those packages.

ubuntu-cloud-image-changelog available from the snap store

I’ll work through an example of how to use this tool now:

Using the manifest from the Ubuntu 20.04 LTS kvm optimised image from 20200921 and the manifest from the Ubuntu 20.04 LTS kvm optimised image from 20201014, we can find the package version diff.

$ diff 20200921.1-ubuntu-20.04-server-cloudimg-amd64-disk-kvm.manifest 20201014-ubuntu-20.04-server-cloudimg-amd64-disk-kvm.manifest
< python3-apport	2.20.11-0ubuntu27.8
> python3-apport	2.20.11-0ubuntu27.9
< python3-distutils	3.8.2-1ubuntu1
> python3-distutils	3.8.5-1~20.04.1

The snippet above shows a subset of the packages that changed, but you can easily see the version changes. The full diff is available online.
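Since both manifests map package name to version, a more readable report than a raw diff can be produced by pairing the files with join(1) (which assumes both manifests are sorted on the package column, as published manifests are). A sketch using two tiny stand-in manifests built from the packages in the snippet above:

```shell
# Two tiny manifests (package<TAB>version), standing in for the full
# 20200921.1 and 20201014 manifest files.
cat > /tmp/old.manifest <<'EOF'
python3-apport	2.20.11-0ubuntu27.8
python3-distutils	3.8.2-1ubuntu1
EOF
cat > /tmp/new.manifest <<'EOF'
python3-apport	2.20.11-0ubuntu27.9
python3-distutils	3.8.5-1~20.04.1
EOF

# join pairs rows by package name; awk reports only changed versions
join /tmp/old.manifest /tmp/new.manifest \
  | awk '$2 != $3 { printf "%s: %s -> %s\n", $1, $2, $3 }'
# -> python3-apport: 2.20.11-0ubuntu27.8 -> 2.20.11-0ubuntu27.9
# -> python3-distutils: 3.8.2-1ubuntu1 -> 3.8.5-1~20.04.1
```

Note that join only reports packages present in both manifests; packages that were added or removed still need comm(1) or the changelog tool itself.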

To see the actual changelog for those package version changes…

$ #install ubuntu-cloud-image-changelog
$ sudo snap install ubuntu-cloud-image-changelog
$ ubuntu-cloud-image-changelog --from-manifest=20200921.1-ubuntu-20.04-server-cloudimg-amd64-disk-kvm.manifest --to-manifest=20201014-ubuntu-20.04-server-cloudimg-amd64-disk-kvm.manifest
Snap packages added: []
Snap packages removed: []
Snap packages changed: ['snapd']
Deb packages added: ['linux-headers-5.4.0-1026-kvm', 'linux-image-5.4.0-1026-kvm', 'linux-kvm-headers-5.4.0-1026', 'linux-modules-5.4.0-1026-kvm', 'python3-pexpect', 'python3-ptyprocess']
Deb packages removed: ['linux-headers-5.4.0-1023-kvm', 'linux-image-5.4.0-1023-kvm', 'linux-kvm-headers-5.4.0-1023', 'linux-modules-5.4.0-1023-kvm']
Deb packages changed: ['alsa-ucm-conf', 'apport', 'bolt', 'busybox-initramfs', 'busybox-static', 'finalrd', 'gcc-10-base:amd64', 'gir1.2-packagekitglib-1.0', 'language-selector-common', 'libbrotli1:amd64', 'libc-bin', 'libc6:amd64', 'libgcc-s1:amd64', 'libpackagekit-glib2-18:amd64', 'libpython3.8:amd64', 'libpython3.8-minimal:amd64', 'libpython3.8-stdlib:amd64', 'libstdc++6:amd64', 'libuv1:amd64', 'linux-headers-kvm', 'linux-image-kvm', 'linux-kvm', 'locales', 'mdadm', 'packagekit', 'packagekit-tools', 'python3-apport', 'python3-distutils', 'python3-gdbm:amd64', 'python3-lib2to3', 'python3-problem-report', 'python3-urllib3', 'python3.8', 'python3.8-minimal', 'secureboot-db', 'shim', 'shim-signed', 'snapd', 'sosreport', 'zlib1g:amd64']


python3-apport changed from version '2.20.11-0ubuntu27.8' to version '2.20.11-0ubuntu27.9'

Source: apport
Version: 2.20.11-0ubuntu27.9
Distribution: focal
Urgency: medium
Maintainer: Brian Murray < - >
Timestamp: 1599065319
Date: Wed, 02 Sep 2020 09:48:39 -0700
 apport (2.20.11-0ubuntu27.9) focal; urgency=medium
   [ YC Cheng ]
   * apport/apport/ add acpidump using built-in (LP: #1888352)
   * bin/oem-getlogs: add "-E" in the usage, since we'd like to talk to
     pulseaudio session and that need environment infomation. Also remove
     acpidump since we will use the one from hook.
 apport (2.20.11-0ubuntu27.8) focal; urgency=medium
   [Brian Murray]
   * Fix pep8 errors regarding ambiguous variables.

python3-distutils changed from version '3.8.2-1ubuntu1' to version '3.8.5-1~20.04.1'

Source: python3-stdlib-extensions
Version: 3.8.5-1~20.04.1
Distribution: focal-proposed
Urgency: medium
Maintainer: Matthias Klose <->
Timestamp: 1597062287
Date: Mon, 10 Aug 2020 14:24:47 +0200
Closes: 960653
 python3-stdlib-extensions (3.8.5-1~20.04.1) focal-proposed; urgency=medium
   * SRU: LP: #1889218. Backport Python 3.8.5 to 20.04 LTS.
   * Build as well for 3.9, except on i386.
 python3-stdlib-extensions (3.8.5-1) unstable; urgency=medium
   * Update 3.8 extensions and modules to the 3.8.5 release.
 python3-stdlib-extensions (3.8.4-1) unstable; urgency=medium
   * Update 3.8 extensions and modules to the 3.8.4 release.
 python3-stdlib-extensions (3.8.4~rc1-1) unstable; urgency=medium
   * Update 3.8 extensions and modules to 3.8.4 release candidate 1.
 python3-stdlib-extensions (3.8.3-2) unstable; urgency=medium
   * Remove bytecode files for 3.7 on upgrade. Closes: #960653.
   * Bump debhelper version.
 python3-stdlib-extensions (3.8.3-1) unstable; urgency=medium
   * Stop building extensions for 3.7.
   * Update 3.8 extensions and modules to 3.8.3 release.


Above is a snippet of the output where you can see the exact changes made between the two versions. The full changelog is available online.

I have found this very useful when tracking why a package version changes and also if a package version change includes patches addressing a specific vulnerability.

We don’t yet publish package version manifests for all of our cloud images, so to help generate them I published the ubuntu-package-manifest command-line utility, which generates a package version manifest for any Ubuntu or Debian based image or running instance for later use with ubuntu-cloud-image-changelog.

ubuntu-package-manifest available from the snap store
$ sudo snap install ubuntu-package-manifest
$ # This is a strict snap and requires you to connect the system-backup interface
$ # 
$ # to access the host system package list. This is access read-only.
$ snap connect ubuntu-package-manifest:system-data
$ sudo ubuntu-package-manifest

You can even use this on a running desktop install to track package version changes.

ps. We’re hiring in the Americas and in EMEA 🙂

on October 20, 2020 05:50 PM

October 17, 2020

If you’re a prior reader of the blog, you probably know that when I have the opportunity to take a training class, I like to write a review of the course. It’s often hard to find public feedback on trainings, which feels frustrating when you’re spending thousands of dollars on that course.

Last week, I took the “Reverse Engineering with Ghidra” course taught by Jeremy Blackthorne (0xJeremy) of the Boston Cybernetics Institute. It was ostensibly offered as part of the Infiltrate Conference, but 2020 being what it is, there was no conference and it was just an online training. Unfortunately for me, it was run on East Coast time and I’m on the West Coast, so I got to enjoy some early mornings.

I won’t bury the lede here – on the whole, the course was a high-quality experience taught by an instructor who is clearly both passionate and experienced with technical instruction. I would highly recommend this course if you have little experience in reverse engineering and want to get bootstrapped on performing reversing with Ghidra. You absolutely do need to have some understanding of how programs work – memory sections, control flow, how data and code is represented in memory, etc., but you don’t need to have any meaningful RE experience. (At least, that’s my takeaway, see the course syllabus for more details.)

I would say that about 75% of the total time was spent executing labs and the other 25% was spent with lecture. The lecture time, however, had very little prepared material to read – most of it was live demonstration of the toolset, which made for a great experience when he would answer questions by showing you exactly how to get something done in Ghidra.

Like many information security courses, they provide a virtual machine image with all of the software installed and configured. Interestingly, they seem to share this image across multiple courses, so the actual exercises are downloaded by the student during the course. They provide both VirtualBox and VMWare VMs, but both are OVAs which should be importable into either virtualization platform. Because I always need to make things harder on myself, I actually used QEMU/KVM virtualization for the course, and it worked just fine as well.

The coverage of Ghidra as a tool for reversing was excellent. The majority of the time was spent on manual analysis tasks with examples in a variety of architectures. I believe we saw X86, AMD64, MIPS, ARM, and PowerPC throughout the course. Most of the reversing tasks were a sort of “crack me” style challenge, which was a fitting way to introduce the Ghidra toolkit.

We also spent some time on two separate aspects of Ghidra programming – extending Ghidra with scripts, plugins, and tools, and headless analysis of programs using the GhidraScript API. Though Ghidra is a Java program, it has both Java APIs and Jython bindings to those APIs, and all of the headless analysis exercises were done in Python (Jython).

Jeremy did a great job of explaining the material and was very clear in his teaching style. He provided support for students who were having issues without disrupting the flow for other students. One interesting approach is encouraging students to just keep going through the labs when they finish one, rather than waiting for that lab to be introduced. This ensures that nobody is sitting idle waiting for the course to move forward, and provides students the opportunity to learn and discover the tools on their own before the in-course coverage.

One key feature of Jeremy’s teaching approach is the extensive use of Jupyter notebooks for the lab exercises. This encourages students to produce a log of their work, as you can directly embed shell commands and python scripts (along with their output) as well as Markdown that can include images or other resources. A sort of a hidden gem of his approach was also an introduction to the Flameshot screenshot tool. This tool lets you add boxes, arrows, highlights, redactions, etc., to your screenshot directly in an on-screen overlay. I hadn’t seen it before, but I think it’ll be my goto screenshot tool in the future.

Other tooling used for making this a remote course included a Zoom meeting for the main lecture and a Discord channel for class discussion. Exercises and materials were shared via a Sharepoint server. Zoom was particularly nice because Jeremy recorded his end of the call and uploaded the recordings to the Sharepoint server, so if you wanted to revisit anything, you had both the lecture notes and video. (This is important since so much of the class was done as live demo instead of slides/text.)

It’s also worth noting that it was clear that Jeremy adjusted the course contents and pace to match the students’ goals and pace. At the beginning, he asked each student about their background and what they hoped to get out of the course, and he would regularly ask us to privately message him with the exercise we were currently working on (the remote version of the instructor walking around the room) to get a sense of the pace. BCI clearly has more exercises than can fit in the four-day timing of the course, so Jeremy selected the ones most relevant to the students’ goals, but then provided all the materials at the end of the course so we could go forth and learn more on our own time. This was a really nice element to help get the most out of the course.

The combination of the live demo lecture style, lots of lab/hands-on exercises, and customized content and pace really worked well for me. I feel like I got a lot out of the course and am at least somewhat comfortable using Ghidra now. Overall, definitely a recommendation for those newer to reverse engineering or looking to use Ghidra for the first time.

I also recently purchased The Ghidra Book so I thought I’d make a quick comparison. The Ghidra Book looks like good reference material, but not a way to learn from first principles. If you haven’t used Ghidra at all, taking a course will be a much better way to get up to speed.

on October 17, 2020 07:00 AM

October 15, 2020

About Website Security

Ubuntu Studio

UPDATE 2020-10-16: This is now fixed.

We are aware that, as of this writing, our website is not 100% https. Our website is hosted by Canonical. There is an open ticket to get everything changed over, but these things take time. There is nothing the Ubuntu Studio Team can do to speed this along or fix it ourselves. If you explicitly type https:// into your web browser, you should get the secure SSL version of our site.

Our download links, merchandise stores, and donation links are unaffected by this as they are hosted elsewhere.

We thank you for your understanding.

on October 15, 2020 05:21 PM

A Debian LTS logo Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, 208.25 work hours have been dispatched among 13 paid contributors. Their reports are available:
  • Abhijith PA did 12.0h (out of 14h assigned), thus carrying over 2h to October.
  • Adrian Bunk did 14h (out of 19.75h assigned), thus carrying over 5.75h to October.
  • Ben Hutchings did 8.25h (out of 16h assigned and 9.75h from August), but gave back 7.75h, thus carrying over 9.75h to October.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 19.75h (out of 19.75h assigned).
  • Holger Levsen did 5h coordinating/managing the LTS team.
  • Markus Koschany did 31.75h (out of 19.75h assigned and 12h from August).
  • Ola Lundqvist did 9.5h (out of 12h from August), thus carrying 2.5h to October.
  • Roberto C. Sánchez did 19.75h (out of 19.75h assigned).
  • Sylvain Beucler did 19.75h (out of 19.75h assigned).
  • Thorsten Alteholz did 19.75h (out of 19.75h assigned).
  • Utkarsh Gupta did 8.75h (out of 19.75h assigned), while he already anticipated the remaining 11h in August.

Evolution of the situation

September was a regular LTS month with an IRC meeting.

The security tracker currently lists 45 packages with a known CVE and the dla-needed.txt file has 48 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.


on October 15, 2020 02:07 PM

October 08, 2020

On Ubuntu Linux, snaps are app packages for desktop, cloud and IoT that are easy to install, secure, cross-platform and dependency-free, and their main selling point is security and confinement.

Traditionally, packaging for Ubuntu is done via .deb packages, but much as I try, I never find it straightforward to create or maintain deb packages; I find creating snap packages much easier.

One use case of snaps which doesn’t get talked about much is using snaps to bring no longer supported software back to life. For example, in Ubuntu 20.10 (Groovy Gorilla) which is soon to be released there is no longer support for python2 by default and many other packages have been deprecated too in favour of newer and better replacements. This does mean though that packages which depended on these deprecated packages are not installable and will not run. Snaps can fix this.

Snaps have the concept of base snaps, where a snap can specify a runtime based on a previous release of Ubuntu.

  • core20 base is based on Ubuntu 20.04
  • core18 base is based on Ubuntu 18.04
  • core base is based on Ubuntu 16.04

As such you can create snap packages of any software that is installable on any of these previous Ubuntu releases and run that snap on newer releases of Ubuntu.
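As an illustration, a minimal snapcraft.yaml pinning a snap to the 18.04 runtime might look like the sketch below. The snap name and the staged deb are placeholders, not a real project:

```yaml
name: legacy-tool            # placeholder name, not a published snap
base: core18                 # runtime built from Ubuntu 18.04
version: '1.0'
summary: Revive a package deprecated after Ubuntu 18.04
description: |
  Stages a deb that is still available in the Ubuntu 18.04 archive,
  so it keeps running on newer releases such as 20.10.
grade: stable
confinement: classic         # unconfined, as with the snaps in this post
parts:
  legacy-tool:
    plugin: nil
    stage-packages:
      - legacy-tool          # hypothetical deb from the 18.04 archive
```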

My workflow relies on many applications, most of which are still installable on Ubuntu 20.10 but I have found three that are not.

To unblock my workflow I created snaps of these, all using the core18 and core20 base snaps.

Note that these are classic snaps, so they are not confined as is recommended for most snaps, but this unblocks my workflow and is a neat use of snap packaging.

If you need help packaging a deprecated deb package as a snap please reach out.

Bazaar Explorer as a snap
Syncthing-gtk as a snap
kitematic as a snap
on October 08, 2020 01:09 PM

October 06, 2020

Vote In It!

Bryan Quigley

I just launched a site which focuses on information about ballot measures. It's a series of simple tables showing which ballot measures different groups in California support. If anyone is interested in doing something similar for their state/town/city, contributions are welcome on GitHub! My primary goal is to make it a little less overwhelming to go through 10+ ballot measures.

Please Vote In It!

on October 06, 2020 08:00 PM

October 03, 2020

Yesterday I got a fresh new Pixel 4a, to replace my dying OnePlus 6. The OnePlus had developed some faults over time: It repeatedly loses connection to the AP and the network, and it got a bunch of scratches and scuffs from falling on various surfaces without any protection over the past year.

Why get a Pixel?

Camera: OnePlus focuses on stuffing as many sensors as it can into a phone, rather than a good main sensor, resulting in pictures that are mediocre blurry messes - the dreaded oil painting effect. Pixels have some of the best cameras in the smartphone world. Sure, other hardware is far more capable, but the Pixels manage consistent results, so you need to take fewer pictures because they don’t come out blurry half the time, and the post processing is so good that the pictures you get are just great. Other phones can shoot better pictures, sure - on a tripod.

Security updates: Pixels provide 3 years of monthly updates, with security updates being published on the 5th of each month. OnePlus only provides updates every 2 months, and then the updates they do release are almost a month out of date, not counting that they are only 1st-of-month patches, meaning vendor blob updates included in the 5th-of-month updates are even a month older. Given that all my banking runs on the phone, I don’t want it to be constantly behind.

Feature updates: Of course, Pixels also get Beta Android releases and the newest Android release faster than any other phone, which is advantageous for Android development and being nerdy.

Size and weight: OnePlus phones keep getting bigger and bigger. By today’s standards, the OnePlus 6 at 6.18" and 177g is a small and lightweight device. Their latest phone, the Nord, has a 6.44" display and weighs 184g, and the OnePlus 8 comes in at 180g with a 6.55" display. This is becoming unwieldy. Eschewing glass and aluminium for plastic, the Pixel 4a comes in at 144g.

First impressions


The Pixel 4a comes in a small box with a charger, a USB-C to USB-C cable, a USB-OTG adapter, and a sim tray ejector. No pre-installed screen protector or bumper is provided, unlike what we’ve grown accustomed to from Chinese manufacturers like OnePlus or Xiaomi. The sim tray ejector has a circular end instead of the standard oval one - I assume so it looks like the ‘o’ in Google?

Google sells you fabric cases for 45€. That seems a bit excessive, although I like that a lot of it is recycled.


Coming from a 6.18" phablet, the Pixel 4a with its 5.81" feels tiny. In fact, it’s so tiny my thumb and my index finger can touch while holding it. Cute! Bezels are a bit bigger, resulting in a slightly lower screen-to-body ratio. The bottom chin is probably impractically small; this was already a problem on the OnePlus 6, but this one is even smaller. Oh well, form over function.

The buttons on the side are very loud and clicky. As is the vibration motor. I wonder if this Pixel thinks it’s a Model M. It just feels great.

The plastic back feels really good, it’s that sort of high quality smooth plastic you used to see on those high-end Nokia devices.

The fingerprint reader is super fast. Setup just takes a few seconds per finger, and it works reliably. Other phones (OnePlus 6, Mi A1/A2) take like half a minute or a minute to set up.


The software - stock Android 11 - is fairly similar to OnePlus' OxygenOS. It’s a clean experience, without a ton of added bloatware (even OnePlus now ships Facebook out of the box, eww). It’s cleaner than OxygenOS in some ways - there are no duplicate photos apps, for example. On the other hand, it also has quite a bunch of Google stuff I could not care less about, like YT Music. To be fair, that’s minor noise once all 130 apps were transferred from the old phone.

There are various things I miss coming from OnePlus such as off-screen gestures, network transfer rate indicator in quick settings, or a circular battery icon. But the Pixel has an always on display, which is kind of nice. Most of the cool Pixel features, like call screening or live transcriptions are unfortunately not available in Germany.

The display is set to display the same amount of content as my 6.18" OnePlus 6 did, so everything is a bit tinier. This usually takes me a week or two to adjust to, and then when I look at the OnePlus again I’ll be like “Oh the font is huge”, but right now, it feels a bit small on the Pixel.

You can configure three colour profiles for the Pixel 4a: Natural, Boosted, and Adaptive. I have mine set to Adaptive. I’d love to see stock Android learn what OnePlus has here: the ability to adjust the colour temperature manually, as I prefer to keep my devices closer to 5500K than 6500K, as I feel it’s a bit easier on the eyes. Or well, just give me the ability to load an ICM profile (though, I’d need to calibrate the screen then - work!).

Migration experience

Restoring the apps from my old phone only restored settings for a handful of the 130, which is disappointing. I had to spend an hour or two logging in to all the other apps, and I had to fiddle far too long with openScale to get it to take its data over. It’s a mystery to me why people do not allow their apps to be backed up, especially something innocent like a weight tracking app. One of my banking apps restored its logins, which I did not really like. KeePass2Android settings were restored as well, but at least the key file was not restored.

I did not opt in to restoring my device settings, as I feel that restoring device settings when changing manufacturers is bound to mess up some things. For example, I remember people migrating to OnePlus phones and getting their old DND schedule without any way to change it, because OnePlus had hidden the DND stuff. I assume that’s the reason some accounts, like my work GSuite account, were not migrated (it said it would migrate accounts during setup).

I’ve set up Bitwarden as my auto-fill service, so I could log in to most of my apps and websites using the stored credentials. I found that it often did not work. Chrome, for example, does autofill fine once, but if I then want to autofill again, I have to kill and restart it, otherwise I don’t get the auto-fill menu. Other apps did not allow any auto-fill at all, and only gave me the option to copy and paste. Yikes - auto-fill on Android still needs a lot of work.


It hangs a bit sometimes, but this was likely due to me having set 2 million iterations on my Bitwarden KDF and using Bitwarden a lot, and then opening up all 130 apps to log into them which overwhelmed the phone a bit. Apart from that, it does not feel worse than the OnePlus 6 which was to be expected, given that the benchmarks only show a slight loss in performance.

Photos do take a few seconds to process after taking them, which is annoying, but understandable given how much Google relies on computation to provide decent pictures.


The Pixel has dual speakers, with the earpiece delivering a tiny sound and the bottom firing speaker doing most of the work. Still, it’s better than just having the bottom firing speaker, as it does provide a more immersive experience. Bass makes this thing vibrate a lot. It does not feel like a resonance sort of thing, but you can feel the bass in your hands. I’ve never had this before, and it will take some time getting used to.

Final thoughts

This is a boring phone. There’s no wow factor at all. It’s neither huge, nor does it have high-res 48 or 64 MP cameras, nor does it have a ton of sensors. But everything it does, it does well. It does not pretend to be a flagship like its competition, it doesn’t want to wow you, it just wants to be the perfect phone for you. The build is solid, the buttons make you think of a Model M, the camera is one of the best in any smartphone, and you of course get the latest updates before anyone else. It does not feel like an “only 350€” phone, and yet it is. 128GB storage is plenty, 1080p resolution is plenty, 12.2MP is … you guessed it, plenty.

The same applies to the other two Pixel phones - the 4a 5G and 5. Neither are particularly exciting phones, and I personally find it hard to justify spending 620€ on the Pixel 5 when the Pixel 4a does the job for me, but the 4a 5G might appeal to users looking for larger phones. As to 5G, I wouldn’t get much use out of it, seeing as it’s not available anywhere I am. Because I’m on Vodafone. If you have a Telekom contract or live outside of Germany, you might just have good 5G coverage already and it might make sense to get a 5G phone rather than sticking to the budget choice.


The big question for me is whether I’ll be able to adjust to the smaller display. I now have a tablet, so I’m less often using the phone (which my hands thank me for), which means that a smaller phone is probably a good call.

Oh while we’re talking about calls - I only have a data-only SIM in it, so I could not test calling. I’m transferring to a new phone contract this month, and I’ll give it a go then. This will be the first time I get VoLTE and WiFi calling, although it is Vodafone, so quality might just be worse than Telekom on 2G, who knows. A big shoutout to congstar for letting me cancel with a simple button click, and to @vodafoneservice on twitter for quickly setting up my benefits of additional 5GB per month and 10€ discount for being an existing cable customer.

I’m also looking forward to playing around with the camera (especially night sight), and eSIM. And I’m getting a case from China, which was handed over to the Airline on Sep 17 according to Aliexpress, so I guess it should arrive in the next weeks. Oh, and screen protector is not here yet, so I can’t really judge the screen quality much, as I still have the factory protection film on it, and that’s just a blurry mess - but good enough for setting it up. Please Google, pre-apply a screen protector on future phones and include a simple bumper case.

I might report back in two weeks when I have spent some more time with the device.

on October 03, 2020 11:16 AM

October 01, 2020

We are pleased to announce that the beta images for Lubuntu 20.10 have been released! While we have reached the bugfix-only stage of our development cycle, these images are not meant to be used in a production system. We highly recommend joining our development group or our forum to let us know about any issues. […]
on October 01, 2020 09:55 AM