May 27, 2020

S13E08.5 – When a broken clock chimes

Ubuntu Podcast from the UK LoCo

We announce the Ubuntu Podcast crowd-funder on Patreon and why, after 13 years, we are seeking your support.

It’s Season 13 Episode 8.5 of the Ubuntu Podcast! Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this mini-show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. We are running a crowdfunder on Patreon to cover our audio production costs. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send us your comments and suggestions, or Tweet us, Toot us, comment on our Facebook page, or comment on our sub-Reddit.

on May 27, 2020 11:30 PM

Previously: v5.4.

I got a bit behind on this blog post series! Let’s get caught up. Here are a bunch of security things I found interesting in the Linux kernel v5.5 release:

restrict perf_event_open() from LSM
Given the recurring flaws in the perf subsystem, there has been a strong desire to be able to entirely disable the interface. While the kernel.perf_event_paranoid sysctl knob has existed for a while, attempts to extend its control to “block all perf_event_open() calls” have failed in the past. Distribution kernels have carried the rejected sysctl patch for many years, but now Joel Fernandes has implemented a solution that was deemed acceptable: instead of extending the sysctl, add LSM hooks so that LSMs (e.g. SELinux, AppArmor, etc) can make these choices as part of their overall system policy.
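For context on the existing knob (not the new LSM hooks), here is a hedged sketch of inspecting and tightening the sysctl from a shell; the value 3 is the out-of-tree “deny all unprivileged perf_event_open()” setting that some distribution kernels carry, and is not accepted by a stock upstream kernel:

```shell
# Show the current restriction level (upstream default is 2:
# unprivileged users may only profile their own user-space code).
sysctl kernel.perf_event_paranoid

# On distribution kernels carrying the rejected patch, 3 denies
# perf_event_open() to unprivileged users entirely (requires root).
sudo sysctl -w kernel.perf_event_paranoid=3
```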

generic fast full refcount_t
Will Deacon took the recent refcount_t hardening work for both x86 and arm64 and distilled the implementations into a single architecture-agnostic C version. The result was almost as fast as the x86 assembly version, but it covered more cases (e.g. increment-from-zero), and is now available by default for all architectures. (There is no longer any Kconfig associated with refcount_t; the use of the primitive provides full coverage.)

linker script cleanup for exception tables
When Rick Edgecombe presented his work on building Execute-Only memory under a hypervisor, he noted a region of memory that the kernel was attempting to read directly (instead of execute), and rearranged things in his x86-only patch series to work around the issue. Since I’d just been working in this area, I realized the root cause was the location of the exception table (which is strictly a lookup table and is never executed), and built a fix that I applied to all architectures, since it turns out the exception tables of almost all architectures are just data tables. Hopefully this will help clear the path for more Execute-Only memory work on all architectures. In the process, I also updated the section fill bytes on x86 to be a trap instruction (0xCC, int3) instead of a NOP, so functions would need to be targeted more precisely by attacks.

KASLR for 32-bit PowerPC
Joining many other architectures, Jason Yan added kernel text base-address offset randomization (KASLR) to 32-bit PowerPC.

seccomp for RISC-V
After a bit of a long road, David Abdurachmanov has added seccomp support to the RISC-V architecture. The series uncovered some more corner cases in the seccomp self-test code, which is always nice, since then we get to make it more robust for the future!

seccomp USER_NOTIF continuation
When the seccomp SECCOMP_RET_USER_NOTIF interface was added, it seemed like it would only be used in very limited conditions, so the idea of needing to handle “normal” requests didn’t seem very onerous. However, since then, it has become clear that a monitor process performing lots of “normal” open() calls on behalf of the monitored process looks increasingly slow and fragile. To deal with this, it became clear that there needed to be a way for the USER_NOTIF interface to indicate that seccomp should just continue as normal and allow the syscall without any special handling. Christian Brauner implemented SECCOMP_USER_NOTIF_FLAG_CONTINUE to get this done. It comes with a bit of a disclaimer due to the chance that monitors may use it in places where ToCToU is a risk, and for possible conflicts with SECCOMP_RET_TRACE. But overall, this is a net win for container monitoring tools.

Some EFI systems provide a Random Number Generator interface, which is useful for gaining some entropy in the kernel during very early boot. The arm64 boot stub has been using this for a while now, but Dominik Brodowski has now added support for x86 to do the same. This entropy is useful for kernel subsystems performing very early initialization where random numbers are needed (like randomizing aspects of the SLUB memory allocator).

As has been enabled on many other architectures, Dmitry Korotin got MIPS building with CONFIG_FORTIFY_SOURCE, so compile-time (and some run-time) buffer overflows during calls to the memcpy() and strcpy() families of functions will be detected.

limit copy_{to,from}_user() size to INT_MAX
As done for VFS, vsnprintf(), and strscpy(), I went ahead and limited the size of copy_to_user() and copy_from_user() calls to INT_MAX in order to catch any weird overflows in size calculations.

That’s it for v5.5! Let me know if there’s anything else that I should call out here. Next up: Linux v5.6.

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on May 27, 2020 08:04 PM

Today Microsoft announced the general availability of Windows Subsystem for Linux 2 in the Windows 10 May 2020 update.

WSL 2 is based on a new architecture that provides full Linux binary application compatibility and improved performance. WSL 2 is powered by a real Linux kernel in a lightweight virtual machine that boots in under two seconds. WSL 2 is the best way to experience Ubuntu on WSL.

Ubuntu was the first Linux distribution for WSL and remains the most popular choice of WSL users. Ubuntu 20.04 LTS for WSL was released simultaneously with the general availability of Ubuntu 20.04 LTS in April.

Canonical supports Ubuntu on WSL in organizations through Ubuntu Advantage which includes Landscape for managing Ubuntu on WSL deployments, extended security, and e-mail and phone support.

Ubuntu is ready for WSL 2. All versions of Ubuntu can be upgraded to WSL 2. The latest version of Ubuntu, Ubuntu 20.04 LTS, can be installed on WSL directly from the Microsoft Store. For other versions of Ubuntu for WSL and other ways to install WSL see the WSL page on the Ubuntu Wiki.

Ubuntu on WSL supports powerful developer and system administrator tools, including microk8s, the simplest way to deploy a single node Kubernetes cluster for development and DevOps.

See our YouTube page for more WSL-related videos from WSLConf 2020.

Enable WSL 2

To enable WSL 2 on Windows 10 May 2020 update (build 19041 or higher) run the following in PowerShell as Administrator:

dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
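Note that the WSL feature itself must also be enabled alongside the VirtualMachinePlatform feature above (if you already run WSL 1, this is done); a sketch of the companion command, after which a restart is required:

```shell
# Enable the Windows Subsystem for Linux feature itself,
# then restart Windows before converting distros to WSL 2.
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
```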

“WSL2 requires an update to its kernel component”

Some users upgrading from Insider builds of Windows 10 will encounter an error running the commands below and will be directed to manually download and update the Linux kernel. Download the .msi package from the linked page, install it, and then try again.

Convert Ubuntu on WSL 1 to WSL 2

To convert an existing WSL 1 distro to WSL 2 run the following in PowerShell:

wsl.exe --set-version Ubuntu 2
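To confirm the conversion (and to find the exact distro name to pass to --set-version if yours differs from plain “Ubuntu”), you can list the installed distros and their WSL versions; the output resembles the comment below:

```shell
wsl.exe --list --verbose
#   NAME      STATE           VERSION
# * Ubuntu    Running         2
```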

Set WSL 2 as the default

To set WSL 2 as the default for installing WSL distributions in the future run the following in PowerShell:

wsl.exe --set-default-version 2

Upgrade to Ubuntu 20.04 LTS on WSL

To upgrade to the latest version of Ubuntu on WSL run the following in Ubuntu:

sudo do-release-upgrade -d

Windows Terminal 1.0

The new open source Windows Terminal recently reached 1.0 and makes an excellent companion to Ubuntu on WSL 2. Windows Terminal can be downloaded from the Microsoft Store or GitHub and can be extensively customized.

Community Help with Ubuntu on WSL

Community support is available for users:

Enterprise Support for Ubuntu on WSL

Ubuntu on WSL is fully supported by Canonical for enterprise and organizations through Ubuntu Advantage.

For more information on Ubuntu on WSL, go to

To read more about the new features coming to WSL 2 announced at Microsoft Build, see our blog post.

on May 27, 2020 07:04 PM

Hybrid cloud and multi-cloud are two distinct terms that are often confused. While the hybrid cloud represents a model for extending private cloud infrastructure with one of the existing public clouds, multi-cloud refers to an environment where multiple clouds are used at the same time, regardless of their type. Thus, while the hybrid cloud represents a very specific use case, multi-cloud is a more generic term and usually better reflects reality.

Although both architectures are relatively simple to implement from the infrastructure point of view, the more important question is how to orchestrate workloads in such environments. In the following blog post, I describe the differences between the hybrid cloud and the multi-cloud and discuss the advantages of orchestrating workloads in a multi-cloud environment with Juju.

Understanding the difference between the hybrid cloud and the multi-cloud

Let’s assume that you manage a transport company. The company owns ten cars, which is sufficient in most cases. However, there are days when you really need more than ten. So how do you handle your customers during those traffic-heavy periods? Do you buy additional cars? No, you rent them instead. You rely on an external supplier who can lend you cars on demand. As a result, you can continue to deliver your services. This is almost exactly how the hybrid cloud model works.

However, the reality is slightly different. First of all, companies never rely on a single supplier. You need at least two to ensure business continuity in case the first one cannot provide their services. Moreover, not all cars are the same. What if you need a really big one which none of your existing suppliers can provide? You would probably rent it from yet another supplier, even if this only happens once a year, wouldn’t you? And finally, it is quite possible that even though you need buses, you are not really willing to own and maintain them due to a lack of experience with them, even if in the long term that would have been the more cost-efficient option.

The second case represents the multi-cloud model and is closer to what we observe in modern organisations. While the hybrid cloud concept was developed as a solution to offload a private cloud during computationally-intensive periods, orchestrating workloads across multi-cloud environments is what most organisations are really struggling with nowadays. This is because the multi-cloud, not the hybrid cloud, is part of their daily reality.

Hybrid cloud and multi-cloud: architectural differences

A hybrid cloud is an IT infrastructure that consists of a private cloud and one of the available public clouds. Both are connected with a persistent virtual private network (VPN). Both use a single identity management (IdM) system and unified logging, monitoring, and alerting (LMA) stack. Even their internal networks are integrated, so in fact, the public cloud becomes an extension of the private cloud. As a result, both behave as a single environment which is fully transparent from the workloads’ point of view.

The goal behind such an architecture is to use the public cloud only if the private cloud can no longer handle workloads. As the private cloud is always a cheaper option, an orchestration platform always launches workloads in the private cloud first. However, once the resources of the private cloud become exhausted, an orchestration platform moves some of the workloads to the public cloud and starts using it by default when launching new workloads. Once the peak period is over, the workloads are moved back to the private cloud, which becomes the default platform once again.

In turn, multi-cloud simply refers to using multiple clouds at the same time, regardless of their type. There is no dedicated infrastructure that facilitates it: no dedicated link, no single IdM system, no unified LMA stack, no integrated network. Instead of a single cloud, an organisation simply uses at least two clouds at the same time.

The goal behind the multi-cloud approach is to reduce the risk of relying on a single cloud service provider. Workloads can be distributed across multiple clouds, which improves independence and helps to avoid ‘vendor lock-in’. Furthermore, as a multi-cloud is usually a geographically-distributed environment, it helps to improve the high availability of applications and their resiliency against failures. Finally, the multi-cloud approach combines the best advantages of various cloud platforms: for example, running databases on virtual machines (VMs) while hosting frontend applications inside containers. Workload orchestration, however, remains the most prominent challenge in this case.

Orchestrating workloads in a multi-cloud environment

When running workloads in a multi-cloud environment, having a tool that can orchestrate them becomes essential. This tool has to be able to provision cloud resources (VMs, containers, block storage, etc.), deploy applications on top of them, and configure those applications so that they can communicate with each other. For example, frontend applications have to be aware of the IP address of the database, as they consume the data stored there. Moreover, as the resources are distributed across various cloud types, the entire platform has to be substrate-agnostic, and the entire process needs to be fully transparent from the end user’s perspective.

One of the tools providing this kind of functionality is Juju. It supports the leading public cloud providers as well as most of those used for private cloud implementations: VMware vSphere, OpenStack, Kubernetes, etc. Juju allows modelling and deployment of distributed applications, providing a multi-cloud software-as-a-service (SaaS) experience. As a result, users can focus on shaping their applications, while the entire complexity behind the multi-cloud setup is fully abstracted away.
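As a hedged sketch of what this looks like in practice (controller, model, and charm names here are illustrative, not from the post), the same Juju client can drive two very different substrates, circa the Juju 2.x CLI:

```shell
# Bootstrap a controller on a public cloud and another on MicroK8s.
juju bootstrap aws aws-controller
juju bootstrap microk8s k8s-controller

# Deploy a database on AWS VMs...
juju switch aws-controller
juju deploy postgresql

# ...and switch to Kubernetes for container workloads,
# using the same modelling workflow on both substrates.
juju switch k8s-controller
juju deploy mariadb-k8s
```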


Although they look similar at first sight, hybrid cloud and multi-cloud are two different concepts. While hybrid clouds focus on offloading private clouds, the multi-cloud approach attempts to address the challenges associated with using multiple clouds at the same time. The biggest challenge with multi-cloud is not the infrastructure setup; it’s workload orchestration. Juju solves this problem by providing a multi-cloud SaaS experience.

Learn more

Canonical provides all the necessary components and services for building a modern private cloud infrastructure. Those include OpenStack and Kubernetes, as well as Juju – software for workload orchestration in multi-cloud environments.

Get in touch with us or watch our webinar – “Open source infrastructure: from bare metal to microservices” to learn more.

on May 27, 2020 08:00 AM

May 26, 2020

An interesting writeup by Brian Kardell on web engine diversity and ecosystem health, in which he puts forward a thesis that we currently have the most healthy and open web ecosystem ever, because we’ve got three major rendering engines (WebKit, Blink, and Gecko), they’re all cross-platform, and they’re all open source. This is, I think, true. Brian’s argument is that this paints a better picture of the web than a lot of the doom-saying we get about how there are only a few large companies in control of the web. This is… well, I think there’s truth to both sides of that. Brian’s right, and what he says is often overlooked. But I don’t think it’s the whole story.

You see, diversity of rendering engines isn’t actually in itself the point. What’s really important is diversity of influence: who has the ability to make decisions which shape the web in particular ways, and do they make those decisions for good reasons or not so good? Historically, when each company had one browser, and each browser had its own rendering engine, these three layers were good proxies for one another: if one company’s browser achieved a lot of dominance, then that automatically meant dominance for that browser’s rendering engine, and also for that browser’s creator. Each was isolated; a separate codebase with separate developers and separate strategic priorities. Now, though, as Brian says, that’s not the case. Basically every device that can see the web and isn’t a desktop computer and isn’t explicitly running Chrome is a WebKit browser; it’s not just “iOS Safari’s engine”. A whole bunch of long-tail browsers are essentially a rethemed Chrome and thus Blink: Brave and Edge are high up among them.

However, engines being open source doesn’t change who can influence the direction; it just allows others to contribute to the implementation. Pick something uncontroversial which seems like a good idea: say, AVIF image format support, which at time of writing (May 2020) has no support in browsers yet. (Firefox has an in-progress implementation.) I don’t think anyone particularly objects to this format; it’s just not at the top of anyone’s list yet. So, if you were mad keen on AVIF support being in browsers everywhere, then you’re in a really good position to make that happen right now, and this is exactly the benefit of having an open ecosystem. You could build that support for Gecko, WebKit, and Blink, contribute it upstream, and (assuming you didn’t do anything weird), it’d get accepted. If you can’t build that yourself then you ring up a firm, such as Igalia, whose raison d’etre is doing exactly this sort of thing and they write it for you in exchange for payment of some kind. Hooray! We’ve basically never been in this position before: currently, for the first time in the history of the web, a dedicated outsider can contribute to essentially every browser available. How good is that? Very good, is how good it is.

Obviously, this only applies to things that everyone agrees on. If you show up with a patchset that provides support for the <stuart> element, you will be told: go away and get this standardised first. And that’s absolutely correct.

But it doesn’t let you influence the strategic direction, and this is where the notions of diversity in rendering engines and diversity in influence begin to break down. If you show up to the Blink repository with a patchset that wires an adblocker directly into the rendering engine, it is, frankly, not gonna show up in Chrome. If you go to WebKit with a complete implementation of service worker support, or web payments, it’s not gonna show up in iOS Safari. The companies who make the browsers maintain private forks of the open codebase, into which they add proprietary things and from which they remove open source things they don’t want. It’s not actually clear to me whether such changes would even be accepted into the open source codebases or whether they’d be blocked by the companies who are the primary sponsors of those open source codebases, but leave that to one side. The key point here is that the open ecosystem is only actually open to non-controversial change. The ability to make, or to refuse, controversial changes is reserved to the major browser vendors alone: they can make changes and don’t have to ask your permission, and you’re not in the same position. And sure, that’s how the world works, and there’s an awful lot of ingratitude out there from people who demand that large companies dedicate billions of pounds to a project and then have limited say over what it’s spent on, which is pretty galling from time to time.

Brian references Jeremy Keith’s Unity in which Jeremy says: “But then I think of situations where complete unity isn’t necessarily a good thing. Take political systems, for example. If you have hundreds of different political parties, that’s not ideal. But if you only have one political party, that’s very bad indeed!” This is true, but again the nuance is different, because what this is about is influence. If one party wins a large majority, then it doesn’t matter whether they’re opposed by one other party or fifty, because they don’t have to listen to the opposition. (And Jeremy makes this point.) This was the problem with Internet Explorer: it was dominant enough that MS didn’t have to give a damn what anyone else thought, and so they didn’t. Now, this problem does eventually correct itself in both browsers and political systems, but it takes an awfully long time; a dominant thing has a lot of inertia, and explaining to a peasant in 250AD that the Roman Empire will go away eventually is about as useful as explaining to a web developer in 2000AD that CSS is coming soon, i.e., cold comfort at best and double-plus-frustrating at worst.

So, a qualified hooray, I suppose. I concur with Brian that “things are better and healthier because we continue to find better ways to work together. And when we do, everyone does better.” There is a bunch of stuff that is uncontroversial, and does make the web better, and it is wonderful that we’re not limited to begging browser vendors to care about it to get it. But I think that definition excludes a bunch of “things” that we’re not allowed, for reasons we can only speculate about.

on May 26, 2020 01:52 PM

ZFS focus on Ubuntu 20.04 LTS: ZSys general presentation

In our previous blog post, we presented some enhancements and differences between Ubuntu 19.10 and Ubuntu 20.04 LTS in terms of ZFS support. We only alluded to ZSys, our ZFS system helper, which is now installed by default when selecting ZFS on root installation on the Ubuntu Desktop.

It’s now time to shed some light on it and explain what exactly ZSys brings to you.

What is ZSys?

We call ZSys a ZFS system helper (hence its name). It can first be seen as a boot environment tool, a concept popular in the OpenZFS community, helping you boot into previous revisions of your system (basically snapshots) in a coherent manner. However, ZSys goes beyond that by providing regular user snapshots, system garbage collection and much more, as we will detail below!

System state saving

We will go into more detail about what a state is and how it behaves later, but as we want to be exhaustive in terms of ZSys features in this post, here is a little introduction.

Each time you install, remove or upgrade your packages, a state save is automatically taken by the system. This is done per apt transaction. Note that we have some specificities in Ubuntu with background updates (via unattended-upgrades), which split upgrades into multiple apt transactions; we were able to group those into a single system state save. The save is split into two parts (saving the state and rebuilding the bootloader menu), as you can see when running the apt command manually:

$ apt install foo
INFO Requesting to save current system state      
Successfully saved as "autozsys_ip60to"
[installing/remove/upgrading package(s)]
INFO Updating GRUB menu

You can find them now stored on the system:

$ zsysctl show
Name:           rpool/ROOT/ubuntu_e2wti1
ZSys:           true
Last Used:      current
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_cgym7c
    Created on: 2020-04-28 12:23:12
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_kho1px
    Created on: 2020-04-28 12:16:34
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_w2kfiv
    Created on: 2020-04-28 11:52:46
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_ixfcpk
    Created on: 2020-04-28 11:52:24
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_ip60to
    Created on: 2020-04-28 11:50:06
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_08865s
    Created on: 2020-04-28 09:27:31
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_nqq08r
    Created on: 2020-04-28 09:07:42
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_258qec
    Created on: 2020-04-27 18:11:12
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_yldeob
    Created on: 2020-04-27 18:10:22
  - Name:       rpool/ROOT/ubuntu_e2wti1@autozsys_r66le8
    Created on: 2020-04-24 17:51:42

Our grub bootloader (available via an [Escape] or [Shift] press on boot) will allow you to revert the system (and optionally user) states on demand! A history entry will propose booting an older state of your system.

This goes beyond OpenZFS rollbacks, as we made revert a non-destructive action: current and intermediate states aren’t destroyed, and you can even imagine reverting the revert (or doing system bisection!). Also, we store some key dataset properties exactly as they were when the state was saved. The revert will then reapply the original base filesystem dataset properties, to ensure better fidelity. For instance, changing a filesystem mountpoint won’t apply to the ZSys snapshots inheriting from it, and we will remount it at the correct place. Similarly, we will reboot with the exact same kernel you booted with on the saved state, even if this wasn’t the latest available version.

Note: simply put, if you are unfamiliar with ZFS technology and terminology, you can think of a dataset as a directory that you can control independently of the rest of your system: each dataset can be saved and restored separately, and has its own history, properties, quotas…
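To make the note above concrete, here is a hedged sketch (dataset names follow the layout shown elsewhere in this post) of inspecting datasets and their independent properties with stock ZFS commands:

```shell
# List a filesystem dataset with its mountpoint and space usage.
zfs list -o name,mountpoint,used rpool/ROOT/ubuntu_e2wti1

# Each dataset carries its own properties, e.g. mountpoint or quota.
zfs get mountpoint,quota rpool/USERDATA/didrocks_wbdgr3

# And each dataset has its own snapshot history.
zfs list -t snapshot -o name,creation rpool/ROOT/ubuntu_e2wti1
```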

History with ZSys

In a nutshell, we try to ensure high fidelity when you revert your system, so that you can trustfully and safely boot into an older state of your system.

Commit successful boot

You will tell us that pressing [Escape] or [Shift] on boot to show grub isn’t the most discoverable feature, and you are right.

However, in case of a boot failure, the next boot will show grub by default and those “History entries” will be available to you!

Similarly, we save and commit state every time you successfully boot your system (we will define a successful boot in the next blog post about states). This can trigger a grub menu update if needed (new states to add, for instance). It means that if a boot fails and you revert, simply rebooting will keep the latest successful state, as the default grub entry didn’t change!

This is just a small taste of what we are doing on our path to a robust and bullet-proof Ubuntu desktop. We will explain all of this in greater detail in the next blog post.

User integration

ZSys deeply integrates users with ZFS. Each non-system user created manually or automatically (via gnome-control-center, adduser or useradd) will have its own separate space (dataset) created. We handle home directory renaming with usermod, but still need to do more work on userdel, as we shouldn’t delete the user’s data immediately (what if you revert to a state that had this user, for instance?).

This allows us to take automatic hourly user state saves… However, we only do so if the user is connected (to a GUI or CLI session)!

You can see those hourly automated user state saves below. The hourly ones are automated time-based user snapshots, and you can see some more linked to system state changes.

$ zsysctl show
  - Name:    didrocks
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_setmsc (2020-04-28 15:36:25)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_8wdamc (2020-04-28 14:35:25)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_y5tsor (2020-04-28 13:34:25)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_yysp1w (2020-04-28 12:33:25)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_cgym7c (2020-04-28 12:23:12)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_kho1px (2020-04-28 12:16:34)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_w2kfiv (2020-04-28 11:52:46)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_ixfcpk (2020-04-28 11:52:24)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_ip60to (2020-04-28 11:50:06)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_891br7 (2020-04-28 11:32:25)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_yl9fuu (2020-04-28 10:31:25)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_vg70mw (2020-04-28 09:30:25)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_08865s (2020-04-28 09:27:31)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_nqq08r (2020-04-28 09:07:42)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_xgntka (2020-04-28 08:29:45)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_0zisgn (2020-04-27 19:41:36)
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_44635k (2020-04-27 18:40:36)

User snapshots are basically free when you don’t change any files (you can think of them as only storing the difference between the current and previous states). This will allow us, in the near future, to offer easy restoration of previous versions of your user data. The “oh, I removed that file and don’t have any backups” moment will soon be a thing of the past :)

Note that this isn’t a real backup, as all your data is in the same physical place (your disk!), so this shouldn’t replace backups. We plan to integrate this with some backup tools in the future.

Garbage collection on state saving

As you can infer from the above, we will have a lot of state saves. While we allow the user to manually save and remove states, most of them are taken automatically, so handling them manually on a daily basis would be complicated and counter-productive. Add to this that some states depend on other states before they can be purged (more on that in… you would have guessed, the next blog post about states!), and you can understand the complexity here.
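For the manual side, a quick hedged sketch using the save and remove subcommands (the state name is illustrative; the --system flag is taken from the completion output shown later in this post):

```shell
# Save a named system state before a risky change
# (polkit will ask for authorisation if needed).
zsysctl state save before-experiment --system

# ...and later, remove it once it is no longer useful.
zsysctl state remove before-experiment
```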

The GC will also have its own dedicated post, but in summary, we try to prune states as time passes, to ensure that you always have a number of relevant states to revert to. The general idea is that the more time passes, the less granularity you need. This helps save disk space. You will have very fine-grained states to revert to for the previous day, a little less for previous weeks, less again for months… You get the idea, I think. :)

Multiple machines

Something that isn’t really well supported nowadays is multiple OpenZFS installations on the same machine. This isn’t fully complete yet (you can’t have two pools with the same name, like rpool or bpool), but if you manually install a secondary machine on the same pools or different ones (with different names), this is handled by ZSys and our grub menu!

Multiple machine support with ZSys

Both will have their own separate history of states and ZSys will manage both! You can have shared or unshared user home data between the machines.

You can easily see all machines attached to your system, and whether or not they are managed by ZSys:

$ zsysctl machine list
ID                        ZSys   Last Used
--                        ----   ---------
rpool/ROOT/ubuntu_e2wti1  true   current
rpool/ROOT/ubuntu_l33t42  true   23/04/20 18:04
rpool/ROOT/ubuntu_manual  false  20/03/20 14:17

Principles of ZSys

We built ZSys on multiple principles.

ZSys architecture


The first principle of ZSys is to be as lightweight as possible: ZSys only runs on demand. It’s made of a command line tool (zsysctl) which connects to a daemon (zsysd), which is socket-activated. This means that after a while, if zsysd has nothing else to do, it will shut down and not take any additional memory on your system.

Similarly, we don’t want ZSys to slow down or interfere with the boot. This is why we hooked it into the upstream OpenZFS systemd generator, and it will only run when reverting to a previous state.

Everything is stored on ZFS pools

Secondly, we don’t want to maintain a separate database of the ZFS dataset layout and structure. This approach would have been dangerous, as it would have been really easy to get out of sync with what is actually on disk, and to be unaware of any manual user interaction with ZFS. This way, we let system administrators familiar with ZFS handle manual operations while remaining compatible with ZSys. Any additional properties we need are handled via user properties on the ZFS datasets directly. Basically, everything is in the ZFS pool and no additional data is needed (you can migrate your data from one disk to another).
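You can inspect this yourself with stock ZFS tooling; a hedged sketch (the exact property names ZSys sets are not listed in this post, but user properties are the namespaced ones containing a colon, with a local source):

```shell
# Show only locally-set properties on the root dataset;
# ZSys's metadata lives in namespaced user properties.
zfs get -s local all rpool/ROOT/ubuntu_e2wti1
```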

Permission mediation

Thirdly, permission handling is mediated via polkit, a mechanism familiar to administrators and compatible with company-wide policies. If any privilege escalation is needed, the system will ask you for it.

Polkit request for performing system write operation

We will develop this topic with more details in future posts.

Ease of use

This is the core of the command-line experience: familiarity and discoverability. zsysctl has a lot of commands, subcommands and options. We use advanced shell completion, which completes on [Tab][Tab] in both bash and zsh environments.

$ zsysctl state [Tab][Tab]
remove  save

We try to keep the number of required arguments to a minimum. However, if you complete on - or --, we will present all the matching options (some global, others local to your subcommand):

$ zsysctl state save -[Tab][Tab]
--auto                -s                    -u                    --user=               --verbose
--no-update-bootmenu  --system              --user                -v

Similarly, a help command is available for every command and subcommand, and it completes as expected:

$ zsysctl help s[Tab][Tab]
save     service  show     state
$ zsysctl help state
Machine state management

  zsysctl state COMMAND [flags]
  zsysctl state [command]

Available Commands:
  remove      Remove the current state of the machine. By default it removes only the user state if not linked to any system state.
  save        Saves the current state of the machine. By default it saves only the user state. state_id is generated if not provided.

  -h, --help   help for state

Global Flags:
  -v, --verbose count   issue INFO (-v) and DEBUG (-vv) output

Use "zsysctl state [command] --help" for more information about a command.

We made aliases for popular commands (or what we think will be popular :)). For instance, zsysctl show is an alias for zsysctl machine show. Less typing for the win! :)

Also, all those commands and subcommands are backed by man pages. For now, those are the equivalent of --help, but if you have the desire to enhance any of them, this is a simple but very valuable contribution that we strongly welcome as pull requests on our project!

The github repo README has a dedicated section with the details of all commands.

The best thing is that all of those are autogenerated at build time from the source code (completion, man pages and README!). That means that the help and nice completions will never be out of sync with the released version. It also means that ZSys developers can (via the zsysctl completion command) already experience and get a feel for command interactions without installing the tip build on the system.

Finally, typos are tolerated, and we try to match commands that are close enough to what you typed:

$ zsysctl sae
  zsysctl COMMAND [flags]
  zsysctl [command]

Available Commands:
  completion  Generates bash completion scripts
  help        Help about any command
  list        List all the machines and basic information.
  machine     Machine management
  save        Saves the current state of the machine. By default it saves only the user state. state_id is generated if not provided.
  service     Service management
  show        Shows the status of the machine.
  state       Machine state management
  version     Returns version of client and server

  -h, --help            help for zsysctl
  -v, --verbose count   issue INFO (-v) and DEBUG (-vv) output

Use "zsysctl [command] --help" for more information about a command.

ERROR zsysctl requires a valid subcommand. Did you mean this?

For those with an eye for detail, you may have noticed that some commands and subcommands in our README aren’t available through completion, or that some man pages cover commands that aren’t shown there. These are indeed system-oriented hidden commands, and this is why they are not proposed by default (like the boot command: zsysctl bo[Tab][Tab] won’t display anything). However, we are fond of completion for testing, and if you type such a command in full, you are back in enjoyable completion land:

$ zsysctl boot [Tab][Tab]
commit       prepare      update-menu

If you want to implement something similar in your own (golang) CLI program, we proposed our changes to the upstream cobra repository so that you can get advanced shell completion as well!


We didn’t want to force a strict ZSys layout upon users. First, we will have different default ZFS dataset layouts between server and desktop (this will be detailed in a future blog post). We are also very aware that there are excited and passionate ZFS system administrators and hobbyists who want full control over their system. This is why:

  • Any system can be untagged to prevent ZSys from controlling it. The system will then boot and behave like any manual ZFS-on-root installation. Of course, any additional features we provide through ZSys will then be unavailable.
  • The dataset layout can be a mix between what we provide by default and what system administrators are used to, or are aiming for. For instance, bpool isn’t mandatory, any child dataset can be deleted, some persistent datasets can be created… We will explain more about dataset layouts and the types of datasets ZSys handles in more advanced parts of this blog post series.

Strongly tested

We put a strong emphasis on testing. ZSys itself is currently covered by more than 680 tests (from unit to integration tests). It can exercise a real ZFS system, but we have also built an in-memory mock (which can, for instance, run tests in parallel). We can thus validate against the real system with the same set of tests!

We also have more than 400 integration tests for the grub menu building, covering a wide variety of dataset layout configurations.

Detailed bug reporting

I hope you can appreciate that we put a lot of thought and care into ZSys and how it integrates with the ZFS system.

Of course, bugs can and will occur. This is why our default bug report template asks you to run ubuntu-bug zsys: we collect a bunch of non-private information about your system, which will help us understand what configuration you are running and what actually happened in the various parts of the system (OpenZFS, grub or ZSys itself)!

Again, most of the features work completely under the hood, transparently to the user! I hope this gave you a sneak preview of what ZSys is capable of. I teased a lot about further developments and explanations I couldn’t include here (this is already long enough) on many of those concepts. This is why we will dive right away into state management (which will include reverts, bisection and more)! See you there :)

Meanwhile, join the discussion via the dedicated Ubuntu discourse thread.

on May 26, 2020 07:22 AM

May 25, 2020

Welcome to the Ubuntu Weekly Newsletter, Issue 632 for the week of May 17 – 23, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on May 25, 2020 10:37 PM

When Netmux first released the Operator Handbook, I had to check it out. I had some initial impressions, but wanted to take some time to refine my thoughts on it before putting together a full review of the book. The book review will be a bit short, but that’s because this is a rather straightforward book.

Operator Handbook

I think the first thing to know is that this book is strictly a reference. There’s nothing to read and learn from in a cohesive way. It would be like reading a dictionary or a thesaurus: while you might learn things reading it, it won’t be in any meaningful order. There are lots of things you can learn on a particular, very narrow topic, but it is mostly organized for use “in the moment”, not for “learning in advance”.

The second thing to know is that unless you’re regularly in environments that don’t allow you to bring electronics in (e.g., heavily restricted customer sites), you really want this book in electronic format for quick searching and copy/paste. In fact, the tagline on the cover is “SEARCH.COPY.PASTE.L33T:)”. This is obviously a lot easier with the digital version. (Though I have to admit, I love the cover of the physical book: it’s got a robust feel and a cool “find it quick” yellow color.)

I rather suspect this book is inspired by books like the Red Team Field Manual, the Blue Team Field Manual, and Netmux’s own Hash Crack: Password Cracking Manual. When you crack it open, you’ll immediately see the similarities – very task focused, intended to get something done quickly, rather than a focus on the underlying theory or background.

I’ve actually referred to the book a couple of times while doing operations. Some of the things in it would be easily obtained elsewhere (e.g., a quick Google search for “nmap cheatsheet” gets you much the same information), but many other things would require distillation of the information into a more consumable format, and Netmux has already done that.

Many of the items in the book are also framed with a security mindset, e.g., interacting with cloud platforms like AWS or GCP. Rather than trying to provide the information necessary to operate those platforms, the book focuses on the aspects relevant to security practitioners. The book also contains links to additional references, which is yet another reason to have it in a digital format. Some kind of URL-shortener links would have been a nice touch for the print version.

One thing that I really want to applaud in this book is that there is a reference for mental health in the book. Whether or not the information security industry has a particular predisposition for mental health issues, I absolutely love the normalization of discussing mental health issues.

While there is content for both Red and Blue teamers, like so many resources, it seems to tend to the Red. Maybe it’s only my perception as a Red Teamer, maybe some of the contents I perceive as “Red” are also useful to Blue teamers. I’d love to hear from someone on the Blue side as to how they find the book contents for their role – any takers?

Overall, I think this is a useful book. A lot of effort clearly went into curating the content and covering the wide variety of topics included in its 123 references. There’s probably nothing ground-breaking in it, but it’s presented so well that it’s totally worth having.

on May 25, 2020 07:00 AM

May 23, 2020

A Debian LTS logo Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 284.5 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA did 10.0h (out of 14h assigned), thus carrying over 4h to May.
  • Adrian Bunk did nothing (out of 28.75h assigned), thus is carrying over 28.75h for May.
  • Ben Hutchings did 26h (out of 20h assigned and 8.5h from March), thus carrying over 2.5h to May.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Dylan Aïssi did 6h (out of 6h assigned).
  • Emilio Pozuelo Monfort did not report back about their work so we assume they did nothing (out of 28.75h assigned plus 17.25h from March), thus is carrying over 46h for May.
  • Markus Koschany did 11.5h (out of 28.75h assigned and 38.75h from March), thus carrying over 56h to May.
  • Mike Gabriel did 1.5h (out of 8h assigned), thus carrying over 6.5h to May.
  • Ola Lundqvist did 13.5h (out of 12h assigned and 8.5h from March), thus carrying over 7h to May.
  • Roberto C. Sánchez did 28.75h (out of 28.75h assigned).
  • Sylvain Beucler did 28.75h (out of 28.75h assigned).
  • Thorsten Alteholz did 28.75h (out of 28.75h assigned).
  • Utkarsh Gupta did 24h (out of 24h assigned).

Evolution of the situation

In April we dispatched more hours than ever, and we had another first: our first (virtual) contributors meeting on IRC! Logs and minutes are available, and we plan to continue doing IRC meetings every other month.
Sadly one contributor decided to go inactive in April, Hugo Lefeuvre.
Finally, we would like to remind you that the end of Jessie LTS is coming in less than two months!
In case you missed it (or failed to act on it), please read this post about keeping Debian 8 Jessie alive for longer than 5 years. If you expect to have Debian 8 servers/devices running after June 30th 2020, and would like to have security updates for them, please get in touch with Freexian.

The security tracker currently lists 4 packages with a known CVE and the dla-needed.txt file has 25 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


on May 23, 2020 04:10 PM

Full Circle Weekly News #172

Full Circle Magazine

There’s a Vulnerability in Timeshift
Linux Kernel 5.6 rc6 Out

Zorin OS 15.2 Out

Wine 5.4 Out

Red Hat’s Ceph Storage 4 Out

AWS’ Bottle Rocket Out

Tails 4.4 Out

Basilisk Browser Out

LibreELEC 9.2.1 Out

KDE Plasma 5.18.3 Out

SDL (or Simple DirectMedia Layer) 2 Out

Splice Machine 3.0 Out

4M Linux 32.0 Out

Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust

on May 23, 2020 10:19 AM

May 22, 2020

GitOps Days are over now - what a blast I had. Even though it was long hours, it was so much fun supporting the event: such a friendly and engaged audience (loads of great questions and discussion on Slack), excellent - very experienced and fun speakers, and a super well-organised team! Thanks everyone who made these two days as special as they were! 💓

It has happened before: people were picking a DJ name for me. The list of names on Slack and Twitter was long and gave me a laugh. Looks like as my GitOps DJ name, DJ Desired State was winning. You are all hilarious. 😂

As a follow-up to my blog post yesterday, here is the playlist from Day 2:

Mas Que Nada (UFe remix)
Morphy - Ragga Spindle
Acie - Sexymama
Jorge Ben - Take It Easy My Brother Charles
Marina Gallardo - Golden Ears (M.RUX Edit)
Cypress Hill - Insane In The Brain - Kasabian Cover (Matija & Richard Elcox Edit)
Siriusmo - EGO

Lokke - Song Nº 1

Quantic - Atlantic Oscillations
Khen - Manginot
The Chemical Brothers - Go (Claude VonStroke Remix)

RSL - Wesley Music
Adome Nyueto - Yta Jourias (Sopp's Party Edit)

The Silver Thunders - Fresales eternos
Alexander - Truth

Black Milk - Detroit's New Dance Show
Zeds Dead -  Rumble In The Jungle
Kalemba - Wegue Wegue (Krafty Kuts Remix)

Fdel - Let The Beat Kick
The Living Graham Bond - Werk
Pizeta - Nina Papa (Andy Kohlmann Remix)
Format:B - Gospel (Super Flu's Antichrist Remix)
Taisun - Senorita (Remix)
Pleasurekraft - Carny

If you sign up, you can get links to the recordings, and we’ll send you a GitOps conversation kit as well.

on May 22, 2020 07:54 AM

Okay, I’m not going to lie, the title was a bit of clickbait. I don’t believe that everyone in InfoSec really needs to know how to program, just almost everyone. Now, before my fellow practitioners jump on me, saying they can do their job just fine without programming, I’d appreciate you hearing me out.

So, how’d I get on this? Well, a thread on a private Slack discussing whether Red Team operators should know how to program, followed by people on Reddit asking if they should know how to program. I thought I’d share my views in a concrete (and longer) format here.

Computers are Useless without Programs

I realize that it sounds axiomatic, but computers don’t do anything without programs. Programs are what give a computer the ability to, well, be useful. So I think we can all agree that information security, as an industry, is based entirely around software.

I submit that knowing how to program makes most roles more effective, merely through a better understanding of how software works. Understanding I/O, network connectivity, etc. at the application layer will help professionals do a better job of understanding how software affects their role.

That being said, this is probably not reason enough to learn to program.

Learning to Program Opens Doors

I suppose this point can be summarized as “more skills make you more employable”, which is (again) probably axiomatic, but it’s worth considering. There are roles and organizations that will expect you to be able to program as part of the core expectations.

For example, if you currently work in a SOC and you want to work on building or refining the tools used there, you’ll need to program.

Alternatively, if you want to move laterally to certain roles, those roles will require programming – application security, tool development, etc.

You Will Be More Efficient

There are so many times when I could have done something manually, but ended up writing a program of some sort instead. Maybe you have a range of IPs and need to check which of them are running a particular webserver, or you want to combine several CSVs based on one or two shared fields. Maybe you just want to automate some daily task.
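
As one concrete flavor of the CSV-combining task above, here is a quick sketch (file names and contents are made up for the example):

```shell
# Combine two made-up CSVs on their shared first column (host) with awk:
# load the second file into an array, then annotate each line of the first.
cat > /tmp/hosts.csv <<'EOF'
host,ip
web1,10.0.0.1
db1,10.0.0.2
EOF
cat > /tmp/owners.csv <<'EOF'
host,owner
web1,alice
db1,bob
EOF
awk -F, 'NR==FNR { owner[$1] = $2; next } { print $0 "," owner[$1] }' \
    /tmp/owners.csv /tmp/hosts.csv
# prints:
# host,ip,owner
# web1,10.0.0.1,alice
# db1,10.0.0.2,bob
```

A dozen lines of shell like this routinely replaces an hour of spreadsheet wrangling.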

As a Red Teamer, I often write scripts to accomplish a variety of tasks:

  • Check a bunch of servers for a Vulnerability/Misconfiguration
  • Proof of Concept to Exploit a Vulnerability
  • Analyze large sets of data
  • Write custom implants (“Remote Access Toolkits”)
  • Modify tools to limit scope

On the blue side, I know people who write programs to:

  • Analyze log files when Splunk, etc. just won’t do
  • Analyze large PCAPs
  • Convert configurations between formats
  • Provide web interfaces to tools that lack them

How much do you need to know?

Well, technically none, depending on your role. But if you’ve read this far, I hope you’re convinced of the benefits. I’m not suggesting everyone needs to be a full-on software engineer or be coding every day, but knowing something about programming is useful.

I suggest learning a language like Python or Ruby, since they have REPLs (read-eval-print loops). These provide an interactive prompt where you can run statements and see the results immediately. Python seems to be more commonly used for InfoSec tooling, but both are good options for getting things done.

I would focus on file and network operations, and not so much on complicated algorithms or data structures. While those can be useful, standard libraries tend to have common algorithms (searching, sorting, etc.) well covered. Having a sensible data structure makes code more readable, but there is rarely a need for “low-level” structures in a high-level language.
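
In that spirit, a minimal file-and-text exercise (the log lines are invented for the example):

```shell
# Count how often each source address appears in a made-up auth log:
# classic file I/O plus text processing, no fancy data structures needed.
printf '%s\n' \
  "Failed password from 10.0.0.5" \
  "Failed password from 10.0.0.9" \
  "Failed password from 10.0.0.5" > /tmp/auth_sample.log
awk '{ print $NF }' /tmp/auth_sample.log | sort | uniq -c | sort -rn
```

The last field of each line ($NF) is the address; sort | uniq -c | sort -rn turns it into a frequency table with the noisiest address first.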

Have I Convinced You?

Hopefully I’ve convinced you. If you want to learn programming with a security-specific slant, I can highly recommend some books from No Starch Press:

on May 22, 2020 07:00 AM

May 21, 2020

Ep 91 – O Mundo é dos Desktops

Podcast Ubuntu Portugal

An episode in which, as usual, we chatted about our weekly adventures, but also covered news about the Pine64 PineTab with Ubuntu Touch, MS Office on Linux, Ubuntu 20.04 certification for the Raspberry Pi, the expanding Ubuntuverse, and the radical change in Ubuntu Studio.
Find out how even a 10-year-old can make a difference in Ubuntu, and how you too can become heroes of the community.
You know the drill: listen, subscribe and share!


In this episode we discussed smart plugs, extension cords, cable management and power consumption monitoring, networks and networking equipment, and webcams; we also revisited the question of ImageMagick in Nextcloud, and Ubuntu Server images.

And believe it or not, we still had time to talk about UBports OTA-12 and OTA-13, the PinePhone and the Volla Phone.

You know the drill: listen, subscribe and share!



This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

You can support the podcast using the Humble Bundle affiliate links; when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the notes, use the link and you will be supporting us as well.

This week we recommend the bundle: Software Development by O’Reilly, via the affiliate link

Attribution and licenses

The theme music is: “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](

This episode and the image used in it are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

on May 21, 2020 10:04 PM

ZFS focus on Ubuntu 20.04 LTS: what’s new?

Ubuntu has supported ZFS as an option for some time. In 19.10, we introduced experimental support on the desktop. As explained back then, having a ZFS-on-root option on our desktop was only a first step in what we want to achieve by adopting this combined file system and logical volume manager. I strongly suggest you read the 2 blog posts above as introductions to the blog series we are starting. Here we cover what’s new compared to 19.10 in terms of installation and general features. We then look at what ZSys, our dedicated helper for ZFS systems, can do for you and how you can interact with it. Finally, for the more tech-savvy, we will dive deep into how we use ZFS, how we store properties, and how the puzzle fits together. We will give you tips on how to tweak it to your liking if you are a ZFS sysadmin expert, while still keeping ZSys’s advanced capabilities compatible.

Without further ado, let’s dive into this!

ZFS & Ubuntu 20.04 LTS

The first thing to note is that our ZFS support with ZSys is still experimental. The installer highlights this on the corresponding screen. With OpenZFS on Ubuntu 20.04 LTS, we are taking the first steps towards the bulletproof Ubuntu desktop.

We hope to be able to drop the experimental label in the coming cycle, and to backport this to a 20.04.x release.

OpenZFS Version

Between 19.10 and 20.04 LTS, we updated OpenZFS to its latest and greatest available release (0.8.3). As usual, those releases (0.8.2 and 0.8.3) bring a lot of improvements and fixes (note though that we had already backported some fixes from 0.8.2 into our Ubuntu 19.10 package to address some critical issues for our users). In Ubuntu 20.04 LTS we also backported other fixes into our kernel from the upcoming 0.8.4 (and OpenZFS 2.0) release, like encryption performance enhancements. ZSys and other components (for instance the ZFS bindings) have been updated to work with the new libzfs version.

Thanks to this, we are committed to delivering the best OpenZFS-on-Linux experience to our audience, by continuing to fix any important issues that arise and backporting critical fixes from newer releases.

A more streamlined desktop ZFS-on-root installation

The installer experience has been reworked to be clearer and to offer more focused options to the user. Installing ZFS on root has never been simpler, especially compared to the excellent, but very long and manual, How-To Ubuntu Root on ZFS created by the OpenZFS community.

For now, we only support full-disk setups. The related options (LVM, LVM+encryption and ZFS) are all under the “Advanced features” screen.

Press Advanced features in Erase disk and install ubuntu

You will find there the Experimental ZFS support option.

Experimental ZFS support options screen

“ZFS selected” should now be displayed.

ZFS selected screen

If you have multiple disks available, an additional screen will present the available choices. Once validated, a recap of the different partitions that we are going to create awaits your approval.

Recap of partitions to create

In a later blog post, we will see what all those partitions are about, their sizes, and the various decisions which led to that layout.

The installation will then proceed as usual. Upon reboot, your newly installed Ubuntu 20.04 LTS system will be powered with ZFS on root!

Here are some proofs (but you don’t need to type those commands):

ZFS is powering your system and ZSys is active

And that’s it! As a daily driver, you don’t need to do anything more. Unless you want to understand a little more about what this brings you, in addition to all the ZFS robustness, checksumming, compression and such, this is where you can stop! The system will work silently for you and you can forget about it. :)

Pool enhancements

Tech-savvy users will be delighted to know that we upgraded the bpool version to 5000 (previously, it was at version 28). We enabled, only on bpool, a selection of features to ensure that grub will be able to read and boot from any ZFS filesystem or snapshot dataset.

However, zpool command-line users were then encouraged to upgrade the pool to enable new features on it, features that are incompatible with grub! Worse, once the pool was upgraded, it was impossible to downgrade, and your system was basically broken. Seeing this happen, and until there is a dedicated version-by-component mechanism (discussions have started upstream), we removed this message for bpool from zpool status and prevented upgrading it with zpool upgrade. rpool doesn’t have this limitation and has a bunch of features enabled.
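
If you are curious which features your own pools expose, this sketch shows one way to look (it needs a ZFS system, and the output will vary by pool):

```shell
# List feature flags and their state (disabled/enabled/active) per pool
zpool get all bpool | grep 'feature@'
zpool get all rpool | grep 'feature@'
```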

Boot enhancements

With ZSys installed, building the grub boot menu has seen some great enhancements. We basically divided the time to generate this menu by 4 thanks to multiple rounds of profiling, which makes any operation that needs to rebuild it way more enjoyable.

Building the menu is one thing, but starting grub is another. With regular state saves, you can rapidly exceed a hundred of them. Under those conditions, we saw performance degradation when navigating grub menus. In particular, displaying the first menu took 14s, and navigating the history of states took more than 80s! We made some drastic enhancements there, making both of them instantaneous! We had to come up with multiple -hem- “creative” strategies to fix that one (reducing grub.cfg from 7329 lines, which grub didn’t really like, to 728, with each new history entry adding 3 lines instead of 50 :p). We crafted all this to remain readable for advanced users who want to manually edit the boot command line before starting a system.

Similarly, we were able to significantly speed up boot time and make the systemd ZFS mount generator more robust: basically, unless you revert to another system state, ZSys will stay out of the way, not taking any additional time on the critical boot path. Also, if some pools to import are not in a coherent state, you will be dropped into the system emergency target and asked to fix this before booting. However, this should only happen after a failed revert, and booting into the current, last known good state should work. This is an area we will continue to work on, of course.

Finally, we fixed a bunch of issues in both grub and the boot chainload that people testing ZFS on 19.10 reported to us, like external pools being systematically exported, some configuration issues and more. Thanks to everyone here; it helps us deliver a more solid experience with ZFS on Ubuntu 20.04 LTS!

ZSys is now installed by default

Some Ubuntu flavors already shipped it in 19.10, but ZSys has since seen a number of changes and new capabilities. After a security review, it is now seeded by default in the Ubuntu desktop 20.04 LTS installation!

There is a lot to discuss about this component in terms of features and what exactly it brings to the user. Not only does it allow system reverts (which the grub bootloader will now present), it is also not limited to being a boot-environment manager. In fact, there is so much to talk about that we will dedicate the next four blog posts entirely to it.

The good news is that most of the features work completely under the hood, transparently to the user. However, if you need to revert an upgrade, recover a user’s old files, or are simply interested, those posts should be of interest to you! See you there :)

Meanwhile, join the discussion via the dedicated Ubuntu discourse thread.

on May 21, 2020 12:00 PM

What a brilliant day 1 of GitOps Days it was. Weeks of hard work from a great team went into this, as was quite apparent. Minor glitches, some last minute shuffling of speakers, but apart from that very very seamless. (You can still sign up and get links to the recordings.)

I had a bit of an unusual role: I was DJing at an online event.

Some questions online were about the setup and why I wasn’t changing records during playing (well spotted!). So here’s what I used during the event:

  • 2x Technics SL-1210 MK2 decks
  • Allen & Heath Xone:23 mixer
  • Serato time code vinyls
  • Native Instruments Audio 4 DJ interface
  • Dell XPS 13 (9370) with xwax on Ubuntu 20.04

I’ve used this setup for a long time and it’s rock-solid. Having played real vinyl for years, I just never updated my muscle memory to use a CDJ or any other fancy new controller. At some stage I just had to move on from buying records every weekend or two to digital: there’s just so much more stuff out there, and you don’t suffer as much from short-lived tracks (and records catching dust).

On the transmission end, I was very lucky that Lucijan (one of my besties) gave me his Windows laptop (in use for making Windows builds of Sitala, their free drum plugin and standalone app). I needed it because we used Zoom for transmission, and only the Windows or Mac versions of Zoom offer the “Original Sound” option, where your sound is sent as-is, which you very much want when you’re playing live music. I also used his Zoom H4n recorder as an external soundcard.

To get a better view of what I’m doing, I mounted an external webcam on a USB extension on a broomstick (yeah I know, very professional!). The webcam is the piece of equipment that most urgently needs replacing. It didn’t cope well when it got darker here in Berlin (it was 23:59 here when day 1 ended), and I had to add additional lighting. Also, the disco ball didn’t come out quite as well as I wanted.

All in all, I had a great time supporting the event and look forward to day 2. The vibe was incredible: the audience was super friendly and we had great conversations. I also learned that one of the speakers, Vuk Gojnic (Deutsche Telekom), used to be a DJ, and we’re loosely planning to play at an event near you when this pandemic is over. We also had somebody who used to VJ in the late nineties. I love interactions like these: such diverse interests beyond GitOps.

Today I’ll also make sure to move and sit a bit more during the times when I’m not playing. I was standing most of yesterday.

One thing that was asked for as well was a playlist. Looking at this I’m quite surprised how much I managed to play on day 1.

The Rebirth - Evil Vibrations
Daniël Leseman - Ease The Pain (Extended Mix)
Yuksek & Bertrand Burgalat - Icare (Yuksek Remix)
Mooqee & Beatvandals - Player (2019 Disco Rework)

Afterclapp - BRZL
Claudia - Deixa Eu Dizer (iZem ReShape)
Dengue Dengue Dengue - Simiolo (Cumbia Cosmonauts Remix)
Twerking Class Heroes - Vanakkam
Romare - Down the line (it takes a number)

Jorge Ben - Ponta de Lança Africano
Edu Lobo - Viola Fora De Moda (Cau Lopez Remix)

Quantic & Nickodemus feat Tempo & The Candela Allstars - Mi Swing Es Tropical

Cocotaxi - Dejala Corre
Daniel Haaksman - Puerto Rico (Neki Stranac Moombahton Mix)
Cupcake Project - This Ain't No Boogie (Prosper & Adam Polo Remix)
Frajle - Pare Vole Me Extended Club (Gramophonedzie remix)

Fela Kuti and The Africa 70 - Shakara (Diamond Setter Edit)

Romare - The Blues (It Began in Africa)
Owiny Sigoma Band - Doyoi Nyajo Nam (Quantic Dub)
Zed Bias - Trouble in the Streets (feat. Mark Pritchard)
Gramophonedzie - Why Dont You

David Walters - Mama

Please join us for day 2 of GitOps Days - the schedule just looks great. I look forward to seeing you there! 💞

on May 21, 2020 07:21 AM

Sometimes you just want to use awk or plain bash to search for an exception in a log file, and when you go to Google, Stack Overflow, DuckDuckGo, or any other place to do a search, you find nothing, or at least no solution that fits your needs.

In my case, I wanted to know where a package was generating a conflict for a friend, and ended up scratching my head, because I didn’t want to write yet another domain-specific function for the openQA test framework. Being very stubborn, I ended up with the following solution:

journalctl -k | awk 'BEGIN { group=0 }
/ACPI BIOS Error \(bug\)/,/ACPI Error: AE_NOT_FOUND/ {
	print group"|", $0
	if ($0 ~ /ACPI Error: AE_NOT_FOUND,/) { print "EOL"; group++ }
}'

This is short for:

  • Define $START_PATTERN as /ACPI BIOS Error \(bug\)/, and $END_PATTERN as /ACPI Error: AE_NOT_FOUND/
  • Look for $START_PATTERN
  • Look for $END_PATTERN
  • If you find $END_PATTERN, print an EOL marker (not strictly needed, since the group counter is incremented anyway) and increment the group counter

And there you go: that’s how to search for exceptions in logs. Of course it could get more complicated, because you can have nested cases and whatnot, but for now this does exactly what I need:

10| May 20 12:38:36 deimos kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PCI0.LPCB.HEC.CHRG], AE_NOT_FOUND (20200110/psargs-330)
10| May 20 12:38:36 deimos kernel: ACPI Error: Aborting method \PNOT due to previous error (AE_NOT_FOUND) (20200110/psparse-529)
10| May 20 12:38:36 deimos kernel: ACPI Error: Aborting method \_SB.AC._PSR due to previous error (AE_NOT_FOUND) (20200110/psparse-529)
10| May 20 12:38:36 deimos kernel: ACPI Error: AE_NOT_FOUND, Error reading AC Adapter state (20200110/ac-115)
11| May 20 12:39:12 deimos kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PCI0.LPCB.HEC.CHRG], AE_NOT_FOUND (20200110/psargs-330)
11| May 20 12:39:12 deimos kernel: ACPI Error: Aborting method \PNOT due to previous error (AE_NOT_FOUND) (20200110/psparse-529)
11| May 20 12:39:12 deimos kernel: ACPI Error: Aborting method \_SB.AC._PSR due to previous error (AE_NOT_FOUND) (20200110/psparse-529)
11| May 20 12:39:12 deimos kernel: ACPI Error: AE_NOT_FOUND, Error reading AC Adapter state (20200110/ac-115)
12| May 20 13:37:41 deimos kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PCI0.LPCB.HEC.CHRG], AE_NOT_FOUND (20200110/psargs-330)
12| May 20 13:37:41 deimos kernel: ACPI Error: Aborting method \PNOT due to previous error (AE_NOT_FOUND) (20200110/psparse-529)
12| May 20 13:37:41 deimos kernel: ACPI Error: Aborting method \_SB.AC._PSR due to previous error (AE_NOT_FOUND) (20200110/psparse-529)
12| May 20 13:37:41 deimos kernel: ACPI Error: AE_NOT_FOUND, Error reading AC Adapter state (20200110/ac-115)

So I could later write some code that splits each line on the pipe (|), discards the group id, and adds that record to an array of hashes: [ {group: id, errors: [error string .. error string]} ]
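As a rough sketch of that follow-up step (plain shell and awk here, not openQA code; the `group_counts` function name and the sample lines are made up for illustration), the grouped output can be folded back into per-group summaries:

```shell
# group_counts: read "group|message" lines on stdin and print one
# summary line per group id, ignoring the EOL markers.
group_counts() {
    awk -F'\\|' '/\|/ { count[$1]++ }
        END { for (g in count) print "group " g ": " count[g] " lines" }' |
    sort
}

# Demo with a tiny sample mimicking the grouped journalctl output above:
printf '%s\n' \
    '10| ACPI BIOS Error (bug): Could not resolve symbol' \
    '10| ACPI Error: AE_NOT_FOUND, Error reading AC Adapter state' \
    'EOL' \
    '11| ACPI BIOS Error (bug): Could not resolve symbol' |
group_counts
# prints:
# group 10: 2 lines
# group 11: 1 lines
```

The same split-on-pipe idea carries over directly to Perl or Python if the records need to end up in an actual array of hashes.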

on May 21, 2020 12:00 AM

May 20, 2020

The From: addresses used by Launchpad’s bug notifications have changed, to improve the chances of our messages being delivered over modern internet email.

Launchpad sends a lot of email, most of which is the result of Launchpad users performing some kind of action. For example, when somebody adds a comment to a bug, Launchpad sends that comment by email to everyone who’s subscribed to the bug.

Most of Launchpad was designed in an earlier era of internet email. In that era, it was perfectly reasonable to take the attitude that we were sending email on behalf of the user – in effect, being a fancy mail user agent or perhaps a little like a mailing list – and so if we generated an email that’s a direct result of something that a user did and consisting mostly of text they wrote, it made sense to put their email address in the From: header. Reply-To: was set so that replies would normally go to the appropriate place (the bug, in the case of bug notifications), but if somebody wanted to go to a bit of effort to start a private side conversation then it was easy to do so; and if email clients had automatic address books then those wouldn’t get confused because the address being used was a legitimate address belonging to the user in question.

Of course, some people always wanted to hide their addresses for obvious privacy reasons, so since 2006 Launchpad has had a “Hide my email address from other Launchpad users” switch (which you can set on your Change your personal details page), and since 2010 Launchpad has honoured this for bug notifications, so if you have that switch set then your bug comments will be sent out as something like “From: Your Name <>“. This compromise worked tolerably well for a while.

But spammers and other bad actors ruin everything, and the internet email landscape has changed. It’s reasonably common now for operators of email domains to publish DMARC policies that require emails whose From: headers are within that domain to be authenticated in some way, and this is incompatible with the older approach. As a result, it’s been getting increasingly common for Launchpad bug notifications not to be delivered because they failed these authentication checks. Regardless of how justifiable our notification-sending practices were, we have to exist within the reality of internet email as it’s actually deployed.

So, thanks to a contribution from Thomas Ward, Launchpad now sends all its bug notifications as if the user in question had the “Hide my email address from other Launchpad users” switch set: that is, they’ll all appear as something like “From: Your Name <>“. Over time we expect to extend this sort of approach to the other types of email that we send, possibly with different details depending on the situation.

Please let us know if this causes any strange behaviour in your email client. We may not be able to fix all of them, depending on how they interact with DMARC’s requirements, but we’d like to be aware of what’s going on.

on May 20, 2020 02:43 PM

S13E08 – Black cats

Ubuntu Podcast from the UK LoCo

This week we’ve been live streaming on YouTube. We discuss upgrading home networks and optimising power line adapters, WiFi and broadband connections. A bumper crop of network-related command line love and all your wonderful feedback.

It’s Season 13 Episode 08 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

sudo ss -tlp

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on May 20, 2020 11:30 AM

May 18, 2020

Welcome to the Ubuntu Weekly Newsletter, Issue 631 for the week of May 10 – 16, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on May 18, 2020 08:58 PM

May 15, 2020

If you want to draw on your computer, you can always use your mouse. But if you want to really draw, you can upgrade to a drawing tablet (also known as a graphics tablet). Here is one below. It is an entry-level drawing tablet; it has a pressure-sensitive pen that does not require batteries, and the actual tablet surface you draw on looks like an oversized laptop touchpad. Instead of your fingers, you use the pen to move the pointer on your screen.

An entry-level drawing tablet.

In this post we are using the Huion Inspiroy 430P, an entry-level drawing tablet that is supported on Ubuntu. There has been mainline support in the Linux kernel for this tablet since 2015, therefore the following should work with Ubuntu 16.04 or newer. The instructions have been tested on Ubuntu 18.04 LTS and Ubuntu 20.04 LTS. It is suggested to use a recent version of Ubuntu, as there is drawing tablet support in the system settings.

Setting up the Huion 430P on Ubuntu 20.04

Just connect the drawing tablet to a USB port on your computer. That’s it. Just to make sure, run the following. If you do not get similar output, write a comment.

$ xsetwacom --list devices
HUION Huion Tablet Pen stylus id: 15 type: STYLUS
HUION Huion Tablet Pad pad id: 16 type: PAD

Configuring the Huion 430P on Ubuntu 20.04

There are settings in GNOME Shell to configure a drawing tablet. There is no specific configuration for the Huion 430P, but there is for other Huion models. It is easy to add support for the 430P and I have some configuration for it. For now, we accept the default setup. If we want to perform specific configuration, we can always use the xsetwacom utility.

You can use the drawing tablet as a mouse. The small area of the drawing tablet (4.3″ diagonally) is mapped directly to your full display. That is, unlike a mouse, which you can lift to move the pointer further away, on a tablet the pen position is mapped directly to the full display. This means that a small 4.3″ active tablet area might be too dense/sensitive for your Full HD or 4K display. Applications like GIMP, Inkscape and Blender each have support for configuring input devices like drawing tablets. In them you can configure whether you want to map the active area to the whole screen or to the window’s drawing area.
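If the whole-screen mapping feels too dense, the xsetwacom utility mentioned above can remap the tablet outside of any particular application. A sketch, assuming the device names reported by `xsetwacom --list devices` earlier; the output name HDMI-1 is a placeholder (list yours with `xrandr`):

```shell
# Map the tablet's active area to a single monitor instead of the
# whole desktop (HDMI-1 is a placeholder output name):
xsetwacom set "HUION Huion Tablet Pen stylus" MapToOutput "HDMI-1"

# Map it back to the whole desktop:
xsetwacom set "HUION Huion Tablet Pen stylus" MapToOutput "desktop"

# Pad buttons can be remapped too, e.g. the first button to undo:
xsetwacom set "HUION Huion Tablet Pad pad" Button 1 "key +ctrl z -ctrl"
```

These settings are per-session; to keep them, put the commands in a startup script.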

Configuring GIMP

We have installed GIMP 2.10 (snap package) and we have started drawing with the Brush tool. We use the default settings and switched the Dynamics to Pressure Size. This means that the pen, which detects pressure, should draw a line of variable size depending on the pressure. But it does not. The first line has been drawn with the mouse, and the second with the drawing tablet. At this stage, the drawing tablet is just smoother than the mouse. We need to properly configure the drawing tablet.

Drawing in GIMP. First line is with the mouse, the second with the drawing tablet.
We have not configured the drawing tablet yet.

In the above screenshot there is a tab called Device Status. It looks like the following. It only shows the Core Pointer and has no information about the drawing tablet. This means that the drawing tablet is not configured. It also means that the drawing tablet will probably work, but only as a mouse. The pressure sensitivity of the stylus will not be taken into account when we draw.

Device Status. Here it shows only the Core Pointer (mouse) and no drawing tablet.

Let’s look into Edit → Input Settings in GIMP to activate the drawing tablet. There are two input devices for the Huion tablet, one for the Pad and one for the Stylus. By default, the Mode of both was Disabled. In the following, I switched them to Screen, as shown below. It says that the X axis has no curve, meaning that this drawing tablet has no tilt support. In addition, we have not yet configured any of the buttons on the drawing tablet. Note that both the stylus and the pad have buttons. We just want to draw already!

Configuring the input devices in GIMP for the Huion 430P drawing tablet.

Let’s see again the Device Status tab in GIMP. There are entries for the Pad and the Pen (Stylus). Ignore for now the Pad settings (in red). We have the Core Pointer (Mouse) and the Pen, both with the exact same drawing settings.

Device Status showing both the Mouse (Core Pointer) and the Pen (Huion Tablet Pen).

It is time to draw again, with the mouse and then with the pen, and compare. The pen is smoother, and the line evidently responds to pressure. The thickness depends on the pressure we exert on the pen!

The line on the left was drawn with the mouse. The one on the right, with the pen of the drawing tablet.

Drawing with the pen

We are going to draw with the pen using the Paintbrush tool. And we are going to experiment with the Dynamics option of the Paintbrush tool. And here is the most important tip of the day:

When you change the settings of a tool in the GIMP with the mouse, you are changing the mouse settings for this tool. You need to use the pen to change the pen settings!

Here I tried all the stock Dynamics presets in GIMP (see bottom left). I used black and white, and a black-to-white gradient. Some are really colorful if you use other colors. You can also create your own dynamics!

We switched to the Ink tool. When we draw, we feel like we are using a calligraphy pen.

Another tool is the MyPaint Brush. It has many types of brushes. The following screenshot shows about half of them.

Different types of brushes. The one selected is a 2B pencil.


Apart from using the drawing tablet in GIMP, you can also use it in Inkscape and Blender. It is especially useful in Grease Pencil in Blender.

A drawing tablet is a big upgrade to drawing on your Ubuntu desktop. You can start off with a Huion drawing tablet such as the Huion 430P. It is available on AliExpress at around €20 from the official Huion Store.

The Huion 430P drawing tablet is supported in Linux thanks to the efforts of the DIGImend project. They have done amazing work adding Linux support for various brands of drawing tablets.

If you end up loving your drawing tablet, please consider donating to the maintainer of the DIGImend project, Nikolai Kondrashov, on Patreon, on Liberapay, or just buy him a coffee.

on May 15, 2020 12:42 AM

May 14, 2020

Ep 90 – O insecto contra-ataca

Podcast Ubuntu Portugal

In this episode we discuss smart plugs, extension cords, cable management and power-consumption monitoring, networks and network equipment, and webcams; we also revisit the ImageMagick-on-Nextcloud question and Ubuntu Server images.

And we still found time to talk about UBports OTA-12 and OTA-13, the PinePhone and the Volla Phone.

You know the drill: listen, subscribe and share!



This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

You can support the podcast by using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get everything for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want.

If you are interested in other bundles not listed in the notes, use the link anyway and you will also be supporting us.

This week we recommend the bundle: Software Development by O’Reilly, via the affiliate link.

Attribution and licenses

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License.

This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorization.

on May 14, 2020 09:45 PM

Remediating sites

Stuart Langridge

Sometimes you’ll find yourself doing a job where you need to make alterations to a web page that already exists, and where you can’t change the HTML, so your job is to write some bits of JavaScript to poke at the page, add some attributes and some event handlers, maybe move some things around. This sort of thing comes up a lot with accessibility remediations, but maybe you’re working with an ancient CMS where changing the templates is a no-no, or you’re plugging in some after-the-fact support into a site that can’t be changed without a big approval process but adding a script element is allowed. So you write a script, no worries. How do you test it?

Well, one way is to actually do it: we assume that the way your work will eventually be deployed is that you’ll give the owners a script file, they’ll upload it somehow to the site and add a script element that loads it. That’s likely to be a very slow and cumbersome process, though (if it wasn’t, then you wouldn’t need to be fixing the site by poking it with JS, would you? you’d just fix the HTML as God intended web developers to do) and so there ought to be a better way. A potentially better way is to have them add a script element that points at your script on some other server, so you can iterate on that and then eventually send over the finished version when done. But that’s still pretty annoying, and it means putting that on the live server (“a ‘staging’ server? no, I don’t think we’ve got one of those”) and then having something in your script which only runs if it’s you testing. Alternatively, you might download the HTML for the page with Save Page As and grab all the dependencies. But that never works quite right, does it?

The way I do this is with Greasemonkey. Greasemonkey, or its Chrome-ish cousin Tampermonkey, has been around forever, and it lets you write custom scripts which it then takes care of loading for you when you visit a specified URL. Great stuff: write your thing as a Greasemonkey script to test it and then when you’re happy, send the script file to the client and you’re done.

There is a little nuance here, though. A Greasemonkey script isn’t exactly the same as a script in the page. This is partially because of browser security restrictions, and partially because GM scripts have certain magic privileged access that scripts in the page don’t have. What this means is that the Greasemonkey script environment is quite sandboxed away; it doesn’t have direct access to stuff in the page, and stuff in the page doesn’t have direct access to it (in the early days, there were security problems where in-page script walked its way back up the object tree until it got hold of one of the magic Greasemonkey objects and then used that to do all sorts of naughty privileged things that it shouldn’t have been able to, and so it all got rigorously sandboxed away to prevent that). So, if the page loads jQuery, say, and you want to use that, then you can’t, because your script is in its own little world with a peephole to the page, and getting hold of in-page objects is awkward. Obviously, your remediation script can’t be relying on any of these magic GM privileges (because it won’t have them when it’s deployed for real), so you don’t intend to use them, but because GM doesn’t know that, it still isolates your script away. Fortunately, there’s a neat little trick to have the best of both worlds; to create the script in GM to make it easy to test and iterate, but have the script run in the context of the page so it gets the environment it expects.

What you do is, put all your code in a function, stringify it, and then push that string into an in-page script. Like this:

// ==UserScript==
// @name     Stuart's GM remediation script
// @version  1
// @grant    none
// ==/UserScript==

function main() {
    /* All your code goes below here... */

    /* ...and above here. */
}

let script = document.createElement("script");
script.textContent = "(" + main.toString() + ")();";
document.body.appendChild(script);

That’s it. Your code is defined in Greasemonkey, but it’s actually executed as though it were a script element in the page. You should basically pretend that that code doesn’t exist and just write whatever you planned to inside the main() function. You can define other functions, add event handlers, whatever you fancy. This is a neat trick; I’m not sure if I invented it or picked it up from somewhere else years ago (and if someone knows, tell me and I’ll happily link to whoever invented it), but it’s really useful; you build the remediation script, doing whatever you want it to do, and then when you’re happy with it, copy whatever’s inside the main() function to a new file called whatever.js and send that to the client, and tell them: upload this to your creaky old CMS and then link to it with a script element. Job done. Easier for you, easier for them!

on May 14, 2020 05:05 PM

I met up with the excellent hosts of the The Changelog podcast at OSCON in Austin a few weeks back, and joined them for a short segment:

That podcast recording is now live!  Enjoy!

The Changelog 256: Ubuntu Snaps and Bash on Windows Server with Dustin Kirkland
Listen on

on May 14, 2020 04:39 PM

May 11, 2020

Hello penguins, I hope this short post helps someone.

As you know, I have been using Ubuntu since 2004, and I must confess that I like it more every day.

It has now been some years (five at this point) since I left the country where I was born, Venezuela. Now I live in the United States of America, a place where I have been able to put down roots and start a new life.

In my professional life, I have had the opportunity to start and advance many new projects. But one thing is always the same: in many projects, some partners look at me strangely when they see my computer desktop and notice that it does not run Windows.

So they usually ask me, “And what is that?” Obviously, I answer “LINUX, Ubuntu”.

Today, I would like to describe the procedure I used to upgrade my Ubuntu 18.04 to the new version 20.04.

First of all, I made sure that my Ubuntu was completely up to date. For this I refreshed the repository indexes by opening a terminal and running:

sudo apt update

Once the repository indexes were updated, I proceeded to upgrade the complete system with the command:

sudo apt upgrade

Because I had packages pending upgrade, they were downloaded and updated.

When this was ready, I took the opportunity to uninstall everything the system no longer needed, executing the command:

sudo apt autoremove

I really prefer having as much free space as possible on my hard drives before upgrading, so I cleared the APT package cache by running:

sudo apt clean

At the end of the uninstallation and cleaning process, I restarted my computer, logged in again and reopened my terminal (I use “Deepin Terminal” with FISH). Then I had everything ready to start the upgrade to Ubuntu 20.04. So I ran:

sudo do-release-upgrade

This procedure asked me several confirmation questions and made it clear that it would take some time to complete.

I waited patiently (it took about an hour) and when it finished, my computer rebooted. When I came back, everything was ready and working perfectly.

My only recommendation concerns those who do not have a fast internet connection: in that case I would recommend doing the upgrade from an external medium, or a clean installation (if you are going to do a clean installation, remember to back up your data first so you can restore it afterwards).

If you want to download the ISO file to burn to a DVD or write to a USB drive, try downloading here:

Download Ubuntu 20.04 LTS

Images can be downloaded from a location near you.

You can download ISOs and flashable images for:

  • Ubuntu Desktop and Server for AMD64
  • Less Frequently Downloaded Ubuntu Images
  • Ubuntu Cloud Images
  • Kubuntu
  • Lubuntu
  • Ubuntu Budgie
  • Ubuntu Kylin
  • Ubuntu MATE
  • Ubuntu Studio
  • Xubuntu

on May 11, 2020 08:36 PM

Ubuntu 20.04

Rolando Blanco

I have finally been able to upgrade my Ubuntu to version 20.04. As we know, the upgrade path from 18.04 via the ‘do-release-upgrade’ command took a while to arrive; however, just yesterday I tried it and it worked perfectly.

At first glance I have found that my system is working much, much better, especially in terms of resource consumption. If it was already good before, the statistics (I use htop) now show that it is much better optimized.

On the other hand, my system runs on a DELL Inspiron 17 7000-7737 Series, and everything works: all the hardware components were recognized without any major problems.

As usual, this LTS version will have 5 years of support (until April 2025) for both its desktop and server editions.

In addition to having its original flavor, we can count on its additional flavors that you can find HERE.

Finally, all the applications that I had installed, work perfectly.

For now I will stop here, testing and reviewing the new changes and improvements. Soon I will be posting everything new that I find, as well as testing and commenting on its new features.

on May 11, 2020 07:55 PM

May 07, 2020

Full Circle Weekly News #170

Full Circle Magazine

The Structure and Administration of the GNU Project Announced
CTO calls for patience after devs complain promised donations platform has stalled
Raspberry Pi 4 with 2GB RAM Reduced to $35
Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust

on May 07, 2020 10:01 AM

For some odd reason, I couldn’t quickly find (2 minutes in the manpage and some more on DuckDuckGo) a way to import a GPG key.

Just because one of the repos gives this ugly message:

Repository vscode does not define additional 'gpgkey=' URLs.
Warning: File 'repomd.xml' from repository 'vscode' is signed with an unknown key 'EB3E94ADBE1229CF'.

    Note: Signing data enables the recipient to verify that no modifications occurred after the data
    were signed. Accepting data with no, wrong or unknown signature can lead to a corrupted system
    and in extreme cases even to a system compromise.

    Note: File 'repomd.xml' is the repositories master index file. It ensures the integrity of the
    whole repo.

    Warning: We can't verify that no one meddled with this file, so it might not be trustworthy
    anymore! You should not continue unless you know it's safe.

I know you can use zypper ar --gpg-auto-import-keys, but I didn’t do that when I added the repo (and I’m too lazy to remove it and add it again just to see if it works).

rpm --import
zypper ref
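Once the key is imported, a standard rpm idiom (not zypper-specific) lists the public keys the rpm database now knows about, so you can confirm the repo’s key is actually there:

```shell
# List imported GPG public keys; the short key id (the last 8 hex
# digits of the key, e.g. be1229cf for the key above) shows up in the
# package version field.
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'
```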
on May 07, 2020 12:00 AM

May 06, 2020

So you got that awesome new Raspberry Pi High Quality camera that they released last week, but you don’t really know what to do with it?

Well, there is salvation: Ubuntu Core can turn it into a proper streaming device for you and help you use it for your Zoom meetings, giving you a really professional look!

What you need:

  • A Raspberry Pi 3 (a 2 will work too, but you will need ethernet or a WLAN dongle)
  • An SD card
  • The new High Quality Camera
  • A C-Mount or CS-Mount lens
  • A tripod/stand (optional but really helpful)

Setting up the Pi:

Attach the camera to the CSI port of the Raspberry Pi.

Download the Ubuntu Core 18 image and write it to an SD card.

Boot the Pi with a keyboard and monitor attached and run through the setup wizard to create a user and configure the WLAN.

Now ssh into the Pi as it tells you to on the screen.

On the Pi you do:

$ snap set system pi-config.start-x=1
$ snap install picamera-streaming-demo
$ sudo reboot # to make the start-x setting above take effect

This is it for the Pi side. You can check that the setup worked by pointing your browser at the stream (replace IP_OF_YOUR_PI with the actual IP address):


You should see the picture of the camera in your browser…

The PC side:

To make the stream from your newly created Ubuntu Core network camera available to the video conferencing applications on your desktop, we need to make it show up on your PC as a V4L /dev/video device.

Luckily, the Linux kernel has the awesome v4l2loopback module that helps with this. Let’s install it on the PC:

$ sudo apt install v4l2loopback-dkms
... [ some compiling ] ...
$ ls /dev/video*
/dev/video0  /dev/video1  /dev/video2
$ sudo modprobe v4l2loopback
$ ls /dev/video*
/dev/video0  /dev/video1  /dev/video2  /dev/video3

Loading the module created a new /dev/video3 device that we will use …
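As an optional aside (using v4l2loopback’s documented module parameters), you can give the loopback device a fixed number and a friendly name, so it is easier to pick out in application device lists:

```shell
# Reload v4l2loopback with a fixed device number and a label, so video
# apps show "Pi HQ Camera" instead of "Dummy video device".
# exclusive_caps=1 helps some apps treat it as a real capture device.
sudo modprobe -r v4l2loopback
sudo modprobe v4l2loopback video_nr=9 card_label="Pi HQ Camera" exclusive_caps=1
ls /dev/video9
```

If you do this, point the ffmpeg command at /dev/video9 instead of /dev/video3.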

Now we need to capture the stream from the Pi and route it into this /dev/video3 device. To do this we will use ffmpeg (replace IP_OF_YOUR_PI with the actual IP address):

$ sudo snap install ffmpeg
$ sudo snap connect ffmpeg:camera
$ sudo ffmpeg -re -i http://IP_OF_YOUR_PI:8000/stream.mjpg -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f v4l2 /dev/video3

And that’s it …

If you do not yet have any video conferencing software, try out the zoom-client snap …

$ sudo snap install zoom-client

Set up an account and select the “Dummy Camera Device” in the Video Input Settings.

The whole thing will then look like this (hopefully with a less ugly face in it though 🙂 ):


And this is it … the camera quality is far better than anything you get from a similarly priced USB camera or a built-in laptop one, and you can replace the lens with a wider-angle one, etc.

on May 06, 2020 04:16 PM

May 05, 2020

Hello students,

I no longer have access to your proposal or emails, thus the open letter on my blog.

If you allowed commenting before the student proposal deadline, I, along with other admins and mentors, tried to help you improve your proposal. Some of you took the suggestions and sharpened your presentation, fleshed out your timeline and in general created a proposal you can be proud of.

If you did not allow commenting or only uploaded your proposal right before the deadline, you missed out on this mentoring opportunity, and for that I am sorry. That cut us off from a vital communication link with you.

This proposal process, along with fixing some bugs and creating some commits, means that you have real experience you can take with you into the future. I hope you also learned how to use IRC/Matrix/Telegram channels to get information, and to help others as well. Even if you do not continue your involvement with the KDE Community, we hope you will profit from these accomplishments, as we have.

We hope that your experiences with the KDE community up to now make you want to continue to work with us, and become part of the community. Many students whom we were not able to accept previously were successfully accepted later. Some of those students now are mentoring and/or part of the administration team, which is, in our eyes, the zenith of GSoC success.

Some of you we were unable to accept because we could not find suitable mentors. The GSoC team is asking us this year to have three mentors per student, because the world has become so uncertain in this pandemic time. So more developers who will mentor are a precious resource.

Almost every single proposal we got this year is work we want and need, or we wouldn't have published "Ideas" to trigger those proposals. If you are interested in doing this work and do not need the funding and deadlines that GSoC provides, we would welcome working with you outside of GSoC. In fact, each year we have Season of KDE which provides some mentoring, structure and timeline and no funding. This has been very successful for most mentees. And of course all are welcome to join our worldwide army of volunteers, who code, fix bugs, triage bug reports, write, analyze, plan, administer, create graphics, art, promo copy, events, videos, tutorials, documentation, translation, internationalization, and more! It is the KDE community who makes the software, keeps it up-to-date, plans and hosts events, and engages in events planned and hosted by others.

Please join the KDE-Community mailing list and dig in! Hope to see you at KDE Akademy.

on May 05, 2020 08:38 PM

May 02, 2020

Debian project leader

This month contained the first week and a half or so of my term as Debian Project Leader. So far my focus has been getting up to speed and keeping the gears turning with day to day DPL tasks. The updates listed here will also be available on the DPL blog, where I aim to make more frequent updates.

During May, Debian Brazil will host Debian talks throughout the month which they will stream to their YouTube channel. You can find the schedule in this git repository, most of the talks will be in Portuguese, but on the 4th of May at 21:00 UTC, I’ll be covering some Debian project topics for an hour or so and take some questions if there’s time left.

2020-04-19: Handover session with Sam, our outgoing DPL. We covered a lot of questions I had and main areas that the DPL works in. Thanks to Sam for having taken the time to do this.

2020-04-21: First day of term! Thank you to everybody who showed support and have offered to help!

2020-04-21: Request feedback from the trademark team on an ongoing trademark dispute.

2020-04-21: Join the GNOME Advisory Board as a representative from Debian.

2020-04-21: Reply on an ongoing community conflict issue.

2020-04-21: Update Debian project summary for SPI annual report.

2020-04-21: Received a great e-mail introduction from Debian France and followed up on that.

2020-04-21: Posted “Bits from the new DPL” to debian-devel-announce.

2020-04-22: Became Debian’s OSI Affiliate representative.

2020-04-22: Reply to a bunch of media inquiries for interviews, will do them later when initial priorities are on track.

2020-04-23: Resign as Debian FTP team trainee and mailing list moderator. In both these areas there are enough people taking care of it and I intend to maximise available time for DPL and technical areas in the project.

2020-04-25: Review outgoing mail for trademark team.

2020-04-25: Answer some questions in preparation for DAM/Front Desk delegation updates.

2020-04-26: Initiated wiki documentation for delegation updates process.

2020-04-27: Update delegation for the Debian Front Desk team.

2020-04-29: Schedule video call with Debian treasurer team.

2020-04-29: OSI affiliate call. Learned about some Open Source projects including OpenDev, OpenSourceMatters, FOSS Responders and Democracy Labs.

Debian Social

Work on the Debian Social project is progressing, we plan to start a separate blog syndicated to Planet Debian that contains progress and status updates. I’ve been terrible at tracking the work we’ve been doing on this, so for now, here are some micro updates:

  • We currently have more than 30 beta testers using and testing the Debian Social services.
  • PeerTube seems to be working quite well for the kind of needs Debian has for a video platform. Our instance currently hosts 1379 videos (most of them from DebConf and other Debian meetings) with a video archive source size of 449.9GB (videos are also re-encoded to other resolutions, which uses additional space) and 2 093 170 total views. We currently follow 9 other instances and have 2 users who have added non-DebConf content.
  • We set up a Jitsi instance for testing. After a bunch of initial hiccups it seems to be very stable now. By default videos are 720p, but we capped it down to 480p so that it works better for people who have weaker connections. Please avoid using Firefox with this service for now, since it doesn’t support some features that reduce bandwidth, which leads to a drastically worse experience for other participants in the call. This should get better with some experimental new features in Firefox that should land in version 76 and that can be enabled in configuration flags. In the meantime, please use Chromium or another webkit(ish) based browser. We also changed from using Google’s STUN servers (which broker P2P connections when there are just two participants in a call) to using upstream’s STUN servers, which should result in somewhat better privacy.
  • It’s now possible for Debian projects to authenticate against Salsa, although not all the sites we host support that yet. We’ve filed some bugs with upstream too, with promising results so far. In the meantime, Debian is also looking at other solutions for identity management, like Keycloak and LemonLDAP NG. It will probably be a significant amount of time before this problem is completely solved, so at some point we’re going to have to decide whether we want to keep authentication using Debian services as a must-have for our services in order to leave their beta state (otherwise they seem to be working fine).

MiniDebConf Online

In the DebConf video team, we’ve been wondering whether we have all the tools required to successfully host a DebConf (or even a mini DebConf) entirely online. There’s really just one way to know for sure, so we’re going to host MiniDebConf Online from 28 May to 31 May. The first two days will be an online MiniDebCamp, where we can try to hold online sprints, meetings and general chit-chat. The last two days will be for talks and lightning talks, with BoFs likely to take place throughout the 4 days (this will probably be decided once we have a content team). Announcements should go out within the next week; in the meantime, save the dates.

Debian package uploads

2020-04-07: Upload package flask-autoindex (0.6.6-1) to Debian unstable.

2020-04-07: Upload package gamemode (1.5.1-1) to Debian unstable.

2020-04-08: Accept MR#2 for live-tasks (add usbutils to live-task-standard).

2020-04-08: Upload package live-tasks (11.0.2) to Debian unstable (Closes: #955526, #944578, #942837, #942834).

2020-04-08: Close live-config bug #655198 (Only affects squeeze which is no longer supported).

2020-04-08: Upload package live-config (11.0.1) to Debian unstable.

2020-04-08: Upload package calamares (3.2.22-1) to Debian unstable.

2020-04-15: Upload package xabacus (8.2.6-1) to Debian unstable.

2020-04-16: Merge MR#1 for gnome-shell-extension-dashtodock (new upstream release).

2020-04-16: Upload package gnome-shell-extension-dashtodock (67+git20200225-1) to Debian unstable.

2020-04-16: Merge MR#1 for gnome-shell-extension-hard-disk-led (fix some lintian issues).

2020-04-16: Merge MR#1 for gnome-shell-extension-system-monitor (fix some lintian issues).

2020-04-17: Upload package calamares (3.2.23-1) to Debian unstable.

2020-04-17: Upload package catimg (2.6.0-1) to Debian unstable (Closes: #956150).

2020-04-17: Upload package fabulous (0.3.0+dfsg1-7) to Debian unstable (Closes: #952242).

2020-04-17: Upload package gnome-shell-extension-system-monitor (38+git20200414-32cc79e-1) to Debian unstable (Closes: #956656, #956171).

2020-04-17: Upload package gnome-shell-extension-arc-menu (45-1) to Debian unstable (Closes: #956168).

2020-04-18: Upload package toot (0.26.0-1) to Debian unstable.

2020-04-23: Update packaging for gnome-shell-extension-tilix-shortcut, upstream section needs some additional work before release is possible.

2020-04-23: Upload package xabacus (8.2.7-1) to Debian unstable.

2020-04-27: Upload package gnome-shell-extension-dash-to-panel (37-1) to Debian unstable (Closes: #956162, #954978).

2020-04-27: Upload package gnome-shell-extension-dash-to-panel (37-2) to Debian unstable.

2020-04-27: Upload package gnome-shell-extension-dashtodock (68-1) to Debian unstable.

2020-04-30: Merge MR#8 for gamemode (add symbol files) (Closes: #952425).

2020-04-30: Merge MR#9 for gamemode (reduce number of -dev packages generated).

2020-04-30: Merge MR#10 for gamemode (deal better with upgrades on a buggy version).

2020-04-30: Manually merge changes from MR#11 for gamemode (packaging fixes).

2020-04-30: Upload package gamemode (1.5.1-2) to Debian unstable.

on May 02, 2020 01:47 PM
We share our impressions of Ubuntu 20.04, in our opinion the best Ubuntu release to date. We also look at the alternatives available for using open phones. Listen to us on: Ivoox, Telegram, Youtube, and in your usual podcast client via the RSS feed
on May 02, 2020 08:41 AM

May 01, 2020

Daintree is a website to manage some of your AWS resources: since this is an early preview, at the moment it supports a subset of Networking, EC2, SQS, and SNS

The AWS Console is an amazing piece of software: it has hundreds of thousands of features, it is reliable, and it is the front end of an incredible world. However, like any software, it is not perfect: sometimes it is a bit slow, so many features can be confusing, and it is clear it has evolved over time, so there are a lot of different styles; if it were made from scratch today, some choices would probably be different.

Daintree was born to fix one particular problem of the AWS Console: the impossibility of seeing resources from multiple regions in the same view.

I started working on it last month, and now I’m ready to publish a first version: it’s still quite young and immature, but I’ve started using it to check some resources on AWS accounts I have access to. A lot of features are still missing, of course, and if you like, you can contribute to its development.

Multiple region support

The main reason Daintree exists is to display resources from multiple regions on the same screen: why limit yourself to one, when you can have 25?

Also, changing enabled regions doesn’t require a full page reload, just a click on a flag: Daintree will smartly request resources from the freshly enabled regions.


Fast role switching

If you belong to an AWS organization and you have multiple accounts, you probably switch roles often: such an operation on the original AWS console requires a full page reload, and it always brings you to the homepage.

On Daintree, changing roles will only reload the resources in the page you are currently in, without having to wait for a full page reload!


Coherent interface

Beauty is in the eye of the beholder, so claiming Daintree is more beautiful than the original console would be silly: however, Daintree has been built to be coherent: all the styling is made thanks to the Gitlab UI project, and the illustrations are made by Katerina Limpitsouni from unDraw.

This guarantees a coherent and polished experience all over the application.

Free software

Daintree is licensed under AGPL-3.0, meaning is truly free software: this way, everyone can contribute improving the experience of using it. The full source code is available over Gitlab.

The project doesn’t have any commercial goal, and as explained in the page about security, no trackers are present on the website. To help Daintree, you can contribute and spread the word!

Fast navigation

Daintree heavily uses Javascript (with Vue.js) to avoid having to reload the page: it also tries to perform as few operations as possible while keeping your view of your resources up to date. This way, you don’t waste your precious time waiting for pages to load.

And much more!

While implementing Daintree, new features have been introduced to make life easier for whoever uses it. As an example, you can create an Internet Gateway and attach it to a VPC on the same screen, without having to wait for the gateway to be created. Or, while viewing a security group, you can click on the linked security groups, without having to look for them or remember complicated IDs. If you have any idea on how to improve workflows, please share it with the developers!

Supported components

Daintree is still in the early stages of development, so the number of supported resources is quite limited. You can report which features you’d like to see to the developers, or you can implement them!

Daintree allows you to view VPCs, Subnets, Internet Gateways, NAT Gateways, Route Tables, Elastic IPs, Security Groups, Instances, SNS, and SQS. You can also create, delete, and edit some of these resources. Development is ongoing, so remember to check the changelog from time to time.

What are you waiting for? Go try Daintree and enjoy your AWS resources in a way you haven’t before!

Needless to say, the Daintree website is not affiliated with Amazon, or AWS, or any of their subsidiaries. ;-)

For any comment, feedback, criticism, or suggestion on how to improve my English, reach me on Twitter (@rpadovani93) or drop me an email


on May 01, 2020 05:30 PM
Ubuntu is the most popular Linux distro today. Developers love it for its flexibility and reliability. You can find Ubuntu everywhere - from IoT devices to servers and on the Cloud. Yet, most Ubuntu users are not yet subscribed to the Ubuntu Advantage program, which for many is (and will remain) available for free. Ubuntu Advantage (UA) offers key services, security, and updates from Canonical to Ubuntu users. In 20.04, a new user experience arrives to simplify users’ access to these key offerings.
on May 01, 2020 12:00 AM

April 30, 2020

Today I needed to capture a rather large kernel stack dump; this is rather trivial using virsh. Using virt-manager I created a VM named vm-focal, and in the guest ran:

sudo systemctl enable serial-getty@ttyS0.service 

Then on the host running the VM I ran:

virsh console vm-focal

Then all I needed to do was produce the stack dump and the console output was successfully dumped by virsh. Easy.

on April 30, 2020 12:01 PM
With the release of Ubuntu 20.04 LTS (Focal Fossa) the Ubuntu Server Guide has received a major set of updates and has moved to a new location on the Ubuntu website! The new location makes it much easier to read and contribute improvements. There is a link at the bottom of each page that points directly to the corresponding Discourse page, which contains the source for each page of the Ubuntu Server Guide.
on April 30, 2020 12:00 AM

April 28, 2020


Currently Ubuntu does not offer an easy way to set up a "global" DNS for all network connections: whenever you connect to a new WiFi network, if you don't want to use the DNS server provided by the WiFi, you are forced to go to the network settings and manually set your preferred DNS server.

With this brief guide I want to show how you can setup a global DNS to be used for all the WiFi and network connections, both old and new ones. I will also show you how to use DNSSEC, DNS-over-TLS and randomized MAC addresses for all connections.

This guide is written for Ubuntu 20.04, but in general it will work on every distribution using systemd-resolved and NetworkManager.

Step 1: setup the Global DNS in resolved

In Ubuntu (as well as many other distributions), DNS is managed by systemd-resolved. Its configuration is in /etc/systemd/resolved.conf. Open that file and add a DNS= line inside the [Resolve] section listing your preferred DNS servers. For example, if you want to use Cloudflare's 1.1.1.1, your resolved.conf should look like this:

[Resolve]
DNS=1.1.1.1 1.0.0.1 2606:4700:4700::1111 2606:4700:4700::1001

Once you are done with the changes, reload systemd-resolved:

sudo systemctl restart systemd-resolved.service

You can check your changes with resolvectl status: you should see your DNS servers on top of the output, under the Global section:

$ resolvectl status
Global
       LLMNR setting: no
MulticastDNS setting: no
  DNSOverTLS setting: opportunistic
      DNSSEC setting: allow-downgrade
    DNSSEC supported: no
  Current DNS Server: 1.1.1.1
         DNS Servers: 1.1.1.1 1.0.0.1 2606:4700:4700::1111 2606:4700:4700::1001

This however won't be enough to use that DNS! In fact, the Global DNS of systemd-resolved is just a default option that is used whenever no DNS servers are configured for an interface. When you connect to a WiFi network, NetworkManager will ask the access point for a list of DNS servers and will communicate that list to systemd-resolved, effectively overriding the settings that we just edited. If you scroll down the output of resolvectl status, you will see the DNS servers added by NetworkManager. We have to tell NetworkManager to stop doing that.

Step 2: Disable DNS processing in NetworkManager

In order for systemd-resolved to consider our global DNS, we need to tell NetworkManager not to provide any DNS information for new connections. Doing that is easy: just create a new file /etc/NetworkManager/conf.d/dns.conf (or any name you like) with this content:

# do not use the dhcp-provided dns servers, but rather use the global
# ones specified in /etc/systemd/resolved.conf
[main]
dns=none

To apply the settings either restart your computer or run:

sudo systemctl reload NetworkManager.service

Now, when you connect to a new network, NetworkManager won't push the list of DNS servers to systemd-resolved and only the global ones will be used. If you check resolvectl status, you should see that, for every interface, there is no DNS server specified. If you specified 1.1.1.1 as your DNS server, then you can also head over to 1.1.1.1/help to verify that it has been correctly set up.


If you would like to enable DNSSEC and/or DNS-over-TLS, the file to edit is /etc/systemd/resolved.conf. You can add the following options:

  • DNSSEC=true if you want all queries to be DNSSEC-validated. The default is DNSSEC=allow-downgrade, which attempts to use DNSSEC if it works properly, and falls back to disabling validation otherwise.
  • DNSOverTLS=true if you want all queries to go through TLS. You can also specify DNSOverTLS=opportunistic to attempt to use TLS if it is supported, and fall back to the plaintext DNS protocol if it's not.

With those options, my /etc/systemd/resolved.conf looks like this:

[Resolve]
DNS=1.1.1.1 1.0.0.1 2606:4700:4700::1111 2606:4700:4700::1001
DNSSEC=allow-downgrade
DNSOverTLS=opportunistic

Note that I'm using DNSOverTLS=opportunistic because I found that some access points with captive portals don't work properly when using DNSOverTLS=true. Also note that DNSSEC=true may cause some pain because there are still many misconfigured domain records out there that will make DNSSEC validation fail.

Like before, to apply the changes, run:

sudo systemctl restart systemd-resolved.service

And to verify the changes:

resolvectl status

If you're using 1.1.1.1, you can also go to 1.1.1.1/help to verify DNS-over-TLS.

Random MAC address

NetworkManager supports 3 options to have a random MAC address (also known as "cloned" or "spoofed" MAC address):

  1. wifi.scan-rand-mac-address controls the MAC address used when scanning for WiFi devices. This goes into the [device] section
  2. wifi.cloned-mac-address controls the MAC address for WiFi connections. This goes into the [connection] section
  3. ethernet.cloned-mac-address controls the MAC address for Ethernet connections. This goes into the [connection] section

The first option can take either yes or no. The last two can take various values, but if you want a randomized MAC address you are interested in these two:

  • random: generate a new random MAC address each time you establish a connection
  • stable: this generates a MAC address that is kinda random (it's a hash), but will be reused when you connect to the same network again.

random is better if you don't want to be tracked, but it has the disadvantage that captive portals won't remember you. stable, instead, allows captive portals to remember you, so they won't show up again every time you reconnect.
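The idea behind stable can be illustrated with a toy sketch: hash a per-machine secret together with the connection's ID, and use the result as a locally administered MAC address. (This is only the concept; NetworkManager's actual derivation differs.)

```python
import hashlib

def stable_mac(connection_id: str, machine_secret: str) -> str:
    """Toy derivation of a deterministic, locally administered MAC address."""
    digest = hashlib.sha256((machine_secret + connection_id).encode()).digest()
    octets = bytearray(digest[:6])
    # Set the locally-administered bit, clear the multicast bit.
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{b:02x}" for b in octets)

# The same network always yields the same address; different networks differ.
print(stable_mac("HomeWifi", "machine-secret"))
print(stable_mac("CoffeeShop", "machine-secret"))
```

Because the hash is seeded with a machine-local secret, the address is stable per network but still doesn't reveal your hardware MAC to anyone.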

Whatever options you want to go with, put them into a file /etc/NetworkManager/conf.d/mac.conf (or any other name you like). Mine looks like this:

# use a random mac address when scanning for wifi networks
[device]
wifi.scan-rand-mac-address=yes

# use a random mac address when connecting to a network
[connection]
wifi.cloned-mac-address=random

To apply the settings either restart your computer or run:

sudo systemctl reload NetworkManager.service

You can test your changes with:

ip link
on April 28, 2020 06:30 AM

April 25, 2020

An - EPYC - Focal Upgrade

Julian Andres Klode

Ubuntu “Focal Fossa” 20.04 was released two days ago, so I took the opportunity yesterday and this morning to upgrade my VPS from Ubuntu 18.04 to 20.04. The VPS provides:

  • SMTP via Postfix
  • Spam filtering via rspamd
  • HTTP(S) via nginx and letsencrypt (certbot)
  • Weechat relay
  • OpenVPN server
  • Shadowsocks proxy
  • Unbound recursive DNS resolver, for the spam filtering

I rebooted one more time than necessary, though, as my cloud provider Hetzner recently started offering 2nd generation EPYC instances which I upgraded to from my Skylake Xeon based instance. I switched from the CX21 for 5.83€/mo to the CPX11 for 4.15€/mo. This involved a RAM downgrade - from 4GB to 2GB, but that’s fine, the maximum usage I saw was about 1.3 GB when running dose-distcheck (running hourly); and it’s good for everyone that AMD is giving Intel some good competition, I think.

Anyway, to get back to the distribution upgrade - it was fairly boring. I started yesterday by taking a copy of the server and launching it locally in a lxd container, and then tested the upgrade in there; to make sure I’m prepared for the real thing :)

I got a confusing prompt from postfix as to which site I’m operating (which is a normal prompt, but I don’t know why I see it on an upgrade), and prompts about a few config files I had changed locally.

As the server is managed by ansible, I just installed the distribution config files and dropped my changes (setting DPkg::Options { "--force-confnew"; }; in apt.conf), and then after the upgrade, ran ansible to redeploy the changes (after checking what changes it would do and adjusting a few things).

There are two remaining flaws:

  1. I run rspamd from the upstream repository, and that’s not built for focal yet. So I’m still using the bionic binary, and have to keep bionic’s icu 60 and libhyperscan4 around for it.

    This is still preventing CI of the ansible config from passing for focal, because it won’t have the needed bionic packages around.

  2. I run weechat from the upstream repository, and apt can’t tell the versions apart. Well, it can for the repositories, because they have Size fields - but the installed status file does not. Hence, apt merges the installed version with the first repository it sees.

    What happens is that it installs from, but then it believes the installed version is from and replaces it each dist-upgrade.

    I worked around it by moving the repo to the front of sources.list, so that the it gets merged with that instead of the one, as it should be, but that’s a bit ugly.

I also should start the migration to EC certificates for TLS, and 0-RTT handshakes, so that the initial visit experience is faster. I guess I’ll have to move away from certbot for that, but I have not investigated this recently.

on April 25, 2020 07:28 PM

April 24, 2020

Quick Rust Comparison

Bryan Quigley

I've been wanting to try out Rust with something very simple as a first pass through the language.

Rust Impressions

Although I didn't do much with functions on this quick pass - I love the ability to not have the order of main in a program to matter.

Super helpful error messages. Here is an example:

warning: value assigned to `temp` is never read
 --> src/
4 |     let mut temp=0u32;
  |             ^^^^
  = note: `#[warn(unused_assignments)]` on by default
  = help: maybe it is overwritten before being read?

I know others have said this, but the Rust compiler feels like it was designed to help me code, rather than just throw errors.


I decided to write a simple unoptimized version of the Fibonacci sequence. My goal was to take enough time to be noticeable...

On my first pass:
  • Rust runs took 1m34s (using cargo run)

  • Python took more than 6 minutes

  • C got 7 seconds

Clearly I must have done something wrong...

It turns out that by default it has debug info and checks that slow Rust down. So a

cargo build --release

Then it was faster than C.. and I realized I needed to turn off C's debug bits too. Adding -O2 -s -DNDEBUG to gcc helped:

gcc -O2 -s -DNDEBUG fib.c
The final results (all approximate):
  • Python: 6+ minutes.

  • C: 1.101s

  • Rust: 0.95s
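Most of Python's gap here is per-iteration interpreter overhead rather than the arithmetic itself. If you want to see the cost without waiting six minutes, the same wrap-around loop can be timed on a much smaller run (a sketch; absolute numbers depend on your machine):

```python
import timeit

def fib_loop(iterations, maxvalue=2_000_000_000):
    """Same wrap-around Fibonacci loop as the benchmark, parameterised."""
    previous, current = 0, 1
    for _ in range(iterations):
        if current >= maxvalue:
            previous, current = 0, 1
        previous, current = current, previous + current
    return current

# Time a small run; the full 2e9-iteration benchmark scales roughly linearly.
elapsed = timeit.timeit(lambda: fib_loop(1_000_000), number=1)
print(f"1e6 iterations took {elapsed:.3f}s")
```

Multiply the measured time by 2000 to estimate the full run, which lines up with the minutes-long result above.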

The Rust

fn main() {
    let mut previous = 0u32;
    let mut current = 1u32;
    let mut temp;
    let maxvalue = 2000000000u32;

    for _n in 0..2000000000 {
        if current >= maxvalue {
            previous = 0;
            current = 1;
        }
        temp = current;
        current = previous + current;
        previous = temp;
    }
    println!("{}", current);
}

The C

#include <stdio.h>

int main() {
    unsigned long int previous = 0;
    unsigned long int current = 1;
    unsigned long int temp;
    unsigned long int maxvalue = 2000000000;

    for (int n = 0; n < 2000000000; n++) {
        if (current >= maxvalue) {
            previous = 0;
            current = 1;
        }
        temp = current;
        current = previous + current;
        previous = temp;
    }
    printf("%lu", current);
}

The Python3

previous = 0;
current = 1;
temp = 0;
maxvalue = 2000000000;

for n in range(2000000000):
    if current >= maxvalue:
        previous=0; current=1;
    temp = current;
    current = previous + current;
    previous = temp;
print(current)
on April 24, 2020 04:31 PM

The Kubuntu Team is happy to announce that Kubuntu 20.04 LTS has been released, featuring the beautiful KDE Plasma 5.18 LTS: simple by default, powerful when needed.

Codenamed “Focal Fossa”, Kubuntu 20.04 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

The team has been hard at work through this cycle, introducing new features and fixing bugs.

Under the hood, there have been updates to many core packages, including a new 5.4-based kernel, KDE Frameworks 5.68, Plasma 5.18 LTS and KDE Applications 19.12.3


Kubuntu has seen many updates for other applications, both in our default install, and installable from the Ubuntu archive.

Elisa, the wonderful new music collection player from KDE, replaces Cantata as our default. KDE Connect has a major new feature release. Krita, Kdevelop, Digikam, Latte-dock, and many many more applications are updated.

Applications that are key for day to day usage are included and updated, such as Firefox, VLC and Libreoffice.

For this release we provide Thunderbird for email support; however, the KDE PIM suite (including Kontact and KMail) is still available to install from the archive.

For a list of other application updates, upgrading notes and known bugs be sure to read our release notes.

Download 20.04 LTS or read how to upgrade from 19.10 and 18.04.

Note: From 19.10, there may be a delay of a few hours to days between the official release announcement and the Ubuntu Release Team enabling upgrades. From 18.04, upgrades will not be enabled until approximately the date of the first 20.04 point release at the end of July.

on April 24, 2020 08:58 AM

Ubuntu 20.04 LTS… on Big Iron!

Elizabeth K. Joseph

Today we saw the release of Ubuntu 20.04 LTS!

Alongside the fanfare of a new server and desktop release for AMD64, and my own beloved Xubuntu, this new version walks in the path of 16.04 and 18.04 to be the third LTS to support the s390x mainframe architecture for IBM Z.

If you have been following my adventures over the past year, you’ll know that I’m just shy of my one year anniversary at IBM, where I’ve been working on the IBM Z team to spread the word among open source communities about the mainframe. The epic hardware on these machines was definitely one of the hooks for me, but the big one was the amount of open source tooling that was being developed for them. The ability to run Linux on them sealed the deal. I wrote last week about some new hardware, and mentioned then that Ubuntu 20.04 supports the new Secure Execution technology for virtual machines.

So, what else is new for Ubuntu 20.04? At the top of my list would be improved support for the new IBM z15 hardware, released back in September. A number of changes made it into the 19.10 release, but 20.04 builds further upon this, especially around support for the compression and encryption features of the z15. Additionally, Subiquity is now the default installer for Ubuntu Server for s390x, which you can read more about here: A first glimpse at subiquity, the new server installer, now also on s390x.

This is just a taste of what is in store for users of Ubuntu on the mainframe. The list of major changes, along with the Launchpad bug/feature report numbers that tracked development throughout this cycle can be found over on the Ubuntu on the Big Iron blog in a post by Frank Heimes: A new Ubuntu LTS is available: Focal Fossa aka 20.04.

Finally, that fossa stuffed toy is mighty cute, right? You can have one too! With a donation to the World Wildlife Fund to “Adopt a Fossa.” Just keep it away from your lemur toys.

on April 24, 2020 01:25 AM