January 25, 2021

TL;DR: My Raspberry Pi 400 boots straight into BBC BASIC! This blog post covers which parts I smashed together to make this work, and why.

10 REM TL;DR
20 PRINT "HELLO"

40 years ago this Christmas, I got my first “personal computer”. It was a Sinclair ZX81 with 1KiB of RAM and a tape deck for storage. Every time I powered it on, like all ‘81 owners, I was greeted with this.
on January 25, 2021 12:00 PM

January 24, 2021

I originally titled this post “Don’t be afraid of the command line”, but decided “Black Oblong of Monospace Mystery” was more fun. Is the command line really scary? It doesn’t feel like that to me, but I grew up with an interface which looks like this on first boot. Not exactly friendly, but I was 9 at the time, and this was normal. Typing things using the keyboard was pretty much a daily activity.
on January 24, 2021 12:00 PM

January 23, 2021

Full Circle Weekly News #197

Full Circle Magazine

Full Circle Weekly News is now on Spotify: https://open.spotify.com/show/0AYBF3gfbHpYvhW0pnjPrK or search for ‘full circle weekly news’ in the Spotify app.

Project Lenix from CloudLinux Gets a Name
Fedora Kinoite, a New Immutable OS
on January 23, 2021 01:16 PM

January 22, 2021

New times, new solutions

José Antonio Rey

Our world is changing every day, and drastic changes can happen really quickly. Technology is advancing at a much faster pace than it did a hundred years ago, and humans are adapting to those changes. The way we think, the way we operate, and even how we communicate have drastically changed in the last 15 years.

Just as humans change, the Ubuntu community is also changing. People interact in different ways. Platforms that did not exist before are now available, and the community changes as the humans in it change as well.

When we started the Local Communities project several years ago, we did it with the sole purpose of celebrating Ubuntu. The ways in which we celebrated included release parties, conferences, and gatherings on IRC. However, we have lately seen a decline in the momentum we had with regard to participation in this project. We have not reviewed the project since its inception, and the Community Council believes it is time to take a deep dive into how we can regain that momentum and continue getting together to celebrate Ubuntu.

As such, we are putting together the Local Communities Research Committee, an independent entity overseen by the Community Council, which will help us understand the behavior of Local Community teams, how to better adapt to their needs, and to create a model that is suitable for the world we are living in today.

We are looking for between 6 and 9 people, with at least one person per continent. We require that you be an Ubuntu Member, not be a current Community Council member, and have experience working with worldwide communities; we also strongly recommend that you have participated in a Local Community team in the past. If this sounds like you, instructions on how to apply can be found here: https://discourse.ubuntu.com/t/local-communities-research-committee/20186/4

I am personally very excited about this project, as it will allow us to gather perspectives from people all around the world, and to better adapt the project for you, the community.

If you have any questions or want to chat with me, you can always reach out to me at jose at ubuntu dot com, or jose on irc.freenode.net.

Eager to see your nominations/applications!

on January 22, 2021 11:36 PM

January 21, 2021

Ep 126 – Galope

Podcast Ubuntu Portugal

In this episode we talked about #SamsungUnpacked and #twitterchatpt, and discussed a few more tools and techniques: the mass migration of WhatsApp users to Telegram, and why WhatsApp is bad. We also covered the PeerTube news and the new Slimbook Titan.

You know the drill: listen, subscribe and share!

$ sudo apt install libemail-outlook-message-perl

Clearing kernel caches, the “traditional” way:
sync; echo 3 > /proc/sys/vm/drop_caches
Clearing kernel caches, with the new method:
sync; sysctl vm.drop_caches=3
sync – synchronize cached writes to persistent storage
sysctl – configure kernel parameters at runtime
  • https://twitter.com/search?q=%23SamsungUnpacked&src=typed_query&lf=on
  • https://twitter.com/JoanaRSSousa
  • https://twitter.com/search?q=%23twitterchatpt&src=hashtag_click
  • https://carrondo.pt/posts/2015-03-06-telegram-e-esquecam-o-whatsapp/
  • https://framablog.org/2021/01/07/peertube-v3-its-a-live-a-liiiiive/
  • https://slimbook.es/pedidos/titan/titan-comprar
  • https://slimbook.es/titan
  • https://slimbook.es/noticias-notas-de-prensa-y-reviews/481-nuevo-slimbook-titan
  • https://github.com/subspacecommunity/subspace
  • https://wiki.ubuntu.com/DiogoConstantino
  • https://wiki.ubuntu.com/Membership/NewMember
  • https://www.humblebundle.com/books/front-end-web-development-packt-books?partner=PUP
  • https://www.humblebundle.com/books/programming-fundamentals-mercury-books?partner=PUP
  • http://keychronwireless.refr.cc/tiagocarrondo
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3


You can support the podcast by using the Humble Bundle affiliate links; when you use those links to make a purchase, a portion of what you pay goes to Podcast Ubuntu Portugal.
You can get everything for 15 dollars, or different portions depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on January 21, 2021 10:45 PM

Compact and Bijou

Ubuntu Blog

Snaps are designed to be self-contained packages of binaries, libraries and other assets. A snap might end up being quite bulky if the primary application it contains has many additional dependencies. This is a by-product of the snap needing to run on any Linux distribution where dependencies cannot always be expected to be installed.

This is offset by the snap being compressed on disk, and by the Snap Store delivering delta updates rather than forcing a full download on each update. Furthermore, the concept of “shared content” or “platform” snaps allows common bundles of libraries to be installed only once and then reused across multiple snaps.

Typically in documentation we detail building snaps with the command line tool snapcraft. Snapcraft has logic to pull in and stage any required dependencies. We generally recommend using snapcraft because it helps automate things and makes the snapping process more reliable.

But what if your application has minimal, or no, dependencies? Your program might be a single binary written in a modern language like Go or Rust. Maybe it’s a simple shell or Python script which requires no additional dependencies. Well, there are a couple of other interesting ways to build a snap we should look at.

How meta

Snapcraft behaviour is controlled by the snapcraft.yaml file. One of the outputs of the snap build process is the snap.yaml. The snap.yaml contains metadata about the contents of the snap, which is consumed by the Snap Store when published, and by snapd on clients at download and installation time.

In the final stages of a snapcraft run, all the assembled components are primed (that is, collated in a folder prior to compression), then compressed into a .snap file. The generated snap.yaml is bundled into the snap file in the /meta folder. It is required by the system which receives the snap.

The snap.yaml has some similarities to the snapcraft.yaml, but is usually generated by snapcraft, not crafted by hand. That doesn’t have to be the case, though. It’s possible to bypass snapcraft completely and create a snap using only the snap.yaml, the snap command, and the binaries, scripts, libraries and other assets which need snapping.

Let’s take an example of snapping a very simple shell script using this method. The “script” (such as it is) is created as bin/tinysnap.sh, but it could equally be a pre-compiled static binary.

#!/bin/bash
echo "Hello world!"

The meta/snap.yaml looks like this.

name: tinysnap
version: 0
summary: A very small shell script
description: |
  This shell script is about as simple as they get.
  But it could do a lot more.
base: core18
apps:
  tinysnap:
    command: bin/tinysnap.sh

Here’s what that directory structure looks like on the disk.

$ tree .
.
├── bin
│   └── tinysnap.sh
└── meta
    └── snap.yaml

2 directories, 2 files

That’s it. There are no bundled dependencies, just the shell script itself. We specify core18 as the base, which means the Ubuntu 18.04 LTS-based core18 snap will be required. This is likely to already be installed on a system which has any snaps installed. The core18 snap contains the /bin/bash binary, so there’s no need for us to bundle that inside our tiny snap. We just ship the shell script itself and the metadata in snap.yaml.

Assembling the snap is very straightforward and fast.

$ snap pack .
built: tinysnap_0_all.snap

We have a small snap!

Small is beautiful

$ ls -l tinysnap_0_all.snap 
-rw-r--r-- 1 alan alan 4096 Jan 21 10:56 tinysnap_0_all.snap

4096 bytes is the smallest we can get it down to, even though the script itself is mere tens of bytes in length. Given 4KiB is likely the smallest allocation unit on disk, I’m not going to stress about the padding in the squashfs file taking it up to that size.

Installing the resulting tiny snap is just the same as for any other locally built package. Specify the --dangerous flag to indicate we accept the risk associated with installing local, unchecked packages.

$ snap install tinysnap_0_all.snap --dangerous
tinysnap 0 installed

Running it is simple. We expose the script outside the snap as tinysnap via the apps stanza in the snap.yaml, so we can just run that.

$ tinysnap
Hello world!

We have specified no plugs, so there will be no interfaces connected with this snap. It’s completely confined. If we wanted to make something which can reach the network, or monitor system usage, we could specify the plugs in the snap.yaml.
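For illustration, here is a sketch of what that might look like; the network interface name is a real snapd interface, while the rest simply mirrors our example:

```yaml
name: tinysnap
version: 0
summary: A very small shell script
description: |
  A variant of the example that is allowed to reach the network.
base: core18
apps:
  tinysnap:
    command: bin/tinysnap.sh
    plugs: [network]
```

After installing a snap built from this, you can inspect the declared interfaces with snap connections tinysnap.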

It’s worth noting the snapcraft command also has a pack option which achieves the same as snap pack, but with extra checks and developer feedback.

$ snapcraft pack .
Snapping |
Snapped tinysnap_0_all.snap

I imagine there’s quite a bit of functionality it would be possible to fit in a shell script, which packs down to 4KiB. It would be interesting to see how much you can squeeze in that space. Anyone up for the challenge?

You can find us over on the snapcraft forums, if you have any questions, comments or want to show off your tiny marvels.

Photo by David Maltais on Unsplash

on January 21, 2021 01:39 PM

January 20, 2021

Standards are boring

Marcin Juszkiewicz

We have made Arm servers boring.

Jon Masters

Standards are boring. Satisfied users may not want to migrate to other boards the market tries to sell them.

So the Arm market is flooded with piles of single-board computers (SBCs). Often they are compliant with standards only when it comes to connectors.

But our hardware is not standard

It is not a matter of ‘let’s produce UEFI-ready hardware’ but rather ‘let’s write EDK2 firmware for the boards we already have’.

Look at the Raspberry Pi, then. It is shitty hardware but it got popular. And a group of people wrote UEFI firmware for it, probably without vendor support even.

Start with EBBR

Each new board should be EBBR compliant from the start. That is easy: take ‘whatever hardware’ and put a properly configured U-Boot on it. Upstreaming support for your small device should not be hard, as it is often based on some already existing hardware.

Add 16MB of SPI flash to store the firmware. Your users will be able to boot an ISO without wondering where on the boot media they need to write bootloaders.

Then work on EDK2 for the board. Do SMBIOS (easy) and keep your existing Device Tree. You are still EBBR compliant. Remember to upstream your work: some people will complain, some will improve your code.


The next step is moving from Device Tree to ACPI. It may take some time to understand why there are so many tables and what ASL is. But as several other systems show, it can be done.

And this brings you to SBBR compliance. Or SystemReady ES, if you like marketing.

SBSA for future design

Designing a new SoC tends to be a matter of “let us take the previous one and improve it a bit”. So this time change it a bit more and make your next SoC compliant with SBSA level 3. All the needed components are probably already included in your Arm license.

Grab the EDK2 support you did for the previous board. Look at the QEMU SBSA Reference Platform support, and look at other SBSA compliant hardware. Copy and reuse their drivers and their code.

Was it worth it?

In the end you will have SBSA compliant hardware running SBBR compliant firmware.

Congratulations, your board is SystemReady SR compliant. Your marketing team may write that you are on the same list as Ampere with their Altra server.

Users can buy your hardware and install whatever BSD or Linux distribution they want. Some will experiment with Microsoft Windows. Others may work on porting Haiku or another exotic operating system.

But none of them will have to think “how do I get this shit running?”. And they will tell their friends that your device is as boring as it should be when it comes to running an OS on it, which means more sales.

on January 20, 2021 04:33 PM

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

In December, we put aside 2100 EUR to fund Debian projects. The first project proposal (a tracker.debian.org improvement for the security team) was received and quickly approved by the paid contributors; we then opened a request for bids, and the bid winner was announced today (it was easy, we had only one candidate). Hopefully this first project will be completed before our next report.

We’re looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In December, 12 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 7.0h (out of 14h assigned), thus carrying over 7h to January.
  • Ben Hutchings did 16.5h (out of 16h assigned and 9h from November), thus carrying over 8.5h to January.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 16.5h (out of 26h assigned), thus carrying over 9.5h to January.
  • Holger Levsen did 3.5h coordinating/managing the LTS team.
  • Markus Koschany did 26h (out of 26h assigned and 10.75h from November), thus carrying over 10.75h to January.
  • Ola Lundqvist did 9.5h (out of 12h assigned and 11h from November), thus carrying over 11.5h to January.
  • Roberto C. Sánchez did 18.5h (out of 26h assigned and 2.25h from November) and gave back the remaining 9.75h.
  • Sylvain Beucler did 26h (out of 26h assigned).
  • Thorsten Alteholz did 26h (out of 26h assigned).
  • Utkarsh Gupta did 26h (out of 26h assigned).

Evolution of the situation

December was a quiet month, as we didn’t have a team meeting or any other unusual activity; we released 43 DLAs.

The security tracker currently lists 30 packages with a known CVE and the dla-needed.txt file has 25 packages needing an update.

This month we are pleased to welcome Deveryware as new sponsor!

Thanks to our sponsors

Sponsors that joined recently are in bold.


on January 20, 2021 09:39 AM

Consolidating Positions

Stephen Michael Kellat

The social media landscape in the United States has been getting weirder as 2021 has continued to unfold. I don’t need to recount the dramatics about various sites being knocked off the Internet. Those stories have gotten boring.

What is interesting at this point is what is happening on sites like Facebook and Twitter. They’ve been trying to get their sites cleared of extremist content. The attack on the United States Capitol has given them impetus to finally push forward in that respect.

Unfortunately it appears that these efforts do lead to some collateral damage. My experience with one site has been deteriorating steadily over the past few months and the decline accelerated after January 6th. When your timeline stops updating for half a day and simply remains frozen it makes you feel as if something is wrong. Having it happen repeatedly makes it seem as if it is time to move on from using that site.

Although the options are out there to host my own social media presence, frankly I do not want the headache. I’m most likely going to focus more on my blog as we head deeper into 2021. That seems to be the least headache-inducing option.

I have no clue which way 2021 may go. When a certain social media site crashes my browser due to out of memory issues so frequently it seems like we have some issues to handle this year…

on January 20, 2021 01:36 AM

January 19, 2021


PostgreSQL is a powerful, open source object-relational database system that is known for reliability, feature robustness, and performance. PostgreSQL is becoming the preferred database for more and more enterprises. It is currently ranked #4 in popularity amongst hundreds of databases worldwide according to the DB-Engines Ranking.

The basics first – What is PostgreSQL?

PostgreSQL is a relational database. It stores data points in rows, with columns as different data attributes; a table stores multiple related rows. The relational database is the most common type of database in use. PostgreSQL differentiates itself with its focus on integrations and extensibility: it works with a lot of other technologies and conforms to various database standards, which helps ensure it is extensible.

In recent years, many companies have officially supported the development of the PostgreSQL project. Let’s dig deeper into why it is gaining popularity.

Why use PostgreSQL?

An enterprise-class database, PostgreSQL boasts sophisticated features such as Multi-Version Concurrency Control (MVCC), point-in-time recovery, tablespaces, asynchronous replication, nested transactions, online/hot backups, a sophisticated query planner/optimiser, and write-ahead logging for fault tolerance.

PostgreSQL works on most popular operating systems – almost all Linux and Unix distributions, Windows, Mac OS X. Its open source nature makes it easy to upgrade or extend. In PostgreSQL, you can define your own data types, build custom functions, and even write code in another programming language (e.g. Python) without recompiling the database. And, of course, PostgreSQL is free!

The reliable database

PostgreSQL isn’t just relational, it’s object-relational, and supports complex structures and a breadth of built-in and user-defined data types. It provides extensive data capacity and is trusted for its data integrity. This gives it some advantages over other open source SQL databases like MySQL, MariaDB and Firebird. PostgreSQL comes with many features that help developers build applications, help administrators protect data integrity, and help both build fault-tolerant environments.

It offers its users a huge (and growing) number of functions. These help programmers to create new applications, admins better protect data integrity, and developers build resilient and secure environments. PostgreSQL gives its users the ability to manage data, regardless of how large the database is.

The extensible database

In addition to being free and open source, PostgreSQL is highly extensible. There are two areas in which PostgreSQL shines when users need to configure and control their database. First, it is compliant to a high degree with the SQL standard, which increases its interoperability with other applications.

Second, PostgreSQL gives users control over the metadata. PostgreSQL is extensible because its operation is catalog-driven. One key difference between PostgreSQL and standard relational database systems is that PostgreSQL stores much more information in its catalogs: not only information about tables and columns, but also information about data types, functions, access methods, and so on. These tables can be modified by the user, and since PostgreSQL bases its operation on these tables, this means that PostgreSQL can be extended by users. By comparison, conventional database systems can only be extended by changing hard-coded procedures in the source code or by loading modules specially written by the DBMS vendor.
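To make the catalog-driven extensibility concrete, here is a small illustrative sketch (the type and function names are invented for this example) of adding a data type and a function in plain SQL, with no recompilation of the server:

```sql
-- Hypothetical example: a user-defined composite type...
CREATE TYPE point2d AS (x double precision, y double precision);

-- ...and a user-defined function operating on it, written in SQL itself
CREATE FUNCTION dist_from_origin(p point2d) RETURNS double precision AS $$
  SELECT sqrt((p).x * (p).x + (p).y * (p).y);
$$ LANGUAGE SQL IMMUTABLE;

SELECT dist_from_origin(ROW(3.0, 4.0)::point2d);  -- returns 5
```

Both definitions land as rows in the system catalogs (pg_type and pg_proc respectively), which is exactly the mechanism described above.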

How is PostgreSQL used?

PostgreSQL has a rich history of support for advanced data types, and supports a level of performance optimisation usually associated with commercial database counterparts like Oracle and SQL Server. PostgreSQL is used as the primary data store or data warehouse for many web, mobile, geospatial, and analytics applications.

PostgreSQL can store structured and unstructured data in a single product. Unstructured data, found in audio, video, emails and social media postings, can be used to improve customer service, discover new product requirements, and find ways to prevent customer churn, among countless other uses.

PostgreSQL also has superior online transaction processing (OLTP) capabilities and can be configured for automatic failover and full redundancy, making it suitable for financial institutions and manufacturers. As a highly capable analytical database, it can be integrated effectively with mathematical software such as MATLAB and R. Thanks to PostgreSQL’s replication capabilities, websites can easily be scaled out to as many database servers as you need.
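As a rough sketch of what enabling that replication involves, the primary’s postgresql.conf carries settings along these lines (the parameter names are real PostgreSQL settings; the values are illustrative, so consult the documentation for your version):

```
# postgresql.conf on the primary -- illustrative values
wal_level = replica        # write enough WAL detail for streaming replicas
max_wal_senders = 10       # allow up to 10 simultaneous replication connections
```

Standby servers then connect over the replication protocol and can serve read-only queries while streaming changes from the primary.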

PostgreSQL, when used with the PostGIS extension, supports geographic objects and can be used as a geospatial data store for location-based services and geographic information systems (GIS).

Day-N operational challenges of using PostgreSQL

Despite the benefits, there are challenges that enterprises face when it comes to PostgreSQL adoption. PostgreSQL has one of the fastest-growing communities, but unlike traditional database vendors, the PostgreSQL community does not have the luxury of a mature database ecosystem. In addition, PostgreSQL is often used in tandem with several other databases, such as Oracle or MongoDB; each database requires specialised expertise, and hiring technical staff with the relevant PostgreSQL skill set can be a challenge for enterprises. Beyond management tools for PostgreSQL, DevOps teams and database professionals need to be able to manage multiple databases from multiple vendors without having to change existing processes.

Second, as PostgreSQL is open source, different IT development teams within an organisation may start using it organically. This can give rise to another challenge: no single point of knowledge for all instances of PostgreSQL in the enterprise IT landscape. Further, there is redundancy and duplication of work, as different teams may be solving the same problem with it independently.

How can start-ups and larger enterprises deal with these challenges? Let’s look at a Canonical offering that aims to solve this.

Optimised PostgreSQL, managed for you

Managed PostgreSQL from Canonical is a trusted, secure, and scalable database service deployable on the cloud of your choice or on-prem. Never worry about maintenance operations or updates – we handle that for you. With Canonical managing your PostgreSQL, you get the following benefits.

  • Canonical’s PostgreSQL experts manage your database servers. You do not have to go through the delay and difficulties of hiring DevOps engineers who know how to stand up a high availability cluster.
  • Canonical’s open source app management team does the heavy operational lifting so you can focus on building your applications.
  • Canonical will manage PostgreSQL on any conformant Kubernetes on the cloud of your choice or on-premises. This means you get to bring your cloud, and we handle the rest.


PostgreSQL, an advanced enterprise-class open source relational database backed by over 30 years of community development, is the backbone of many key technologies and apps we use every day. Canonical supports PostgreSQL through a fully managed database service that automates the mundane tasks of application operations, so enterprises and developers can focus on building their core apps with PostgreSQL. To optimise your deployment and improve quality and economics, speak to our PostgreSQL engineers today.

Get in touch for a PostgreSQL deployment assessment

on January 19, 2021 08:54 AM

Because I always forget

To reboot a SUPERMICRO server:

Just remember that the default user/password might still be ADMIN/ADMIN :)

ipmitool -I lanplus -H $HOST -U USER -P PASSWORD power cycle

To connect to the serial console

ipmitool -I lanplus -H $HOST -U USER -P PASSWORD sol activate
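A couple of related subcommands I also forget are power status and sol deactivate; here is a tiny dry-run loop (the host is a placeholder, and ADMIN/ADMIN is the insecure factory default mentioned above) that just prints the full invocations:

```shell
# Print, without executing, the ipmitool invocations for common BMC tasks.
# HOST is a placeholder; ADMIN/ADMIN is the factory-default credential pair.
HOST=192.0.2.10
for action in "power status" "power cycle" "sol activate" "sol deactivate"; do
  echo "ipmitool -I lanplus -H $HOST -U ADMIN -P ADMIN $action"
done
```

Copy-paste the line you need and substitute your BMC address and real credentials.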

on January 19, 2021 12:00 AM

January 18, 2021

Welcome to the Ubuntu Weekly Newsletter, Issue 666 for the week of January 10 – 16, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on January 18, 2021 10:23 PM

About CVE-2020-27348

Daniel Llewellyn

Well, this is a doozy. Made public a while back was a security vulnerability in many snap packages and the Snapcraft tool used to create them. Specifically, this is the vulnerability identified as CVE-2020-27348. It unfortunately affects many, many snap packages… What is it? The issue at the heart of this vulnerability is the prolific way that we in the Snapcraft community override the Dynamic Linker search path. The Dynamic Linker is the component that searches for and loads libraries into memory when an application needs them.
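To illustrate the mechanism in question (the paths here are hypothetical; this is a sketch, not the exact code from any affected snap), a launcher that overrides the dynamic linker search path looks something like this:

```shell
# A snap launcher typically prepends snap-internal directories to the
# dynamic linker's search path. SNAP stands in for the snap's mount point.
SNAP=/snap/example/1
unset LD_LIBRARY_PATH   # assume it starts unset, as it commonly does
LD_LIBRARY_PATH="$SNAP/lib:$SNAP/usr/lib:$LD_LIBRARY_PATH"
echo "$LD_LIBRARY_PATH"
```

Because LD_LIBRARY_PATH started out unset, the result ends with a trailing colon, i.e. an empty path element, which the dynamic linker treats as the current working directory; that detail is at the heart of CVE-2020-27348.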
on January 18, 2021 06:05 PM

Mixtape: Passagers

Daniel Holbach

https://www.mixcloud.com/dholbach/passagers/ It’s a new year and I have new hope that we’ll all get to enjoy loud music together soon again. Never before have I danced as much in my own four walls… in celebration of online dance workshops, pyjama parties with friends and spontaneous dance intermezzos at home here’s my morning mix for you. Caetano Veloso - It’s a Long Way (Mettabbana Edit) Cipő (BéTé Rework) Thornato - Shu Swamp Kotelett & Zadak - Take Me Back Urubu Marinka & Superbreak - That Loving Feeling (Dj Steef Edit) Blick Bassy - Aké (Brynjard Edit) Lukas Endhardt - Solo Tu Nicola Cruz - Bruxo (Von Party Remix) Kusht - O i Og i O Ft.
on January 18, 2021 10:08 AM

Leveraging LaTeX In This Time

Stephen Michael Kellat

From time to time I like to bring up fun adventures in LaTeX. In these strange times in the United States it is important to look at somewhat practical applications beyond the normal reports and formal papers most people think of. With a Minimal Working Example we can explore an idea.

The Comprehensive TeX Archive Network has a package known as newspaper which is effectively subject to nominative determinism. You can make things with it that look like newspapers out of the 1940s-1960s in terms of layout. The page on CTAN shows nice examples of its use and provides a nice story as to why the package was created.

The example source file on CTAN has a bug in it, though, so we’re going to make a new one based on it. I am also going to add, but not yet utilize, the markdown package in the example.

For these purposes I will assume you’re using the latest LTS version of Ubuntu and have TeX Live installed.

Here is our basic neighborhood newspaper:

\documentclass{article}     % Since this is not a report or book we use the article class
\usepackage[english]{babel} % Multilingual handling that a later package hooks into
\usepackage{newspaper}      % The newspaper package sets things up
\usepackage[T1]{fontenc}    % This is the legacy bit for using Type 1 fonts.
\usepackage{ibarra}         % A serif font.
\usepackage{graphicx}       % Graphics handling package
\usepackage{xcolor}         % Package to allow for use of easier color names
\usepackage{multicol}       % Multi-column handling
\usepackage{markdown}       % Markdown interpreter

                            % Handle quotes smartly in any imported Markdown text
                            % This needs babel loaded above

\usepackage[autostyle, english = american]{csquotes}

\usepackage{hyperref}       % Hypertext support that also allows for control of some PDF metadata
\usepackage{xurl}           % Easier breaking of URLs

                         % The hypersetup stanza is the regular metadata
                         % you would find if you pull up the document
                         % properties in your PDF reader
                         % I have put placeholder values there.
                         % Set allcolors to something other than black
                         % if you are not making a print-first product.
                         % colorlinks is set to true so you don't have
                         % boxed links in your PDF that look odd.

\hypersetup{
  colorlinks=true,
  allcolors=black,
  pdftitle={The Newspaper},
  pdfauthor={Publisher Name Here},
}

\SetPaperName{NEWS}              % Name your newspaper but keep it short
\SetHeaderName{News}             % Running header of your paper's name
\SetPaperLocation{Laser Printer} % Ashtabula?  Cleveland?  Short place name here
\SetPaperSlogan{Semper Supra}    % Keep it short
\SetPaperPrice{FREE}             % You can charge but that is your choice
\date{\today}                    % Unless you have a late edition you will have a different date
\currentvolume{1}                % No negative numbers
\currentissue{1}                 % No negative numbers

\begin{document}

\maketitle                       % This makes the banner at the top of the front page

\begin{multicols*}{3}            % This creates a three column environment for stories to appear.
                                 % The star in this example means that the columns do not balance
                                 % which results in the example stories appearing in one column.

\headline{\sc\Large Headline In Headline Case}  % Headline a story without a byline
Body text here.  % Story text goes here
% Grab story text from an external file using \markdownInput perhaps?

\byline{\sc\Large Headline In Headline Case}{Gumshoe Reporter} % Headlining a story with a byline
Body text here.  % Story text goes here
% Grab story text from an external file using \markdownInput perhaps?

\end{multicols*}

\end{document}

There are many ways to customize and adjust this. A newspaper in the American context is normally set in a serif font. In the example I picked one to use, but the LaTeX Font Catalogue has other serif font options. There is a collection of images of newspaper front pages on the Internet Archive that you can look at for ideas.

Whether it is to make a retro newsletter for your lockdown games group online or to make a stand-in newspaper if regional communications hit an outage, there are possibilities here. Creativity is key.

on January 18, 2021 05:01 AM

January 14, 2021

Ep 125 – Feijoada acidente

Podcast Ubuntu Portugal

We turn dull stories into fantastic adventures and grey events into true fairy tales, or we just talk about Ubuntu and other stuff… Here is another episode of your favourite podcast.

You know the drill: listen, subscribe and share!

  • https://wiki.ubuntu.com/DiogoConstantino
  • https://wiki.ubuntu.com/Membership/NewMember
  • https://svartrecords.com/product/feijoada-international/
  • https://store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam
  • https://www.humblebundle.com/books/linux-apress-books?partner=PUP
  • https://www.humblebundle.com/books/front-end-web-development-packt-books?partner=PUP
  • http://keychronwireless.refr.cc/tiagocarrondo
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3


You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, o Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on January 14, 2021 10:45 PM

January 13, 2021

Find new ways

Torsten Franz

Sometimes it is time to critically question things and look for new ways. This is what we as the Ubuntu Community Council have initiated with the existing Local Communities (LoCo) project.

The LoCos have been an integral part of the Ubuntu family since almost the beginning of Ubuntu. The aim of the LoCos is that people who are involved with Ubuntu find contact persons and like-minded people in their area, so that they are included in the Ubuntu community and also get help with possible questions or problems with Ubuntu. It is also the aim that these local units fill Ubuntu with life and organise events. In the past years they have been an important institution in building the community around Ubuntu.

Last year, the newly elected Community Council wanted to re-staff the international council that oversees the LoCos and called for nominations. Unfortunately, there were not enough candidates to fill the council.

We thought about what we could do to bring new momentum to this community issue. We came up with the idea of setting up a committee with members from all continents to see how we can adjust, improve or even completely rethink the concept. It is expressly desired that we break out of the existing Ubuntu cosmos, look elsewhere, and analyse what good ideas other communities have and how we can perhaps learn something from them. The idea of the Local Communities Research Committee (LCRC) was born, which is supposed to reinvent the LoCos. The whole idea can be found on the Community Hub.

I would be very happy if committed Ubuntu members would like to take on this task and contribute to this LCRC, thus improving Ubuntu and shaping the structures for the future. Actually, there is nothing standing in our way, but of course we have to get going. We accept applications at the mail address community-council at lists.ubuntu.com.

on January 13, 2021 09:30 PM

January 11, 2021

Welcome to the Ubuntu Weekly Newsletter, Issue 665 for the week of January 3 – 9, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on January 11, 2021 11:53 PM


I'm grateful to all translators for their work. But translating everything can break icons. Ubuntu MATE 20.04 has several broken icons, and most of them are fixed in Ubuntu MATE 20.10 already.

Advice: Please do NOT translate the 'Icon' text, just leave that translation blank (""). Copy/pasting the English text will cause superfluous lines in .desktop files and might cause additional work later (if the original name is updated, you will need to copy and paste that string again). So getting a 100% translation score might even be non-optimal.

Ubuntu MATE 20.04.1 with broken icons

You probably know the feeling of being the IT guy for your family (in this specific case, my mother-in-law). Her Linux laptop needed to be upgraded to the latest LTS, so I did that for her.

Back when she got the laptop, I installed a non-LTS release. That was required; otherwise her brand-spanking-new hardware wouldn't have worked correctly.

I tried using the GUI to upgrade the system, but that didn't work. Usually I live in the terminal, so I quickly went to my comfort zone. I noticed the repositories were not available anymore; of course, this was not an LTS. That also meant 'do-release-upgrade' did not work. Fortunately I was around before that tool existed, so I knew to manually modify the apt sources files and run apt-get by hand. The upgrade was a success, of course. But what is that? Why am I missing icons here? I also run Ubuntu MATE on some of my other systems and the icons never broke before. The upgrade seemed to have been flawless, but still something went wrong? No, that couldn't be... and it wasn't.

Switching her desktop to English, instead of Dutch (Nederlands), "fixed" the icons. That is strange, but it provides the user of the laptop with a workaround. Luckily my mother-in-law is proficient in English, but prefers Dutch. And there are enough people (I know some of them) who can not read/write/speak English and are dependent on translations. So I thought I'd go fix the issue (or at least, so I thought).


Ubuntu 20.04: ubuntu mate 20.04

Ubuntu 20.10: ubuntu mate 20.10

The .desktop file

Checking the .desktop file (I'm going to use /usr/share/applications/mate-screensaver-preferences.desktop here as an example), I noticed the following lines:

# Translators: Do NOT translate or transliterate this text (this is an icon file name)!

Hmmm, apparently several translations exist (which generally have been kept identical to the original English text).

Note: if a localized translation exists, that will be used. If no localized translation exists, the original English one will be used.
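To make the failure mode concrete, here is a hypothetical fragment of such a .desktop file (key names follow the Desktop Entry specification; the values are made up for illustration, not copied from the real mate-screensaver-preferences.desktop):

```ini
[Desktop Entry]
Name=Screensaver
Name[nl]=Schermbeveiliging
# Correct: a single untranslated Icon key, used for every locale.
Icon=mate-screensaver
# Broken: a localized Icon key would override the line above for Dutch users.
# If the original icon name is ever renamed, this stale copy keeps pointing
# at an icon that no longer exists, so the icon appears broken.
#Icon[nl]=mate-screensaver
```

With no Icon[nl] line, Dutch desktops fall back to the untranslated Icon value and nothing can go stale.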

Let me have a look at their source code on their GitHub. It contains several .po files, which contain the translations. So it's only a matter of cloning the repository and submitting a pull request... wrong. I had already forked the repository when I noticed the commit log. It shows that the translations are synced from Transifex.

P.S. I should've checked the ubuntu-mate website first, since they have an entire section about translations.


Transifex seems to be a proprietary system for doing translations, but I need to go there to fix this issue, so let's get this fixed. Apparently I need to 'join' the team to even see the strings and translations, let alone fix them. It would be nice if guest (read-only) access were enabled, because then I could at least check whether I would be bothering the correct team. And once you send a request to join, there is no way to track it or see the team members (unless you are part of that team, perhaps). But never mind, let's continue.

Clicking the 'Join team' button, I assumed I would automagically be joined to that team. Somehow that did not happen immediately (apparently it requires human intervention). And I thought this was just going to be a quick 'go in, fix it, leave' thing...

Current status

My translator membership was declined, which I don't mind actually, since I don't want to become a full-fledged translator (I just want to fix this specific bug). A helpful translator (with access) checked it out and is working on it.

Joining all teams for each item/language is quite a hassle (and, once declined, so is messaging them all to ask for fixes), so I'm "only" going to scratch my own itch here. But it seems prudent to give all translators a heads-up about this, so they can fix it in their translations (if applicable), hence this blog post.

If anyone could eventually get the updated translations into Ubuntu 20.04, that would be much appreciated ;-)

Advice: Please do NOT translate the 'Icon' text, just leave that translation blank (""). Copy/pasting the English text will cause superfluous lines in .desktop files and might cause additional work later (if the original name is updated, you will need to copy and paste that string again). So getting a 100% translation score might even be non-optimal.

on January 11, 2021 07:00 PM

January 10, 2021

OpenUK Honours

Stuart Langridge

So, I was awarded a medal.

OpenUK, who are a non-profit organisation supporting open source software, hardware, and data, and are run by Amanda Brock, have published the honours list for 2021 of what they call “100 top influencers across the UK’s open technology communities”. One of them is me, which is rather nice. One’s not supposed to blow one’s own trumpet at a time like this, but to borrow a line from Edmund Blackadder it’s nice to let people know that you have a trumpet.

There are a bunch of names on this list that I suspect anyone in a position to read this might recognise. Andrew Wafaa at ARM, Neil McGovern of GNOME, Ben Everard the journalist and Chris Lamb the DPL and Jonathan Riddell at KDE. Jeni Tennison and Jimmy Wales and Simon Wardley. There are people I’ve worked with or spoken alongside or had a pint with or all of those things — Mark Shuttleworth, Rob McQueen, Simon Phipps, Michael Meeks. And those I know as friends, which makes them doubly worthy: Alan Pope, Laura Czajkowski, Dave Walker, Joe Ressington, Martin Wimpress. And down near the bottom of the alphabetical list, there’s me, slotted in between Terence Eden and Sir Tim Berners-Lee. I’ll take that position and those neighbours, thank you very much, that’s lovely.

I like working on open source things. It’s been a strange quarter-of-a-century, and my views have changed a lot in that time, but I’m typing this right now on an open source desktop and you’re probably viewing it in an open source web rendering engine. Earlier this very week Alan Pope suggested an app idea to me and two days later we’d made Hushboard. It’s a trivial app, but the process of having made it is sorta emblematic in my head — I really like that we can go from idea to published Ubuntu app in a couple of days, and it’s all open-source while I’m doing it. I like that I got to go and have a curry with Colin Watson a little while ago, the bloke who introduced me to and inspired me with free software all those years ago, and he’s still doing it and inspiring me and I’m still doing it too. I crossed over some sort of Rubicon relatively recently where I’ve been doing open source for more of my life than I haven’t been doing it. I like that as well.

There are a lot of problems with the open source community. I spoke about divisiveness over “distros” in Linux a while back. It’s still not clear how to make open source software financially sustainable for developers of it. The open source development community is distinctly unwelcoming at best and actively harassing and toxic at worst to a lot of people who don’t look like me, because they don’t look like me. There’s way too much of a culture of opposing popularity because it is popularity and we don’t know how to not be underdogs who reflexively bite at the cool kids. Startups take venture capital and make a billion dollars when the bottom 90% of their stack is open source that they didn’t write, and then give none of it back. Products built with open source, especially on the web, assume (to use Bruce Lawson’s excellent phrasing) that you’re on the Wealthy Western Web. The list goes on and on and on and these are only the first few things on it. To the extent that I have any influence as one of the one hundred top influencers in open source in the UK, those are the sort of things I’d like to see change. I don’t know whether having a medal helps with that, but last year, 2020, was an extremely tough year for almost everyone. 2021 has started even worse: we’ve still got a pandemic, the fascism has gone from ten to eleven, and none of the problems I mentioned are close to being fixed. But I’m on a list with Tim Berners-Lee, so I feel a little bit warmer than I did. Thank you for that, OpenUK. I’ll try to share the warmth with others.

Yr hmbl crspndnt, wearing his medal

on January 10, 2021 03:30 PM

January 04, 2021

Over the past year there has been focused work on improving the test coverage of the Linux Kernel with stress-ng.  Increased test coverage exercises more kernel code and hence improves the breadth of testing, allowing us to be more confident that more corner cases are being handled correctly.

The test coverage has been improved in several ways:

  1. testing more system calls; most system calls are now being exercised
  2. adding more ioctl() command tests
  3. exercising system call error handling paths
  4. exercising more system call options and flags
  5. keeping track of new features added to recent kernels and adding stress test cases for them
  6. adding support for new architectures (RISC-V, for example)

Each stress-ng release is run with various stressor options against the latest kernel (built with gcov enabled).  The gcov data is processed with lcov to produce human readable kernel source code containing coverage annotations to help inform where to add more test coverage for the next release cycle of stress-ng. 

The Linux Foundation sponsored Piyush Goyal for three months to add test cases that exercise system call failure paths, and I appreciate this help in improving stress-ng. This tedious task was finally completed at the end of 2020 with the release of stress-ng 0.12.00.

Below is a chart showing how the kernel coverage generated by stress-ng has been increasing since 2015. The amber line shows lines of code exercised and the green line shows kernel functions exercised.


One can see that there was a large increase in kernel test coverage in the latter half of 2020 with stress-ng. In all, 2020 saw a ~20% increase in kernel coverage, most of it driven by the gcov analysis; however, there is more to do.

What next?  Apart from continuing to add support for new kernel system calls and features I hope to improve the kernel coverage test script to exercise more file systems; it will be interesting to see what kind of bugs get found. I'll also be keeping the stress-ng project page refreshed as this tracks bugs that stress-ng has found in the Linux kernel.

As it stands, release 0.12.00 was a major milestone for stress-ng as it marks the completion of the major work items to improve kernel test coverage.

on January 04, 2021 04:44 PM

Full Circle Weekly News #195

Full Circle Magazine

Ubuntu’s Snap Theming Will See Changes for the Better
GTK4 Is Available After 4 Years In Development
Linux Mint 20.1 Ulyssa Beta Out

Rescuezilla 2.1.2 Out

Manjaro ARM 20.12 Out

Linux Kernel 5.11 rc1 Out

Bash 5.1 Out

Darktable 3.4 Out

Thunderbird 78.6.0 Out

LibreOffice 7.0.4 Out

Kdenlive 20.12 Out

Anbox Cloud 1.8.2 Out

on January 04, 2021 11:21 AM

January 03, 2021

Wrong About Signal

Bryan Quigley

Another update - it's been 6 months and Signal still does not let you unregister.

Update: Riot was renamed to Element. XMPP info added in a comment.

A couple years ago I was a part of a discussion about encrypted messaging.

  • I was in the Signal camp - we needed something quick and easy for users to set up. Using existing phone numbers makes that easy.
  • Others were in the Matrix camp - we need to start from scratch and make it distributed so no one organization is in control. We should definitely not tie it to phone numbers.

I was wrong.

Signal has been moving in the direction of adding PINs for some time because they realize the danger of relying on the phone number system. Signal just mandated PINs for everyone as part of that switch. Good for security? I really don't think so. They did it so you could recover some bits of "profile, settings, and who you’ve blocked".

Before PIN

If you lose your phone your profile is lost and all message data is lost too. When you get a new phone and install Signal your contacts are alerted that your Safety Number has changed - and should be re-validated.

[Chart: where profile data lives — your devices]

After PIN

If you lost your phone you can use your PIN to recover some parts of your profile and other information. I am unsure if Safety Number still needs to be re-validated or not.

Your profile (or its encryption key) is stored on at least 5 servers, but likely more. It's protected by secure value recovery.

There are many awesome components in this setup, and it's clear that Signal wanted to make it as secure as possible. They wanted to make it a distributed setup so they don't even need to be the only one hosting it. One of the key components is Intel's SGX, which has several known attacks. I simply don't see the value in this, and it means there is a new avenue of attack.

[Chart: where profile data lives — your devices and Signal servers]

PIN Reuse

By mandating user chosen PINs, my guess is the great majority of users will reuse the PIN that encrypts their phone. Why? PINs are re-used a lot to start, but here is how the PIN deployment went for a lot of Signal users:

  1. Get notification of new message
  2. Click it to open Signal
  3. Get mandated to set a PIN before you can read the message!

That's horrible. That means people are in a rush to set a PIN to continue communicating. And now that rushed or reused PIN is stored in the cloud.

Hard to leave

They make it easy to get connections upgraded to secure, but their system to unregister when you uninstall has been down since June 28th at least (last tried on July 22nd). Without it, when you uninstall Signal:

  • you might be texting someone and they respond back but you never receive the messages because they only go to Signal
  • if someone you know joins Signal their messages will be automatically upgraded to Signal messages which you will never receive


In summary, Signal got people to hastily create or reuse PINs for minimal disclosed security benefits. One possibility is that the push for mandatory cloud-based PINs, despite all of the pushback, is because Signal knows of active attacks that these PINs would protect against, likely related to using phone numbers.

I'm trying out Element, which uses the open Matrix network. I'm not actively encouraging others to join me, just exploring the communities that exist there. It's already more featureful and supports more platforms than Signal ever did.

Maybe I missed something? Feel free to make a PR to add comments


kousu posted

In the XMPP world, Conversations has been leading the charge to modernize XMPP, with an index of popular public groups (jabber.network) and a server validator. XMPP is mobile-battery friendly, and supports server-side logs wrapped in strong, multi-device encryption (in contrast to Signal, your keys never leave your devices!). Video calling even works now. It can interact with IRC and Riot (though the Riot bridge is less developed). There is a beautiful Windows client, a beautiful Linux client and a beautiful terminal client, two good Android clients, a beautiful web client which even supports video calling (and two others). It is easy to get an account from one of the many servers indexed here or here, or by looking through libreho.st. You can also set up your own with a little bit of reading. Snikket is building a one-click Slack-like personal-group server, with file-sharing, welcome channels and shared contacts, or you can integrate it with NextCloud. XMPP has solved a lot of problems over its long history, and might just outlast all the centralized services.

Bryan Reply

I totally forgot about XMPP, thanks for sharing!

on January 03, 2021 08:18 PM

January 02, 2021

In episode 100 of Late Night Linux I talked a little bit about trying out Pi Hole and AdGuard to replace my home grown ad blocker based on dnsmasq and a massive hosts file.

I came down in favour of Pi Hole for a couple of reasons, but the deciding factor was that Pi Hole felt a bit more open and was built on top of dnsmasq, which allowed me to reuse my TFTP config for netbooting some devices that needed it.

Now that I’ve been using Pi Hole for a few months I have a much better understanding of its limitations, and the big one for me is performance. Not the performance when servicing DNS requests, but performance when querying the stats data, when reloading block lists, and when enabling and disabling certain lists. I suspect a lot of the problems I was having are down to flaky SD cards.

I fully expect that for most people this will never be a problem, but for me it was an itch I wanted to scratch, so here’s what I did:

Through the actually quite generous Amazon Alexa AWS Credits promotion I have free money to spend on AWS services, so I spun up a t2.micro EC2 instance (1 vCPU, 1GB RAM – approx £10 a month) running Ubuntu.

I installed Pi Hole on that instance along with Wireguard which connects it back to my local network at home. I used this guide from Linode to get Wireguard set up.

The Pi Hole running in AWS hosts the large block files and is configured with a normal upstream DNS server as its upstream (I’m using Cloudflare).

Pi Hole running in AWS configured with Cloudflare as its upstream DNS

I use three Ad block lists:

Pi Hole running on a t2.micro instance is really speedy. I can reload the block list in a matter of seconds (versus minutes on the Pi) and querying the stats database no longer locks up and crashes Pi Hole’s management engine FTL.

The Pi Hole running on my LAN is configured to use the above AWS based Pi Hole as its upstream DNS server and also has a couple of additional block lists for YouTube and TikTok.

This allows me to use Pi Hole on a Pi as the DHCP server on my LAN and benefit from the GUI to configure things. I can quickly and easily block YouTube when the kids have done enough and won’t listen to reason, while the heavy lifting of bulk ad blocking is done on an AWS EC2 instance. The Pi on the LAN caches a good amount of DNS, so everything whizzes along quickly.

Pi Hole on the LAN has a block list of about 3600 hosts, whereas the version running in AWS has over 1.5 million.

All things considered I’m really happy with Pi Hole and the split-load set up I have now makes it even easier to live with. I would like to see an improved Pi Hole API for enabling and disabling specific Ad lists so that I can make it easier to automate (e.g. unblock YouTube for two hours on a Saturday morning). I think that will come in time. The split-load set up also allows for easy fallback should the AWS machine need maintenance – it would be nice to have a “DNS server of last resort” in Pi Hole to make that automatic. Perhaps it already does, I should investigate.

Why not just run Pi Hole on a more powerful computer in the first place? That would be too easy.

If you fancy trying out Pi Hole in the cloud or just playing with Wireguard you can get $100 free credit with Linode with the Late Night Linux referral code: https://linode.com/latenightlinux

on January 02, 2021 05:45 PM

Here’s a list of some Debian packaging work for December 2020.

2020-12-01: Sponsor package mangohud (0.6.1-1) for Debian unstable (mentors.debian.net request).

2020-12-01: Sponsor package spyne (2.13.16-1) for Debian unstable (Python team request).

2020-12-01: Sponsor package python-xlrd (1.2.0-1) for Debian unstable (Python team request).

2020-12-01: Sponsor package buildbot for Debian unstable (Python team request).

2020-12-08: Upload package calamares ( to Debian unstable.

2020-12-09: Upload package btfs (2.23-1) to Debian unstable.

2020-12-09: Upload package feed2toot (0.15-1) to Debian unstable.

2020-12-09: Upload package gnome-shell-extension-harddisk-led (23-1) to Debian unstable.

2020-12-10: Upload package feed2toot (0.16-1) to Debian unstable.

2020-12-10: Upload package gnome-shell-extension-harddisk-led (24-1) to Debian unstable.

2020-12-13: Upload package xabacus (8.3.1-1) to Debian unstable.

2020-12-14: Upload package python-aniso8601 (8.1.0-1) to Debian unstable.

2020-12-19: Upload package rootskel-gtk (1.42) to Debian unstable.

2020-12-21: Sponsor package goverlay (0.4.3-1) for Debian unstable (mentors.debian.net request).

2020-12-21: Sponsor package pastel (0.2.1-1) for Debian unstable (Python team request).

2020-12-22: Sponsor package python-requests-toolbelt (0.9.1-1) for Debian unstable (Python team request).

2020-12-22: Upload kpmcore (20.12.0-1) to Debian unstable.

2020-12-26: Upload package bundlewrap (4.3.0-1) to Debian unstable.

2020-12-26: Review package python-strictyaml (1.1.1-1) (Needs some more work) (Python team request).

2020-12-26: Review package buildbot (2.9.3-1) (Needs some more work) (Python team request).

2020-12-26: Review package python-vttlib (0.9.1+dfsg-1) (Needs some more work) (Python team request).

2020-12-26: Sponsor package python-formencode (2.0.0-1) for Debian unstable (Python team request).

2020-12-26: Sponsor package pylev (1.2.0-1) for Debian unstable (Python team request).

2020-12-26: Review package python-absl (Needs some more work) (Python team request).

2020-12-26: Sponsor package python-moreorless (0.3.0-2) for Debian unstable (Python team request).

2020-12-26: Sponsor package peewee (3.14.0+dfsg-1) for Debian unstable (Python team request).

2020-12-28: Sponsor package pympler (0.9+dfsg1-1) for Debian unstable (Python team request).

2020-12-28: Sponsor package bidict (0.21.2-1) for Debian unstable (Python team request).

on January 02, 2021 07:19 AM

January 01, 2021

On Hiatus

Simon Raffeiner

There have been no new posts on this blog for the last 20 months, so I am finally putting the site on hiatus.

The post On Hiatus appeared first on LIEBERBIBER.

on January 01, 2021 12:13 PM

As you may know, I am the Qt 5 maintainer in Debian. Maintaining Qt means not only bumping the version each time a new version is released, but also making sure Qt builds successfully on all architectures that are supported in Debian (and, for some submodules, that the automatic tests pass).

An important sort of build failures are endianness specific failures. Most widely used architectures (x86_64, aarch64) are little endian. However, Debian officially supports one big endian architecture (s390x), and unofficially a few more ports are provided, such as ppc64 and sparc64.

Unfortunately, Qt upstream does not have any big endian machine in their CI system, so endianness issues get noticed only when the packages fail to build on our build daemons. In the last years I have discovered and fixed some such issues in various parts of Qt, so I decided to write a post to illustrate how to write really cross-platform C/C++ code.

Issue 1: the WebP image format handler (code review)

The relevant code snippet is:

if (srcImage.format() != QImage::Format_ARGB32)
    srcImage = srcImage.convertToFormat(QImage::Format_ARGB32);
// ...
if (!WebPPictureImportBGRA(&picture, srcImage.bits(), srcImage.bytesPerLine())) {
    // ...

The code here serializes the image into QImage::Format_ARGB32 format and then passes the bytes to WebP’s import function. With this format, the image is stored as 32-bit ARGB values (0xAARRGGBB). This means the bytes will be 0xBB, 0xGG, 0xRR, 0xAA on little endian and 0xAA, 0xRR, 0xGG, 0xBB on big endian. However, WebPPictureImportBGRA expects the first ordering on all architectures.

The fix was to use QImage::Format_RGBA8888. As the QImage documentation says, with this format the order of the colors is the same on any architecture if read as bytes 0xRR, 0xGG, 0xBB, 0xAA.

Issue 2: qimage_converter_map structure (code review)

The code seems to already support big endian. But maybe you can spot the error?


It is the missing comma! It is present in the little endian block, but not in the big endian one. This was fixed trivially.

Issue 3: QHandle, part of Qt 3D module (code review)

QHandle class uses a union that is declared as follows:

struct Data {
    quint32 m_index : IndexBits;
    quint32 m_counter : CounterBits;
    quint32 m_unused : 2;
};
union {
    Data d;
    quint32 m_handle;
};

The sizes are declared such as IndexBits + CounterBits + 2 is always equal to 32 (four bytes).

Then we have a constructor that sets the values of Data struct:

QHandle(quint32 i, quint32 count)
{
    d.m_index = i;
    d.m_counter = count;
    d.m_unused = 0;
}

The value of m_handle will be different depending on endianness! So the test that was expecting a particular value with given constructor arguments was failing. I fixed it by using the following macro:

#if Q_BYTE_ORDER == Q_BIG_ENDIAN
#define GET_EXPECTED_HANDLE(qHandle) ((qHandle.index() << (qHandle.CounterBits + 2)) + (qHandle.counter() << 2))
#else /* Q_LITTLE_ENDIAN */
#define GET_EXPECTED_HANDLE(qHandle) (qHandle.index() + (qHandle.counter() << qHandle.IndexBits))
#endif

Issue 4: QML compiler (code review)

The QML compiler used a helper class named QLEUInt32 (based on QLEInteger) that always stores numbers in little endian internally. This class can be safely mixed with native quint32 on little endian systems, but not on big endian.

Usually the compiler would warn about type mismatch, but here the code used reinterpret_cast, such as:

quint32 *objectTable = reinterpret_cast<quint32*>(data + qmlUnit->offsetToObjects);

So this was not noticed at build time, but the compiler was crashing. The fix was trivial again: replacing quint32 with QLEUInt32.

Issue 5: QModbusPdu, part of Qt Serial Bus module (code review)

The code snippet is simple:

QModbusPdu::FunctionCode code = QModbusPdu::Invalid;
if (stream.readRawData((char *) (&code), sizeof(quint8)) != sizeof(quint8))
    return stream;

QModbusPdu::FunctionCode is an enum, so code is a multi-byte value (even if only one byte is significant). However, (char *) (&code) returns a pointer to the first byte of it. It is the needed byte on little endian systems, but it is the wrong byte on big endian ones!

The correct fix was using a temporary one-byte variable:

quint8 codeByte = 0;
if (stream.readRawData((char *) (&codeByte), sizeof(quint8)) != sizeof(quint8))
    return stream;
QModbusPdu::FunctionCode code = (QModbusPdu::FunctionCode) codeByte;

Issue 6: qt_is_ascii (code review)

This function, as the name says, checks whether a string is ASCII. It does that by splitting the string into 4-byte chunks:

while (ptr + 4 <= end) {
    quint32 data = qFromUnaligned<quint32>(ptr);
    if (data &= 0x80808080U) {
        uint idx = qCountTrailingZeroBits(data);
        ptr += idx / 8;
        return false;
    }
    ptr += 4;
}

idx / 8 is the number of trailing zero bytes. However, the bytes which are trailing on little endian are actually leading on big endian! So we can use qCountLeadingZeroBits there.

Issue 7: the bundled copy of tinycbor (upstream pull request)

Similar to issue 5, the code was reading into the wrong byte:

if (bytesNeeded <= 2) {
    read_bytes_unchecked(it, &it->extra, 1, bytesNeeded);
    if (bytesNeeded == 2)
        it->extra = cbor_ntohs(it->extra);
}

extra has type uint16_t, so it has two bytes. When we need only one byte, we read into the wrong byte, so the resulting number is 256 times higher on big endian than it should be. Adding a temporary one-byte variable fixed it.

Issue 8: perfparser, part of Qt Creator (code review)

Here it is not trivial to find the issue just looking at the code:

qint32 dataStreamVersion = qToLittleEndian(QDataStream::Qt_DefaultCompiledVersion);

However the linker was producing an error:

undefined reference to `QDataStream::Version qbswap(QDataStream::Version)'

On little endian systems, qToLittleEndian is a no-op, but on big endian systems it is a template function defined only for certain known types. Enum values need to be converted explicitly to one of those types, so the fix was passing qint32(QDataStream::Qt_DefaultCompiledVersion) to that function.

Issue 9: Qt Personal Information Management (code review)

The code in test was trying to represent a number as a sequence of bytes, using reinterpret_cast:

static inline QContactId makeId(const QString &managerName, uint id)
{
    return QContactId(QStringLiteral("qtcontacts:basic%1:").arg(managerName),
                      QByteArray(reinterpret_cast<const char *>(&id), sizeof(uint)));
}

The order of bytes will be different on little endian and big endian systems! The fix was adding this line to the beginning of the function:

id = qToLittleEndian(id);

This will cause the bytes to be reversed on big endian systems.

What remains unfixed

There are still some bugs, which require deeper investigation, for example:

P.S. We are looking for new people to help with maintaining Qt 6. Join our team if you want to do some fun work like described above!

on January 01, 2021 09:35 AM

December 31, 2020

Ubuntu life in 2020

Torsten Franz

The year 2020 was quite extraordinary: because of the Covid-19 crisis, a lot of things developed quite differently from how they were supposed to. Even though a lot of things in Ubuntu happen virtually anyway, the crisis still had an impact on my Ubuntu life.

Every year I attend a few trade fairs to present Ubuntu and/or give talks. In 2020, this only took place virtually, and in a very limited way for me. In March, the Chemnitzer Linuxtage were cancelled, and after that one fair after another followed.

In my home town I go to a Fablab where we also work on Ubuntu. After the meetings in January and February, this was also cancelled. Now and then this still took place virtually, but somehow it didn’t create the same atmosphere as when we met in real life.

With the team members of the German-speaking Ubuntu forum (ubuntuusers.de) we organise a team meeting every year, which is always very funny and partly productive. In 2020 it had to be cancelled. Since I have also reduced my other contacts to help contain the virus, I have only met two people from the Ubuntu environment in real life since March.

But, of course, Ubuntu life also progressed in 2020. The whole year I had the responsibility as project leader for ubuntuusers.de in a three-person team and had some issues to deal with there. In „ubuntu Deutschland e.V.“ I am the chairman and had to take care of the tax benefits again this year, which we were able to do successfully.

I also deal with translations in Ubuntu, namely into German. There are always ups and downs here and things don’t always go well. At the beginning of the year, we were at 86.71 per cent with the German Ubuntu translations. One year later, we are now at 86.33 per cent. Okay, a little bit less, but overall almost at the same level. By the way, this means that the German translation in Ubuntu was and still is number 2. Only in Ukrainian has Ubuntu been translated more so far. Perhaps becoming number 1 is once again a goal we can tackle in 2021.

In 2020, my LoCo also lost its verified status. This is mainly due to the fact that there was no longer a LoCo Council and therefore no application was written. However, there has now been some movement, so we can tackle this at the beginning of 2021, and I had a hand in that movement as well. In October, I stood for election to the Community Council and was elected to this board. In the last two months, I was able to move a few issues forward and clean up some of the mess.

In Ubuntu we say: I am because we are. This saying has been very interesting in 2020, because many of my work colleagues and friends have focused on exactly one part of it: what I do has an effect on my fellow human beings and vice versa. Perhaps we can also develop this approach socially and see this not only in the crisis, but also in life as a whole.

Now there is only one thing left for me to say: Happy New Year 2021.

on December 31, 2020 12:00 PM

The Brexit Deal

Jonathan Riddell

Now that both halves of the Brexit Deal (Withdrawal Agreement and Trade Deal) have been written the UK is finally in a position to spend some months having a discourse about their merits before having a referendum on whether to go with it or go with the status quo. Alas the broken democratic setup won’t allow that as there was a referendum over 4 years ago without the basics needed for discussion. One lesson that needs to be learnt, but I haven’t seen anyone propose, is to require referendums to have pre-written legislation or international agreement text on what is being implemented.

This on top of the occasionally discussed fixes needed to democracy around transparency of campaigning funds, proper fines when they steal data, banning or limiting online advertising, transparency around advertising and proper fines for campaigns that over-spend.

The new GB <-> EU setup will of course remove freedoms and add vast amounts of new bureaucracy. It might get three of the UK’s countries out of the properly run court of the ECJ but to what end? To be replaced with endless committees discussing the exact same points and the threat of tariffs when standards diverge. Making predictions in this game is daft but I’m pretty sure the UK will soon push the boundaries on which labour or environmental standards it can reduce, probably starting with the working time directive. What export tariffs or quotas will be introduced once that is changed?

The trade deal is incomplete of course and there will be endless future negotiations about services and data transfer and the like. This is only the start of the Brexit process, and politicians who claim this is the end are, as we have become used to, telling lies. The worries of a no-deal Brexit have lessened, but the new customs checks going out of GB, and the ones coming into GB in future months, will cause some shortages, prices to rise, businesses to struggle, and service companies and the jobs they hold to move abroad. The rise in business-related fraud will be a hidden but very real cost.

Johnson deliberately ran down the clock to wait until the final days before making the trade deal. It’s a disgusting tactic which removes the very small democratic oversight that could be expected (the UK parliament having long since had the power removed to approve or deny any such deal). Again I’ve not read anyone pointing out this deliberate tactic which caused much stress on businesses and individuals by playing up the chances of a cliff edge Brexit but it must have been the plan all along. It means he’ll get applauded in the right wing press for limiting democracy, and nobody will be any the wiser.

There is a new bureaucratic border from Scotland and Wales to Northern Ireland with lorry parks and checks for goods. What I haven’t seen any coverage of is increased checks for people crossing. The police have always had the power to check IDs when people crossed into or out of Northern Ireland, but that hasn’t been much used since the violence subsided. Now that free movement remains in Ireland but is removed from Great Britain (making Northern Ireland a bit of a no-man’s land I suppose), those checks must surely be upgraded to stop foreigners coming over here doing whatever it is the racists moaned about. This will be a new front of low level human rights abuses that will need to be watched; I wonder if anyone is doing so.

With the new setup comes new political campaigning. The election next May will again vote in a Scottish government on a pledge to hold an independence referendum, but of course it’ll be blocked by Johnson and delegitimised by the unionists. The Scottish cringe (“too small, too poor”) was a strong factor in making people vote No in the 2014 referendum and it’ll come into play with new force this time. Firstly with whether any referendum is legitimate. The Catalan referendum of 2014 was accompanied by a massive propaganda campaign by the Spanish Tories (the PP) with huge adverts saying it was illegal and therefore illegitimate. The same thing will happen here. Unlike in Spain there’s a small chance the legal route will be open: the UK parliament says there is a Claim of Right for Scots to choose their own form of government, so there must be some legal method for that to express itself. I doubt the Court of Session, and certainly not the UK Supreme Court, will magically give the Scottish Parliament the power to hold a decisive referendum, but maybe they’ll allow a not-quite-decisive one (which will be delegitimised as much as it can be by unionists), or maybe they’ll require the UK parliament to hold one (which will be rigged if it ever happens). But there’s every chance the courts will agree that we’ve had our referendum and we need to eat our cereal. In which case it’s hard to see what to do; many Scots won’t accept the Catalan method of just holding one without agreement, and there is a strong need to carry the popular will when holding a referendum. And while I’m a supporter of the Catalan method, one has to admit that it hasn’t worked: there’s been no international support for their self determination right, as unfair and illogical as that is.

There will be new concerns in the new referendum. The new border from Scotland to Northern Ireland (and everywhere else that has flight connections to the EU) is made concrete. We can reasonably assume the new bureaucracy there will move to the Scotland-England border after independence. Massive new lorry parks and customs checks might be needed. Freedom of movement will remain with the common travel area, but might the English want to impose ID checks like you get going between Scotland and Northern Ireland? While I care about my freedoms Europe-wide, the border from Scotland to England holds a stronger emotional impact for all. When I first wrote to a newspaper to say the border should be closed for Covid controls, that was then taken up by the Scottish Government and many people protested. It’s now law and even the Tories support it on health grounds (except Mundell), but it will be heartbreaking to see it happen for customs as well, and it’ll be a strong issue in the debate to come.

Join us in campaigning for an independent Scotland in the EU with Yes for EU and sign the European Movement in Scotland petition.

Happy new year.

on December 31, 2020 11:08 AM

December 30, 2020

A custom and global shortcut key to mute / unmute yourself in Zoom or Google Meet

Just like everyone else, 2020 was the year of having more and more video-conference calls. How many times did we struggle to find the meeting window during a call, and say “Sorry, I was on mute”? I tried to address the pain and ended up with the following setup.


xdotool is a great automation tool for X, and it can search a window, activate it, and simulate keyboard input. That’s a perfect match for the use case here.

Here is an example command for Google Meet.

$ xdotool search --name '^Meet - .+ - Chromium$' \
    windowactivate --sync \
    key ctrl+d

In the chained commands, it does:

  1. search windows named like Meet - <MEETING_ID> - Chromium
  2. activate the first window passed by the previous line and wait until it gets active (--sync)
  3. send a keystroke as Ctrl+D, which is the default shortcut in Meet

By the way, my main browser is Firefox, but I have to use Chromium to join Meet calls since it tends to have less CPU utilization.

You can do something similar for Zoom with Alt+A.

$ xdotool search --name '^Zoom Meeting$' \
    windowactivate --sync \
    key alt+a

Microsoft Teams should work with xdotool and Ctrl+Shift+M at least for the web version.

GNOME keyboard shortcuts

The commands above can be mapped to a shortcut key with GNOME.

It’s pretty simple, but some tricks may be required. As far as I can see, gsd-media-keys invokes a command when a shortcut key is pressed, not released. In my case, I use Ctrl+space as the shortcut key, so Meet may see the keys pressed as Ctrl+space + Ctrl+D = Ctrl+space+D, which doesn’t actually trigger the mute/unmute behavior. Keys can be canceled with keyup, so the key command was turned into keyup space key ctrl+d in the end.

Also, I wanted to use the same shortcut key for multiple services, and I have the following line which tries Google Meet first, then Zoom if no Meet window is found. It should work most of the cases unless you join multiple meetings at the same time.

sh -c "
    xdotool search --name '^Meet - .+ - Chromium$' \
        windowactivate --sync \
        keyup space key ctrl+d \
    || xdotool search --name '^Zoom Meeting$' \
        windowactivate --sync \
        keyup ctrl+space key alt+a
"

--clearmodifiers can be used to simplify the whole command, but when I tried it, it sometimes left the Ctrl key pressed depending on the timing of when I released the key.

Hardware mute/unmute button

Going further, I wanted to have a dedicated button to mute/unmute myself especially for some relaxed meetings where I don’t have to keep my hands on the keyboard all the time.

Back in October, I bought a USB volume controller, which is recognized as “STMicroelectronics USB Volume Control” from the OS. It was around 15 USD.

It emits expected events as KEY_VOLUMEUP and KEY_VOLUMEDOWN with the dial, and KEY_MUTE when the knob is pressed.

I created a “hwdb” file to remap the mute key to something else as follows in /etc/udev/hwdb.d/99-local-remap-usb-volume-control.hwdb.

# STMicroelectronics USB Volume Control
# Remap the click (Mute) to XF86Launch4

Once the hardware database is updated with systemd-hwdb update and the device is unplugged and plugged in again (or re-triggered with udevadm commands), I was able to map Launch4 (prog4) to the xdotool commands in GNOME successfully.
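For reference, such an hwdb entry generally follows the pattern sketched below. This is a sketch only: the modalias match string and the scancode are placeholders and have to be taken from your own device (for example with evtest):

```
# /etc/udev/hwdb.d/99-local-remap-usb-volume-control.hwdb (sketch)
# Match string and scancode below are placeholders, not the real values.
evdev:input:b0003v0483pXXXX*
 KEYBOARD_KEY_c00e2=prog4
```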

It looks like everyone had the same idea. There are fancier buttons out there :-)

on December 30, 2020 06:49 PM

December 28, 2020

Will a more than likely migration from x86 to ARM be good for Linux? Will it mean the death of Linux? We believe dark times are ahead… And the Edge browser arrives on Linux. Listen to us on: Ivoox, Telegram, Youtube, and in your usual podcast client via the RSS feed.
on December 28, 2020 07:13 PM

December 25, 2020

First off, I want to wish everyone a Happy Holidays and a Merry Christmas. I know 2020 has been a hard year for so many, and I hope you and your families are healthy and making it through the year.

Over the past few years, I’ve gotten into making holiday ornaments for friends and family. In 2017, I did a snowflake PCB ornament. In 2018, I used laser cutting service Ponoko to cut acrylic fir trees with interlocking pieces. In 2019, I used my new 3D printer to print 3-dimensional snowflakes. In 2020, I’ve returned to my roots and gone with another PCB design. As a huge fan of DEFCON #badgelife, it felt appropriate to go back this way. I ended up with a touch-sensitive snowman with 6 LEDs.

Front of Ornament

The ornament features a snowman created by the use of the black silkscreen and white soldermask. The front artwork was created by drawing it in Inkscape, then exporting to a PNG, and pulling into KiCad’s bmp2component. Of course, bmp2component wants to put this as a footprint, so I had to adjust the resulting kicad_mod file to put things on the silkscreen layer.

There are 6 LEDs. The eyes and buttons are white LEDs and the nose, befitting the typical carrot, is an orange LED. All the remaining components are on the reverse.

Back of Ornament

The back of the ornament houses all of the working bits. The main microcontroller is the Microchip ATtiny84A. It directly drives the LEDs via 6 of the I/O pins with 200Ω resistors for current limiting.

The power supply, at the lower right of the back side, is a boost converter to maintain 3.6V (necessary for the white LEDs with a bit of overhead) out of the coin cell battery. Coin cells start at 3V, which can barely run a white LED under a lot of conditions, but they drop fairly quickly. This power supply will keep things going down to at least 2.2V of input. Note that the actual chip for the power supply is a 2mm-by-2mm component – I didn’t realize just how hard that would be to actually assemble until I had them in my hands!

At the bottom left of the back is the capacitive touch sensor, the Microchip AT42QT1010. It connects to a copper area on the front of the ornament to detect a touch in that area. It produces a signal when the touch is detected, but that had to be debounced in software due to stray signals, probably from the LEDs.

Each ornament was hand assembled, leading to a limited run of 14. (15 if you count a prototype that’s wired up to a power supply instead of a battery supply.) The firmware running on the microcontroller is written in C, and was programmed onto the boards using the Tigard. I had intended to use pogo pins to program via the pads above the microcontroller, but I ended up using a chip clip to program instead.

I hope this might inspire others to give DIY PCB artwork a try. It’s quite simple if you know some basic electronics, and it’s really fun to see something you built come to life. Merry Christmas to all, and may 2021 be infinitely better than 2020.

on December 25, 2020 08:00 AM

December 24, 2020

S13E40 – Ravens

Ubuntu Podcast from the UK LoCo

This week we have been fixing network and audio noise and playing Hotshot Racing. We look back and celebrate the good things that happened in 2020, bring you some GUI love and go over all your wonderful feedback.

It’s Season 13 Episode 40 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

Normal use

mangohud /path/to/app

Steam launcher

Open Properties for a game in Steam and set this in “SET LAUNCH OPTIONS…”

mangohud %command%

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on December 24, 2020 03:00 PM

December 19, 2020

The previous post went over the planned redundancy aspect of this setup at the storage, networking and control plane level. Now let’s see how to get those systems installed and configured for this setup.

Firmware updates and configuration

First things first: whether it’s systems coming from an eBay seller or straight from the factory, the first step is always to update all firmware to the latest available.

In my case, that meant updating the SuperMicro BMC firmware and then the BIOS/UEFI firmware too. Once done, perform a factory reset of both the BMC and UEFI config and then go through the configuration to get something that suits your needs.

The main things I had to tweak other than the usual network settings and accounts were:

  • Switch the firmware to UEFI only and enable Secure Boot
    This involves flipping all option ROMs to EFI, disabling CSM and enabling Secure Boot using the default keys.
  • Enable SR-IOV/IOMMU support
    Useful if you ever want to use SR-IOV or PCI device passthrough.
  • Disable unused devices
    In my case, the only storage backplane is connected to a SAS controller with nothing plugged into the SATA controller, so I disabled it.
  • Tweak storage drive classification
    The firmware allows configuring if a drive is HDD or SSD, presumably to control spin up on boot.

Base OS install

With that done, I grabbed the Ubuntu 20.04.1 LTS server ISO, dumped it onto a USB stick and booted the servers from it.

I had all servers and their BMCs connected to my existing lab network to make things easy for the initial setup; it’s easier to do complex network configuration after the initial installation.

The main thing to get right at this step is the basic partitioning for your OS drive. My original plan was to carve off some space from the NVMe drive for the OS. Unfortunately, after an initial installation done that way, I realized that my motherboard doesn’t support booting from NVMe, so I ended up reinstalling, this time carving out some space from the SATA SSD instead.

In my case, I ended up creating a 35GB root partition (ext4) and 4GB swap partition, leaving the rest of the 2TB drive unpartitioned for later use by Ceph.

With the install done, make sure you can SSH into the system, also check that you can access the console through the BMC both through VGA and through the IPMI text console. That last part can be done by dumping a file in /etc/default/grub.d/ that looks like:

GRUB_CMDLINE_LINUX="${GRUB_CMDLINE_LINUX} console=tty0 console=ttyS1,115200n8"

Finally you’ll want to make sure you apply any pending updates and reboot, then check dmesg for anything suspicious coming from the kernel. Better catch compatibility and hardware issues early on.

Networking setup

On the networking front you may remember I’ve gotten configs with 6 NICs: two gigabit ports and four 10Gbit ports. The gigabit NICs are bonded together and go to the switch, while the 10Gbit ports are used to create a mesh, with each server using a two-port bond to each of the other two.

Combined with the dedicated BMC ports, this ends up looking like this:

Here we can see the switch receiving its uplink over LC fiber; each server has its BMC plugged into a separate switch port and VLAN (green cables), each server is also connected to the switch with a two-port bond (black cables), and each server is connected to the other two using a two-port bond (blue cables).

Ubuntu uses Netplan for its network configuration these days, the configuration on those servers looks something like this:

  version: 2
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000

    # Connection to first other server
        - enp3s0f0
        - enp3s0f1
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer3+4

    # Connection to second other server
        - enp1s0f0
        - enp1s0f1
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer3+4

    # Connection to the switch
        - ens1f0
        - ens1f1
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer3+4

    # WAN-HIVE
      link: bond-sw01
      id: 50
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500

      link: bond-sw01
      id: 100
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500

      link: bond-sw01
      id: 101
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500

      link: bond-sw01
      id: 102
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500

    # WAN-HIVE
        - bond-sw01.50
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500

        - bond-sw01.100
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500

        - bond-sw01.101
      accept-ra: true
      dhcp4: false
      dhcp6: false
      mtu: 1500
          - stgraber.net
          - 2602:XXXX:Y:10::1

        - bond-sw01.102
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500

That’s the part which is common to all servers, then on top of that, each server needs its own tiny bit of config to setup the right routes to its other two peers, this looks like this:

  version: 2
    # server 2
        - 2602:XXXX:Y:ZZZ::101/64
        - to: 2602:XXXX:Y:ZZZ::100/128
          via: fe80::ec7c:7eff:fe69:55fa

    # server 3
        - 2602:XXXX:Y:ZZZ::101/64
        - to: 2602:XXXX:Y:ZZZ::102/128
          via: fe80::8cd6:b3ff:fe53:7cc

        - 2602:XXXX:Y:ZZZ::101/64

My setup is pretty much entirely IPv6 except for a tiny bit of IPv4 for some specific services so that’s why everything above very much relies on IPv6 addressing, but the same could certainly be done using IPv4 instead.

With this setup, I have a 2Gbit/s bond to the top of the rack switch configured to use static addressing but using the gateway provided through IPv6 router advertisements. I then have a first 20Gbit/s bond to the second server with a static route for its IP and then another identical bond to the third server.

This allows all three servers to communicate at 20Gbit/s and then at 2Gbit/s to the outside world. The fast links will almost exclusively be carrying Ceph, OVN and LXD internal traffic, the kind of traffic that’s using a lot of bandwidth and requires good latency.

To complete the network setup, OVN is installed using the ovn-central and ovn-host packages from Ubuntu and then configured to communicate using the internal mesh subnet.

This part is done by editing /etc/default/ovn-central on all 3 systems and updating OVN_CTL_OPTS to pass a number of additional parameters:

  • --db-nb-addr to the local address
  • --db-sb-addr to the local address
  • --db-nb-cluster-local-addr to the local address
  • --db-sb-cluster-local-addr to the local address
  • --db-nb-cluster-remote-addr to the first server’s address
  • --db-sb-cluster-remote-addr to the first server’s address
  • --ovn-northd-nb-db to all the addresses (port 6641)
  • --ovn-northd-sb-db to all the addresses (port 6642)

The first server shouldn’t have the remote-addr ones set as it’s the bootstrap server, the others will then join that initial server and join the cluster at which point that startup argument isn’t needed anymore (but it doesn’t really hurt to keep it in the config).
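For illustration, on the second or third server the resulting override might look like the sketch below. This is a sketch only: the addresses reuse the redacted convention from above, and the exact quoting and bracketing of IPv6 addresses may differ in your environment:

```
# /etc/default/ovn-central (sketch for server 2)
OVN_CTL_OPTS=" \
  --db-nb-addr=[2602:XXXX:Y:ZZZ::101] \
  --db-sb-addr=[2602:XXXX:Y:ZZZ::101] \
  --db-nb-cluster-local-addr=[2602:XXXX:Y:ZZZ::101] \
  --db-sb-cluster-local-addr=[2602:XXXX:Y:ZZZ::101] \
  --db-nb-cluster-remote-addr=[2602:XXXX:Y:ZZZ::100] \
  --db-sb-cluster-remote-addr=[2602:XXXX:Y:ZZZ::100] \
  --ovn-northd-nb-db=tcp:[2602:XXXX:Y:ZZZ::100]:6641,tcp:[2602:XXXX:Y:ZZZ::101]:6641,tcp:[2602:XXXX:Y:ZZZ::102]:6641 \
  --ovn-northd-sb-db=tcp:[2602:XXXX:Y:ZZZ::100]:6642,tcp:[2602:XXXX:Y:ZZZ::101]:6642,tcp:[2602:XXXX:Y:ZZZ::102]:6642"
```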

If OVN was running unclustered, you’ll want to reset it by wiping /var/lib/ovn and restarting ovn-central.service.

Storage setup

On the storage side, I won’t go over how to get a three-node Ceph cluster; there are many different ways to achieve that using just about every deployment/configuration management tool in existence, as well as upstream’s own ceph-deploy tool.

In short, the first step is to deploy a Ceph monitor (ceph-mon) per server, followed by a Ceph manager (ceph-mgr) and a Ceph metadata server (ceph-mds). With that done, one Ceph OSD (ceph-osd) per drive needs to be set up. In my case, both the HDDs and the NVMe SSD are consumed in full for this, while for the SATA SSD I created a partition using the remaining space from the installation and put that into Ceph.

At that stage, you may want to learn about Ceph crush maps and do any tweaking that you want based on your storage setup.

In my case, I have two custom crush rules, one which targets exclusively HDDs and one which targets exclusively SSDs. I’ve also made sure that each drive has the proper device class and I’ve tweaked the affinity a bit such that the faster drives will be prioritized for the first replica.
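For illustration, per-device-class replicated rules like the ones referenced in the pool-creation commands below can be created with something like this. The rule names match those used later; the "default" root and "host" failure domain are assumptions:

```
# Sketch: one replicated rule per device class.
ceph osd crush rule create-replicated replicated_rule_ssd default host ssd
ceph osd crush rule create-replicated replicated_rule_hdd default host hdd
```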

I’ve also created an initial ceph fs filesystem for use by LXD with:

ceph osd pool create lxd-cephfs_metadata 32 32 replicated replicated_rule_ssd
ceph osd pool create lxd-cephfs_data 32 32 replicated replicated_rule_hdd
ceph fs new lxd-cephfs lxd-cephfs_metadata lxd-cephfs_data
ceph fs set lxd-cephfs allow_new_snaps true

This makes use of those custom rules, putting the metadata on SSD with the actual data on HDD.

The cluster should then look a bit like this:

root@langara:~# ceph status
  cluster:
    id:     dd7a8436-46ff-4017-9fcb-9ef176409fc5
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum abydos,langara,orilla (age 37m)
    mgr: langara(active, since 41m), standbys: abydos, orilla
    mds: lxd-cephfs:1 {0=abydos=up:active} 2 up:standby
    osd: 12 osds: 12 up (since 37m), 12 in (since 93m)

  task status:
    scrub status:
        mds.abydos: idle

  data:
    pools:   5 pools, 129 pgs
    objects: 16.20k objects, 53 GiB
    usage:   159 GiB used, 34 TiB / 34 TiB avail
    pgs:     129 active+clean

With the OSDs configured like so:

root@langara:~# ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME       STATUS  REWEIGHT  PRI-AFF
-1         34.02979  root default                               
-3         11.34326      host abydos                            
 4    hdd   3.63869          osd.4         up   1.00000  0.12500
 7    hdd   5.45799          osd.7         up   1.00000  0.25000
 0    ssd   0.46579          osd.0         up   1.00000  1.00000
10    ssd   1.78079          osd.10        up   1.00000  0.75000
-5         11.34326      host langara                           
 5    hdd   3.63869          osd.5         up   1.00000  0.12500
 8    hdd   5.45799          osd.8         up   1.00000  0.25000
 1    ssd   0.46579          osd.1         up   1.00000  1.00000
11    ssd   1.78079          osd.11        up   1.00000  0.75000
-7         11.34326      host orilla                            
 3    hdd   3.63869          osd.3         up   1.00000  0.12500
 6    hdd   5.45799          osd.6         up   1.00000  0.25000
 2    ssd   0.46579          osd.2         up   1.00000  1.00000
 9    ssd   1.78079          osd.9         up   1.00000  0.75000

LXD setup

The last piece is building up a LXD cluster which will then be configured to consume both the OVN networking and Ceph storage.

For OVN support, using an LTS branch of LXD won’t work as 4.0 LTS predates OVN support, so instead I’ll be using the latest stable release.

Installation is as simple as: snap install lxd --channel=latest/stable

Then run lxd init on the first server: answer yes to the clustering question, make sure the hostname is correct and that the address used is the one on the mesh subnet, then create the new cluster, setting an initial password and skipping over all the storage and network questions; it’s easier to configure those by hand later on.

After that, run lxd init on the remaining two servers, this time pointing them to the first server to join the existing cluster.

With that done, you have an LXD cluster:

root@langara:~# lxc cluster list
|   NAME   |                 URL                 | DATABASE | STATE  |      MESSAGE      | ARCHITECTURE | FAILURE DOMAIN |
| server-1 | https://[2602:XXXX:Y:ZZZ::100]:8443 | YES      | ONLINE | fully operational | x86_64       | default        |
| server-2 | https://[2602:XXXX:Y:ZZZ::101]:8443 | YES      | ONLINE | fully operational | x86_64       | default        |
| server-3 | https://[2602:XXXX:Y:ZZZ::102]:8443 | YES      | ONLINE | fully operational | x86_64       | default        |

Now that cluster needs to be configured to access OVN and to use Ceph for storage.

On the OVN side, all that’s needed is: lxc config set network.ovn.northbound_connection tcp:<server1>:6641,tcp:<server2>:6641,tcp:<server3>:6641

As for Ceph, creating a Ceph RBD storage pool can be done with:

lxc storage create ssd ceph source=lxd-ssd --target server-1
lxc storage create ssd ceph source=lxd-ssd --target server-2
lxc storage create ssd ceph source=lxd-ssd --target server-3
lxc storage create ssd ceph

And for Ceph FS:

lxc storage create shared cephfs source=lxd-cephfs --target server-1
lxc storage create shared cephfs source=lxd-cephfs --target server-2
lxc storage create shared cephfs source=lxd-cephfs --target server-3
lxc storage create shared cephfs

In my case, I’ve also set up an lxd-hdd pool, resulting in a final setup of:

root@langara:~# lxc storage list
| NAME   | DESCRIPTION | DRIVER | STATE   | USED BY |
| hdd    |             | ceph   | CREATED | 1       |
| shared |             | cephfs | CREATED | 0       |
| ssd    |             | ceph   | CREATED | 16      |

Up next

The next post is likely to be quite network heavy, going into why I’m using dynamic routing and how I’ve got it all set up. This is the missing piece of the puzzle in what I’ve shown so far: without it, you’d need an external router with a bunch of static routes to send traffic to the OVN networks.

on December 19, 2020 12:46 AM

December 18, 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, 239.25 work hours have been dispatched among 13 paid contributors. Their reports are available:

Evolution of the situation

In November we held the last LTS team meeting for 2020 on IRC, with the next one coming up at the end of January.
We announced a new formalized initiative for Funding Debian projects with money from Freexian’s LTS service.
Finally, we would like to remark once again that we are constantly looking for new contributors. Please contact Holger if you are interested!

We’re also glad to welcome two new sponsors, Moxa, a device manufacturer, and a French research lab (Institut des Sciences Cognitives Marc Jeannerod).

The security tracker currently lists 37 packages with a known CVE and the dla-needed.txt file has 40 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.


on December 18, 2020 10:02 AM
The Lubuntu Team is pleased to announce we are running a Hirsute Hippo artwork competition, giving you, our community, the chance to submit, and get your favorite wallpapers for both the desktop and the greeter/login screen (SDDM) included in the Lubuntu 21.04 release. Show Your Artwork To enter, simply post your image into this thread on our […]
on December 18, 2020 01:15 AM

December 17, 2020

S13E39 – Walking backwards

Ubuntu Podcast from the UK LoCo

This week we’ve been playing Cyberpunk 2077 and applying for Ubuntu Membership. We round up the goings on in the Ubuntu community and also bring you our favourite news picks from the wider tech world.

It’s Season 13 Episode 39 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on December 17, 2020 03:00 PM

December 16, 2020

In the previous post I went over the reasons for switching to my own hardware and what hardware I ended up selecting for the job.

Now it’s time to look at how I intend to achieve the high availability goals of this setup, effectively limiting the number of single points of failure as much as possible.

Hardware redundancy

On the hardware front, every server has:

  • Two power supplies
  • Hot swappable storage
  • 6 network ports served by 3 separate cards
  • BMC (IPMI/redfish) for remote monitoring and control

The switch is the only real single point of failure on the hardware side of things. But it also has two power supplies and hot swappable fans. If this ever becomes a problem, I can also source a second unit and use data and power stacking along with MLAG to get rid of this single point of failure.

I mentioned that each server has four 10Gbit ports yet my switch is Gigabit. This is fine as I’ll be using a mesh type configuration for the high-throughput part of the setup. Effectively connecting each server to the other two with a dual 10Gbit bond each. Then each server will get a dual Gigabit bond to the switch for external connectivity.
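On Ubuntu, that kind of bonded layout can be described with netplan. A minimal sketch follows, with entirely hypothetical interface names and the assumption that both ends support 802.3ad link aggregation:

```yaml
network:
  version: 2
  bonds:
    # Dual 10Gbit bond to one of the two peer servers (interface names are placeholders)
    bond-peer1:
      interfaces: [enp1s0f0, enp1s0f1]
      parameters:
        mode: 802.3ad
    # Dual Gigabit bond to the switch for external connectivity
    bond-uplink:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad
```

One such peer bond would exist per direct server-to-server link, giving the full mesh described above.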

Software redundancy

The software side is where things get really interesting. There are three main aspects that need to be addressed:

  • Storage
  • Networking
  • Compute


For storage, the plan is to rely on Ceph. Each server will run a total of 4 OSDs, one per physical drive, with the SATA SSD also acting as the boot drive; its OSD will be a large partition on it rather than the full disk.

Each server will also act as MON, MGR and MDS, providing a fully redundant Ceph cluster on 3 machines capable of providing both block and filesystem storage through RBD and CephFS.

Two maps will be set up, one for HDD storage and one for SSD storage.
Storage affinity will also be configured such that the NVMe drives are used for the primary replica in the SSD map, with the SATA drives holding the secondary/tertiary replicas instead.
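On recent Ceph releases, that HDD/SSD split maps naturally onto device classes. A sketch of what the rules and affinity tweaks could look like; rule names, pool names and affinity values here are illustrative, not necessarily what this cluster uses:

```shell
# One replicated rule per device class, with host as the failure domain
ceph osd crush rule create-replicated replicated-hdd default host hdd
ceph osd crush rule create-replicated replicated-ssd default host ssd

# Point each pool at the matching rule
ceph osd pool set lxd-hdd crush_rule replicated-hdd
ceph osd pool set lxd-ssd crush_rule replicated-ssd

# Lower the primary affinity of the slower drives so the faster
# ones are preferred as primaries (values are examples)
ceph osd primary-affinity osd.4 0.125
ceph osd primary-affinity osd.7 0.25
```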

This makes the storage layer quite reliable. A full server can go down with only minimal impact. Should a server go offline due to hardware failure, the on-site staff can very easily relocate the drives from the failed server to the other two, allowing Ceph to recover the majority of its OSDs until the defective server can be repaired.


Networking is where things get quite complex when you want something really highly available. I’ll be getting a Gigabit internet drop from the co-location facility on top of which a /27 IPv4 and a /48 IPv6 subnet will be routed.

Internally, I’ll be running many small networks grouping services together. None of those networks will have much in the way of allowed ingress/egress traffic and the majority of them will be IPv6 only.

The majority of egress will be done through a proxy server and IPv4 access will be handled through a DNS64/NAT64 setup.
Ingress when needed will be done by directly routing an additional IPv4 or IPv6 address to the instance running the external service.
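Since unbound is the resolver in this environment (as mentioned below in the compute section), the DNS64 half of that can be done by unbound itself. A minimal sketch, assuming the well-known prefix; the actual prefix choice in this setup isn't stated:

```
server:
  module-config: "dns64 validator iterator"
  dns64-prefix: 64:ff9b::/96
```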

At the core of all this will be OVN which will run on all 3 machines with its database clustered. Similar to Ceph for storage, this allows machines to go down with no impact on the virtual networks.

Where things get tricky is on providing a highly available uplink network for OVN. OVN draws addresses from that uplink network for its virtual routers and routes egress traffic through the default gateway on that network.

One option would be a static setup: have the switch act as the gateway on the uplink network, feed that to OVN over a VLAN, and then add manual static routes for every public subnet or public address which needs routing to a virtual network. That’s easy to set up, but I don’t like the need to constantly update static routing information in my switch.

Another option is to use LXD’s l2proxy mode for OVN. This effectively makes OVN respond to ARP/NDP for any address it’s responsible for, but it then requires the entire IPv4 and IPv6 subnet to be directly routed to the one uplink subnet. This can get very noisy and just doesn’t scale well with large subnets.

The more complicated but more flexible option is to use dynamic routing.
Dynamic routing involves routers talking to each other, advertising and receiving routes. That’s the core of how the internet works but can also be used for internal networking.

My setup effectively looks like this:

  • Three containers running FRR each connected to both the direct link with the internet provider and to the OVN uplink network.
  • Each one of those will maintain BGP sessions with the internet provider’s routers AS WELL as with the internal hosts running OVN.
  • VRRP is used to provide a single highly available gateway address on the OVN uplink network.
  • I wrote lxd-bgp as a small BGP daemon that integrates with the LXD API to extract all the OVN subnets and instance addresses which need to be publicly available and announces those routes to the three routers.
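A rough sketch of what the BGP side of one of those FRR routers could look like; the ASNs and neighbor addresses are made up for illustration and the real sessions (including the provider side) will differ:

```
router bgp 65001
 ! eBGP session to the internet provider's router (hypothetical ASN/address)
 neighbor 2602:XXXX:Y:ZZZ::1 remote-as 64512
 ! iBGP session to one of the internal hosts running lxd-bgp
 neighbor 2602:XXXX:Y:ZZZ::100 remote-as 65001
 !
 address-family ipv6 unicast
  neighbor 2602:XXXX:Y:ZZZ::1 activate
  neighbor 2602:XXXX:Y:ZZZ::100 activate
 exit-address-family
```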

This may feel overly complex, and it quite possibly is, but it gives me three routers, one on each server, only one of which needs to be running at any one time. It also gives me the ability to balance routing traffic, both ingress and egress, by tweaking the BGP or VRRP priorities.

The nice side effect of this setup is that I’m also able to use anycast for critical services, both internally and externally, effectively running three identical copies of the service, one per server, all with the exact same address. The routers will be aware of all three and will pick one as the destination. If that instance or server goes down, the route disappears and the traffic goes to one of the other two!


On the compute side, I’m obviously going to be using LXD with the majority of services running in containers and with a few more running in virtual machines.

Stateless services that I want to always be running no matter what happens will be using anycast as shown above. This also applies to critical internal services as is the case above with my internal DNS resolvers (unbound).

Other services may still run two or more instances and be placed behind a load balancing proxy (HAProxy) to spread the load as needed and handle failures.

Lastly, even services that run as only a single instance will still benefit from the highly available environment. All their data will be stored on Ceph, meaning that in the event of server maintenance or failure, it’s a simple matter of running lxc move to relocate them to any of the other servers and bring them back online. When planned ahead of time, this means service downtime of less than 5s or so.
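In a clustered LXD, that relocation is a one-liner per instance. A quick sketch, with a made-up instance name:

```shell
# Stop, relocate to another cluster member, then start again
lxc stop mail-server
lxc move mail-server --target server-2
lxc start mail-server
```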

Up next

In the next post, I’ll be going into more details on the host setup, setting up Ubuntu 20.04 LTS, Ceph, OVN and LXD for such a cluster.

on December 16, 2020 10:15 PM

December 13, 2020

As you know from our previous post, back in 2019 the Kubuntu team set to work collaborating with MindShare Management Ltd to bring a Kubuntu dedicated laptop to the market. Recently, Chris Titus from the ‘Chris Titus Tech’ YouTube channel acquired a Kubuntu Focus M2 for the purpose of reviewing it, and he was so impressed he has decided to keep it as his daily driver. That’s right; Chris has chosen the Kubuntu Focus M2 instead of the Apple MacBook Pro M1 that he had intended to get. That is one awesome recommendation!

Chris stated that the Kubuntu Focus was “The most unique laptop, and I am not talking about the Apple M1, and neither I am talking about AMD Ryzen.”

In the review on his channel, not only did he put our Kubuntu based machine through its software paces, he additionally took the hardware to pieces and demonstrated the high quality build. Chris made light work of opening the laptop up and installing additional hardware, and he went on to say “The whole build out is using branded, high quality parts, like the Samsung EVO Plus, and Crucial memory; not some cheap knock-off”.

The Kubuntu Focus team have put a lot of effort into matching the software selection and operating system to the hardware. This ensures that users get the best possible performance from the Kubuntu Focus package. As Chris says in his review video “The tools, scripts and work this team has put together has Impressed the hell out of me!”

Kubuntu’s power optimizations, together with a GPU switcher that makes it super simple to change between the discrete Nvidia GPU and the integrated Intel GPU, impressed Chris a lot. “I was able to squeeze 7 to 8 hours out of it on battery, absolutely amazing!” he said.

The Kubuntu Focus is an enterprise ready machine, and arguably ‘The Ultimate Linux Laptop’. In his video, Chris goes on to demonstrate that the Kubuntu Focus includes Insync integration support for DropBox, OneDrive and GoogleDrive file sharing.

The Kubuntu Focus is designed from the get-go to be a transition device, providing Apple MacBook and Microsoft Windows users with a Cloud Native device in a laptop format which delivers desktop computing performance.

Chris ran our machine through a variety of benchmark testing tools, and the results are super impressive “Deep Learning capabilities are unparalleled, but more impressive is that it is configured for deep learning out of the box, and took just 10 minutes to be up and running. This is the best mobile solution you could possibly get.” Chris states.

To bring this article to a close, it would be remiss of me not to mention Chris Titus’s experience with the support provided by the Kubuntu Focus team. Chris was able to speak directly to the engineering team and get fast, accurate answers to all his questions. Chris says “Huge shout out to the support team, I am beyond impressed”.

Congratulations to the support team at MindShare Management Ltd; delivering great customer support is very challenging, and their experience and expertise obviously comes across to their customers.

Wow! This is a monumental YouTube review of Kubuntu, and the whole Kubuntu community should congratulate themselves for creating ‘The Ultimate Linux Desktop’, which is being used to build ‘The Ultimate Linux Laptop’. Below is the YouTube review from the ‘Chris Titus Tech’ YouTube channel. Check it out, and see for yourself how impressed he is with this machine. Do remember to share this article.

About the Author:

Rick Timmis is a Kubuntu Councillor and advocate. Rick has been a user of and open contributor to Kubuntu for over 10 years, and a KDE user and contributor for 20.

on December 13, 2020 02:30 PM

December 10, 2020

CentOS Stream, or Debian?

Jonathan Carter

It’s the end of CentOS as we know it

Earlier this week, the CentOS project announced the shift to CentOS Stream. In a nutshell, this means that CentOS will stop being a close clone of RHEL with its security updates, and will instead serve as a development branch of RHEL.

As you can probably imagine (or glean from the comments on that post I referenced), a lot of people are unhappy about this.

One particular quote got my attention this morning while catching up on this week’s edition of Linux Weekly News, under the distributions quotes section:

I have been doing this for 17 years and CentOS is basically my life’s work. This was (for me personally) a heart wrenching decision. However, i see no other decision as a possibility. If there was, it would have been made.

Johnny Hughes

I feel really sorry for this person and can empathize, I’ve been in similar situations in my life before where I’ve poured all my love and energy into something and then due to some corporate or organisational decisions (and usually poor ones), the project got discontinued and all that work that went into it vanishes into the ether. Also, 17 years is really long to be contributing to any one project so I can imagine that this must have been especially gutting.

Throw me a freakin’ bone here

I’m also somewhat skeptical of how successful CentOS Stream will really be as any form of community project. It seems that Red Hat is expecting volunteers to contribute to their product development for free, and then, when those contributors actually want to use the resulting product, they’re expected to pay a corporate subscription fee to do so. This seems like a very lop-sided relationship to me, and I’m not sure it will be sustainable in the long term. In Red Hat’s announcement of CentOS Stream, they kind of throw the community a bone by saying “In the first half of 2021, we plan to introduce low- or no-cost programs for a variety of use cases”. It seems likely that this will just be for experimental purposes, similar to the Windows Insider program, and won’t be of much use for production users at all.

Red Hat does point out that their Universal Base Image (UBI) is free to use and that users could just use that on any system in a container, but this doesn’t add much comfort to the individuals and organisations who have contributed huge amounts of time and effort to CentOS over the years who rely on a stable, general-purpose Linux system that can be installed on bare metal.

Way forward for CentOS users

Where to from here? I suppose CentOS users could start coughing up for RHEL subscriptions. For many CentOS use cases that won’t make much sense. They could move to another distribution, or fork/restart CentOS. The latter is already happening. One of the original founders of the CentOS project, Gregory Kurtzer, is now working on Rocky Linux, which aims to be a new free system built from the RHEL sources.

Some people from Red Hat and Canonical are often a bit surprised or skeptical when I point out to them that binary licenses are also important. This whole saga is yet another data point that proves it. If Red Hat had from the beginning released RHEL with free sources and unobfuscated patches, then none of this would’ve been necessary in the first place. And while I wish Rocky Linux all the success it aims to achieve, I do not think that working for free on a system that ultimately supports Red Hat’s selfish eco-system is really productive or helpful.

The fact is, Debian is already a free enterprise-scale system used by huge organisations like Google and many others, which has stable releases, LTS support and ELTS offerings from external organisations if someone really needs it. And while RHEL clones have come and gone through the years, Debian’s mission and contract with its users is something that stays consistent, and I believe Debian and its ideals will be around for as long as people need Unixy operating systems to run anywhere (i.e. a very long time).

While we sometimes fall short of some of our technical goals in Debian, and while we don’t always agree on everything, we do tend to make great long-term progress, and usually in the right direction. We’ve proved that our method of building a system together is sustainable, that we can do so reliably and timely and that we can collectively support it. From there on it can only get even better when we join forces and work together, because when either individuals or organisations contribute to Debian, they can use the end result for both private or commercial purposes without having to pay any fee or be encumbered by legal gotchas.

Don’t get caught by greedy corporate motivations that will result in you losing years of your life’s work for absolutely no good reason. Make your time and effort count and either contribute to Debian or give your employees time to do so on company time. Many already do and reap the rewards of this, and don’t look back.

While Debian is a very container and virtualization friendly system, we’ve managed to remain a good general-purpose operating system that manages to span use cases so vast that I’d have to use a blog post longer than this one just to cover them.

And while learning a whole new package build chain, package manager, organisational culture and so on can be, uhm, really rocky at the start, I’d say that it’s a good investment with Debian and unlikely to be time that you’ll ever feel was wasted. As Debian project leader, I’m personally available to help answer any questions that someone might have if they are interested in coming over to Debian. Feel free to mail leader_AT_debian.org (replace _AT_ with @) or find me on the oftc IRC network with the nick highvoltage. I believe that together, we can make Debian the de facto free enterprise system, and that it would be to the benefit of all its corporate users, instead of tilting all the benefit to just one or two corporations who certainly don’t have your best interests in mind.

on December 10, 2020 02:45 PM