June 15, 2021

Why do we need digital transformation? Really! 

In the last several years, we have witnessed the creation of many technologies, starting with the cloud and extending to machine learning, artificial intelligence, IoT, big data, robotics, automation and much more. The more the tech evolves, the more organisations strive to adopt these technologies, seeking digital transformation and disrupting industries along their journey, all to better serve their consumers.

With every technology having its own requirements, costs and benefits, the one thing common to any technology you decide to invest in is this: it is all about achieving a business goal that will help your organisation better position itself in the market. You might be fortunate enough to be leading your field, looking to better serve your customers, or simply keeping up with tough competition. Whatever your motive, the aim will always be to realise a business goal from your investment.

What are the overlooked costs?

As much as organisations benefit from digital transformation, everything comes at a cost. By cost here, I mean the friction that opposes the successful implementation and smooth operation of the technology, rather than the financial cost of acquiring it. This leads to the most important overlooked question when considering a digital transformation: who cleans the windows after raising a shiny artifact? The massive acceleration in evolving technology has made sustainable operations even more challenging to achieve, and has added post-deployment technical debt that organisations should not be immersed in; their focus belongs on activities that drive impact.

“When we pick up one end of the stick, we pick up the other end”, Stephen Covey said in his famous bestseller The 7 Habits of Highly Effective People. The rule applies to every aspect of life, but does it hold for technology adoption? Is it possible to achieve a successful digital transformation journey without having to pay its costs? Is it possible to remove the friction opposing your journey and avoid increasing the complexity of your operations, without growing your team exponentially to keep the lights on and spending months finding the right talent?

How do you pick up only one end of the stick: your goal?

We all want to take the gain and avoid the cost whenever possible; it’s human nature! Think of your car insurance for a moment: what makes you willing to pay the insurance company for their service (apart from it being mandatory)? They simply cover the bill, and may even provide you with a temporary ride until your car is ready to hit the streets again. They might even pick the car up and return it to your doorstep once it’s fixed. The point is that you don’t worry much about what happens behind the scenes, as you have more important things in life to take care of than spending time, effort and money getting your car fixed. You bought the car to get around conveniently; fixing it was never one of your goals, just a consequence of owning it. You simply let the insurance company take care of this, and hold the other end of the stick for you.

Building a private cloud can be complicated, but operating a private cloud efficiently is definitely challenging. Take OpenStack as an example: it is the preferred choice for organisations building private and hybrid/multi-clouds, and the most widely used open source cloud software. Its post-deployment operations are complex, requiring expertise across different layers of the stack, in addition to regular firmware upgrades that demand proper planning, backup and fallback strategies to be carried out safely. What if you also need to deploy Kubernetes on top of OpenStack so you can focus on developing your cloud-native applications? Luckily, there’s a similar “car insurance” story when building your private cloud using OpenStack or deploying Kubernetes, but let’s first wrap up what you’re looking for at that stage:

  • Access to a pool of specialised experts, at a reasonable cost
  • 24×7 operations and support
  • Full visibility and control over the costs
  • Security for your data, infrastructure and applications
  • Efficient deployment and even more efficient operations
  • Shared risks and minimum failures and downtimes
  • An up-to-date cloud infrastructure
  • And most importantly, maintained focus on the business goal rather than commodity activities that do not drive impact.

Managed IT services can provide you with the opportunity to offload many commodity activities and benefit from the knowledge and experience brought by the managed service provider’s team of cloud experts, and help you maintain your focus on driving your business. 

Let’s Talk Numbers! 

It’s time to think about efficiency, and the financial cost of outsourcing to a managed service provider compared to hiring a dedicated team. First, let us break down the operational costs of a self-managed cloud:

  • For 24×7 operations throughout the year, you require coverage of 8,760 hours
  • A full-time equivalent (FTE), after removing weekends and PTO, has an annual availability of 1,776 hours (222 days x 8 hours/day)
  • Accordingly, 5 FTEs are required to operate the cloud with no redundancy
  • For production-level SLAs, two FTEs are required on duty at any time for redundancy, so 10 FTEs is the minimum number of engineers required to operate 24×7
  • The average annual income of a Cloud Operations Engineer is ~$98.8K, and the minimum is ~$72K
  • Considering the minimum income, operating a private cloud will require an annual human resources budget of at least $720K. This does not account for turnover, the time and costs of training and development, or any other work-related activities.
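The arithmetic behind those bullets can be sketched as a quick back-of-the-envelope check:

```shell
# Staffing maths for 24x7 coverage of a self-managed cloud.
hours_per_year=$((24 * 365))                                     # 8,760 hours to cover
fte_hours=$((222 * 8))                                           # 1,776 hours per FTE per year
fte_needed=$(( (hours_per_year + fte_hours - 1) / fte_hours ))   # ceiling division -> 5
fte_redundant=$(( fte_needed * 2 ))                              # 2x coverage -> 10
budget=$(( fte_redundant * 72000 ))                              # at the minimum salary
echo "FTEs needed: $fte_redundant, annual budget: \$$budget"
```

Running this prints 10 FTEs and a $720,000 budget, matching the figures above.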

What if you want to start your cloud small, and scale as the business grows? Would you want to add this massive overhead to your expansion plans rather than allocate it to core activities that will support your successful journey? 

Now really, how can I manage a 24×7 private cloud with one engineer?

Let’s get this straight: it is not possible to operate a private cloud with only one engineer, because private cloud architectures have many integrated components, each requiring a unique technical skillset and expertise. You can, however, operate your private cloud at the cost of one engineer by partnering with the right managed service provider (MSP).

An MSP can help you significantly minimise costs and accelerate the adoption of the cloud while maintaining your focus on what you do best, driving your business. This is possible because an MSP simply has the ability to leverage the same pool of specialists to serve different customers. This way, multiple organizations benefit from running their 24×7 operations by experienced professionals at a much lower cost compared to hiring the same engineers exclusively.

If we consider Canonical’s offering for cost comparison, a managed OpenStack service is provided at $5,475 per host annually. The minimum number of nodes to build an OpenStack cloud is 12, making the total annual cost of operating your cloud $65,700, which is more than $6,000 less than the minimum annual income of a single full-time cloud operations engineer. This allows your team of IT specialists to focus on innovation and strategy rather than keeping the lights on.
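The comparison works out as follows:

```shell
# Managed service cost for a minimal OpenStack cloud vs. one engineer's minimum salary.
per_host=5475
hosts=12
managed_total=$(( per_host * hosts ))    # total annual managed cost
saving=$(( 72000 - managed_total ))      # vs. the ~$72K minimum salary above
echo "managed: \$$managed_total, saving vs. one engineer: \$$saving"
```

That is $65,700 per year for the whole 12-node cloud, a $6,300 saving against even a single engineer at the minimum salary.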

Want a fully managed private cloud?

Kubernetes or OpenStack? On public clouds, on your premises or hosted? Tell us what your preference is!

If you want to learn more, watch the “Hosted private cloud infrastructure: A Cost Analysis” webinar.

on June 15, 2021 04:51 PM

June 14, 2021

Welcome to the Ubuntu Weekly Newsletter, Issue 687 for the week of June 6 – 12, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on June 14, 2021 10:30 PM

June 14th, 2021: Canonical and Google Cloud today announce Ubuntu Pro on Google Cloud, a new Ubuntu offering available to all Google Cloud users. Ubuntu Pro on Google Cloud allows instant access to security patching covering thousands of open source applications for up to 10 years and critical compliance features essential to running workloads in regulated environments.

Google Cloud has long partnered with Canonical to offer innovative developer solutions, from desktop to Kubernetes and AI/ML. Continuing this collaboration, Google Cloud and Canonical have created a more secure, hardened, and cost-effective DevOps environment: Ubuntu Pro on Google Cloud, for all enterprises to accelerate their cloud adoption.

“Enterprise customers are increasingly adopting Google Cloud to run their core business-critical and customer-facing applications,” said June Yang, VP and GM, Compute, Google Cloud. “The availability of Ubuntu Pro on Google Cloud will offer our enterprise customers the additional security and compliance services needed for their mission-critical workloads.”

Ubuntu Pro on Google Cloud is a premium version of Ubuntu focused on enterprise and production use. It provides developers and administrators with a secured DevOps environment, addressing security, one of the fundamental pillars of all IT systems. It is based on standard Ubuntu components, but comes with a set of additional services activated out of the box, including:

  • Patching of high and critical CVEs for Ubuntu’s universe repository, which covers over 30,000 packages, including Node.js, MongoDB, Redis and Apache Kafka, to name a few.
  • A 10-year maintenance commitment (for 18.04 LTS onwards; the maintenance period for Ubuntu Pro 16.04 LTS is 8 years).
  • Live kernel patching, which offers VM instances increased security and higher uptimes.
  • Officially certified components to enable operating environments under compliance regimes such as FedRAMP, HIPAA, PCI, GDPR, and ISO.
  • Features to be available in H2 2021: Certified FIPS 140-2 components; security dashboard for Security Command Center, Managed Apps and more.
  • All the standard optimizations and security updates included in Ubuntu.

Ubuntu Pro for Google Cloud at Work

Gojek has evolved from a ride-hailing company to a technology company offering a suite of more than 20 services across payment, e-commerce, and transportation. Through their applications, they’re now serving millions of users across Southeast Asia.

“We needed more time to comprehensively test and migrate our Ubuntu 16.04 LTS workloads to Ubuntu 20.04 LTS, which would mean stretching beyond the standard maintenance timelines for Ubuntu 16.04 LTS. With Ubuntu Pro on Google Cloud, we now can postpone this, and in moving our 16.04 workloads to Ubuntu Pro, we benefit from its live kernel patching and improved security coverage for our key open source components,” said Kartik Gupta, Engineering Manager for CI/CD & FinOps at Gojek.

To give customers better visibility into costs and savings, Ubuntu Pro for Google Cloud embeds a transparent and innovative approach to pricing: Ubuntu Pro costs 3-4.5% of your average compute spend, meaning the more computing resources you consume, the smaller the percentage you pay for Ubuntu Pro. Customers can purchase Ubuntu Pro directly through the GCP Console or Google Cloud Marketplace for a streamlined procurement process, enabling quicker access to these commercial features offered by Canonical.

“Since 2014, Canonical has been providing Ubuntu for Google Cloud customers. We continuously expand security coverage, great operational efficiency, and native compatibility with Google Cloud features,” said Alex Gallagher, VP of Cloud GTM at Canonical. “I’m excited to witness the collaboration between Canonical and Google Cloud to make Ubuntu Pro available. Ubuntu Pro on Google Cloud sets a new standard for security of operating systems and facilitates your migration to Google Cloud.”

Getting started

Getting started with Ubuntu Pro on Google Cloud is simple. You can now purchase these premium images directly from Google Cloud by selecting Ubuntu Pro as the operating system straight from the Google Cloud Console.

To learn more about Ubuntu Pro on Google Cloud, please visit the documentation page and read the announcement from Google.

on June 14, 2021 04:30 PM


Alan Pope

Over the weekend I participated in FOSS Talk Live. Before The Event this would have been an in-person shindig at a pub in London. A bunch of (mostly) UK-based podcasters get together and record live versions of their shows in front of a “studio audience”. It’s mostly an opportunity for a bunch of us middle-aged farts who speak into microphones to get together, have a few beers and chat. Due to The Event, this year it was a virtual affair, done online via YouTube.
on June 14, 2021 11:00 AM

June 13, 2021

Earlier this week it was time for GitOps Days again. The third time now and the event has grown quite a bit since we started. Born out of the desire to bring GitOps practitioners together during pandemic times initially, this time we had a proper CFP and the outcome was just great: lots of participation from a very diverse crowd of experts - we had panels, case studies, technical deep dives, comparisons of different solutions and more.
on June 13, 2021 06:06 AM

June 11, 2021

Over the past few posts, I covered the hardware I picked up to set up a small LXD cluster and get it all going at a co-location site near home. I then went silent for about 6 months, not because anything went wrong but simply because I didn’t quite find the time to come back and complete this story!

So let’s pick things up where I left them with the last post and cover the last few bits of the network setup and then go over what happened over the past 6 months.

Routing in a HA environment

You may recall that the 3 servers are each connected to the top-of-the-rack switch (bonded dual-gigabit) as well as to each other (bonded dual-10-gigabit). The netplan config in the previous post allows each server to talk to the others directly and establishes a few VLANs on the link to the top-of-the-rack switch.

Those are for:

  • WAN-HIVE: Peering VLAN with my provider containing their core routers and mine
  • INFRA-UPLINK: OVN uplink network (where all the OVN virtual routers get their external addresses)
  • INFRA-HOSTS: VLAN used for external communication with the servers
  • INFRA-BMC: VLAN used for the management ports of the servers (BMCs) and switch, isolated from the internet

Simply put, the servers have their main global address and default gateway on INFRA-HOSTS, the BMCs and switch have their management addresses in INFRA-BMC, INFRA-UPLINK is consumed by OVN and WAN-HIVE is how I access the internet.

In my setup, I then run three containers, one on each server, each with direct access to all those VLANs and acting as a router using FRR. FRR is configured to establish BGP sessions with both of my provider’s core routers, getting routes to the internet from them and announcing my IPv4 and IPv6 subnets in return.

LXD output showing the 3 FRR routers

On the internal side of things, I’m using VRRP to provide a virtual router internally. Typically this means that frr01 is the default gateway for all egress traffic while ingress traffic is somewhat spread across all 3 thanks to them having the same BGP weight (so my provider’s routers distribute the connections across all active peers).

With that in place, so long as one of the FRR instances is running, connectivity is maintained. This makes maintenance quite easy as there is effectively no SPOF.

Enter LXD networks with OVN

Now for where things get a bit trickier. As I’m using OVN to provide virtual networks inside of LXD, each of those networks will typically need some amount of public addressing. For IPv6, I don’t do NAT so each of my networks get a public /64 subnet. For IPv4, I have a limited number of those, so I just assign them one by one (/32) directly to specific instances.

Whenever such a network is created, it will grab an IPv4 and IPv6 address from the subnet configured on INFRA-UPLINK. That part is all good and the OVN gateway becomes immediately reachable.
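As an illustrative sketch (the network name here is hypothetical, and INFRA-UPLINK is the uplink network from my setup; exact keys can vary across LXD versions), creating and inspecting such an OVN network looks roughly like this:

```shell
# Create an OVN network that takes its external addresses from the
# subnet configured on the INFRA-UPLINK uplink network.
lxc network create my-ovn --type=ovn network=INFRA-UPLINK

# Show the network config, including the IPv4/IPv6 addresses it was
# assigned on the uplink (the OVN gateway's external addresses).
lxc network show my-ovn
```

The addresses reported there are exactly what the routers need to know about to forward traffic for the network's public subnets.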

The issue is with the public IPv6 subnet used by each network and with any additional addresses (IPv4 or IPv6) which are routed directly to its instances. For that to work, I need my routers to send the traffic headed for those subnets to the correct OVN gateway.

But how do you do that? Well, there are pretty much three options here:

  • You use LXD’s default mode of performing NDP proxying. Effectively, LXD will configure OVN to directly respond to ARP/NDP on the INFRA-UPLINK VLAN as if the gateway itself was holding the address being reached.
    This is a nice trick which works well at pretty small scale. But it relies on LXD configuring a static entry for every single address in the subnet. So that’s fine for a few addresses but not so much when you’re talking a /64 IPv6 subnet.
  • You add static routing rules to your routers. Basically you run lxc network show some-name and look for the IPv4 and IPv6 addresses that the network got assigned, then you go on your routers and you configure static routes for all the addresses that need to be sent to that OVN gateway. It works, but it’s pretty manual and effectively prevents you from delegating network creation to anyone who’s not the network admin too.
  • You use dynamic routing to have all public subnets and addresses configured on LXD to be advertised to the routers with the correct next-hop address. With this, there is no need to configure anything manually, keeping the OVN config very simple and allowing any user of the cluster to create their own networks and get connectivity.

Naturally I went with the last one. At the time, there was no way to do that through LXD, so I made my own by writing lxd-bgp. This is a pretty simple piece of software which uses the LXD API to inspect its networks, determine all OVN networks tied to a particular uplink network (INFRA-UPLINK in my case) and then inspect all instances running on that network.

It then sends announcements both for the subnets backing each OVN networks as well as for specific routes/addresses that are routed on top of that to specific instances running on the local system.

The result is that when an instance with a static IPv4 and IPv6 starts, the lxd-bgp instance running on that particular system will send an announcement for those addresses and traffic will start flowing.

Now deploy the same service on 3 servers, put them into 3 different LXD networks and set the exact same static IPv4 and IPv6 addresses on them and you now have a working anycast service. When one of the containers or its host go down for some reason, that route announcement goes away and the traffic now heads to the remaining instances. That does a good job at some simplistic load-balancing and provides pretty solid service availability!

LXD output of my 3 DNS servers (backing ns1.stgraber.org) and using anycast

The past 6 months

Now that we’ve covered the network setup I’m running, let’s spend a bit of time going over what happened over the past 6 months!

The servers and switch installed in the cabinet

In short, well, not a whole lot. Things have pretty much just been working. The servers were installed in the datacenter on the 21st of December. I’ve then been busy migrating services from my old server at OVH over to the new cluster, finalizing that migration at the end of April.

I’ve gotten into the habit of doing a full reboot of the entire cluster every week and developed a bit of tooling for this called lxd-evacuate. This makes it easy to relocate any instance which isn’t already highly available, emptying a specific machine and then letting me reboot it. By and large this has been working great and it’s always nice to have confidence that should something happen, you know all the machines will boot up properly!

These days, I’m running 63 instances across 9 projects and a dozen networks. I spent a bit of time building up a Grafana dashboard which tracks and alerts on my network consumption (WAN port, uplink to servers and mesh), monitors the health of my servers (fan speeds, temperature, …), tracks CEPH consumption and performance, monitors the CPU, RAM and load of each of the servers and also tracks performance of my top services (NSD, unbound and HAProxy).

LXD also rolled out support for network ACLs somewhat recently, allowing for proper stateful firewalling directly through LXD and implemented in OVN. It took some time to set up all those ACLs for all instances and networks but that’s now all done and makes me feel a whole lot better about service security!

What’s next

On the LXD front, I’m excited about a few things we’re doing over the next few months which will make environments like mine just that much nicer:

  • Native BGP support (no more lxd-bgp)
  • Native cluster server evacuation (no more lxd-evacuate)
  • Built-in DNS server for instance forward/reverse records as well as DNS zones tied to networks
  • Built-in metrics (prometheus) endpoint exposing CPU/memory/disk/network usage of all local instances

This will let me deprecate some of those side projects I had to start as part of this work, will reduce the amount of manual labor involved in setting up all the DNS records and will give me much better insight on what’s consuming resources on the cluster.

I’m also in the process of securing my own ASN and address space through ARIN, mostly because that seemed like a fun thing to do and will give me a tiny bit more flexibility too (not to mention let me consolidate a whole bunch of subnets). So soon enough, I expect to have to deal with quite a bit of re-addressing, but I’m sure it will be a fun and interesting experience!

on June 11, 2021 10:07 PM

We are pleased to announce that Plasma 5.22.0 is now available in our backports PPA for Kubuntu 21.04 Hirsute Hippo.

The release announcement detailing the new features and improvements in Plasma 5.22 can be found here.

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt full-upgrade


Please note that more bugfix releases are scheduled by KDE for Plasma 5.22, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more rounds of stabilisation/bugfixes ‘baked in’ may find it advisable to stay with Plasma 5.21 as included in the original 21.04 (Hirsute) release.

The Kubuntu Backports PPA for 21.04 also currently contains newer versions of KDE Frameworks, Applications, and other KDE software. The PPA will also continue to receive updates of KDE packages other than Plasma.

Issues with Plasma itself can be reported on the KDE bugtracker [1]. In the case of packaging or other issues, please provide feedback on our mailing list [2], IRC [3], and/or file a bug against our PPA packages [4].

1. KDE bugtracker: https://bugs.kde.org
2. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
3. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.libera.chat
4. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

on June 11, 2021 07:19 PM

SSH quoting

Colin Watson

A while back there was a thread on one of our company mailing lists about SSH quoting, and I posted a long answer to it. Since then a few people have asked me questions that caused me to reach for it, so I thought it might be helpful if I were to anonymize the original question and post my answer here.

The question was why a sequence of commands involving ssh and fiddly quoting produced the output they did. The first example was this:

$ ssh user@machine.local bash -lc "cd /tmp;pwd"

Oh hi, my dubious life choices have been such that this is my specialist subject!

This is because SSH command-line parsing is not quite what you expect.

First, recall that your local shell will apply its usual parsing, and the actual OS-level execution of ssh will be like this:

[0]: ssh
[1]: user@machine.local
[2]: bash
[3]: -lc
[4]: cd /tmp;pwd

Now, the SSH wire protocol only takes a single string as the command, with the expectation that it should be passed to a shell by the remote end. The OpenSSH client deals with this by taking all its arguments after things like options and the target, which in this case are:

[0]: bash
[1]: -lc
[2]: cd /tmp;pwd

It then joins them with a single space:

bash -lc cd /tmp;pwd

This is passed as a string to the server, which then passes that entire string to a shell for evaluation, so as if you’d typed this directly on the server:

sh -c 'bash -lc cd /tmp;pwd'

The shell then parses this as two commands:

bash -lc cd /tmp

The directory change thus happens in a subshell (actually it doesn’t quite even do that, because bash -lc cd /tmp in fact ends up just calling cd because of the way bash -c parses multiple arguments), and then that subshell exits, then pwd is called in the outer shell which still has the original working directory.

The second example was this:

$ ssh user@machine.local bash -lc "pwd;cd /tmp;pwd"

Following the logic above, this ends up as if you’d run this on the server:

sh -c 'bash -lc pwd; cd /tmp; pwd'

The third example was this:

$ ssh user@machine.local bash -lc "cd /tmp;cd /tmp;pwd"

And this is as if you’d run:

sh -c 'bash -lc cd /tmp; cd /tmp; pwd'

Now, I wouldn’t have implemented the SSH client this way, because I agree that it’s confusing. But /usr/bin/ssh is used as a transport for other things so much that changing its behaviour now would be enormously disruptive, so it’s probably impossible to fix. (I have occasionally agitated on openssh-unix-dev@ for at least documenting this better, but haven’t made much headway yet; I need to get round to preparing a documentation patch.) Once you know about it you can use the proper quoting, though. In this case that would simply be:

ssh user@machine.local 'cd /tmp;pwd'

Or if you do need to specifically invoke bash -l there for some reason (I’m assuming that the original example was reduced from something more complicated), then you can minimise your confusion by passing the whole thing as a single string in the form you want the remote sh -c to see, in a way that ensures that the quotes are preserved and sent to the server rather than being removed by your local shell:

ssh user@machine.local 'bash -lc "cd /tmp;pwd"'
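You can reproduce the whole effect locally without SSH at all, by simulating what the remote end does (here using sh as both the outer and inner shell so the example is shell-agnostic):

```shell
cd /
# Unquoted form: the outer shell splits on the semicolon, so the inner
# shell only receives "cd" and pwd runs in the outer shell's directory.
sh -c 'sh -c cd /tmp;pwd'      # prints /
# Quoted form: the inner shell receives the full "cd /tmp;pwd" string.
sh -c 'sh -c "cd /tmp;pwd"'    # prints /tmp
```

The first invocation prints / because the cd happens (at most) in a throwaway inner shell; the second prints /tmp because the quotes keep the whole command together for the inner shell.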

Shell parsing is hard.

on June 11, 2021 10:22 AM

June 10, 2021

Ep 146 – Caixote

Podcast Ubuntu Portugal

In this episode we talked about the community and the return of in-person meetups after the pandemic, news from the international Ubuntu community and the revitalisation of relations between Canonical and the various LoCos around the world, and we also rounded up the latest Ubuntu news.

You know the drill: listen, subscribe and share!

  • https://www.youtube.com/watch?v=oMJKru83NEs
  • https://hacks.mozilla.org/2021/05/introducing-firefox-new-site-isolation-security-architecture/
  • https://www.youtube.com/watch?v=SmthCRF-NQQ&t=586s
  • https://www.youtube.com/watch?v=SmthCRF-NQQ
  • https://www.youtube.com/watch?v=pKCfIXlHXw4
  • https://www.youtube.com/watch?v=T1YbGX4liww
  • https://www.youtube.com/watch?v=R3YJ0brUmb0
  • https://www.opensourcelisbon.com
  • https://www.humblebundle.com/software/python-development-software?partner=PUP
  • https://www.humblebundle.com/books/learn-you-more-python-books?partner=PUP
  • https://www.humblebundle.com/books/knowledge-101-adams-media-books?partner=PUP
  • https://keychronwireless.referralcandy.com/3P2MKM7
  • https://www.humblebundle.com/books/head-first-programming-oreilly-books?parner=PUP
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3
  • https://youtube.com/PodcastUbuntuPortugal


You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different portions depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on June 10, 2021 09:45 PM

S14E14 – Letter Copy Magic

Ubuntu Podcast from the UK LoCo

This week we got a portable touch screen monitor. We discuss our favourite Linux apps, bring you a command line lurve and go over all your wonderful feedback.

It’s Season 14 Episode 14 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

rpg ~/Scripts

    hero[1][xxx-] -11hp
  spider[2][xxxx]  dodged!
    hero[1][x---] -13hp
  spider[2][xxxx]  dodged!
    hero[1][----] -12hp

    hero[1][----][----]@~/Scripts 💀

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on June 10, 2021 02:00 PM

June 09, 2021

Remembering Planning

Stephen Michael Kellat

The month has started off with some big surprises for me. For the low price equal to roughly 34 Beta Edition PinePhones or roughly 72 Raspberry Pi 400 units I wound up having to pay to get my home’s central heating and cooling system replaced. It has been a few days of disruption since the unit failed which combined with the rather hot weather has made my home not quite fit for habitation.

Things like that help me appreciate events like the Fastly outage on Tuesday morning. A glitch in that content delivery network provider damaged the presences of quite a number of sites. While it was a brief event that happened while I was asleep it was apparently jarring to many people.

Both happenings point out that resilience is a journey rather than a concrete endpoint. How easily can you bounce back from the unexpected? If you operate an online service do you even have a plan for when something goes horribly wrong?

Fortunately when the central air unit at home ceased functioning we were able to stay with family while I tracked down a contractor to do an assessment which then turned into a replacement job. Fastly had a contingency plan that it executed to keep the incident down to less than an hour. Whether you are running a massive service or just a small shared server for friends you need to have some notion of what you intend to do when disaster strikes.

Tags: Contingencies

on June 09, 2021 04:42 AM

June 07, 2021

Welcome to the Ubuntu Weekly Newsletter, Issue 686 for the week of May 30 – June 5, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on June 07, 2021 10:49 PM

June 05, 2021

Note: Though this testing was done on Google Cloud and I work at Google, this work and blog post represent my personal work and do not represent the views of my employer.

As a red teamer and security researcher, I occasionally find the need to crack some hashed passwords. It used to be that John the Ripper was the go-to tool for the job. With the advent of GPGPU technologies like CUDA and OpenCL, hashcat quickly eclipsed John for pure speed. Unfortunately, graphics cards are a bit hard to come by in 2021. I decided to take a look at the options for running hashcat on Google Cloud.

There are several steps involved in getting hashcat running with CUDA, and because I often only need to run the instance for a short period of time, I put together a script to spin up hashcat on a Google Cloud VM. It can either run the benchmark or spin up an instance with arbitrary flags. It starts the instance but does not stop it upon completion, so if you want to give it a try, make sure you shut down the instance when you’re done with it. (It leaves the hashcat job running in a tmux session for you to examine.)

At the moment, there are 6 available GPU accelerators on Google Cloud, spanning the range of architectures from Kepler to Ampere (see pricing here):

  • NVIDIA A100 (Ampere)
  • NVIDIA T4 (Turing)
  • NVIDIA V100 (Volta)
  • NVIDIA P4 (Pascal)
  • NVIDIA P100 (Pascal)
  • NVIDIA K80 (Kepler)

Performance Results

I chose a handful of common hashes as representative samples across the different architectures. These include MD5, SHA1, NTLM, sha512crypt, and WPA-PBKDF2 – some of the most common password-cracking situations encountered by penetration testers. Unsurprisingly, overall performance is most directly related to the number of CUDA cores, followed by clock speed and architecture.

Relative Performance Graph

Speeds in the graph are normalized to the slowest model in each test (the K80 in all cases).

Note that the Ampere-based A100 is 11 to 15 times as fast as the slowest card, the K80. (On some of the benchmarks it can reach 55 times as fast, but these are less common.) There’s a wide range of hardware here, and depending on availability and GPU type, you can attach from 1 to 16 GPUs to a single instance; hashcat will spread the load across all of the attached GPUs.

Full results of all of the tests, using the slowest hardware as a baseline for percentages:

0 - MD5: K80 4.3 GH/s (100.0%) | P100 27.1 GH/s (622.2%) | P4 16.6 GH/s (382.4%) | V100 55.8 GH/s (1283.7%) | T4 18.8 GH/s (432.9%) | A100 67.8 GH/s (1559.2%)
100 - SHA1: K80 1.9 GH/s (100.0%) | P100 9.7 GH/s (497.9%) | P4 5.6 GH/s (286.6%) | V100 17.5 GH/s (905.4%) | T4 6.6 GH/s (342.8%) | A100 21.7 GH/s (1119.1%)
1400 - SHA2-256: K80 845.7 MH/s (100.0%) | P100 3.3 GH/s (389.5%) | P4 2.0 GH/s (238.6%) | V100 7.7 GH/s (904.8%) | T4 2.8 GH/s (334.8%) | A100 9.4 GH/s (1116.7%)
1700 - SHA2-512: K80 230.3 MH/s (100.0%) | P100 1.1 GH/s (463.0%) | P4 672.5 MH/s (292.0%) | V100 2.4 GH/s (1039.9%) | T4 789.9 MH/s (343.0%) | A100 3.1 GH/s (1353.0%)
22000 - WPA-PBKDF2-PMKID+EAPOL (Iterations: 4095): K80 80.7 kH/s (100.0%) | P100 471.4 kH/s (584.2%) | P4 292.9 kH/s (363.0%) | V100 883.5 kH/s (1094.9%) | T4 318.3 kH/s (394.5%) | A100 1.1 MH/s (1354.3%)
1000 - NTLM: K80 7.8 GH/s (100.0%) | P100 49.9 GH/s (643.7%) | P4 29.9 GH/s (385.2%) | V100 101.6 GH/s (1310.6%) | T4 33.3 GH/s (429.7%) | A100 115.3 GH/s (1487.3%)
3000 - LM: K80 3.8 GH/s (100.0%) | P100 25.0 GH/s (661.9%) | P4 13.1 GH/s (347.8%) | V100 41.5 GH/s (1098.4%) | T4 19.4 GH/s (514.2%) | A100 65.1 GH/s (1722.0%)
5500 - NetNTLMv1 / NetNTLMv1+ESS: K80 5.0 GH/s (100.0%) | P100 26.6 GH/s (533.0%) | P4 16.1 GH/s (322.6%) | V100 54.9 GH/s (1100.9%) | T4 19.7 GH/s (395.6%) | A100 70.6 GH/s (1415.7%)
5600 - NetNTLMv2: K80 322.1 MH/s (100.0%) | P100 1.8 GH/s (567.5%) | P4 1.1 GH/s (349.9%) | V100 3.8 GH/s (1179.7%) | T4 1.4 GH/s (439.4%) | A100 5.0 GH/s (1538.1%)
1500 - descrypt, DES (Unix), Traditional DES: K80 161.7 MH/s (100.0%) | P100 1.1 GH/s (681.5%) | P4 515.3 MH/s (318.7%) | V100 1.7 GH/s (1033.9%) | T4 815.9 MH/s (504.6%) | A100 2.6 GH/s (1606.8%)
500 - md5crypt, MD5 (Unix), Cisco-IOS $1$ (MD5) (Iterations: 1000): K80 2.5 MH/s (100.0%) | P100 10.4 MH/s (416.4%) | P4 6.3 MH/s (251.1%) | V100 24.7 MH/s (989.4%) | T4 8.7 MH/s (347.6%) | A100 31.5 MH/s (1260.6%)
3200 - bcrypt $2*$, Blowfish (Unix) (Iterations: 32): K80 2.5 kH/s (100.0%) | P100 22.9 kH/s (922.9%) | P4 13.4 kH/s (540.7%) | V100 78.4 kH/s (3155.9%) | T4 26.7 kH/s (1073.8%) | A100 135.4 kH/s (5450.9%)
1800 - sha512crypt $6$, SHA512 (Unix) (Iterations: 5000): K80 37.9 kH/s (100.0%) | P100 174.6 kH/s (460.6%) | P4 91.6 kH/s (241.8%) | V100 369.6 kH/s (975.0%) | T4 103.5 kH/s (273.0%) | A100 535.4 kH/s (1412.4%)
7500 - Kerberos 5, etype 23, AS-REQ Pre-Auth: K80 43.1 MH/s (100.0%) | P100 383.9 MH/s (889.8%) | P4 186.7 MH/s (432.7%) | V100 1.0 GH/s (2427.2%) | T4 295.0 MH/s (683.8%) | A100 1.8 GH/s (4281.9%)
13100 - Kerberos 5, etype 23, TGS-REP: K80 32.3 MH/s (100.0%) | P100 348.8 MH/s (1080.2%) | P4 185.3 MH/s (573.9%) | V100 1.0 GH/s (3123.0%) | T4 291.7 MH/s (903.4%) | A100 1.8 GH/s (5563.8%)
15300 - DPAPI masterkey file v1 (Iterations: 23999): K80 15.6 kH/s (100.0%) | P100 80.8 kH/s (519.0%) | P4 50.2 kH/s (322.3%) | V100 150.9 kH/s (968.9%) | T4 55.6 kH/s (356.7%) | A100 187.2 kH/s (1202.0%)
15900 - DPAPI masterkey file v2 (Iterations: 12899): K80 8.1 kH/s (100.0%) | P100 36.7 kH/s (451.0%) | P4 22.1 kH/s (271.9%) | V100 79.9 kH/s (981.4%) | T4 31.3 kH/s (385.0%) | A100 109.2 kH/s (1341.5%)
7100 - macOS v10.8+ (PBKDF2-SHA512) (Iterations: 1023): K80 104.1 kH/s (100.0%) | P100 442.6 kH/s (425.2%) | P4 272.5 kH/s (261.8%) | V100 994.6 kH/s (955.4%) | T4 392.5 kH/s (377.0%) | A100 1.4 MH/s (1304.0%)
11600 - 7-Zip (Iterations: 16384): K80 91.9 kH/s (100.0%) | P100 380.5 kH/s (413.8%) | P4 217.0 kH/s (236.0%) | V100 757.8 kH/s (824.2%) | T4 266.6 kH/s (290.0%) | A100 1.1 MH/s (1218.6%)
12500 - RAR3-hp (Iterations: 262144): K80 12.1 kH/s (100.0%) | P100 64.2 kH/s (528.8%) | P4 20.3 kH/s (167.6%) | V100 102.2 kH/s (842.3%) | T4 28.1 kH/s (231.7%) | A100 155.4 kH/s (1280.8%)
13000 - RAR5 (Iterations: 32799): K80 10.2 kH/s (100.0%) | P100 39.6 kH/s (389.3%) | P4 24.5 kH/s (240.6%) | V100 93.2 kH/s (916.6%) | T4 30.2 kH/s (297.0%) | A100 118.7 kH/s (1167.8%)
6211 - TrueCrypt RIPEMD160 + XTS 512 bit (Iterations: 1999): K80 66.8 kH/s (100.0%) | P100 292.4 kH/s (437.6%) | P4 177.3 kH/s (265.3%) | V100 669.9 kH/s (1002.5%) | T4 232.1 kH/s (347.3%) | A100 822.4 kH/s (1230.8%)
13400 - KeePass 1 (AES/Twofish) and KeePass 2 (AES) (Iterations: 24569): K80 10.9 kH/s (100.0%) | P100 67.0 kH/s (617.1%) | P4 19.0 kH/s (174.8%) | V100 111.2 kH/s (1024.8%) | T4 27.3 kH/s (251.2%) | A100 139.0 kH/s (1281.0%)
6800 - LastPass + LastPass sniffed (Iterations: 499): K80 651.9 kH/s (100.0%) | P100 2.5 MH/s (390.4%) | P4 1.5 MH/s (232.2%) | V100 6.0 MH/s (914.8%) | T4 2.0 MH/s (304.7%) | A100 7.6 MH/s (1160.0%)
11300 - Bitcoin/Litecoin wallet.dat (Iterations: 200459): K80 1.3 kH/s (100.0%) | P100 5.0 kH/s (389.9%) | P4 3.1 kH/s (241.5%) | V100 11.4 kH/s (892.3%) | T4 4.1 kH/s (325.3%) | A100 14.4 kH/s (1129.2%)

Value Results

Believe it or not, speed doesn’t tell the whole story, unless you’re able to bill the cost directly to your customer – in that case, go straight for that 16-A100 instance. :)

You’re probably more interested in value, however – that is, hashes per dollar. For each card, I computed the median relative performance across all of the hashes in the default hashcat benchmark, divided that by the price per hour, and then normalized the results against the K80 again.

Relative Value

Relative value is the median speed per dollar, normalized to the K80.


Though the NVIDIA T4 is nowhere near the fastest, it is the most cost-efficient, primarily due to its very low $0.35/hr pricing (at the time of writing). If you have a particular hash type to focus on, you may want to do the math for that hash, but the relative performances seem to follow the same trend. The T4 is a great value.

So maybe the next time you’re on an engagement and need to crack hashes, you’ll be able to figure out if the cloud is right for you.

on June 05, 2021 07:00 AM

June 03, 2021

Ep 145 – Amália

Podcast Ubuntu Portugal

An episode recorded under particularly demanding vocal conditions for one of the hosts, and which proved quite demanding for the other as well, but as Freddie Mercury said: The Show Must Go On!

You know the drill: listen, subscribe and share!

  • https://cdimage.ubuntu.com/daily-live/current/
  • https://www.youtube.com/watch?v=CbyoaBrB6O8
  • http://media.parlamento.pt/www/XIVLEG/SL2/COM/01_CACDLG/CACDLG_20210525_2_VC.mp3
  • https://ubports.com/pt/blog/ubports-blogs-noticias-1/post/ubuntu-touch-ota-17-release-3755
  • https://www.youtube.com/watch?v=_v3CdCTJQms
  • https://xray2000.gitlab.io/anbox/
  • https://www.gofundme.com/f/anbox-on-ut?pc=tw_u&utm_source=twitter&utm_medium=social&utm_campaign=p_lico+update
  • https://twitter.com/Khode_Erfan/status/1395788336653078530
  • https://www.gofundme.com/f/anbox-on-ut?pc=tw_u&utm_source=twitter&utm_medium=social&utm_campaign=p_lico+update
  • https://www.humblebundle.com/software/python-development-software?partner=PUP
  • https://www.humblebundle.com/books/learn-you-more-python-books?partner=PUP
  • https://www.humblebundle.com/books/knowledge-101-adams-media-books?partner=PUP
  • https://keychronwireless.referralcandy.com/3P2MKM7
  • https://www.humblebundle.com/books/head-first-programming-oreilly-books?parner=PUP
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3
  • https://youtube.com/PodcastUbuntuPortugal


You can support the podcast using the Humble Bundle affiliate links above: when you make a purchase through those links, part of what you pay goes to the Podcast Ubuntu Portugal.
And you can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing other types of use; contact us for validation and authorisation.

on June 03, 2021 09:45 PM

S14E13 – Wants Photo Booth

Ubuntu Podcast from the UK LoCo

This week we’ve been fixing phones and relearning trigonometry. We round up the news and events from the Ubuntu community and discuss news from the wider tech scene.

It’s Season 14 Episode 13 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on June 03, 2021 02:00 PM

June 01, 2021

Where “win” means becoming the universal way to get apps on Linux.

In short, I don't think either current iteration will. But why?

I started writing this a while ago, but Disabling snap Autorefresh reminded me to finish it. I also do not mean this as a "hit piece" against my former employer.

Here is a quick status of where we are:

Use case                   Snaps   Flatpak
Desktop apps                 ☑️       ☑️
Service/server apps          ☑️       🚫
Embedded                     ☑️       🚫
Command-line apps            ☑️       🚫
Full independence option     🚫       ☑️
Build a complete desktop     🚫       ☑️
Controlling updates          🚫       ☑️

Desktop apps

Both Flatpaks and Snaps are pretty good at desktop apps. They share some bits and have some differences. Flatpak might have a slight edge because it's focused only on Desktop apps, but for the most part it's a wash.

Service/Server / Embedded / Command Line apps

Flatpak doesn't target these at all. Full stop.

Snap wins these without competition from Flatpak, but this does show a security difference: sudo snap install xyz will just install it – it won’t ask whether you think it’s a service, a desktop app, or some combination (nor prompt you for permissions like Flatpak does).

For embedded use with Ubuntu Core, strict confinement is required, which is a plus. (Yes, you read that correctly: everywhere else, “something less” than strict confinement is allowed.)

Aside: As Fedora Silverblue and Endless OS both only let you install Flatpaks, they also ship the container-based Toolbox to make it possible to run other apps.

Full independence option / Build a complete desktop


You cannot simply go and (re)build your own distro and use upstream snapd.

Snaps generally run from one LTS “core” behind what you might expect from your Ubuntu desktop version. For example, core18 is installed by default on Ubuntu 21.04. The embedded Ubuntu Core option is the only one that uses just one version of Ubuntu core code.


With Flatpak you can choose one of many public bases, like the Freedesktop platform or the Gnome platform. You can also build your own platform, as Fedora Silverblue does. All of the default Flatpaks that Silverblue comes with are derived from the “regular” Fedora of the same version. You can, of course, add other sources too. Example: the Gnome Calculator in Silverblue is built from the Fedora RPMs and depends on the org.fedoraproject.Platform built from that same version of Fedora.

Aside: I should note that to do that you need OSTree to make the Platforms.

Controlling updates

Flatpak itself does not do any updates automatically; it relies on your software application (e.g. Gnome Software) to do that. It also allows apps to check for their own updates and ask to update themselves.

Snaps are more complicated, but why? Let's look at the Ubuntu IoT and device services that Canonical sells:

Dedicated app store ...complete control of application versions, updates and controlled rollouts for $15,000 per year.

Enterprise app store ...control snap updates and upgrades. Ensure that all device traffic goes through an audited communications channel and determine the precise versions of snaps used inside the business.

Control of the update process is one of the ways Canonical is trying to make money. I don’t believe anyone has ever told me explicitly that this is why Snap updates work this way; it just makes sense given the business considerations.

So who is going to "win"?

One of them might go away, but neither is set to become the universal way to get apps on Linux, at least not today.

It could change starting with something like:

  • Flatpak (or something like it) evolves to support command line or other apps.
  • A snap based Ubuntu desktop takes off and becomes the default Ubuntu.

Neither step alone would get it all the way there, but either is needed to prove what the technology can do. In both cases, the underlying confinement technology is being improved for all.


Maybe I missed something? Feel free to make a PR to add comments!

on June 01, 2021 08:00 PM

May 28, 2021

rout is out

James Hunt

I’ve just released the rout tool I mentioned in my last blog post about command-line parsing semantics.

rout is a simple tool, written in Rust, that produces Unicode UTF-8 output in interesting ways. It uses the minimal command-line parsing crate ap. It also uses a fancy pest parser for interpreting escape sequences and range syntax.

Either grab the source, or install the crate:

$ cargo install rout

Full details (with lots of examples! ;) are on both sites:

on May 28, 2021 07:54 PM

It took a while, but now Launchpad finally allows users to edit their comments on questions, bug reports and merge proposal pages.

The first request for this feature dates back to 2007. Since then, Launchpad has grown a lot in terms of new features, and other priorities took precedence over that request, although it remained more than valid. More recently, we managed to bump the priority of this feature, and now we have it: users can edit their comments on Launchpad answers, bugs and merge proposals!

This has been available in the API for a few days already, but today we finally released the fresh new pencil icon in the top-right corner of your messages. Once you click it, the message is turned into a small form that allows you to edit your message content.

For messages that were edited before, it is possible to see old versions of that edited message by clicking the “last edit …” link, also at the top of the message.

In case you introduce sensitive information by mistake in your comment and need to remove it from the message history after editing it, you can always use the API to do so. We plan to add a remove button to the message’s revision history UI soon, to make this work easier.

The Launchpad team is proud of this new feature, and we hope that it will be useful for everyone! Let us know if you have any feedback!

on May 28, 2021 06:26 PM

Full Circle Magazine #169

Full Circle Magazine

This month:
* Command & Conquer : LMMS
* How-To : Python, Latex and Using USB3 on USB2
* Graphics : Inkscape
* My Opinion – Use Case For Alpha Software
* Everyday Ubuntu : BibleTime Pt2
* Micro This Micro That
* Ubports Devices – OTA-17
* Review : Ubuntu 21.04
* Book Review: Big Book Of Small Python Projects
* Ubuntu Games : Mutropolis
plus: News, The Daily Waddle, Q&A, and more.

Get it while it’s hot: https://fullcirclemagazine.org/issue-169/

on May 28, 2021 03:35 PM

May 26, 2021

Preamble: Until recently, I worked for Canonical on the Snap Advocacy Team. Some of the things in this blog post may have changed or been fixed since I left. It’s quite a long post, but I feel it’s necessary to fully explain the status quo. This isn’t intended to be a “hit piece” on my previous employer, but merely information sharing for those looking to control their own systems. In my previous role as Snap Advocate, I provided feedback to enable better control of updates.
on May 26, 2021 11:00 AM

Hello all,

As many of you might have heard, the freenode IRC network changed management a couple of days ago, in what I personally identify as a hostile takeover. As part of that, the Ubuntu IRC Council, supported by the Community Council, published a resolution suggesting the move to Libera Chat. The former IRC Council and their successors immediately started working on moving our channels, users, and tooling over to Libera Chat.

As of around 3:00 UTC today, freenode’s new management believed that our channel topics and messaging pointing users in the right direction were outside of their policy, and rather than consulting with the IRC Council, they performed yet another hostile takeover, this time of the Ubuntu namespaces, including flavors, as well as other spaces from projects who were also using freenode for communication.

In order to provide you with the best experience on IRC, Ubuntu is now officially moving to Libera Chat. You will be able to find the same channels, the same people, and the same tools that you are used to. In the event that you see something is not quite right, please, don’t hesitate to reach out to our Ubuntu IRC Team, on #ubuntu-irc.

While this is a bump on the road, we hope that it will give our community some fresh air to revitalize, and we will be able to rebuild what we had, but 10x better. I sincerely appreciate the IRC Council’s efforts in making this move a success.

Please join us at #ubuntu, on irc.libera.chat:6697 (TLS).

On behalf of the Ubuntu Community Council,

Jose Antonio Rey

on May 26, 2021 04:47 AM

May 24, 2021

Following the Ubuntu IRC Council resolution, Lubuntu will be moving all of the Lubuntu IRC channels to Libera Chat as well. Some of the channels have already moved at the time of this announcement and the others will follow shortly. We are also working on updating our links to reflect the change. The Telegram-to-IRC bridge is offline. As a result of […]

The post Lubuntu IRC channels are moving networks! first appeared on Lubuntu.

on May 24, 2021 02:21 AM

May 21, 2021

C has the useful feature of allowing adjacent string literals to be automatically concatenated. This is described in K&R “The C Programming Language”, 2nd edition, page 194, section A2.6 “String Literals”:

"Adjacent string literals are concatenated into a single string."

Unfortunately over the years I've seen several occasions where this useful feature has led to bugs not being detected, for example, the following code:


A simple typo in the "Does Not Compute" error message ends up with the last two literal strings being "silently" concatenated, causing an array out of bounds read when error_msgs[7] is accessed.

This can also bite when a new literal string is added to the end of the array: the previous string then needs a trailing comma added, and sometimes this is overlooked. An example of this is ACPICA commit 81eb9c383e6dee0f1b6620e91e5c3dbb48234831 – fortunately, static analysis detected this and it has been fixed with commit 3f8c79fc22fd64b739f51268654a6783a874520e.

The concerning issue is that this useful string concatenation feature can produce hazardous outcomes with coding mistakes that are simple to make but hard to notice.



on May 21, 2021 10:58 AM

May 20, 2021

Say What Now?

Stephen Michael Kellat

I generally try to keep posts brief lately. Rather than a bulleted list I will try to say what I can in a different format. It might be a bit more approachable.

I do have some amd64 hardware revived and it is running Xubuntu Impish Indri. It is very early on so it is not as if there has been any opportunity for anything to go wrong. I will have to come up with something wild or crazy to push limits this cycle in terms of testing that box, I suppose.

The election campaign is slowly but surely starting. I am extremely hesitant about proceeding with it considering the current atmosphere. There have been many, many unseemly things happening with the political party nationally that frankly just creep me out. Some days they creep me out to the point of considering dropping out of the race and going back to herding goats or maybe alpacas once again.

The county I live in just hit the very top of the league table for coronavirus incidence in the state. Out of Ohio’s 88 counties it appears that Ashtabula County is now the number one county for coronavirus cases as of the report that came out today from the Ohio Department of Health. This explains what had seemed like an odd statement in a newspaper report from the local health department that they were strongly recommending continuing to wear masks locally notwithstanding the rest of the state relaxing wearing masks. As of today’s data cut-off my county is over five percentage points behind the state average in terms of having delivered first vaccine shots to residents. Sometimes having access to data is not always the happiest thing to have.

The third novella is stuck with the second reader. They said they have feedback but they’re still not done reading it. I’ve started moving things ahead to get it up on Amazon but still have finishing touches to do before the typeset manuscript goes up. The novel package on CTAN is a fabulous tool for doing such things in LaTeX in addition to using VSCode. Visual Studio Code isn’t Scrivener but it works well for me.

In the end I can acknowledge that I have been offline quite a bit. That’s probably why the mess with freenode took me by surprise. There have just been many face to face things that have required direct attention. That I am having to increasingly do things that are radio-related is something I did not expect when the year began.


Tags: Stay-Alive

on May 20, 2021 09:11 PM

May 19, 2021

To all the people I have interacted with in freenode, and to all the contributors I have worked with over there:

I recently celebrated my 10-year anniversary of having an account on freenode. I have a lot of fond memories and met a lot of amazing people in that period of time.

Some time ago, the former head of freenode staff sold `freenode ltd` (a holding company) to a third party, Andrew Lee[1], under terms that have not been disclosed to the staff body. Mr Lee at the time had promised to never exercise any operational control over freenode.

In the past few weeks, this has changed[2][3], and the existence of a legal threat to freenode has become apparent. We cannot know the substance of this legal threat as it contains some kind of gag order preventing its broader discussion within staff.

As a result, Mr Lee now has operational control over the freenode IRC network. I cannot stand by such a (hostile?) corporate takeover of the freenode network, and I am resigning as a staff volunteer along with most other freenode staff. We simply do not feel that the network now remains independent after two heads of staff appear to have been compelled to make changes to our git repo for the website[4].

Where to now?

We are founding a new network with the same goals and ambitions: libera.chat.

You can connect to the new network at `irc.libera.chat`, ssl port 6697 (and the usual clearnet port).

We’re really sorry it’s had to come to this, and hope that you’re willing to work with us to make libera a success, independent from outside control.

What about Ubuntu?

Whether Ubuntu decides to stay on freenode or move to libera would be a decision of the Ubuntu IRC Council. Please refer to them with any questions you might have. While I am a part of the Community Council, the IRC Council operates independently, and I will personally leave the final decision to them.


[1]: https://find-and-update.company-information.service.gov.uk/company/10308021/officers

[2]: A blogpost has been removed without explanation: https://freenode.net/news/freenode-reorg (via the wayback machine)

[3]: The freenode testnet, for experimental deployment and testing of new server features was shutdown on Friday 30th April, for reasons that have not been disclosed to us.

[4]: Unexplained change to shells.com as our sponsor: web-7.0/pull/489, followed by a resignation: web-7.0/pull/493

on May 19, 2021 04:49 PM

May 16, 2021

Full Circle Weekly News #210

Full Circle Magazine

Interface for smartwatches added to postmarketOS:

New Releases of GNUstep Components:

Armbian Distribution Release 21.05:

SSH client PuTTY 0.75 released:

Ubuntu RescuePack 21.05 Antivirus Boot Disk Available:

DragonFly BSD 6.0 released:

VLC 3.0.14 media player update with vulnerability fixes:

Coreboot 4.14 Released:

Hubzilla 5.6 Released:

IBM opens CodeNet for machine learning systems that translate and validate code:

Full Circle Magazine
Host: @bardictriad, @zaivala@hostux.social
Bumper: Canonical
Theme Music: From The Dust - Stardust
on May 16, 2021 07:25 PM

May 15, 2021

Are you using Kubuntu 21.04 Hirsute Hippo, our current Stable release? Or are you already running our development builds of the upcoming 21.10 Impish Indri?

We currently have Plasma 5.21.90 (Plasma 5.22 Beta)  available in our Beta PPA for Kubuntu 21.04, and 21.10 development series.

However this is a beta release, and we should re-iterate the disclaimer from the upstream release announcement:

DISCLAIMER: This is beta software and is released for testing purposes. You are advised to NOT use Plasma 5.22 Beta in a production environment or as your daily desktop. If you do install Plasma 5.22 Beta, you must be prepared to encounter (and report to the creators) bugs that may interfere with your day-to-day use of your computer.


If you are prepared to test, then…

Add the beta PPA and then upgrade:

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt full-upgrade -y

Then reboot.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on IRC [1], or mailing lists [2].
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the release announcement and changelog.

[Test Case]
* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.21?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop affects, suspend etc.
* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing.
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing may involve some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.


Please stop by the Kubuntu-devel IRC channel if you need clarification of any of the steps to follow.

[1] – irc://irc.freenode.net/kubuntu-devel
[2] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

on May 15, 2021 09:59 AM

May 14, 2021

If you are in the USA - Please use my new site KeepSummerTime.com to write to your congresspeople asking for summer time all year long.

The USA has an active bill in congress to keep us from changing the clocks and stay year-round on the time we use in the summer (also called permanent DST). Changing the clocks has not been shown to have substantial benefits, and the harms have been well documented.

For global communities - like FLOSS -

  • It makes it that much harder to schedule across the world.
  • The majority of the world does not do clock switching. It's generally EU/US specific.

If you are in the USA - Please use my new site KeepSummerTime.com to write to your congresspeople asking for summer time all year long.

If you want to help out

  • The site is all available on GitHub, although the actual contact-congress bit is from ActionNetwork.
  • I'd be very happy to make this site global in nature for all of us stuck with unstable time. Please get in touch!
on May 14, 2021 07:45 PM

May 11, 2021

Do you know a person, project or organisation doing great work in open tech in the UK? We want to hear about it. We are looking for nominations for people and projects working on open source software, hardware and data. We are looking for companies or organisations working in fintech with open source, helping achieve the objectives of any of the United Nations Sustainable Development Goals. Nominations are open for projects, organisations and individuals that demonstrate outstanding contribution and impact for the diversity and inclusion ecosystem. This includes solving unique challenges, emphasising transparency of opportunities, mentorship, coaching and nurturing the creation of diverse, inclusive and neurodiverse communities. And we are looking for individuals you admire, whether under 25 or of any age.

Self nominations are welcome and encouraged. You can also nominate in more than one category.

Nominations may be submitted until 11.59pm on 13 June 2021.

Awards Event 11 November 2021.

Those categories again:

Hardware – sponsored by The Stack
Software – sponsored by GitLab
Financial Services – sponsored by FINOS
Sustainability – sponsored by Centre for Net Zero
Belonging Network – sponsored by Osmii
Young Person (under 25) – sponsored by JetStack
Individual – sponsored by Open Source Connections

Read more and find the nomination form on the OpenUK website.

Winners of Awards 2020, First edition

Young Person • Josh Lowe
Individual • Liz Rice
Financial Services and Fintech in Open Source • Parity
Open Data • National Library of Wales
Open Hardware • lowRISC
Open Source Software • HospitalRun

on May 11, 2021 03:53 PM

May 10, 2021

The Big Iron Hippo

Elizabeth K. Joseph

It’s been about a year since I last wrote about an Ubuntu release on IBM Z (colloquially known as “mainframes” and nicknamed “Big Iron”). In my first year at IBM my focus really was Linux on Z, along with other open source software like KVM and how it provides support for common tools via libvirt, making management of VMs on IBM Z almost trivial for most Linux folks. Last year I was able to start digging a little into the more traditional systems for IBM Z: z/OS and z/VM. While I’m far from an expert, I have gotten a glimpse into just how powerful these operating systems are, and it’s impressive.

This year, with this extra background, I’m coming back with a hyper focus on Linux, and that’s making me appreciate the advancements with every Linux kernel and distribution release. Engineers at IBM, SUSE, Red Hat, and Canonical have invested in IBM Z, and back that investment with kernel and other support for IBM Z hardware.

So it’s always exciting to see the Ubuntu release blog post from Frank Heimes over at Canonical! And the one for Hirsute Hippo is no exception: The ‘Hippo’ is out in the wild – Ubuntu 21.04 got released!

Several updates to the kernel! A great, continued focus on virtualization and containers! I can already see that the next LTS, coming out in the spring of 2022, is going to be a really impressive one for Ubuntu on IBM Z and LinuxONE.

on May 10, 2021 08:07 PM

Here are some uploads for April.

2021-04-06: Upload package bundlewrap (4.7.1-1) to Debian unstable.

2021-04-06: Upload package calamares to Debian experimental.

2021-04-06: Upload package flask-caching (1.10.1-1) to Debian unstable.

2021-04-06: Upload package xabacus (8.3.5-1) to Debian unstable.

2021-04-06: Upload package python-aniso8601 (9.0.1-1) to Debian experimental.

2021-04-07: Upload package gdisk (1.0.7-1) to Debian unstable.

2021-04-07: Upload package gnome-shell-extension-disconnect-wifi (28-1) to Debian unstable.

2021-04-07: Upload package gnome-shell-extension-draw-on-your-screen (11-1) to Debian unstable.

2021-04-12: Upload package s-tui (1.1.1-1) to Debian experimental.

2021-04-12: Upload package speedtest-cli (2.1.3-1) to Debian unstable.

2021-04-19: Sponsor package bitwise (0.42-1) to Debian unstable (E-mail request).

2021-04-19: Upload package speedtest-cli (2.1.3-2) to Debian unstable.

2021-04-23: Upload package speedtest-cli (2.0.2-1+deb10u2) to Debian buster (Closes: #986637)

on May 10, 2021 03:01 PM

Writing software is similar to translating from one language to another. Specifically, it is similar to translating from your native language into some other language, so that speakers of that other language can do some task for you. You might not understand the other language very well, and some concepts might be difficult to express in it. You do your best when translating, but as we know, some things get lost in translation.

On software testing

When writing software, some things do get lost in translation. You know what your software should do, but you need to express your needs in the particular programming language that you are using. Even small pieces of software will have some problems, which are called software defects. There is a whole field in computer science called software testing, whose goal is to find such software defects early, so that they get fixed before the software is released and reaches the market. When you buy a software package, it has gone through intensive software testing. Because if a customer uses the software package and it crashes or malfunctions, it reflects really poorly on the vendor. They might even return the software and demand their money back!

In the field of software testing, you try to identify actions that a typical customer is likely to perform and that may crash the software. If you could, you would find all possible software defects and have them fixed. In reality, though, identifying all software defects is not possible. This is a hard fact and a known limitation of software testing: no matter how hard you try, there will still be some more software defects.

This post is about security though, and not about software testing. What gives? Well, a software defect can make the software malfunction. This malfunctioning can make the software perform an action that was not intended by the software developers. It can make the software do what some attacker wants it to do. Far-fetched? Not at all. This is what a big part of computer security works on.

Security fuzzing

When security researchers perform software testing with an aim of finding software defects, we say that they are performing security fuzzing, or just fuzzing. Therefore, fuzzing is similar to software testing, but with the focus on identifying ways to make the software malfunction in a really bad way.

Security researchers find security vulnerabilities, ways to break into a computer system. This means that fuzzing is the first half of the job to find security vulnerabilities. The second part is to analyse each software defect and try to figure out, if possible, a way to break into the system. In this post we are only focusing on the first part of the job.

Defects and vulnerabilities

Are all software defects potential candidates for a security vulnerability? Let’s take a text editor as an example. If you are using the text editor only to edit your own documents, and never to open downloaded documents, then there is no chance for a security vulnerability, because an attacker would have no way to influence the text editor: no input of the text editor would be exposed to the attacker.

A text editor.

However, most computers are connected to the Internet. And most operating systems, whether Windows, macOS or a Linux distribution, are pre-configured to open text documents with a text editor. If you are browsing the Internet, you may find an interesting text document and decide to download and open it on your computer. Or, you may receive an email with a text document attached. In both cases, the document file is fully under the control of an attacker, meaning the attacker can modify any aspect of that file. A Word document, for example, is a ZIP file that contains several individual files. There are opportunities to modify any of the individual files, ZIP them back into a Doc file and try to open it. If you get a crash, you have successfully managed to fuzz the application, in a manual way. If you ever manage to crash the application simply by editing a document in the course of your own work, then you are a natural at security fuzzing. Just keep a copy of that exact crashing document, because it could be gold to a security researcher.

If you rename a .doc file and change the extension to .zip, you can open it with a ZIP file manager and see the individual files inside it.

Artificial intelligence

If there is a complex task that a person could do but is tedious and expensive, then you can either use a computer and make it work just like a person would, or break the task down into a simpler but repetitive form that is suitable for a computer. The latter is quite enticing, because computing power is way cheaper and more abundant than employing an expert.

Suppose you want to recognize apples in digital images. You can either employ an apple expert to identify whether there is an apple in a photograph (any variety of apple). Or, get an expert to share their domain knowledge of apples and have them help in creating software that understands all shapes and colors of apples. Or, obtain several thousand photos of different apples and train an AI system to detect apples in new images.

Employing a domain expert to manually identify the apples does not scale. Developing software using domain knowledge does not scale easily to, let’s say, other fruits. And developing this domain-specific software is also expensive compared to training an AI system to detect the specific objects.

Similarly with security fuzzing. A security expert working manually does not scale, and the process is expensive to perform repeatedly. Developing software that acts exactly like a security expert is also expensive, as the software would have to capture the whole domain knowledge of software security. The next best option is to break the problem into smaller tasks and use primarily cheap computer power.

Advanced Fuzzing League++

And that leads us to the Advanced Fuzzing League++ (afl++). It is security fuzzing software that requires lots of computer power: it runs the software under test many times, with slightly different inputs each time, and looks at whether any of the attempts managed to lead to a software crash.

afl++ does security fuzzing, and this is just the first part of the security work. A security researcher will take the results of the fuzzing (i.e. the list of crash reports) and manually check whether these can be exploited so that an attacker can make the software let them in.


Up to now, afl++ has been developed so that it can use as much computer power as possible. There are many ways to parallelise it across multiple computers.

afl++ uses software instrumentation. When you have access to the source code, you can recompile it in a special way so that, during fuzzing, afl++ knows whether a new input causes execution to reach new, unexplored areas of the executable. This helps afl++ expand its coverage across the whole executable.

afl++ does not automatically recognize the different inputs to a program. You have to tell it whether the input comes from the command line, from the network, or elsewhere.

afl++ can be fine-tuned to perform even better. Running an executable repeatedly from scratch is not as performant as running just the target function of the executable repeatedly.

afl++ can be used whether or not you have the source code of the software.

afl++ can fuzz binaries from a different architecture than your fuzzing server. It uses QEMU for hardware virtualization and can also use CPU emulation through Unicorn.

afl++ has captured the mindshare in security fuzzing, and there are more and more new efforts to expand support to different things. For example, there is support for Frida (dynamic instrumentation).

afl++ has a steep learning curve. Good introductory tutorials are hard to find.

on May 10, 2021 01:09 PM

May 09, 2021


This post explores some of the darker corners of command-line parsing that some may be unaware of.

You might want to grab a coffee.


No, I’m not questioning your debating skills, I’m referring to parsing command-lines!

Parsing command-line options is something most programmers need to deal with at some point. Every language of note provides some sort of facility for handling command-line options. All a programmer needs to do is skim-read the docs or grab the sample code, tweak to taste, et voilà!

But is it that simple? Do you really understand what is going on? I would suggest that most programmers really don’t think that much about it. Handling the parsing of command-line options is just something you bolt onto your codebase, and then you move on to the more interesting stuff. Yes, it really does tend to be that easy, and everything just works… most of the time.

Most? I hit an interesting issue recently which expanded in scope somewhat. It might raise an eyebrow for some or be a minor bombshell for others.



Back in the mists of time (~2012), I wrote a simple CLI utility in C called utfout. utfout is a simple tool that basically produces output. It’s like echo(1) or printf(3), but maybe slightly better ;)

Unsurprisingly, utfout uses the ubiquitous and venerable getopt(3) library function to parse the command-line. Specifically, utfout relies on getopt(3) to:

  • Parse the command-line arguments in strict order.
  • Handle multiple identical options as and when they occur.
  • Handle positional (non-option) arguments.

(Note: We’re going to come back to the term “in strict order” later. But, for now, let’s move on).

One interesting aspect of utfout is that it allows the specification of a repeat value so you can do something like this to display “hello” three times:

$ utfout "hello" -r 2

That -r repeat option takes an integer as the repeat value. But the integer can be specified as -1 meaning “repeat forever”. Looking at such a command-line we have:

$ utfout "hello\n" -r -1

We’re going to come back to examples like this later. For now, just remember that this option can accept a numeric (negative) value.


Recently, I decided to rewrite utfout in rust. Hence, rout was born (well, it’s almost been born: I’m currently writing a test suite for it, but I should be releasing it soon).

When I started working on rout, I looked for a rust command-line argument parsing crate that had the semantics of getopt(3). Although there are getopt() clones, I wanted something a little more “rust-like”. The main contenders didn’t work out for various reasons (I’ll come back to this a little later), and since I was looking for an excuse to write some more rust, I decided, in the best tradition of basically every programmer ever, to reinvent the wheel and write my own. This was fun. But more than that, I uncovered some interesting behavioural points that may be unknown to many. More on this later.

I soon had some command-line argument parsing code, and since it ended up being useful to me, I published it as my first rust crate. It’s called ap for “argument parser”. Not a very creative name maybe, but succinct and simple, like the crate itself.

By this stage, the rout codebase was coming along nicely and it was time to add the CLI parsing. But when I added ap to rout and tried running the repeat command (-r -1), it failed. The problem? ap was assuming that -1 was a command-line option, not an option argument. Silly bug right? Err, yes and no. Read on for an explanation!

getopt minutiae

It may not be common knowledge, but getopt(3), and in fact most argument parsing packages, provide support for numeric option names. If you haven’t read the back-story, this means it supports options like -7, which might be short-hand for the long option --enable-lucky-seven-mode (whatever that means ;) And so to our first revelation:

Revelation #1:

getopt(3) supports any ASCII option name that is not -, ; or :.

In fact, it’s a little more subtle than that: although you can create an option called +, it cannot be the first character in optstring.

If you didn’t realise this, don’t feel too bad! You need to squint a bit when reading the man page to grasp this point, since it is almost painfully opaque on the topic of what constitutes a valid option character. Quoting verbatim from getopt(3):

optstring is a string containing the legitimate option characters.

Aside from the fact that the options are specified as a const char *, yep, that is your only clue! The FreeBSD man page is slightly clearer, but I would still say not clear enough personally. Yes, you could read the source, but I’ll warn you now, it’s not pretty or easy to grok!

But let this sink in: you can use numeric option names.

The more astute reader may be hearing faint alarm bells ringing at this point. Not to worry if that’s not you as I’ll explain all later.

An easy way to test getopt behaviour

I’ve created a simple C program called test_getopt.c that allows you to play with getopt(3) without having to create lots of test programs, or recompile a single program constantly as you tweak it.

The program allows you to specify the optstring as the first command-line argument with all subsequent arguments being passed to getopt(3).

See the README for some examples.

Real-world evidence

If you’ve ever run the ss(1) or socat(1) commands, you may have encountered numeric options as both commands accept -4 and -6 options to denote IPv4 and IPv6 respectively. I’m reasonably sure I’ve also seen a command use -# as an option but cannot remember which.

The ap bug

The real bug in ap was that it was prioritising options over argument order: it was not parsing “in strict order”.

Parsing arguments in strict order

Remember we mentioned parsing arguments “in strict order” earlier? Well, “in strict order” doesn’t just mean that arguments are parsed sequentially in the order presented (first, second, third, etc), it also means that option arguments will be consumed by the “parent” (aka previous) option, regardless of whether the option argument starts with a dash or not. It’s beautifully simple and logical, and crucially results in zero ambiguity for getopt(3).

To explain this, imagine your program calls getopt() like this:

getopt(argc, argv, "12:");

The program could then be invoked in any of the following ways:

$ prog -1
$ prog -2 foo

But: it could also be called like this:

$ prog -2 -1

getopt(3) parses this happily: there is no error and no ambiguity. As far as getopt(3) is concerned, the user specified the -2 option, passing it the value -1. To be clear, as far as getopt(3) is concerned, the -1 option was not specified!

Revelation #2:

In argument parsing, “in strict order” means the parser considers each argument in sequential order, and if a command-line argument is an option and that option requires an argument, the next command-line argument will become that option’s argument, regardless of whether it starts with a dash or not!

Going back to the revelation: consuming the next argument after an option requiring a value is a brilliantly simple design. It’s also easy to implement. And since getopt(3) is part of the POSIX standard, it’s actually the behaviour you should be expecting from a command-line parser, at least if you started out as a systems programmer. But since the details of this parsing behaviour have been somewhat shrouded in mystery, you may not be aware that you should be expecting such behaviour from other parsers!

But, alas, POSIX or not, this behaviour isn’t necessarily intuitive (see above) and indeed this is not how all command-line parsers work.

Summary of command-line argument parsers

As my curiosity was now piqued, I decided to do a quick survey of command-line parsing packages for a variety of languages. This is in no way complete and I’ve missed out many languages and packages. But it’s an interesting sample nonetheless.

The table below summarises the behaviour for various languages and parsing libraries:

language   library/package   strict ordering?
bash       getopts           Yes (uses getopt(3))
C/C++      getopt(3)         Yes (POSIX standard! ;)
rust       clap (v2+v3)      No
zsh        getopts           Yes (uses getopt(3))


The libraries that do not use strict ordering (aka “the getopt way”) are not wrong or broken, they just work slightly differently! As long as you are aware of the difference, there is no problem ;)

Why are some libraries different?

It comes down to how the command-line arguments are parsed by the package.

Assume the library has just read an argument and determined definitively that it is an option and that the option requires a value. It then reads the next argument:

  • If the library is like getopt(3), it will just consume the argument as the value for the just-seen option (regardless of whether the argument starts with a dash or not).

  • Alternatively, if this new argument starts with a dash, the library will consider it an option and then error since the previous argument (the option) was expecting a value.

    The subtlety here is that “getopt()-like” implementations allow option values to look like options, which may surprise you.

So what?

We’ve had two revelations:

  1. Most argument parsers support numeric option names.
  2. Strict argument parsing means consuming the next argument, even if it starts with a dash.

You may be envisaging some of the potential problems now:

“What if my program accepts a numeric option and also has an option that accepts a numeric argument?”

There is also the slightly more subtle issue:

“What if my program has a flag option and also has an option that can accept a free-form string value?”

Indeed! Here be dragons! To make these problems clearer, we’re going to look at some examples.

Example 1: Missile control

Imagine an evil and powerful tech-savvy despot asks his minions to write a CLI program for him to launch missiles. The program uses getopt(3) with an optstring of “12n:” so that he can launch a single missile (-1), two missiles (-2), or lots (-n <count>):

Here’s how he could do his evil work:

$ fire-missiles -1
Firing 1 missile!
$ fire-missiles -2
Firing 2 missiles!

Unfortunately, the poor programmer who wrote this program didn’t check the inputs correctly. Here’s what happens when the despot decides to fire a single missile, but maybe in a drunken stupor / tab-complete fail, runs the following by mistake:

$ fire-missiles -n -1
Firing 4294967295 missiles!

He’s meant to run fire-missiles -1 (or indeed fire-missiles -n 1), but got confused and appears to have started Armageddon by mistake since the program parsed the -n option value as a signed integer.

Example 2: Get rich quick or get fired?

Another example. Imagine a program used to transfer money between banks by allowing the admin to specify two IBAN (International Bank Account Number) numbers, an amount and a transaction summary field. Here are the arguments the program will accept:

  • -f <IBAN>: Source account.
  • -t <IBAN>: Destination account.
  • -a <amount>: Amount of money to transfer (let’s ignore things like different currencies and exchange rates to keep it simple).
  • -s <text>: Human readable summary of the transaction.
  • -d: Dry-run mode - don’t actually send the money, just show what would be done.

We could use it to send 100 units of currency like this:

$ prog -f me -t you -a 100 -s test

For this program we specify a getopt(3) optstring of “a:df:s:t:”. Fine. But using strict ordering, if I run the program as follows, I’ll probably get fired!

$ prog -f me -t you -a 10000000000 -s -d

Oops! I meant to specify a summary, but I forgot. But hey, that’s fine as I specified to run this in dry-run mode using -d. Oh. Wait a second…

Yes, I’m in trouble because the money was sent as in fact I didn’t specify to run in dry-run mode: I specified a summary of “-d” due to the strict argument parsing semantics of getopt(3).

Example 3: Something to give you nightmares

Using the knowledge of the revelations, you can easily contrive some real horrors. Take, for example, the following abomination:

$ prog -12 3 -4 -5 -67 8 -9

How is that parsed? Is that first -12 argument a simple negative number? Or is it actually a -1 option with the option argument value 2? Or is it a -1 option and a -2 option “bundled” together?

The answer of course depends on how you’ve defined the optstring value to getopt(3). But please, please never write programs with interfaces like this! ;)

You can use the test_getopt.c program to test out various ways of parsing that horrid command-line. For example, one way to handle them might be like this:

$ test_getopt "1::45:9" -12 3 -4 -5 -67 8 -9
INFO: getopt option: '1' (optarg: '2', optind: 2, opterr: 1, optopt: 0)
INFO: getopt option: '4' (optarg: '', optind: 4, opterr: 1, optopt: 0)
INFO: getopt option: '5' (optarg: '-67', optind: 6, opterr: 1, optopt: 0)
INFO: getopt option: '9' (optarg: '', optind: 8, opterr: 1, optopt: 0)

But alternatively, it could be parsed like this:

$ test_getopt "12:4567:9" -12 3 -4 -5 -67 8 -9
INFO: getopt option: '1' (optarg: '', optind: 1, opterr: 1, optopt: 0)
INFO: getopt option: '2' (optarg: '3', optind: 3, opterr: 1, optopt: 0)
INFO: getopt option: '4' (optarg: '', optind: 4, opterr: 1, optopt: 0)
INFO: getopt option: '5' (optarg: '', optind: 5, opterr: 1, optopt: 0)
INFO: getopt option: '6' (optarg: '', optind: 5, opterr: 1, optopt: 0)
INFO: getopt option: '7' (optarg: '8', optind: 7, opterr: 1, optopt: 0)
INFO: getopt option: '9' (optarg: '', optind: 8, opterr: 1, optopt: 0)


Coincidentally, by combining test_getopt with utfout, you can prove Revelation #1 rather simply:

$ (utfout -a "\n" "\{\x21..\x7e}"; echo) |\
while read char; do
    test_getopt "x$char" -"$char"
done

Note: The leading “x” in the specified optstring argument is to avoid having to special case the string since the first character is “special” to getopt(3). See the man page for further details.


Admittedly, these are very contrived (and hopefully unrealistic!) examples. The missile control example is also a very poor use of getopt(3) since in this scenario, a simple check on argv[1] would be sufficient to determine how many missiles to fire. However, you can now see the potential pitfalls of numeric options and strict argument parsing.

To test a parser

If you want to establish if your chosen command-line parsing library accepts numeric options and if it parses in strict order, create a program that:

  • Accepts a -1 flag option (an option that does not require an argument).

  • Accepts a -2 argument option (that does accept an argument).

  • Run the program as follows:

    $ prog -2 -1
  • If the program succeeds (and sets the value for your -2 option to -1), your parser is “getopt()-like” (is parsing in strict order) and implicitly also supports numeric options.


Here’s what we’ve unearthed:

  • The getopt(3) man page on Linux is currently ambiguous.

    I wrote a patch to resolve this, and the patch has been accepted. Hopefully it will land in the next release of the man pages project.

  • All command-line parsing packages should document precisely how they consume arguments.

    Unfortunately, most don’t say anything about it! However, ap does. Specifically, see the documentation here.

  • getopt(3) doesn’t just support alphabetic option names: a name can be almost any ASCII character (-3, -%, -^, -+, etc).

  • Numeric options should be used with caution as they can lead to ambiguity; not for getopt(3) et al, but for the end user running the program. Worst case, there could be security implications.

  • Permitting negative numeric option values should also be considered carefully. Rather than supporting -r -1, it would be safer if utfout and rout required the repeat count to be >= 1 and if the user wants to repeat forever, support -r max or -r forever rather than -r -1.

  • Some modern command-line parsers prioritise options over argument ordering (meaning they are not “getopt()-like”).

  • You should understand how your chosen parser works before using it.

  • Parsing arguments “in strict order” does not only mean “in sequential order”: it means the parser prioritises command-line arguments over option values.

  • If your chosen parsing package prioritises arguments over options (like getopt(3)), you need to take care if you use numeric options, since arguments will be consumed “greedily” (and silently).

  • If your chosen parsing package prioritises options over arguments, you will probably be safer (since an incorrect command-line will generate an error), but you need to be aware that the package is not “getopt()-like”.

  • A CLI program must validate all command-line option values; command-line argument parsers provide a way for users to inject data into a program, so a wise programmer will always be paranoid!

  • The devil is in the detail ;)

on May 09, 2021 08:53 AM

May 04, 2021


Benjamin Mako Hill

In exciting professional news, it was recently announced that I got a National Science Foundation CAREER award! The CAREER is the US NSF’s most prestigious award for early-career faculty. In addition to the recognition, the award involves a bunch of money for me to put toward my research over the next 5 years. The Department of Communication at the University of Washington has put up a very nice web page announcing the thing. It’s all very exciting and a huge honor. I’m very humbled.

The grant will support a bunch of new research to develop and test a theory about the relationship between governance and online community lifecycles. If you’ve been reading this blog for a while, you’ll know that I’ve been involved in a bunch of research describing how peer production communities tend to follow common patterns of growth and decline, as well as studies showing that many open communities become increasingly closed in ways that deter lots of the kinds of contributions that made the communities successful in the first place.

Over the last few years, I’ve worked with Aaron Shaw to develop the outlines of an explanation for why many communities become increasingly closed over time in ways that hurt their ability to integrate contributions from newcomers. Over the course of the CAREER, I’ll be continuing that project with Aaron, and I’ll also be working to test that explanation empirically and to develop new strategies for what online communities can do as a result.

In addition to supporting research, the grant will support a bunch of new outreach and community building within the Community Data Science Collective. In particular, I’m planning to use the grant to do a better job of building relationships with community participants, community managers, and others on the platforms we study. I’m also hoping to use the resources to help the CDSC do a better job of sharing our work in ways that are useful, as well as doing a better job of listening to and learning from the communities that our research seeks to inform.

There are many to thank. The proposed work is the direct result of the work I did at the Center for Advanced Study in the Behavioral Sciences at Stanford, where I got to spend the 2018-2019 academic year in Claude Shannon’s old office, talking through these ideas with an incredible range of other scholars over lunch every day. It’s also the product of years of conversations with Aaron Shaw and Yochai Benkler. The proposal itself reflects the excellent work of the whole CDSC, who did the work that made the award possible and provided me with detailed feedback on the proposal itself.

on May 04, 2021 02:29 AM

So, you’re in the middle of a review, you have a couple of commits, but one of the comments asks you to modify a line that belongs to the second-to-last, or even the first, commit in your list, and you’re not willing to do:

git commit -m "Add fixes from the review" $file

Or you simply don’t know, and have no idea what squash or rebase mean? Well, I won’t explain squash today, but I will explain rebase.


See how I do it, and also how do I screw up!


It all boils down to making sure that you trust git, and hope that things are small enough so that if you lose the stash, you can always rewrite it.

So in the end, for me it was:

git fetch origin
git rebase -i origin/master # if your branch is not clean, git will complain and stop
git stash # because my branch was not clean, and my desired change was already done
git rebase -i origin/master # now, let's do a rebase
# edit desired commits, for this add the edit (or an e) before the commit
# save and quit vim ([esc]+[:][x] or [esc]+[:][w][q], or your editor, if you're using something else)
git stash pop # because I already had my change
$HACK # if you get conflicts or if you want to modify more
      # be careful here, if you rewrite history too much
      # you will end up in Back to the Future II
      # Luckly you can do git rebase --abort
git commit --amend $files #(alternatively, git add $files first then git commit --amend
git rebase --continue
git push -f # I think you will need to add remote+branch, git will remind you
# go on with your life
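The flow above can be rehearsed risk-free in a scratch repository. This sketch scripts the interactive step by letting sed rewrite the rebase todo list through GIT_SEQUENCE_EDITOR, so nothing opens vim; all file names and commit messages are made up for illustration:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo one > file.txt && git add file.txt && git commit -qm "first"
echo two >> file.txt && git add file.txt && git commit -qm "second"

# Mark the first commit as "edit" in the todo list; the rebase stops there
GIT_SEQUENCE_EDITOR='sed -i -e "1 s/^pick/edit/"' git rebase -i --root

# We are now "inside" the first commit: fix it and amend
echo fixed > extra.txt
git add extra.txt
git commit -q --amend --no-edit

# Replay the remaining commits on top of the amended one
git rebase --continue

git log --oneline   # still two commits, but "first" now contains extra.txt
```

Because the amended commit only adds a new file, the replayed second commit applies cleanly; if it touched the same lines you would get the conflict/$HACK situation described above.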

Note: A squash is gonna put all of the commits together; just make sure they’re in the right order:

I lied, here’s a very quick and dirty squash guide

  • pick COMMIT1
  • pick COMMIT2
  • squash COMMIT3 # (git will combine this commit with the one above, IIRC, so COMMIT2+COMMIT3, and git will ask you for a new commit message)

I lied

on May 04, 2021 12:00 AM

April 29, 2021

Can you smell 👃 that? That’s the smell of fresh paint 🖌 with just a hint of cucumber 🥒

Ubuntu MATE 21.04 is here and it has a new look thanks to the collaboration with the Yaru team. This release marks the start of a new visual direction for Ubuntu MATE, while retaining the features you’ve come to love 💚 Read on to learn 🎓 what we’ve been working on over the last 6 months and get some insight to what we’ll be working on next.

We would like to take this opportunity to extend our thanks 🙏 to everyone who contributed to this release, including:

  • The Yaru team for welcoming us so warmly 🤗 into the Yaru project and for all their hard work during this development cycle
  • The Ayatana Indicator team who helped add new features and fix bugs that improved the indicator experience
  • Everyone who participated in the QA/testing and bug filing 🐛
  • Those of you who have been contributing to documentation and translations

Thank you! Thank you all very much indeed 🥰

Ubuntu MATE 21.04 (Hirsute Hippo)

What changed since the Ubuntu MATE 20.10?

Here are the highlights of what’s changed since the release of Groovy 🕶 Gorilla 🦍

MATE Desktop 🧉

The MATE Desktop team released maintenance 🔧 updates for the current stable 1.24 release of MATE. We’ve updated the MATE packaging in Debian to incorporate all these bug 🐛 fixes and translation updates and synced those packages to Ubuntu so they all feature in this 21.04 release. There are no new features, just fixes 🩹

Ayatana Indicators 🚥

A highlight of the Ubuntu MATE 20.10 release was the transition to Ayatana Indicators. You can read 👓 the 20.10 release notes to learn what Ayatana Indicators are and why this transition will be beneficial in the long term.

We’ve added new versions of Ayatana Indicators including ‘Indicators’ settings to the Control Center, which can be used to configure the installed indicators.

Ayatana Indicators Settings

Other indicator changes include:

Expect to see more Ayatana Indicators included in future releases of Ubuntu MATE. Top candidates are:

  • Display Indicator - needs uploading to Debian and Ubuntu
  • Messages Indicator - needs uploading to Debian and Ubuntu
    • ayatana-webmail is available for install in Ubuntu MATE 21.04
  • Keyboard Indicator - requires feature parity with MATE keyboard applet
  • Bluetooth Indicator - requires integration work with Blueman

Yaru MATE 🎨

This is where most of the work was invested 💦

A new derivative of the Yaru theme, called Yaru MATE, has been created in collaboration with the Yaru team. During our discussions with the Yaru team we decided to make one significant departure from how Yaru is delivered: Yaru MATE provides only a light and a dark theme, with the light theme being the default. This differs from Yaru in Ubuntu, which features a mixed light/dark theme by default.

We’ve decided to offer only light and dark variants of Yaru MATE as it makes maintaining the themes much easier; the mixed light/dark Yaru theme requires extra work to maintain due to the edge cases it surfaces. Offering just light and dark variants also ensures better application compatibility.

This work touched on a number of projects; here’s what Ubuntu MATE now enjoys as a result of Yaru MATE:

  • GTK 2.x, 3.x and 4.x Yaru MATE light and dark themes
  • Suru icons along with a number of new icons specifically made for MATE Desktop and Ubuntu MATE
  • LibreOffice Yaru MATE icon theme, which is enabled by default on new installs
  • Font contrast is much improved throughout the desktop and applications
  • Websites honour dark mode at the operating system level
    • If you enable the Yaru MATE Dark theme, websites that provide a dark mode will automatically use their dark theme to match your preferences.

In return for the excellent theme and icons from the Yaru team, the Ubuntu MATE team worked on the following which are now features of Yaru and Yaru MATE:

As a result of our window manager and GTKSourceView contributions it is now possible to use all three upstream Yaru themes from Ubuntu in Ubuntu MATE 💪

Yaru MATE GTKSourceView, Tiled Windows and Plank theme

Going the extra mile 🎽

In order to make Yaru MATE shine we’ve also created:

Yaru MATE Snaps

snapd will soon be able to automatically install snaps of themes that match your currently active theme. The snaps we’ve created are ready to integrate with that capability when it is available.

The gtk-theme-yaru-mate and icon-theme-yaru-mate snaps are pre-installed in Ubuntu MATE, but are not automatically connected to snapped applications. Running the following commands in a terminal periodically, or after you install a snapped GUI application, will connect the themes to compatible snaps until such time as snapd supports doing this automatically.

for PLUG in $(snap connections | grep gtk-common-themes:gtk-3-themes | awk '{print $2}'); do sudo snap connect ${PLUG} gtk-theme-yaru-mate:gtk-3-themes; done
for PLUG in $(snap connections | grep gtk-common-themes:gtk-2-themes | awk '{print $2}'); do sudo snap connect ${PLUG} gtk-theme-yaru-mate:gtk-2-themes; done
for PLUG in $(snap connections | grep gtk-common-themes:icon-themes | awk '{print $2}'); do sudo snap connect ${PLUG} icon-theme-yaru-mate:icon-themes; done
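These loops work by scraping the plug name out of the second column of the snap connections output. The extraction step can be sanity-checked against a made-up sample line; the firefox plug below is illustrative, not taken from a real system:

```shell
# Columns of `snap connections` output: Interface  Plug  Slot  Notes
sample='content  firefox:gtk-3-themes  gtk-common-themes:gtk-3-themes  -'
echo "$sample" | grep gtk-common-themes:gtk-3-themes | awk '{print $2}'
# prints: firefox:gtk-3-themes
```

Each extracted plug is then reconnected from the generic gtk-common-themes slot to the matching yaru-mate theme snap.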

What’s next? 🔮

While we made lots of progress with Yaru MATE for 21.04, the work is ongoing. Here’s what we’ll be working on next:

  • Some symbolic icons are being provided by a fallback to the Ambiant MATE and Radiant MATE icon themes, something we are keen to address for Ubuntu MATE 21.10.
  • Ubuntu MATE doesn’t have a full complement of Suru icons for MATE Desktop, yet.
  • Plymouth boot theme will be aligned with the EFI respecting theme shipped in Ubuntu.

Mutiny 🏴‍☠️

The Mutiny layout, which provides a desktop layout that somewhat mimics Unity, has been a source of bug reports and user frustration 😤 for some time now. Switching to/from Mutiny has often crashed, resulting in a broken desktop session 😭

We have removed MATE Dock Applet from Ubuntu MATE and refactored the Mutiny layout to use Plank instead.

Mutiny layout with Yaru MATE Dark

  • Switching to the Mutiny layout via MATE Tweak will automatically theme Plank
    • Light and dark Yaru themes for Plank are included
  • Mutiny no longer enables Global Menus and also doesn’t undecorate maximised windows by default
    • If you like those features you can enable them via MATE Tweak
  • Window Buttons Applet is no longer integrated in the Mutiny top panel by default.
    • You can manually add it to your custom panel configuration should you want it.
    • Window Buttons Applet has been updated to automatically use window control buttons from the active theme. HiDPI support is also improved.

As a result of these changes Mutiny is more reliable and retains much of the Unity look and feel that many people like.

Command line love 🧑‍💻

We’ve included a few popular utilities requested by command line warriors. neofetch, htop and inxi are all included in the default Ubuntu MATE install. neofetch also features an Ubuntu MATE ASCII logo.

Raspberry Pi images

We will release Ubuntu MATE 21.04 images for the Raspberry Pi in the days following the release for PC 🙂

Major Applications

Accompanying MATE Desktop 1.24.1 and Linux 5.11 are Firefox 87, LibreOffice, Evolution 3.40 & Celluloid 0.20.


See the Ubuntu 21.04 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 21.04

This new release will be first available for PC/Mac users.


Upgrading from Ubuntu MATE 20.10

You can upgrade to Ubuntu MATE 21.04 from Ubuntu MATE 20.10. Ensure that you have all updates installed for your current version of Ubuntu MATE before you upgrade.

  • Open the “Software & Updates” from the Control Center.
  • Select the 3rd Tab called “Updates”.
  • Set the “Notify me of a new Ubuntu version” dropdown menu to “For any new version”.
  • Press Alt+F2 and type in update-manager -c into the command box.
  • Update Manager should open up and tell you: New distribution release ‘21.04’ is available.
    • If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click “Upgrade” and follow the on-screen instructions.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.

Known Issues

Here are the known issues.

  • Plank: When snaps update, they disappear from Plank.
  • Ubuntu MATE: Screen reader installs using Orca are currently not working.
  • Ubuntu: Ubiquity slideshows are missing for OEM installs of Ubuntu MATE.
  • Ubuntu: shim-signed causes the system not to boot on certain older EFI systems.


Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on April 29, 2021 10:51 PM

Lubuntu 18.04 (Bionic Beaver) was released April 27, 2018 and will reach End of Life on Friday, April 30, 2021. This means that after that date there will be no further security updates or bugfixes released. We highly recommend that you re-install with 20.04 as soon as possible if you are still running 18.04. After […]

The post Lubuntu 18.04 LTS End of Life and Current Support Statuses first appeared on Lubuntu.

on April 29, 2021 10:31 AM

April 28, 2021

Yes, you read it right. timg is a text mode image viewer and can also play videos. But, but, how is that possible?

timg uses suitable Unicode characters and also the colour support that is available in many terminal emulators.

The timg application can show images and videos in a terminal emulator. Create your own teddy bear with Blender by following this Blender tutorial by tutor4u.

timg is developed by Henner Zeller, and in 2017 I wrote a blog post about creating a snap package for timg. A snap package was created and published on the Snap Store. I even registered the name timg, although some time later it became much stricter to register a package name if you are not the maintainer. In addition, those were such early days for snap packages that I think you could not set up the license of the software in the package, and it always came up as Proprietary.

Fast forward from 2017 to a couple of weeks ago: a user posted an issue that the snap package of timg does not have the proper license. I was pinged through that GitHub issue and decided to update the snapcraft.yaml to whatever is now supported in snap packages. Apparently, you can now set the license in snap packages. Moreover, timg has been updated and can play many more image and video formats. I figured out the latter because timg now has a lot more dependencies than before.

What use would you have of a text mode image viewer and video player?

  1. Security. The snap package at least does not have access to the X11 server, the network, or the audio server.
  2. Convenience. You are on a remote server (like a VPS) and do not want to ssh -X after installing an X11 application with all its dependencies.
  3. Workflow. The image you view is part of your text session. No popup windows that open and disappear.

on April 28, 2021 05:48 PM

April 27, 2021

As of a few days ago, a new feature in clang-query allows introspecting the source locations for a given clang AST node. The feature is also available for experimentation in Compiler Explorer. I previously delivered a talk at EuroLLVM 2019 and blogged in 2018 about this feature and others to assist in discovery of AST matchers and source locations. This is a major step in getting the Tooling API discovery features upstream into LLVM/Clang.


When creating clang-tidy checks to perform source to source transformation, there are generally two steps common to all checks:

  • Matching on the AST
  • Replacing particular source ranges in source files with new text

To complete the latter, you will need to become familiar with the source locations clang provides for the AST. A diagnostic is then issued with zero or more “fix it hints” which indicate changes to the code. Almost all clang-tidy checks are implemented in this way.

Some of the source locations which might be interesting for a FunctionDecl are illustrated here:

Pick Your Name

A common use case for this kind of tooling is to port a large codebase from a deprecated API to a new API.

A tool might replace a member call pushBack with push_back on a custom container, for the purpose of making the API more like standard containers. It might be the case that you have multiple classes with a pushBack method and you only want to change uses of it on a particular class, so you cannot simply find and replace across the entire repository.

Given test code like

    struct MyContainer {
        // deprecated:
        void pushBack(int t);

        // new:
        void push_back(int t);
    };

    void calls() {
        MyContainer mc;
        mc.pushBack(42);
    }

A matcher could look something like:

    match cxxMemberCallExpr(on(hasType(cxxRecordDecl(hasName("MyContainer")))),
                            callee(cxxMethodDecl(hasName("pushBack"))))

Try experimenting with it on Compiler Explorer.

An explanation of how to discover how to write this AST matcher expression is out of scope for this blog post, but you can see blogs passim for that too.

Know Your Goal

Having matched a call to pushBack the next step is to replace the source text of the call with push_back. The call to mc.pushBack() is represented by an instance of CXXMemberCallExpr. Given the instance, we need to identify the location in the source of the first character after the “.” and the location of the opening paren. Given those locations, we create a diagnostic with a FixItHint to replace that source range with the new method name:

    diag(MethodCallLocation, "Use push_back instead of pushBack")
        << FixItHint::CreateReplacement(
            sourceRangeForCall, "push_back");

When we run our porting tool in clang-tidy, we get output similar to:

warning: Use push_back instead of pushBack [misc-update-pushBack]

Running clang-tidy with -fix then causes the tooling to apply the suggested fix. Once we have tested it, we can run the tool to apply the change to all of our code at once.

Find Your Place

So, how do we identify the sourceRangeForCall?

One way is to study the documentation of the Clang AST to try to identify what API calls might be useful to access that particular source range. That is quite difficult to determine for newcomers to the Clang AST API.

The new clang-query feature allows users to introspect all available locations for a given AST node instance.

note: source locations here
 * "getExprLoc()"
 * "getEndLoc()"
 * "getRParenLoc()"
With this output, we can see that the location of the member call is retrievable by calling getExprLoc() on the CXXMemberCallExpr, which happens to be defined on the Expr base class. Because clang replacements can operate on token ranges, the location for the start of the member call is actually all we need to complete the replacement.

One of the design choices of the srcloc output of clang-query is that only locations on the “current” AST node are part of the output. That’s why, for example, the arguments of a function call are not part of the locations output for a CXXMemberCallExpr. Instead, it is necessary to traverse to the argument and introspect the locations of the node which represents the argument.

By traversing to the MemberExpr of the CXXMemberCallExpr we can see more locations. In particular, we can see that getOperatorLoc() can be used to get the location of the operator (a “.” in this case, but it could be a “->” for example) and getMemberNameInfo().getSourceRange() can be used to get a source range for the name of the member being called.

The Best Location

Given the choice of using getExprLoc() or getMemberNameInfo().getSourceRange(), the latter is preferable because it is more semantically related to what we want to replace. Aside from the hint that we want the “source range” of the “member name”, the getExprLoc() should be disfavored as that API is usually only used to choose a position to indicate in a diagnostic. That is not specifically what we wish to use the location for.

Additionally, by experimenting with slightly more complex code, we can see that getExprLoc() on a template-dependent call expression does not give the desired source location (At time of publishing! – This is likely undesirable in this case). At any rate, getMemberNameInfo().getSourceRange() gives the correct source range in all cases.

In the end, our diagnostic can look something like:

    diag(MethodCallLocation, "Use push_back instead of pushBack")
        << FixItHint::CreateReplacement(
            theMember->getMemberNameInfo().getSourceRange(), "push_back");

This feature is a powerful way to discover source locations and source ranges while creating and maintaining clang-tidy checks. Let me know if you find it useful!

on April 27, 2021 09:08 AM

April 25, 2021

Six months ago I was elected to the Ubuntu Community Council. After the first month, I wrote a text about how I experienced the first month. Time flies and now six months have already passed.

In the first few months we have been able to fill some of the councils and boards that needed to be refilled in the community. But even where this has not been possible, we have initiated new ways to ensure that we move forward on the issues. One example is the LoCo Council, which could not be filled again, but we found people who were given the task of rethinking this council and proposing new structures. This process of evaluating and rethinking this area will take some time.

There are some issues that we have on the agenda at the moment. Some of these are general issues related to the community, but some affect individual members of the community or specific areas where there are problems.

For some topics, we quickly realised that it makes sense to have contact persons at Canonical who can advance these topics. We were very pleased to find Monica Ayhens-Madon and Rhys Davies, two employees at Canonical, who support us in bringing topics into the organisation and also implement tasks. One consequence of this has been the reactivation of the Community Team at Canonical.

One topic that we have also come across, through the staffing of the boards and the update of the benefits that members receive, is the Ubuntu Membership. At this point I would like to promote it: show your connection with the Ubuntu community through a membership. If you want to do this, or want to know what benefits you are entitled to, you can read about it in the Ubuntu Wiki.

There are still plenty of open construction sites, but structurally we are already on the right track again. Since the topics are dealt with in our free time and everyone on the Community Council has other things to do, topics sometimes drag on a bit. Sometimes I’m a bit impatient, but I’m getting better at it.

After six months, I can see that we as a Community Council have laid many building blocks and have already had some discussions where we have different approaches and thus also very different ideas. This is good for the community and leads to the different positions and opinions finding their way into the community.

You can read about our public meetings in the Community Hub. There is also the possibility, when we call for topics for our meetings, to bring in topics that we should look at in the Council, because this is important for the cooperation of the community.

If you want to get involved in discussions about the community, you can do so at the Community Hub. You can also send us an email to the mailing list community-council at lists.ubuntu.com if you have a community topic on your mind. If you want to contact me: you can do so by commenting, via Twitter or sending a mail to me: torsten.franz at ubuntu.com

on April 25, 2021 03:30 PM