September 26, 2023

Caps lock disabled

That’s better.

on September 26, 2023 11:00 AM

London, United Kingdom, 26 September 2023. Canonical announced today that Charmed MLFlow, Canonical’s distribution of the popular machine learning platform, is now generally available. Charmed MLFlow is part of Canonical’s growing MLOps portfolio. Ideal for model registry and experiment tracking, Charmed MLFlow is integrated with other AI and big data tools such as Apache Spark and Kubeflow. The solution runs on any infrastructure, from workstations to public and private clouds. Conveniently, it is offered as part of Canonical’s Ubuntu Pro subscription and priced per node, with a support tier available. It comes with extensive developer features and a ten-year security maintenance commitment.

Canonical’s Charmed MLFlow distribution is now generally available

Simplified deployment from workstations to any infrastructure

Charmed MLFlow can be deployed on a laptop within minutes, facilitating quick experimentation. It is fully tested on Ubuntu and can be used on other operating systems through Canonical’s Multipass or Windows Subsystem for Linux (WSL). 

“MLFlow has become the leading AI framework for streamlining all ML stages. Its popularity arises from its flexibility in facilitating modest local desktop experimentation and extensive cloud deployment, catering to both individual and enterprise needs”, said Cedric Gegout, VP of Product Management at Canonical. “This made Charmed MLFlow a fitting addition to our Canonical MLOps suite, offering cost-effective solutions that enable developers to start small and scale up as their business grows, without the typical ML infrastructure hassle and with a simple Ubuntu Pro subscription”.

Charmed MLFlow has model registry capabilities, enabling professionals to store, annotate and manage models in a centralised repository. This brings order to the machine learning development phase and provides visibility into the status of all experiments performed, including results, changes made and possible configurations.
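
To make the idea concrete: a model registry is essentially a versioned, annotated store keyed by model name. The toy sketch below is plain Python only, not the MLFlow API, and every name in it (the class, the model name, the artifact URIs) is made up for illustration:

```python
from datetime import datetime, timezone

class ModelRegistry:
    """Toy illustration of a model registry: versioned, annotated entries per model name."""

    def __init__(self):
        self._models = {}  # model name -> list of version entries

    def register(self, name, artifact_uri, tags=None):
        """Add a new version of `name`, returning the version number."""
        versions = self._models.setdefault(name, [])
        entry = {
            "version": len(versions) + 1,
            "artifact_uri": artifact_uri,
            "tags": tags or {},
            "registered_at": datetime.now(timezone.utc).isoformat(),
        }
        versions.append(entry)
        return entry["version"]

    def latest(self, name):
        """Return the most recently registered version entry for `name`."""
        return self._models[name][-1]

registry = ModelRegistry()
registry.register("churn-model", "s3://bucket/run-1/model", tags={"stage": "staging"})
v = registry.register("churn-model", "s3://bucket/run-2/model", tags={"stage": "production"})
```

A real registry such as MLFlow's adds storage backends, access control and stage transitions on top of this basic name-to-versions mapping.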


Charmed MLFlow runs on any environment, public or private cloud, and supports hybrid and multi-cloud scenarios. Charmed MLFlow works on any CNCF-conformant Kubernetes distribution, such as MicroK8s, Charmed Kubernetes or EKS. Data scientists can move their models from laptops to their infrastructure of choice, using the same tooling. This allows for a seamless migration between clouds, enabling professionals to benefit from the computing power they need for their use case. 

Automated lifecycle management and integrations

Charmed MLFlow benefits from improved lifecycle management, for easy upgrades and updates. In addition to the upstream capabilities, Canonical’s distribution automates these tasks and enables users to perform them easily. This reduces time spent on operations and takes away the burden of library, framework and tool incompatibility. The solution also integrates seamlessly with other machine-learning tools.

Charmed MLFlow can be deployed standalone and integrated with tools such as Jupyter Notebook, Charmed Kubeflow and KServe. Additionally, it includes infrastructure monitoring through Canonical Observability Stack (COS). When combined with Charmed Kubeflow, users can tap into other features like hyper-parameter tuning, GPU scheduling or model serving. 

Enterprise-ready ML projects with secure and supported tooling

Charmed MLFlow benefits from security patching through Canonical’s Ubuntu Pro subscription. Customers get timely patches for common vulnerabilities and exposures (CVEs) and a ten-year security maintenance commitment. Ubuntu Pro also provides hardening features and compliance with standards like FedRAMP, HIPAA and PCI-DSS, which are ideal for enterprises running AI/ML workloads in highly regulated environments. 

Besides security patching, enterprises can get 24/7 support for Charmed MLFlow deployment, uptime monitoring, bug fixes and operations. For organisations that lack internal expertise in machine learning infrastructure but aim to jumpstart their efforts, Canonical offers managed services.

Learn about Charmed MLFlow and generative AI at Canonical’s AI roadshow

The Canonical AI Roadshow features a lineup of events across the globe.

The upcoming Canonical AI Roadshow, which starts on 21 October 2023, will showcase how Canonical can help enterprises speed up their AI journeys. A lineup of presentations, talks, demos, interviews and case studies will focus on the latest trends in generative AI, and the critical role of open source in driving innovation in this space. 

Browse the full line-up or contact us to learn more.

on September 26, 2023 08:12 AM

September 25, 2023

Welcome to the Ubuntu Weekly Newsletter, Issue 806 for the week of September 17 – 23, 2023. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on September 25, 2023 10:39 PM

My work computer is a ThinkPad Z13. It’s on most of the time, including overnight and during the weekend. I’m one of those horrible people who like to just wiggle their mouse, unlock, and get working. I often leave a ton of windows open, so I quite like to sit down and start working without having to wait for boot up, and subsequent app launch.


So when I arrive at my desk on a Monday and discover my GPU has crashed, that’s a poor start to the week. The GPU crashing doesn’t completely kill the machine, just my desktop session and all the applications that were open. 😭

I see this kind of thing in the output of dmesg -Tw | grep amdgpu.

[Mon Aug 14 08:06:06 2023] [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx_0.0.0 timeout, signaled seq=5346515, emitted seq=5346517
[Mon Aug 14 08:06:06 2023] [drm:amdgpu_job_timedout [amdgpu]] *ERROR* Process information: process Xorg pid 3456 thread Xorg:cs0 pid 3464
[Mon Aug 14 08:06:06 2023] amdgpu 0000:63:00.0: amdgpu: GPU reset begin!
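
If you wanted to watch for these resets programmatically rather than eyeballing dmesg, the timeout lines are easy to pick out. This is just a small stdlib sketch parsing lines like the ones above; the function name and regex are mine, not part of any tool:

```python
import re

# Match amdgpu job-timeout lines such as:
#   [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx_0.0.0 timeout, ...
TIMEOUT_RE = re.compile(r"amdgpu_job_timedout.*\*ERROR\* ring (\S+) timeout")

def find_gpu_timeouts(dmesg_text):
    """Return the ring names from any amdgpu timeout lines in dmesg output."""
    return [m.group(1) for m in TIMEOUT_RE.finditer(dmesg_text)]

sample = (
    "[Mon Aug 14 08:06:06 2023] [drm:amdgpu_job_timedout [amdgpu]] "
    "*ERROR* ring gfx_0.0.0 timeout, signaled seq=5346515, emitted seq=5346517\n"
    "[Mon Aug 14 08:06:06 2023] amdgpu 0000:63:00.0: amdgpu: GPU reset begin!\n"
)
rings = find_gpu_timeouts(sample)
```

Feeding it the output of `dmesg` (e.g. via `subprocess`) would tell you which ring timed out, which is handy context to paste into a bug report.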

I use Xorg instead of Wayland on my laptop. I’ve tried Wayland, but it’s never been great for the software I use on a daily basis, and the hardware combination I’m using. I use two external monitors, attached via a USB-C docking thing. So my desk looks a bit like this.

ThinkPad Z13 with two external screens on

Although, more accurately, like this, when the GPU driver dies.

ThinkPad Z13 with two external screens off

This crash happened on a second Monday morning in succession. So I figured it was time to file a bug. I ran ubuntu-bug linux and followed the prompts. That got me bug 2031289.

Within a couple of days, I got a reply from Juerg Haefliger on the Ubuntu Kernel Team offering this suggestion.

“There are some AMD FW updates in lunar-proposed linux-firmware 20230323.gitbcdcfbcf-0ubuntu1.6. Can you give that a try?”

It’s not a good idea to enable the proposed pocket. So instead, I just grabbed the deb via, then did the old sudo apt install ./linux-firmware_20230323.gitbcdcfbcf-0ubuntu1.6_all.deb dance to install it.

Four days later, the following Monday, I arrived at the office with all my fingers and toes crossed.

Launchpad comment

Great success 🥳

Juerg followed up asking if we could close the bug. I left it until today, another Monday, to make sure, then confirmed. Bug closed!

I’m awarding one hundred Internet points to Juerg for the quick and friendly bug interaction. Plus more points for doing the upload of that package in the first place, according to the changelog.

As I understand it, what I have done here is update a binary blob of GPU firmware on my machine, in the hope that it fixed a crasher. I always understood that the bad, evil, horrible people at nVidia made nasty binary blobs, but the Godlike do-no-wrong people at AMD only made saintly open source stuff.

Seems we still need that horrid non-free stuff, even for the “good” kind of GPU. I went looking for more info and found a thread on Reddit (spit!) from the past, with a post from an AMD person, explaining this situation.

Reddit comment

Today, I learned.

on September 25, 2023 05:00 PM

September 22, 2023

With user edition out the door last week, this week was spent stabilizing unstable!

Spent some time sorting out our Calamares installer being quite grumpy, which is now fixed by reverting an upstream change. The unstable and developer ISOs have been rebuilt and are installable. I also spent some time sorting out issues with using an unreleased appstream (thanks ximion for help with packagekit!). KDE applications are starting to switch to Qt6 in master this week, the big one being KDE PIM! This entails an enormous amount of re-packaging work. I have made a dent, sorta. To be continued next week. I fixed our signond / kaccounts line for Qt6, which entailed some work on upstream code that uses QStringList.toSet, which was removed in Qt6! Always learning new things!

I have spent some time working on the KF6 content snap, working with Jarred to make sure his qt6 content snap will work for us. Unfortunately, I do not have much time for this as I must make money to survive, donations help free up time for this 🙂 Our new proposal with Kevin’s super awesome management company has been submitted and we will hopefully hear back next week.

Thanks for stopping by! Till next week.

If you can spare some change, consider a donation

Thank you!

on September 22, 2023 06:10 PM

The Ubuntu team is pleased to announce the Beta release of the Ubuntu 23.10 Desktop, Server, and Cloud products.

Ubuntu 23.10, codenamed “Mantic Minotaur”, continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.

This Beta release includes images not only for the Ubuntu Desktop, Server, and Cloud products, but also for the Edubuntu, Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu Cinnamon, Ubuntu Kylin, Ubuntu MATE, Ubuntu Studio, Ubuntu Unity, and Xubuntu flavours.

The Beta images are known to be reasonably free of showstopper image build or installer bugs, and provide a very recent snapshot of 23.10 that should be representative of the features intended to ship with the final release, expected on October 12, 2023.

Ubuntu, Ubuntu Server, Cloud Images:

Mantic Beta includes updated versions of most of our core set of packages, including a current 6.5 kernel, and much more.

To upgrade to Ubuntu 23.10 Beta from Ubuntu 23.04, follow these instructions:

The Ubuntu 23.10 Beta images can be downloaded at: (Ubuntu and Ubuntu Server on x86)

The default Ubuntu Desktop installer is now a Flutter snap backed by Subiquity.
The legacy installer is still available in case of issues with the new installer.

This Ubuntu Server image features the next generation Subiquity server installer, bringing the comfortable live session and speedy install of the Ubuntu Desktop to server users.

Additional images can be found at the following links: (Cloud Images) (Non-x86)

As fixes will be included in new images between now and release, any daily cloud image should be considered a Beta image. Bugs found should be filed against the appropriate packages or, failing that, the cloud-images project in Launchpad.

The full release notes for Ubuntu 23.10 Beta can be found at:


Edubuntu is a flavor of Ubuntu designed as a free, education-oriented operating system for children of all ages.

The Beta images can be downloaded at:


Kubuntu is the KDE based flavor of Ubuntu. It uses the Plasma desktop and includes a wide selection of tools from the KDE project.

The Beta images can be downloaded at:


Lubuntu is a flavor of Ubuntu which uses the Lightweight Qt Desktop Environment (LXQt). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock-solid Ubuntu base.

The Beta images can be downloaded at:

Ubuntu Budgie:

Ubuntu Budgie is a community developed desktop, integrating Budgie Desktop Environment with Ubuntu at its core.

The Beta images can be downloaded at:

Ubuntu Cinnamon

Ubuntu Cinnamon is a flavor of Ubuntu featuring the Cinnamon desktop environment.

The Beta images can be downloaded at:

Ubuntu Kylin:

Ubuntu Kylin is a flavor of Ubuntu that is more suitable for Chinese users.

The Beta images can be downloaded at:

Ubuntu MATE:

Ubuntu MATE is a flavor of Ubuntu featuring the MATE desktop environment.

The Beta images can be downloaded at:

Ubuntu Studio:

Ubuntu Studio is a flavor of Ubuntu that provides a full range of multimedia content creation applications for each key category: audio, graphics, video, photography and publishing.

The Beta images can be downloaded at:


Ubuntu Unity:

Ubuntu Unity is a flavor of Ubuntu featuring the Unity7 desktop environment.

The Beta images can be downloaded at:


Xubuntu is a flavor of Ubuntu that comes with Xfce, a stable, light and configurable desktop environment.

The Beta images can be downloaded at:

Regular daily images for Ubuntu, and all flavours, can be found at:

Ubuntu is a full-featured Linux distribution for clients, servers and clouds, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional technical support is available from Canonical Limited and hundreds of other companies around the world. For more information about support, visit

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

Your comments, bug reports, patches and suggestions really help us to improve this and future releases of Ubuntu. Instructions can be found at:

You can find out more about Ubuntu and about this Beta release on our website, IRC channel and wiki.

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

Originally posted to the ubuntu-announce mailing list on Fri Sep 22 08:27:04 UTC 2023 by Utkarsh Gupta, on behalf of the Ubuntu Release Team

on September 22, 2023 12:29 PM

India Mobile Congress (IMC) is the largest telecom, media, and technology forum in Asia, jointly organised by India’s Department of Telecommunications and the country’s Cellular Operators Association. It is also the biggest networking event in India, establishing itself as a showcase of innovation, technology and digital transformation.

Canonical is excited to participate in this year’s IMC event in Pragati Maidan, New Delhi, on 27-29 October.

Telecommunications is a key sector for Canonical. Our open source building blocks and solutions for the telecom industry address the needs of operators, system integrators, software and hardware vendors, and enterprises. 

Make sure to visit our booth and speak to us to learn more about how we can help you with your open source telco needs.

Hot topics

You probably know Canonical for publishing the most popular Linux operating system: Ubuntu.

Our telco solutions are built with the same philosophy as Ubuntu: secure, trusted, and production-grade open source backed by full operations support.

At this year’s IMC, our experts will introduce you to our open source software products for telco and how they can help you build modern telco infrastructure and applications. We will cover a range of topics that will interest you, whether you are an operator, a systems integrator, a hardware or software vendor, a developer, or simply a technology enthusiast.

Here are the topics we will cover at the event.

Low-cost and enterprise-grade 5G telecom infrastructure

Telco operators and system integrators look for ways to build systems that are production-grade and at the same time cost-effective. Especially with the increasing adoption of network function virtualisation in telco, and the emergence of 5G mobile communications systems that rely on it, the market needs efficient and low-cost enterprise-grade virtualisation infrastructure solutions more than ever.


With that in mind, Canonical’s strategy is to deliver open source infrastructure software products as building blocks to deliver effective telco solutions tailored for the needs of our telco customers. At IMC 2023, we will highlight our wide range of open source infrastructure solutions.

Highly-performant and reliable solutions at the edge

Performance has always been one of the most important goals of telco infrastructure. To achieve the ever-increasing customer SLA requirements, operators increasingly deploy edge clouds closer to their customers. Multi-access edge (MEC) applications running business workloads and operational software benefit from running close to devices and end users. The emergence of Open RAN technologies also requires a more effective cloud infrastructure fit for the network edge.


Canonical’s mission is to deliver secure and reliable infrastructure built with trusted open source technologies, optimised for effective edge computing. At IMC 2023, you will learn more about our telco edge solutions and discover how open source meets high performance.

Secure and compliant open source applications 

Security for telecom infrastructure is a central need – it is part of the critical national infrastructure in every country. Telco systems must comply with various security standards and frameworks to be robust and able to withstand the latest threats. We recognise these needs, and make them our highest priority in our telco solutions.


Our solutions are supported by the most comprehensive security coverage for open source delivered to you by Canonical with Ubuntu Pro.


We deliver secure and supported telco applications with automated lifecycle management. At IMC 2023, we will explain how your applications can be safeguarded against common vulnerability exposures and comply with standards.

A full ecosystem of open source solutions

Our open source products and solutions are fit for purpose in the telco ecosystem, meeting the needs for secure, trusted, flexible, optimised, automated, and user-friendly operations.

Operators can achieve the cost reduction they need, with the performance and scalability goals they seek. System integrators can find all the building blocks they need for an efficient, highly-performant, interoperable, and modular infrastructure. Vendors can find the ideal runtime environments and toolsets for their hardware and software solutions, providing them with fast market reach and a competitive advantage when their solutions are certified for Ubuntu.


Our expert speakers will participate in panel sessions to discuss some of the latest technology trends in telco. Don’t miss out and join our panel sessions to learn more:

  • Open RAN: Does it hold answers to the evolving telecom landscape?
  • Affordability, accessibility and advancement in devices: Key to 5G adoption
  • Monetising connectivity: Network as a Service for seamless business growth
  • Harnessing automation and AI: Empowering the networks of tomorrow
  • Edge and beyond: Data centres redefined


Canonical will show some examples of how we can deliver a highly-performant solution targeted at edge compute and Open RAN use cases. 

The demo will showcase a full deployment sequence of an edge cloud with our open source solutions. You will have the chance to see bare-metal provisioning with MAAS, Ubuntu deployment, and then the setup and configuration of an edge cloud powered by MicroCloud and MicroK8s.

We will also have some exciting AI/ML use cases at the booth for you to check out.

Looking forward to seeing you all at India Mobile Congress 2023. Meet us to discuss your telco needs at this exciting event.

Contact us

Canonical provides a full stack for your telecom infrastructure. To learn more about our telco solutions, visit our webpage at


If you also would like to learn more about Ubuntu Pro’s security features for telco, you can watch our webinar and read our blog.

For more information on real-time kernel with Ubuntu Pro in telco, check out our blog post, and contact us for your real-time kernel needs in your telco business today.

on September 22, 2023 12:00 PM

September 21, 2023


Jonathan Carter

I very, very nearly didn’t make it to DebConf this year, I had a bad cold/flu for a few days before I left, and after a negative covid-19 test just minutes before my flight, I decided to take the plunge and travel.

This is just everything in chronological order, more or less, it’s the only way I could write it.


I planned to spend DebCamp working on various issues. Very few of them actually got done: I spent the first few days in bed further recovering. I took a covid-19 test when I arrived, and another after I felt better, and both were negative, so I’m not sure what exactly was wrong with me. Between that and catching up with other Debian duties, I couldn’t make any progress on the packaging work I wanted to do. I’ll still post what I intended here, and I’ll try to take a few days to focus on these some time next month:

Calamares / Debian Live stuff:

  • #980209 – installation fails at the “install boot loader” phase
  • #1021156 – calamares-settings-debian: Confusing/generic program names
  • #1037299 – “Install Debian” -> “Untrusted application launcher”
  • #1037123 – “Minimal HD space required” too small for some live images
  • #971003 – Console auto-login doesn’t work with sysvinit

At least Calamares has been trixiefied in testing, so there’s that!

Desktop stuff:

  • #1038660 – please set a placeholder theme during development, different from any release
  • #1021816 – breeze: Background image not shown any more
  • #956102 – desktop-base: unwanted metadata within images
  • #605915 – please make it a non-native package
  • #681025 – Put old themes in a new package named desktop-base-extra
  • #941642 – desktop-base: split theme data files and desktop integrations in separate packages

The “Egg” theme that I want to develop for testing/unstable is based on Juliette Taka’s Homeworld theme that was used for Bullseye. Egg, as in, something that hasn’t quite hatched yet. Get it? (for #1038660)

Debian Social:

  • Set up Lemmy instance
    • I started setting up a Lemmy instance before DebCamp, and meant to finish it.
  • Migrate PeerTube to new server
    • We got a new physical server for our PeerTube instance, we should have more space for growth and it would help us fix the streaming feature on our platform.


I intended to get the loop for DebConf in good shape before I left, so that we can spend some time during DebCamp making some really nice content, unfortunately this went very tumbly, but at least we ended up with a loopy that kind of worked and wasn’t too horrible. There’s always another DebConf to try again, right?

So DebCamp as a usual DebCamp was pretty much a wash (fitting with all the rain we had?) for me, at least it gave me enough time to recover a bit for DebConf proper, and I had enough time left to catch up on some critical DPL duties and put together a few slides for the Bits from the DPL talk.


Bits From the DPL

I had very, very little available time to prepare something for Bits from the DPL, but I managed to put some slides together (available on my wiki page).

I mostly covered:

  • A very quick introduction of myself (I’ve done this so many times, it feels redundant giving my history every time), and some introduction on what it is that the DPL does. I declared my intent not to run for DPL again, and the reasoning behind it, and a few bits of information for people who may intend to stand for DPL next year.
  • The sentiment out there for the Debian 12 release (which has been very positive), how we include firmware by default now, and that we’re saying goodbye to both the GNU/KFreeBSD and mipsel architectures.
  • Debian Day and the 30th birthday party celebrations from local groups all over the world (and a reminder about the Local Groups BoF later in the week).
  • I looked forward to Debian 13 (trixie!), and how we’re gaining riscv64 as a release architecture, as well as loongarch64, and that plans seem to be forming to fix 2k38 in Debian, and hopefully largely by the time the Trixie release comes by.
  • I made some comments about “Enterprise Linux”, as people refer to the RHEL eco-system these days, how really bizarre some aspects of it are (like the kernel maintenance), and that some big vendors are choosing to support systems outside of that eco-system now (like CPanel now supporting Ubuntu too). I closed with the quote below from Ian Murdock, and assured the audience that if they want to go out and make money with Debian, they are more than welcome to.

Job Fair

I walked through the hallway where the Job Fair was hosted, and enjoyed all the buzz. It’s not always easy to get this right, but this year it was very active and energetic, I hope lots of people made some connections!

Cheese & Wine

Due to state laws and alcohol licenses, we couldn’t consume alcohol from outside the state of Kerala in the common areas of the hotel (only in private rooms), so this wasn’t quite as big or as fun as our usual C&W parties since we couldn’t share as much from our individual countries and cultures, but we always knew that this was going to be the case for this DebConf, and it still ended up being alright.

Day Trip

I opted for the forest / waterfalls daytrip. It was really, really long, with lots of time in the bus. I think our trip’s organiser underestimated how long it would take between the points on the route (all in all it wasn’t that far, but on a bus on a winding mountain road, it takes a long time). We left at 8:00 and only found our way back to the hotel around 23:30. Even though we arrived tired and hungry, we saw some beautiful scenery, animals and also met indigenous river people who talked about their struggles against being driven out of their place of living multiple times as the government invests in new developments like dams and hydro power.

Photos available in the DebConf23 public git repository.

Losing a beloved Debian Developer during DebConf

To our collective devastation, not everyone made it back from their day trips. Abraham Raji was on the kayak day trip, and while swimming, got caught by a whirlpool from a drainage system.

Even though all of us were properly exhausted and shocked in disbelief at this point, we had to stay up and make some tough decisions. Some initially felt that we had to cancel the rest of DebConf. We also had to figure out how to announce what happened asap both to the larger project and at DebConf in an official manner, while ensuring that due diligence took place and that the family is informed by the police first before making anything public.

We ended up cancelling all the talks for the following day, with an address from the DPL in the morning to explain what had happened. Of all the things I’ve ever had to do as DPL, this was by far the hardest. The day after that, talks were also cancelled for the morning so that we could attend his funeral. Dozens of DebConf attendees headed out by bus to go pay their final respects, many wearing the t-shirts that Abraham had designed for DebConf.

A book of condolences was set up so that everyone who wished to could write a message on how they remembered him. The book will be kept by his family.

Today marks a week since his funeral, and I still feel very raw about it. And even though there was uncertainty whether DebConf should even continue after his death, in hindsight I’m glad that everyone pushed forward. While we were all heartbroken, it was also heartwarming to see people care for each other in all of this. If anything, I think I needed more time at DebConf just to be in that warm aura of emotional support for just a bit longer. There are many people who I wanted to talk to who I barely even had a chance to see.

Abraham, or Abru as he was called by some people (which I like because “bru” in Afrikaans is like “bro” in English, not sure if that’s what it implied locally too) enjoyed artistic pursuits, but he was also passionate about knowledge transfer. He ran classes at DebConf both last year and this year (and I think at other local events too) where he taught people packaging via a quick course that he put together. His enthusiasm for Debian was contagious; a few of the people he was mentoring came up to me and told me that they were going to see it through and become a DD in honor of him. I can’t even remember how I reacted to that, my brain was already so worn out, and stitching that together with the tragedy of what happened while at DebConf was just too much for me.

I first met him in person last year in Kosovo; I already knew who he was, so I think we interacted during the online events the year before. He was just one of those people who showed so much promise, and I was curious to see what he’d achieve in the future. Unfortunately, he was taken away from us too soon.

Poetry Evening

Later in the week we had the poetry evening. This was the first time I had the courage to recite something. I read Ithaka by C.P. Cavafy (translated by Edmund Keeley). The first time I heard about this poem was in an interview with Julian Assange’s wife, where she mentioned that he really loves this poem, and it caught my attention because I really like the Weezer song “Return to Ithaka” and always wondered what it was about, so needless to say, that was another rabbit hole at some point.

Group Photo

Our DebConf photographer organised another group photo for this event, links to high-res versions available on Aigar’s website.


I didn’t attend nearly as many talks this DebConf as I would’ve liked (fortunately I can catch up on video, should be released soon), but I did make it to a few BoFs.

In the Local Groups BoF, representatives from various local teams were present, introduced themselves and explained what they were doing. From memory (sorry if I left someone out), we had people from Belgium, Brazil, Taiwan and South Africa. We talked about the types of events a local group could do (BSPs, Mini DCs, sprints, Debian Day, etc.), how to help local groups get started, booth kits for conferences, and setting up some form of calendar that lists important Debian events in a way that makes it easier for people to plan and co-ordinate. There’s a mailing list for co-ordination of local groups, and the irc channel is -localgroups on oftc.

If you got one of these Cheese & Wine bags from DebConf, that’s from the South African local group!

In the BoF, we discussed the hosting service, where Debian pays for VMs hosted for projects by individual DDs on The idea is that we start some form of census that monitors the services, whether they’re still in use, whether the system is up to date, whether someone still cares for it, etc. We had some discussion about where the lines of responsibility are drawn, and we can probably make things a little bit more clear in the documentation. We also want to offer more in terms of backups and monitoring (currently DDs do get 500GB from that could be used for backups of their services though). The intention is also to deploy some form of configuration management for some essentials across the hosts. We should also look at getting some sponsored hosting for this.

In the Debian Social BoF, we discussed some services that need work or expansion. In particular, Matrix keeps growing at an increased rate as more users join and more channels are bridged, so it will likely move to its own host with big disks soon. We might replace Pleroma with a fork called Akkoma; this will need some more homework and checking whether it’s even feasible. Some services haven’t really been used (like WriteFreely and Plume), and it might be time to retire them. We might just have to help one or two users migrate some of their posts away if we do retire them. Mjolnir seems to do a fine job at spam blocking; we haven’t had any notable incidents yet. WordPress now has improved fediverse support, but it’s unclear whether it works on a multi-site instance yet, so I’ll test it at some point soon and report back. For upcoming services, we are implementing Lemmy and probably also Mobilizon. A request was made that we also look into Loomio.

More Information Overload

There’s so much that happens at DebConf that it’s tough to take it all in, and also to find time to write about all of it, but I’ll mention a few more things that are certainly worthy of note.

During DebConf, we had some people from the KITE Linux team over. KITE supplies the ICT needs for the primary and secondary schools in the province of Kerala, where they all use Linux, and they have decided to switch all of these to Debian. There was an ad-hoc BoF where locals were listening and fielding questions that the KITE Linux team had. It was great seeing all the energy and enthusiasm behind this effort, and I hope someone will properly blog about it!

I learned about the VGLUG Foundation, who are doing a tremendous job at promoting GNU/Linux in the country. They are also training up 50 people a year to be able to provide tech support for Debian.

I came across the booth for Mostly Harmless, who liberate old hardware by installing free firmware on it. It was nice seeing all the devices out there that could be liberated, and how doing so can breathe new life into old hardware.

Some hopefully harmless soldering.

Overall, the community and their activities in India are very impressive, and I wish I had more time to get to know everyone better.


Oh yes, one more thing. The food was great. I tasted more different kinds of curry than I ever did in my whole life up to this point. The lunch on banana leaves was interesting, and also learning how to eat this food properly by hand (thanks to the locals who insisted on teaching me!), it was a… fruitful experience? This might catch on at home too… less dishes to take care of!

Special thanks to the DebConf23 Team

I think this may have been one of the toughest DebConfs to organise yet, and I don’t think many people outside of the DebConf team know about all the challenges and adversity this team has faced in organising it. Even just getting to the previous DebConf in Kosovo was a long, tedious and somewhat risky process. Through it all, they were absolute pros. Not once did I see them get angry or yell at each other; whenever a problem came up, they just dealt with it. They did a really stellar job, and I made a point of telling them on the last day that everyone appreciated all the work that they did.

Back to my nest

I brought Dax a ball back from India; he seems to have forgiven me for not taking him along.

I’ll probably take a few days soon to focus a bit on my bugs and catch up on my original DebCamp goals. If you made it this far, thanks for reading! And thanks to everyone for being such fantastic people.

on September 21, 2023 08:36 PM

E265 Baderna Na Caserna I

Podcast Ubuntu Portugal

In a seedy atmosphere of folies-bergères, debauchery, alcohol and cheap perfume, a pirate, a biker and a variety artist (Lili, La Bardageuse) get together to interview guests dragged in from the Free Software Festival. The decadent ambience, spiced up with a visibly intoxicated live audience, went beyond every limit of decency and good manners, shocking everyone who watched. It is an endless string of shameless antics that greatly dishonours the country!

You know the drill: listen, subscribe and share!


You can support the podcast using our Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of it for 15 dollars, or just parts of it depending on whether you pay 1 or 8 dollars. We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying as much as you want. If you are interested in other bundles not listed in the notes, use the link and you will be supporting us too.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo, and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used in it are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on September 21, 2023 12:00 AM

September 20, 2023

Test post

Jonathan Carter

just testing, please ignore

on September 20, 2023 04:00 PM


One very neat feature we had back when LXD was hosted on the Linux Containers infrastructure was the ability to try it online. For that, we were dynamically allocating a LXD container with nested support, allowing the user to quickly get a shell and try LXD for a bit.

This was the first LXD experience for tens of thousands of people and made it painless to discover LXD and see if it’s a good fit for you.

With the move of LXD to Canonical, this was lost and my understanding is that for LXD, there’s currently no plan to bring it back.

Enter Incus

Now that Incus is part of the Linux Containers project, it gets to use some of the infrastructure which was once provided to LXD, including the ability to provide a live demo server!

This is now live at:

Technical details

Quite a few things have changed on the infrastructure side since the LXD days.

For one thing, the server code has seen some substantial updates, porting it to Incus, adding support for virtual machines, talking to remote clusters, making the configuration file easier to read, adding e-mail notifications for when users leave feedback and more!

On the client side, the code was also ported from the now defunct term.js over to the actively maintained xterm.js. The instructions were obviously updated to fit Incus too.

But the exciting part is that we’re no longer using nested containers running inside one large, mostly stateless VM that had to be rebuilt daily for security reasons. No, we’re now spawning individual virtual machines against a remote Incus cluster!

Each session now gets an Ubuntu 22.04 VM for a duration of 30 minutes. Each VM is running on an Incus cluster with a few beefy machines available. They use the Incus daily repository along with both my kernel and zfs builds.

Resource wise, we’re also looking at a big upgrade, moving from just 1 CPU, 256 MB of RAM and 5 GB of slow disk to a whopping 2 CPUs, 4 GB of RAM and 50 GB of NVMe storage!

The end result is that while the session startup time is a bit longer, up to around 15s from just 5s, the user now gets a full dedicated VM with fast storage and a lot more resources to play with. The most notable change this introduces is the ability to play with Incus VMs too!

Next steps

The demo server is currently using Incus daily builds as there’s no stable Incus release yet. This will obviously change as soon as we have a stable release!

Other than that, the instructions may be expanded a bit to cover more resource intensive parts of Incus, making use of the extra resources now available.

on September 20, 2023 03:07 PM

September 19, 2023

Steam Deck Emulation Done Right

Ubuntu Podcast from the UK LoCo

Consolidating services onto a dedicated server, contributing to a large open source project, and retro gaming on a refurbished Steam Deck.
on September 19, 2023 07:15 PM

Ubuntu Budgie 23.10 (Mantic Minotaur) is a normal release with 9 months of support, from October 2023 to July 2024. Ubuntu LTS releases are focused on long-term support. If stability is more important than having the latest and greatest version of the kernel, desktop environment, and applications, then Ubuntu Budgie 22.04 LTS is perfect for you. In these release notes, we are going to cover the...


on September 19, 2023 06:56 PM

How to run Remark42 on Fly.io

Andrea Corbellini

As I wrote in my previous post, I recently switched from Disqus to Remark42 for the comments on my blog. Here I will explain how I set it up on Fly.io.


The setup that I ended up with looks like the following:

Diagram of the components for the Remark42 setup

Something to note about this setup is that the “machine” (more on that later) and the storage volume are both single instances. This is not a distributed setup, because Remark42 stores comments in a single file and does not make use of a distributed database. This is listed as a “feature” on the Remark42 website. How one is supposed to implement replication, I have no idea. Thankfully Fly.io seems to be fast at provisioning machines, and the Remark42 daemon also seems fast to start, so hopefully if a problem occurs (or when updates are required), the downtime will be minimal.

It is imperative, however, to understand that, because of the non-distributed/non-replicated nature of this setup, backups should be made periodically to avoid the risk of losing your comments forever.
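Since the whole comment database lives under a single directory, even a crude scheduled copy goes a long way. Here is a minimal sketch; /srv/var is where Remark42 keeps its data (its BoltDB file and its own daily backups), but the staging paths and destination below are made up so the sketch is runnable anywhere:

```shell
# Hypothetical sketch: snapshot the Remark42 data directory into a
# date-stamped tarball that can then be shipped off-machine.
# The /tmp paths are stand-ins for the real /srv/var and an off-site target.
mkdir -p /tmp/srv/var /tmp/offsite
echo 'demo comment database' > /tmp/srv/var/remark.db   # stand-in for the real DB
tar -czf "/tmp/offsite/remark42-$(date +%F).tar.gz" -C /tmp/srv var
ls /tmp/offsite
```

Run from cron (or any scheduler), plus a copy to storage outside Fly.io, this would cover the single-volume failure mode described above.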


Before setting up Remark42, I had never used Fly.io before. As a newbie, I would describe Fly.io as a cloud provider focused on Docker containers. It uses some concepts (like “apps” and “machines”) that make sense after you practice a bit with them, but as a beginner they are not the easiest to learn. Most of the complexity, I think, comes from the fact that the documentation is poorly written. On top of that, it appears that Fly.io is migrating their offering from “V1 apps” to “V2 apps”, and today some documentation applies only to “V1 apps” while other pieces apply only to “V2 apps”, resulting in a big mess. The error messages you get are also far from clear.

But don’t get too scared: once you get to know Fly.io, it can actually be fun to use.

Creating resources on Fly.io requires installing their command line client: flyctl. Because I do not like to run unknown software unconfined, I packaged it as a snap that you can install using:

snap install andrea-flyctl

Another source of confusion that I had at the beginning was that, by reading the documentation, it looked like a second command line tool named fly was needed in addition to flyctl. It turns out that fly and flyctl are the same thing; they’re just transitioning from one name to the other. If you installed the tool through the snap, you can set up these aliases so that you can copy and paste commands without trouble:

alias fly=/snap/bin/
alias flyctl=/snap/bin/

According to the documentation (and assuming it’s up to date), flyctl does not support everything that the Fly.io API supports, so sometimes curl is used to interact directly with the API. In order to do that, you’ll need to download an authentication token from the web interface and store it in a file (that I’ll call ~/fly-token from now on).

I’m going to skip over the steps for creating and configuring a Fly.io account and obtaining an authentication token, as those were easy steps in my opinion.

Creating a machine

A “machine” is a virtual machine running a single Docker container with a persistent volume attached to it. In order to create the machine to run Remark42 in, I loosely followed this page from the documentation: Run User Code on Fly Machines. “Loosely” because it turned out that some pieces on that page are not fully correct, but anyway…

Before creating a machine, you first need to create an “app”. A Fly.io app is basically an endpoint, which consists of a DNS name (in the form ${app_name}.fly.dev) and a set of IP addresses. Behind these IP addresses are load balancers that will forward requests to the machines inside the app.

You can do that through the API like this:

curl -X POST \
  -H "Authorization: Bearer $(<~/fly-token)" \
  -H 'Content-Type: application/json' \
  '' \
  -d '{ "app_name": "${app_name}", "org_slug": "personal" }'

(Replace ${app_name} with some identifier of your choice; I chose remark42 without knowing that this would have removed the possibility for other people to register an app with the same name.)

IP addresses need to be manually allocated:

fly ips allocate-v4 --app=${app_name} --shared
fly ips allocate-v6 --app=${app_name}

The --shared option to allocate-v4 tells Fly.io to allocate an IP address that may be shared with other apps, even outside of your account/organization. Remove --shared if you want to use a dedicated IP, but note that dedicated IPv4 addresses are a paid feature.

Allocating IPs is an important step: it can be done later, after creating the machine, but it must be done, otherwise your machine will be unreachable and it won’t be obvious why.

You should now create a persistent volume for your machine:

fly volume create remark42_db_0 --app=${app_name} --size=1

This will display a warning about replication, but you can ignore it because, sadly, Remark42 does not support replication.

Remark42 needs to be given a secret key (I guess for the purpose of signing JWT tokens). Fly.io has a handy, albeit poorly documented, feature to manage secrets and make them available to machines. You can set the Remark42 secret like this:

fly secrets set --app=${app_name} SECRET='a very secret string'

(You can generate a random secret string with a command like cat /dev/urandom | tr -Cd 'a-zA-Z0-9' | head -c64, which means: get some random bytes, keep only alphanumeric characters, get the first 64 characters.)

You may be wondering: how is the container running inside the machine supposed to access this secret? The documentation doesn’t say a word about it, but after experimenting I found that all the app secrets are passed as environment variables, which is great, because this is exactly what Remark42 expects.

Note: it’s important to set SECRET before creating the machine, or Remark42 will refuse to start.
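To make the mechanism concrete, here is a tiny runnable sketch (the secret value is obviously fake): a variable exported into the machine’s environment is visible to any process started inside it, which is how the Remark42 daemon ends up seeing SECRET.

```shell
# Hypothetical illustration: Fly.io injects each app secret as a plain
# environment variable, so a child process (a subshell here, standing in for
# the Remark42 daemon) can read it with no extra plumbing.
export SECRET='not-a-real-secret'
sh -c 'echo "child process sees SECRET (${#SECRET} characters)"'
# prints: child process sees SECRET (17 characters)
```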

Now you’re ready to spin up the machine: create a configuration file for it…

{
  "name": "remark42-0",
  "config": {
    "image": "umputun/remark42:latest",
    "env": {
      "SITE": "",
      "REMARK_URL": "https://${app_name}.fly.dev",
      "ALLOWED_HOSTS": "'self',",
      "AUTH_SAME_SITE": "none",
      "AUTH_ANON": "true",
      "AUTH_EMAIL_ENABLE": "true",
      "AUTH_EMAIL_FROM": "Andrea's Blog <>",
      "AUTH_EMAIL_SUBJ": "Andrea's Blog - Email Confirmation",
      "NOTIFY_USERS": "email",
      "NOTIFY_ADMINS": "email",
      "NOTIFY_EMAIL_FROM": "Andrea's Blog <>",
      "ADMIN_SHARED_EMAIL": ""
    },
    "mounts": [
      {
        "volume": "${volume_id}",
        "path": "/srv/var"
      }
    ],
    "services": [
      {
        "ports": [
          {
            "port": 443,
            "handlers": ["tls", "http"]
          },
          {
            "port": 80,
            "handlers": ["http"]
          }
        ],
        "protocol": "tcp",
        "internal_port": 8080
      }
    ],
    "checks": {
      "httpget": {
        "type": "http",
        "port": 8080,
        "method": "GET",
        "path": "/ping",
        "interval": "15s",
        "timeout": "10s"
      }
    },
    "metadata": {
      "fly_platform_version": "v2"
    }
  }
}

…and give it to

curl -X POST \
  -H "Authorization: Bearer $(<~/fly-token)" \
  -H 'Content-Type: application/json' \
  -d @config.json

There’s a lot here, so let me break it down for you:

  • "image": "umputun/remark42:latest": this is the Docker image for Remark42.

  • "env": { ... }: these are all the environment variables to pass to our container. They are briefly documented on the Remark42 website, and here’s a bit more detailed explanation of some of them:

    • "SITE": "": this is the internal identifier for the site, it can be an arbitrary string, it won’t be visible, and you can omit it.

    • "REMARK_URL": "https://${app_name}": this is the URL where Remark42 will be serving requests from. I set it to the app endpoint. It’s important that you do not put a trailing slash, or Remark42 will error out later on. It’s also important that the protocol (http or https) matches your blog’s protocol, or Remark42 will refuse to display comments (this makes local testing a bit annoying).

    • "ALLOWED_HOSTS": "'self',": this is the list of sources that will be put into the Content-Security-Policy: frame-ancestors header of HTTP responses. Essentially, this defines where the Remark42 comments can be displayed.

    • "AUTH_SAME_SITE": "none": this disables the “same site” policy for cookies. Disabling it is necessary because, in my setup, comments are served from one domain (the Fly.io app) into pages on another domain (my blog).

    • "AUTH_ANON": "true": allows anonymous commenters. You may or may not want it.

    • "AUTH_EMAIL_ENABLE": "true" and friends: allows email-based authentication of commenters.

    • "NOTIFY_USERS": "email": allows readers and commenters to be notified of new comments via email.

    • "NOTIFY_ADMINS": "email" and "ADMIN_SHARED_EMAIL": "": makes Remark42 send me an email every time there’s a new comment.

  • "mounts": [ ... ]: this tells Fly.io to attach the volume that you created earlier to the container at the path /srv/var, which is what Remark42 uses to store its database as well as daily backups.

  • "services": [ ... ]: this tells Fly.io what to expose through the load balancer. With the configuration that I provided, the endpoint (${app_name}.fly.dev) will provide both HTTP and HTTPS to the internet. However, the load balancer will talk to the machine over plain HTTP on port 8080 (meaning that TLS is terminated at the load balancer).

    I think in the future I will set up certbot inside the container so that I can do TLS termination on the machine, but not today.

  • "checks": { ... }: this tells Fly.io to check whether the Remark42 daemon is healthy by using its /ping endpoint.

  • "metadata": { "fly_platform_version": "v2" }: this tells Fly.io to use a “V2 machine”, or something like that. Setting this metadata is very important, or certain things won’t work later on. The documentation doesn’t tell you to do it, but it is needed if you want to update the environment variables or the secrets inside the machine.

Note that all of this configuration can be changed at any time, so if you make any mistakes or you just want to experiment, you don’t have to overly worry. You can even destroy your machine and recreate it from scratch if you want.

To view the configuration of an existing machine use the following:

curl \
  -H "Authorization: Bearer $(<~/fly-token)" \

And to update it:

curl -X POST \
  -H "Authorization: Bearer $(<~/fly-token)" \
  -H 'Content-Type: application/json' \
  "${app_name}/machines/${machine_id}" \
  -d @new-config.json

I was also successful at changing configuration using fly machines update, although it can’t be used for everything (for example: it can be used to add or change environment variables, but not to remove them).

Testing the setup

If everything went well, you should be able to interact with Remark42 at https://${app_name}.fly.dev. This should let you read and post new comments.

Configuring Remark42 to send emails

For sending emails, I chose to use Elastic Email, an email-delivery service that supports SMTP with STARTTLS. Creating an Elastic Email account, setting up DKIM and SPF, and obtaining SMTP credentials was extremely easy, so I won’t cover it here.

Setting up email delivery with Remark42 is pretty easy once you have the SMTP credentials. Set the necessary (non-secret) configuration like this:

fly machines update ${machine_id} --app=${app_name} \
  -e SMTP_HOST=... \
  -e SMTP_PORT=2525 \
  -e SMTP_STARTTLS=true \
  -e SMTP_USERNAME=...

And then set the SMTP password as a secret:

fly secrets set --app=${app_name} SMTP_PASSWD='a very secret password'

Both machines update and secrets set automatically restart the machine so that Remark42 can pick up the new configuration. Pretty neat, heh?

Configuring authentication providers for Remark42

Remark42 can let your users log in from a variety of providers, including: GitHub, Google, Facebook, Telegram, and more. There are specific instructions for each provider in the Remark42 documentation. There’s really not much to add on top of what’s already written there. Just remember: set non-secret environment variables with fly machines update, and set secrets with fly secrets set.

Creating an administrator account

If you want to be able to moderate comments, you’ll need an administrator account. With Remark42, this is a 3 step process: first you create an account (like any other user would do), then you copy the ID of the user you just created, and lastly you add that user ID to the ADMIN_SHARED_ID environment variable:

fly machines update ${machine_id} --app=${app_name} -e ADMIN_SHARED_ID=...

A step-by-step guide is in the Remark42 documentation.

Importing comments from Disqus (or any other platform)

In order to import comments into Remark42, first you need to temporarily set an “admin password” for Remark42 (here the word “admin” has nothing to do with the administrator account you just created; it’s a totally separate concept):

fly secrets set --app=${app_name} ADMIN_PASSWD='this is super secret'

You can now copy your Disqus (or equivalent) backup onto the machine and import it. I could not find an easy way to do it through flyctl (but I also did not spend too much time looking for an option). I did, however, find a way to open a console on the machine, so what I did was simply copy and paste the base64-encoded backup:

# on my laptop
base64 < disqus-export.xml.gz  # copy the output

# attach to the machine
fly console --app=${app_name} --machine=${machine_id}

# on the machine
cd /srv/var
base64 -d > disqus-export.xml.gz  # paste the output from earlier
gunzip disqus-export.xml.gz
import --provider=disqus --file=/srv/var/disqus-export.xml --url=http://localhost:8080
rm disqus-export.xml

Note: importing comments will clear the Remark42 database. Any pre-existing comment will be deleted. See also the Remark42 documentation for more information.

Another note: for some reason, my Disqus export referenced my blog posts using http:// URLs instead of https://. Because of that, Remark42 correctly imported all the Disqus comments into its database, but would not display them under my blog posts. Remember: Remark42 is very picky when it comes to URL schemes. To fix this, I simply created a backup from Remark42, modified the backup to change all http entries to https, and then restored the backup. This was quite trivial, given that the format used by the backups is extremely intuitive.
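The http-to-https rewrite itself can be a one-liner. A sketch, assuming the decompressed backup is (roughly) newline-delimited JSON with one comment per line; the file name and record layout below are fabricated for illustration:

```shell
# Hypothetical sketch: rewrite http:// URLs to https:// in a Remark42 backup.
# We fabricate a one-line stand-in file so the command can be demonstrated
# anywhere; on the real backup, gunzip first, then run the same sed.
printf '{"locator":{"url":"http://example.com/post/"}}\n' > backup.jsonl
sed -i 's|"http://|"https://|g' backup.jsonl
cat backup.jsonl
```

Anchoring the pattern on the opening quote keeps the substitution from touching anything that merely mentions http:// inside comment text.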

Final remarks

That was it!

Setting up Remark42 on Fly.io wasn’t particularly difficult, but it took me way more time than expected due to the poor documentation of both Remark42 and Fly.io. I had to resort to trial and error multiple times to make things work.

One big drawback of Remark42 is that it does not allow replication. This means that:

  • if the machine running my instance of Remark42 goes down, or becomes unreachable for any reason, there will be downtime;
  • some people who are “far away” from the Remark42 instance may experience higher latency than others;
  • I need to periodically take backups of my Remark42 database and copy it somewhere, otherwise if my single storage volume is lost, I will lose all the comments.

Nonetheless I think both Remark42 and Fly.io are very interesting products. I love Remark42’s features, and Fly.io is easy enough to use once you get familiar with it. I think I’m gonna stick with them for a long time.

on September 19, 2023 02:12 AM

September 18, 2023

Just a very short post to mention that I’ve enabled ActivityPub on my blog, making it possible to follow it from Mastodon or other Fediverse platforms!

For good measure I’ve also set up a redirect to my current Mastodon account, which should make for easier discovery.

on September 18, 2023 06:37 PM

September 14, 2023

Witch Wells Az

I had to make the hard decision to put snaps on hold. I am working odd jobs to “stay alive” and to pay for my beautiful scenery. My “Project” should move forward, as I have done everything asked of me, including finding a super awesome management team to take us all the way through. But until it is signed, sealed and delivered, I have to survive. In my free time I am helping out Jonathan and working on KDE neon; he has done so much for me over the years, it is the least I can do!

So without further ado! Carlos and I have been working diligently on the new Frameworks 5.110, Plasma 5.27.8, and Applications 23.08.1! They are complete and ready in /user! With that came a great many fixes to QML dependencies and packaging updates. Current users can update freely, and the Docker images and ISO are building now. We are working on Unstable… as it is a bit unstable right now, but improving 🙂

On the Debian front I am wrapping up packaging of new upstream release of squashfuse.

Thanks for stopping by!

If you can spare some change, consider a donation

Thank you!

on September 14, 2023 05:06 PM

E264 Tudo a Postos

Podcast Ubuntu Portugal

The pre-party atmosphere is already in the air: chestnuts roasting, cod in the oven, wine running from the barrel into empty glasses, guests noisily arriving little by little. Which party? The Free Software Festival and Ubucon Portugal, of course! Tiago came back from his secret mission with one less weight on his shoulders: he freed himself from Vodafone and had time to play with Chromium. Miguel grumbled a bit, and Diogo proudly showed off his large number of Zigbees. At the end, we salivated a little in anticipation of the Festival's programme and of the talks and workshops that we really, really, really want to see.

You know the drill: listen, subscribe and share!


You can support the podcast using our Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of it for 15 dollars, or just parts of it depending on whether you pay 1 or 8 dollars. We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying as much as you want. If you are interested in other bundles not listed in the notes, use the link and you will be supporting us too.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo, and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used in it are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on September 14, 2023 12:00 AM

September 12, 2023

Building a NAS

Jo Shields

The status quo

Back in 2015, I bought an off-the-shelf NAS, a QNAP TS-453mini, to act as my file store and Plex server. I had previously owned a Synology box, and whilst I liked the Synology OS and experience, the hardware was underwhelming. I loaded up the successor QNAP with four 5TB drives in RAID10, and moved all my files over (after some initial DoA drive issues were handled).

QNAP TS-453mini product photo

That thing has been in service for about 8 years now, and it’s been… a mixed bag. It was definitely more powerful than the predecessor system, but it was clear that QNAP’s OS was not up to the same standard as Synology’s – perhaps best exemplified by “HappyGet 2”, the QNAP webapp for downloading videos from streaming services like YouTube, whose icon is a straight rip-off of StarCraft 2. On its own, meaningless – but a bad omen for overall software quality.

The logo for QNAP HappyGet 2 and Blizzard’s StarCraft 2 side by side

Additionally, the embedded Celeron processor in the NAS turned out to be an issue in some cases. It turns out that, when playing back videos with subtitles, most Plex clients do not support subtitles properly – instead they rely on the Plex server doing JIT transcoding to bake the subtitles directly into the video stream. I discovered this with some Blu-Ray rips of Game of Thrones – some episodes would play back fine on my smart TV, but episodes with subtitled Dothraki speech would play at only 2 or 3 frames per second.

The final straw was a ransomware attack, which went through all my data and locked every file below a 60MiB threshold. Practically all my music, gone. A substantial collection of downloaded files, all gone. Some of these files had been carried around since my college days – digital rarities, or at least digital detritus I felt a real sense of loss at having to replace. This episode was caused by ransomware targeting specific vulnerabilities in the QNAP OS, not by an error on my part.

So, I decided to start planning a replacement with:

  • A non-garbage OS, whilst still being a NAS-appliance type offering (not an off-the-shelf Linux server distro)
  • Full remote management capabilities
  • A small form factor comparable to off-the-shelf NAS
  • A powerful modern CPU capable of transcoding high resolution video
  • All flash storage, no spinning rust

At the time, no consumer NAS offered everything (The Asustor FS6712X exists now, but didn’t when this project started), so I opted to go for a full DIY rather than an appliance – not the first time I’ve jumped between appliances and DIY for home storage.

Selecting the core of the system

There aren’t many companies which will sell you a small motherboard with IPMI. Supermicro is a bust, so is Tyan. But ASRock Rack, the server division of third-tier motherboard vendor ASRock, delivers. Most of their boards aren’t actually compliant Mini-ITX size, they’re a proprietary “Deep Mini-ITX” with the regular screw holes, but 40mm of extra length (and a commensurately small list of compatible cases). But, thankfully, they do have a tiny selection of boards without the extra size, and I stumbled onto the X570D4I-2T, a board with an AMD AM4 socket and the mature X570 chipset. This board can use any AMD Ryzen chip (before the latest-gen Ryzen 7000 series); has built in dual 10 gigabit ethernet; IPMI; four (laptop-sized) RAM slots with full ECC support; one M.2 slot for NVMe SSD storage; a PCIe 16x slot (generally for graphics cards, but we live in a world of possibilities); and up to 8 SATA drives OR a couple more NVMe SSDs. It’s astonishingly well featured, just a shame it costs about $450 compared to a good consumer-grade Mini ITX AM4 board costing less than half that.

I was so impressed with the offering, in fact, that I crowed about it on Mastodon and ended up securing ASRock another sale, with someone else looking into a very similar project to mine around the same timespan.

The next question was the CPU. An important feature of a system expected to run 24/7 is low power, and AM4 chips can consume as much as 130W under load, out of the box. At the other end, some models can require as little as 35W under load – the OEM-only “GE” suffix chips, which are readily found for import on eBay. In their “PRO” variant, they also support ECC (all non-G Ryzen chips support ECC, but among the G chips only the PRO variants do). The top of the range 8-core Ryzen 7 PRO 5750GE is prohibitively expensive, but the slightly weaker 6-core Ryzen 5 PRO 5650GE was affordable, and one arrived quickly from Hong Kong. Supplemented with a couple of cheap 16 GiB SODIMM sticks of DDR4 PC-3200 direct from Micron for under $50 a piece, that left only cooling as an unsolved problem to get a bootable test system.

The official support list for the X570D4I-2T only includes two rackmount coolers, both expensive and hard to source. The reason for such a small list is the non-standard cooling layout of the board – instead of an AM4 hole pattern with the standard plastic AM4 retaining clips, it has an Intel 115x hole pattern with a non-standard backplate (Intel 115x boards have no backplate; the stock Intel 115x cooler attaches to the holes with push pins). As such, every single cooler compatibility list excludes this motherboard. However, the backplate is only secured with a mild glue – with minimal pressure and a plastic prying tool it can be removed, giving compatibility with any 115x cooler (which covers basically every CPU cooler from the last decade or more). I picked an oversized low-profile Thermalright AXP120-X67, hoping that its 120mm fan would cool the nearby MOSFETs and X570 chipset too.

Thermalright AXP120-X67, AMD Ryzen 5 PRO 5650GE, ASRock Rack X570D4I-2T, all assembled and running on a flat surface

Testing up to this point

Using a spare ATX power supply, I had enough of a system built to explore the IPMI and UEFI instances, and run MemTest86 to validate my progress. The memory test ran without a hitch and confirmed the ECC was working, although it also showed that the memory was only running at 2933 MT/s instead of the rated 3200 MT/s (a limit imposed by the motherboard, as higher speeds are considered overclocking). The IPMI interface isn’t the best I’ve ever used by a long shot, but it’s minimally viable and allowed me to configure the basics and boot from media entirely via a web browser.

Memtest86 showing test progress, taken from IPMI remote control window

There was one sad discovery, however, which I’ve never seen documented before, concerning PCIe bifurcation.

With PCI Express, you have a number of “lanes” which are allocated in groups by the motherboard and CPU manufacturer. For Ryzen prior to Ryzen 7000, that’s 16 lanes in one slot for the graphics card; 4 lanes in one M.2 connector for an SSD; then 4 lanes connecting the CPU to the chipset, which can offer whatever it likes for peripherals or extra lanes (bottlenecked by that shared 4x link to the CPU, if it comes down to it).

It’s possible, with motherboard and CPU support, to split PCIe groups up – for example, an 8x slot could be split into two 4x slots (eg allowing two NVMe drives in an adapter card – NVMe drives these days all use 4x). However, with a “Cezanne” Ryzen with integrated graphics, the 16x graphics card slot cannot be split into four 4x slots (ie used for four NVMe drives) – the most bifurcation it allows is 8x+4x+4x, which is useless in a NAS.

Screenshot of PCIe 16x slot bifurcation options in UEFI settings, taken from IPMI remote control window

As such, I had to abandon any ideas of an all-NVMe NAS I was considering: the 16x slot split into four 4x, combined with two 4x connectors fed by the X570 chipset, for a total of 6 NVMe drives. 7.6TB U.2 enterprise disks are remarkably affordable (cheaper than consumer SATA 8TB drives), but alas, I was locked out by my 5650GE. Thankfully I found out before spending hundreds on a U.2 hot swap bay. The NVMe setup would have been nearly 10x as fast as SATA SSDs, but at least the SATA SSD route would still outperform any spinning rust choice on the market (including the fastest 10K RPM SAS drives).

Containing the core

The next step was to pick a case and power supply. A lot of NAS cases require an SFX (rather than ATX) size supply, so I ordered a modular SX500 unit from Silverstone. Even if I ended up with a case requiring ATX, it’s easy to adapt an SFX power supply to an ATX mount, and the worst outcome is that it takes up less space in your case – hardly the worst problem to have.

That said, on to picking a case. There’s only one brand with any cachet making ITX NAS cases: Silverstone. They have three choices in an appropriate size: CS01-HS, CS280, and DS380. The problem is, these cases are all badly designed garbage. Take the CS280 as an example, the case with the most space for a CPU cooler. Here’s how close together the hotswap bay (right) and power supply (left) are:

Internal image of Silverstone CS280 NAS build. Image stolen from ServeTheHome

With actual cables connected, the cable clearance problem is even worse:

Internal image of Silverstone CS280 NAS build. Image stolen from ServeTheHome

Remember, this is the best of the three cases for internal layout, the one with the least restriction on CPU cooler height. And it’s garbage! Total hot garbage! I decided therefore to completely skip the NAS case market, and instead purchase a 5.25″-to-2.5″ hot swap bay adapter from Icy Dock, and put it in an ITX gamer case with a 5.25″ bay. This is no longer a served market – 5.25″ bays are extinct since nobody uses CD/DVD drives anymore. The ones on the market are really new old stock from 2014-2017: the Fractal Design Core 500, Cooler Master Elite 130, and Silverstone SUGO 14. Of the three, the Fractal is the best rated, so I opted to get that one – however, it seems the global supply of “new old stock” fully dried up in the two weeks between me making a decision and placing an order, leaving only the Silverstone case.

Icy Dock have a selection of 8-bay 2.5″ SATA 5.25″ hot swap chassis choices in their ToughArmor MB998 series. I opted for the ToughArmor MB998IP-B to reduce cable clutter – it requires only two SFF-8611-to-SFF-8643 cables from the motherboard to serve all eight bays, which should make airflow less of a mess. The X570D4I-2T doesn’t have any SATA ports on board; instead it has two SFF-8611 OCuLink ports, each supporting 4 PCI Express lanes OR 4 SATA connectors via a breakout cable. I had hoped to get the ToughArmor MB118VP-B and run six U.2 drives, but as I said, the PCIe bifurcation issue with Ryzen “G” chips meant I wouldn’t be able to run all six bays successfully.

NAS build in Silverstone SUGO 14, mid build, panels removed
Silverstone SUGO 14 from the front, with hot swap bay installed

Actual storage for the storage server

My concept for the system always involved a fast boot/cache drive in the motherboard’s M.2 slot, non-redundant (just backups of the config if the worst were to happen) and separate storage drives somewhere between 3.8 and 8 TB each (somewhere from $200-$350). As a boot drive, I selected the Intel Optane SSD P1600X 58G, available for under $35 and rated for 228 years between failures (or 11,000 complete drive rewrite cycles).

So, on to the big expensive choice: storage drives. I narrowed it down to two contenders: new-old-stock Intel D3-S4510 3.84TB enterprise drives, at about $200, or Samsung 870 QVO 8TB consumer drives, at about $375. I did spend a long time agonizing over the specification differences, the ZFS usage reports, the expected lifetime endurance figures, but in reality, it came down to price – $1600 of expensive drives vs $3200 of even more expensive drives. That’s 27TB of usable capacity in RAID-Z1, or 23TB in RAID-Z2. For comparison, I’m using about 5TB of the old NAS, so that’s a LOT of overhead for expansion.
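That capacity arithmetic can be sanity-checked with a quick sketch (this is not the author's planning script, just the RAID-Z parity rule applied to eight 3.84 TB drives; real ZFS metadata overhead will shave a little more off):

```python
def raidz_usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    """Usable capacity of a single RAID-Z vdev: parity costs one drive per level."""
    return (drives - parity) * drive_tb

# Eight 3.84 TB drives, as in this build
print(round(raidz_usable_tb(8, 3.84, parity=1), 2))  # 26.88 -> the "27TB" RAID-Z1 figure
print(round(raidz_usable_tb(8, 3.84, parity=2), 2))  # 23.04 -> the "23TB" RAID-Z2 figure
```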

Storage SSD loaded into hot swap sled

Booting up

Bringing it all together is the OS. I wanted an “appliance” NAS OS rather than self-administering a Linux distribution, and after looking into the surrounding ecosystems, decided on TrueNAS Scale (the beta of the 2023 release, based on Debian 12).

TrueNAS Dashboard screenshot in browser window

I set up RAID-Z1, and with zero tuning (other than enabling auto-TRIM), got the following performance numbers:

4k random writes: 19.3k IOPS / 75.6 MiB/s
4k random reads: 36.1k IOPS / 141 MiB/s
Sequential writes: 2300 MiB/s
Sequential reads: 3800 MiB/s
Results using fio parameters suggested by Huawei
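As a cross-check on those 4k numbers, IOPS and throughput should agree at a 4 KiB block size (MiB/s = IOPS × 4096 / 2^20). A small sketch, using the measured figures:

```python
def iops_to_mib_s(iops: float, block_bytes: int = 4096) -> float:
    """Convert an IOPS figure at a fixed block size to MiB/s."""
    return iops * block_bytes / 2**20

print(round(iops_to_mib_s(19_300), 1))  # 75.4 -> consistent with 19.3k IOPS / 75.6 MiB/s
print(round(iops_to_mib_s(36_100), 1))  # 141.0 -> consistent with 36.1k IOPS / 141 MiB/s
```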

And for comparison, the maximum theoretical numbers quoted by Intel for a single drive:

4k random writes: 16k IOPS / ?
4k random reads: 90k IOPS / ?
Sequential writes: 280 MiB/s
Sequential reads: 560 MiB/s
Numbers quoted by Intel SSD successors Solidigm.

Finally, the numbers reported on the old NAS with four 7200 RPM hard disks in RAID 10:

4k random writes: 430 IOPS / 1.7 MiB/s
4k random reads: 8006 IOPS / 32 MiB/s
Sequential writes: 311 MiB/s
Sequential reads: 566 MiB/s

Performance seems pretty OK. There’s always going to be an overhead to RAID. I’ll settle for the 45x improvement on random writes vs. its predecessor, and 4.5x improvement on random reads. The sequential write numbers are gonna be impacted by the size of the ZFS cache (50% of RAM, so 16 GiB), but the rest should be a reasonable indication of true performance.

It took me a little while to fully understand the TrueNAS permissions model, but I finally got Plex configured to access data from the same place as my SMB shares, which have anonymous read-only access or authenticated write access for myself and my wife, working fine via both Linux and Windows.

And… that’s it! I built a NAS. I intend to add some fans and more RAM, but that’s the build. Total spent: about $3000, which sounds like an unreasonable amount, but it’s actually less than a comparable Synology DiskStation DS1823xs+ which has 4 cores instead of 6, first-generation AMD Zen instead of Zen 3, 8 GiB RAM instead of 32 GiB, no hardware-accelerated video transcoding, etc. And it would have been a whole lot less fun!

The final system, powered up

(Also posted on PCPartPicker)

on September 12, 2023 09:33 PM

September 05, 2023

Using Two GPUs at Once

Ubuntu Podcast from the UK LoCo

Exploring the secrets of the TPM, running Radeon and NVIDIA GPUs in one PC, and getting ongoing data from an EV.
on September 05, 2023 07:15 PM

I received the following question on my AMA section and thought of writing a blog post instead of answering in a few lines. I like Linux but I do not enjoy competitive programming (sport programming). How can I enjoy competitive programming? Thank you for your question. If you want to enjoy competitive programming more, even […]

The post Tips for Competitive Programmers appeared first on Cyber Kingdom of Russell John.

on September 05, 2023 06:52 PM

Readers of this blog might have noticed a few changes recently. For example, I’ve been working on improving the look of the blog (maybe with questionable results), as well as improving the experience on mobile. But one of the biggest changes that perhaps some have noticed is that all of the comments on all of my articles have suddenly disappeared since February 2023. Now, almost 7 months later, all comments have finally been restored.

The reason for this 7-month blackout of comments is that I decided to change the platform that hosts them: I got rid of Disqus, and eventually replaced it with Remark42. Here I will describe why I did it. There will be another (more technical) blog post about my new setup.


My blog is a static website that has been using Disqus as a commenting platform for a long time: since at least 2015 (8 years ago), or maybe even longer (back when my blog was on WordPress). Disqus at that time was gaining a lot of popularity, it was free, and it was very attractive to me because it was easy to set up. I might be wrong, but at that time, Disqus did not look to me like the data-savvy, privacy-invading, revenue-oriented company that it is today. Maybe I was just naive, but either way I kept using Disqus all these years without paying too much attention to it: after all, it worked, so why would I spend any time thinking about it?

Advertisements on my blog!?

Fast-forward to February 2023: one day, a person very close to me, with the utmost kindness that characterizes her, came to me and said: “the ads on your blog suck! They’re the worst kind of ads!”

At the beginning I had no idea what she was talking about. I have never intentionally run any sort of advertisements on my blog. I hate advertisements!

Then I realized what was going on: precisely because I hate advertisements, I run ad-blockers on all my devices. Maybe there were ads on my blog, but I never noticed because I block those ads. The only third-party service that I used to run on my blog was Disqus, so I immediately turned my attention to it. I disabled my ad-blockers, refreshed my blog, scrolled down to the comments section, and… the sad truth was revealed: Disqus was showing ads to my readers. And yes, those ads were some of the worst kind of ads.

And I knew that, together with those ads, there was massive tracking, collection of data, and maybe even data sharing with third-parties. People who know me, know that I deeply care about privacy, and having Disqus on my blog tracking my readers was the complete opposite of what I wanted.

I was extremely disappointed.

Leaving Disqus

I did some quick research and discovered that (1) I could not disable Disqus ads without paying, and (2) Disqus was no longer the nice commenting platform that I met in 2015. It had mutated into something obsessed with revenue, and it was clear that their business model was based entirely on ads. My fears about tracking were quickly confirmed. Let’s just say that Disqus turned out to be something that does not really align with my values.

I made the difficult decision to completely remove Disqus from my blog on the same day. But I firmly believe that a blog without comments is not a blog, and so I had to find an alternative.

Looking for a new platform

I quickly started to look for new commenting platforms that could replace Disqus. The basic criteria that this new platform had to meet were (in no particular order):

  • be free of charge
  • display only comments, no ads
  • respect the privacy of users
  • allow users to comment anonymously (at least to some extent)

The last time that I searched for a commenting platform was in 2015. Back in those days, there were not many solutions, and that’s one reason why I ended up with Disqus. I thought: 8 years have passed since then, surely the space must have improved, and alternatives must be proliferating, right? Well, no, not really. I struggled to find a managed platform that met those criteria.

I did find some solutions that were using Mastodon or GitHub as a backend to store comments, but I did not like at all the idea of forcing my readers to have a Mastodon or GitHub account to comment on my blog.

Trying Cactus Comments

One platform that came up multiple times during my search was Cactus Comments. Quoting the homepage of the project:

Cactus Comments is a federated comment system built on Matrix. It respects your privacy, and puts you in control. The entire thing is completely free and open source.

That sounded interesting, although I did not really know what Matrix was to begin with (if you, like me earlier this year, do not know what Matrix is: it is an open, decentralised communication protocol, with chat clients somewhat similar to Slack built on top of it). I thought that I could give Cactus a try. So, a few days after removing Disqus, I onboarded on Cactus Comments.

Onboarding was not hard, but it was not trivial either, mostly because I was not familiar with Matrix. The frontend shown to readers was a bit disappointing: even though Matrix supports threads, Cactus Comments does not. Overall, the number of features that commenters could use was scarce: people could only post a comment, and not much else; they had no ability to edit their comments, or delete them. But it did allow people to post even without creating a Matrix account, and that was great for me.

The “administrative interface” (if we can call it this way) was also disappointing. All the administration and moderation had to be done through Matrix, sometimes by communicating with a bot, and could not be done by clicking buttons on my blog. Every blog post had to have its own Matrix channel and I (the author) had to manually join each channel in order to get some sort of notification for new comments.

I needed a Matrix client to spot new comments, and to perform moderation actions, and I chose Element for that purpose. Sadly, Element was totally unreadable on small displays like my phone. And apparently there’s no web-based Matrix client that works well on mobile. I could have installed an app for my phone, but I hate installing apps, especially for activities that can in theory be done through a web browser.

Cactus Comments also did not support importing comments from Disqus, so moving to this platform meant that all the conversations that happened over the years on my blog were lost. But because Cactus Comments is free & open source software, I thought that I could add support for importing comments from Disqus if I decided to settle with Cactus Comments, so this was not a deal breaker.

Overall my experience with Cactus Comments was not great, but I was willing to accept that in exchange for a platform that was free, managed by someone else, and respecting the privacy of my readers.

There was however one big problem that eventually led me to remove Cactus Comments from my blog: Cactus did not support sending email notifications. This meant that if you left a comment on this blog, I would not get notified. And if I responded to your comment, you would not get notified. In order to spot new comments, I had to check the Matrix channels periodically, and readers had to check my blog periodically. Maybe if I installed a Matrix app I could have received push notifications on my phone, but that’s not what I wanted, and it wouldn’t have solved the problem for my commenters anyway.

I was pretty bad at checking for new comments on Cactus. What happened multiple times is that people would leave comments or questions on my blog, but I wouldn’t notice until 2 weeks later. At that point, it was pointless for me to respond because so much time had passed that those commenters surely wouldn’t be checking my blog for a response…

I would say that with Cactus I had a blog that allowed comments, but did not allow conversations. Not allowing conversations made the comments pointless in my opinion. I might as well have had no comments at all: at least people would stop leaving questions there that were destined to be unanswered, and instead they would have emailed me directly.

Meet Remark42

Between August and September 2023, I decided that I had to restart my quest for a commenting platform. This time I knew that I had to look for a solution that I would install and manage myself. I was not super-excited about it, but my first search for a Disqus alternative had shown that there was no managed solution that I really liked.

Initially I thought about writing my own commenting platform in Rust with a key-value store, but then I figured that if I looked for a software to install instead of a managed platform, maybe I could find something I liked.

After some research, I decided to go with Remark42. There were a few contenders, but Remark42 won because it looked like it had all of the features I needed, and more:

  • it supports sending of email notifications, both to me, and to my readers;
  • it supports various authentication mechanisms, including: email, GitHub, Google, Facebook, etc (it’s nice to give commenters a choice);
  • it supports leaving comments anonymously, without logging in or leaving an email address;
  • commenters can edit and delete comments;
  • it supports importing comments from Disqus;
  • in fact, it supports importing comments from any platform: the format it uses for restoring backups is JSON-based and very easy to replicate (in theory I could import the comments from Cactus, even though I have not done that yet);
  • it’s privacy-focused, and it looks like it’s implemented with security in mind.
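To illustrate how simple a JSON-based, newline-delimited import format is to replicate, here is a rough sketch of converting comments from some old platform. The Remark42-side field names here are illustrative assumptions, not authoritative; check Remark42's actual backup format before relying on them:

```python
import json

# Hypothetical minimal comment records exported from some old platform
old_comments = [
    {"id": "c1", "author": "alice", "body": "Nice post!", "url": "/2023/09/example/"},
]

def to_remark42_line(comment: dict, site: str = "example-site") -> str:
    # Field names are illustrative; verify against Remark42's real backup
    # format before importing anything.
    record = {
        "id": comment["id"],
        "text": comment["body"],
        "user": {"id": comment["author"], "name": comment["author"]},
        "locator": {"site": site, "url": comment["url"]},
    }
    return json.dumps(record)

# One JSON object per line, newline-delimited
backup = "\n".join(to_remark42_line(c) for c in old_comments)
print(backup)
```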

I decided to host it on, which offers some compute and storage capacity for free. I was introduced to on Mastodon, but I had never used it before.

For sending emails, I chose Elastic Email, which also offers the features I needed for free. I also had never used this service before, and did not know much about it: it showed up while searching for a free SMTP provider. Elastic Email describes itself as a marketing service, which does not sound great from the point of view of privacy, but I figured that all the emails being sent here contain only public information (all comments are public after all), so there’s not much to protect besides email addresses. And people are free to use temporary email providers like Mailinator if they don’t want to leave their real email, or even leave no email address at all. (Should I be concerned about Elastic Email, like I should have been concerned about Disqus? Let me know… in the comments below.)

Setting up Remark42 on was relatively easy, but it took me way longer than I had expected, mostly because the documentation was quite inconsistent and confusing, and also the Remark42 documentation was not fully clear. In the end I managed to make everything work and I’m pretty happy with the setup I ended up with. I’m going to publish details about my setup in a future blog post, in case you’re interested (update: said blog post is now published).


That’s all I have to say for now! Remark42 has been running on my blog for a few days, so it’s too early for me to say whether I’ll stick with it or I will look for a new solution, but so far it looks very promising, and I’m very happy with it. I hope this is the beginning of a long journey!

on September 05, 2023 08:30 AM

September 04, 2023

Xubuntu Development Update September 2023

September has arrived and cooler days are finally ahead of us in the northern hemisphere. Development on the 23.10 release, "Mantic Minotaur", has been progressing nicely with numerous updates to report.

Color Emoji Have Arrived

Font updates made to the desktop-common seed on August 21 added the fonts-noto-color-emoji package to all Ubuntu flavors, including Xubuntu. This enables color emoji in any application using GTK 3 or 4. I’ve written about color emoji in Xubuntu previously, in case you want to learn more.

Fixed Support for Bluetooth Headphones

When we switched from PulseAudio to PipeWire in 23.04, our changes to make Bluetooth headphones work didn’t take. While I had added the required libspa-0.2-bluetooth package to our seed, I didn’t confirm that it was included in the generated metapackages (#2028530, #2017818). Unit 193 dug in and resolved this issue, fixing support for 23.10.

mate-polkit replaces policykit-1-gnome

Here’s a feature that we hope you don’t notice! The policykit-1-gnome package is no longer maintained and may be dropped in a future Debian release. After the mate-polkit package was patched to support Xfce, we made the switch! This should be a completely transparent transition with no issues. But do let us know if anything doesn’t work right.

So Many Package Updates

The Xfce, GNOME, and other desktop components have received numerous package updates since the start of 23.10. Review the list below for any surprises!

  • baobab 44.0 → 45 alpha
  • gnome-font-viewer 44.0 → 45 alpha
  • gnome-software 44.0 → 45 beta
  • libreoffice 7.5.2 → 7.5.6
  • libxfce4ui 4.18.2 → 4.18.4
  • mousepad 0.5.10 → 0.6.1
  • rhythmbox 3.4.6 → 3.4.7
  • ristretto 0.12.4 → 0.13.1
  • sgt-puzzles 20230122.806ae71 → 20230410.71cf891
  • thunar 4.18.4 → 4.18.6
  • thunar-archive-plugin 0.5.0 → 0.5.1
  • thunar-media-tags-plugin 0.3.0 → 0.4.0
  • thunderbird 102.10.0 → 115.2.0
  • transmission 3.00 → 4.0.2
  • tumbler 4.18.0 → 4.18.1
  • xfburn 0.6.2 → 0.7.0
  • xfce4-clipman 1.6.2 → 1.6.4
  • xfce4-cpugraph-plugin 1.2.7 → 1.2.8
  • xfce4-dict 0.8.4 → 0.8.5
  • xfce4-indicator-plugin 2.4.1 → 2.4.2
  • xfce4-mailwatch-plugin 1.3.0 → 1.3.1
  • xfce4-netload-plugin 1.4.0 → 1.4.1
  • xfce4-notifyd 0.7.3 → 0.8.2
  • xfce4-panel 4.18.2 → 4.18.4
  • xfce4-panel-profiles 1.0.13 → 1.0.14
  • xfce4-power-manager 4.18.1 → 4.18.2
  • xfce4-pulseaudio-plugin 0.4.5 → 0.4.7
  • xfce4-screensaver 4.16.0 → 4.18.2
  • xfce4-screenshooter 1.10.3 → 1.10.4
  • xfce4-session 4.18.1 → 4.18.3
  • xfce4-verve-plugin 2.0.1 → 2.0.3
  • xfce4-weather-plugin 0.11.0 → 0.11.1
  • xfce4-whiskermenu-plugin 2.7.2 → 2.7.3
  • xfconf 4.18.0 → 4.18.1
  • xubuntu-desktop 2.248 → 2.250

Testing Xubuntu 23.10

If you’d like to check out the latest developments, go download one of the daily ISO images and give it a spin. Don’t forget to report issues or leave test results!

Thanks to everybody in the community continuing to support our project!

on September 04, 2023 12:13 PM

August 24, 2023

I’m happy to announce that Netplan version 0.107 is now available on GitHub and is soon to be deployed into a Linux installation near you! Six months and more than 200 commits after the previous version (including a .1 stable release), this release is brought to you by 8 free software contributors from around the globe.


Highlights of this release include the new configuration types for veth and dummy interfaces:

network:
  version: 2
  virtual-ethernets:
    veth0:
      peer: veth1
    veth1:
      peer: veth0

Furthermore, we implemented CFFI-based Python bindings on top of libnetplan’s API, which can easily be consumed by 3rd-party applications (see full example):

from netplan import Parser, State, NetDefinition
from netplan import NetplanException, NetplanParserException

parser = Parser()

# Parse the full, existing YAML config hierarchy
parser.load_yaml_hierarchy(rootdir='/')

# Validate the final parser state
state = State()
try:
    # validation of current state + new settings
    state.import_parser_results(parser)
except NetplanParserException as e:
    print('Error in', e.filename, 'Row/Col', e.line, e.column, '->', e.message)
except NetplanException as e:
    print('Error:', e.message)

# Walk through ethernet NetdefIDs in the state and print their backend
# renderer, to demonstrate working with NetDefinitionIterator &
# NetDefinition
for netdef in state.ethernets.values():
    print('Netdef', netdef.id, 'is managed by:', netdef.backend)
    print('Is it configured to use DHCP?', netdef.dhcp4 or netdef.dhcp6)



on August 24, 2023 12:59 PM

August 23, 2023


Jo Shields

Apparently it’s nearly four years since I last posted to my blog. Which is, to a degree, the point here. My time, and priorities, have changed over the years. And this led me to the decision that my available time and priorities in 2023 aren’t compatible with being a Debian or Ubuntu developer, and realistically, haven’t been for years. As of earlier this month, I quit as a Debian Developer and Ubuntu MOTU.

I think a lot of my blogging energy got absorbed by social media over the last decade, but with the collapse of Twitter and Reddit due to mismanagement, I’m trying to allocate more time for blog-based things instead. I may write up some of the things I’ve achieved at work (.NET 8 is now snapped for release Soon™). I might even blog about work-adjacent controversial topics, like my changed feelings about the entire concept of distribution packages. But there’s time for that later. Maybe.

I’ll keep tagging vaguely FOSS related topics with the Debian and Ubuntu tags, which cause them to be aggregated in the Planet Debian/Ubuntu feeds (RSS, remember that from the before times?!) until an admin on those sites gets annoyed at the off-topic posting of an emeritus dev and deletes them.

But that’s where we are. Rather than ignore my distro obligations, I’ve admitted that I just don’t have the energy any more. Let someone less perpetually exhausted than me take over. And if they don’t, maybe that’s OK too.

on August 23, 2023 03:52 PM

August 18, 2023

Making and maintaining a distribution, be it an independent distro or a flavor of a bigger one such as Ubuntu Budgie, requires a good infrastructure setup to make sure that everything is reliable and functioning like a well-oiled machine. In the case of Ubuntu Budgie, we made sure to build a setup that is both easy to use and sturdy. The first step, and the most visible part, is this website that you are...


on August 18, 2023 11:11 AM

August 06, 2023

Numeric Pangrams

Stuart Langridge

A few days ago I had an interesting maths thought which I dropped on Mastodon:

Today’s interesting maths problem to think about: what is the largest total from a correct maths equation which uses any number of +*- symbols, one =, and the digits from 0-9 once each?

For example, if we’re doing it for digits 0-4, then 40-12=3 isn’t valid (answer is wrong), 1+3=4 isn’t valid (correct but doesn’t use 0 or 2), 12/4+0=3 is good, and 4*3+0=12 is better (because 12 is higher than 3).

Is there an interesting way to solve this which isn’t “have a computer check all possibilities”?

Disappointingly, I couldn’t think of an interesting way to solve it which wasn’t “have a computer check all the possibilities”, so I had a computer check all the possibilities.

I’m not publishing the script, because it’s not very good; in particular, I’m sure this is the sort of thing that someone could make run in a few seconds if they were clever. I wasn’t feeling clever, so I brute-forced it and just left it to run for a few hours.

These are numeric pangrams, I suppose you might call them. A pangram is a sentence which uses all the letters in the alphabet at least once: there are a bunch of famous English examples, starting with “The quick brown fox jumps over a lazy dog” and getting less and less comprehensible as they get shorter. Vexing quizzes show up a lot. Anyway, after I thought up the term “numeric pangrams” I checked to see if anybody else had already done so and of course Greg Ross at Futility Closet did sixteen years ago. His examples are the digits from 1-9, though (his biggest example is 4 × 1963 = 7852), and my 0-9 suggestion allows for much larger total values.

Anyway, the best equations I could come up with which use all the digits from 0-9 were:

  • 42 × 8 × 91 = 30576
  • 46 × 715 = 32890
  • 4 × 9127 = 36508
  • 63 × 927 = 58401
  • 7 × 9403 = 65821
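Each of these can be machine-checked: the product must be correct, and the digits of every number involved must cover 0-9 exactly once. A minimal checker sketch (not the author's unpublished brute-force script):

```python
from math import prod

def is_numeric_pangram(factors, result: int) -> bool:
    """True if the factors multiply to result and the digits of all numbers
    involved use each of 0-9 exactly once."""
    digits = "".join(str(n) for n in (*factors, result))
    return prod(factors) == result and sorted(digits) == list("0123456789")

print(is_numeric_pangram([42, 8, 91], 30576))  # True
print(is_numeric_pangram([7, 9403], 65821))    # True
print(is_numeric_pangram([1, 3], 4))           # False: wrong product, missing digits
```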

This is the sort of thing that Henry Dudeney would have cleverly done by hand. Python is the way forward, of course.

on August 06, 2023 01:11 PM

August 04, 2023

At KDE we make software for many different platforms. One of them is Microsoft Windows. But what if an application crashes on Windows? New tech enables us to track crashes right in Sentry! Time to learn about it.

When an application crashes on Windows the user can submit crash data to Microsoft. Later KDE, as publisher of the app, can retrieve the crashes from there. This is the standard crash handling for the platform and it works incredibly well. What’s more, it means we don’t need to engineer our own custom solution for the entire process. So, that is all lovely.

Alas, since we are rolling out a KDE-wide crash tracking system called Sentry, it would be even nicer if we had Windows crashes in there rather than in a third-party service. That is just what I’ve built recently.

Crashes for our Windows applications now get imported into Sentry!

There are two pieces to this puzzle. One is a symstore and one is the actual importer.

Symbol Storage

In the Linux world, we not so long ago grew debuginfod: it’s basically a tiny service that lays out debug symbols in a standardized file path so they may be consumed by any number of tools making HTTP GET requests to well-known URIs.

Windows has something like this called symbol storage. It’s incredibly simple and merely lays out Windows debug information in a well-defined path layout for consumption by tools.

To maintain a symstore of our own we use the excellent python module symstore along with some custom rigging to actually retrieve our debug information files.


The second piece is our importer. It’s a trivial program that uses Microsoft’s APIs to retrieve crash data along with the minidump (the dump format used by Windows crashes), converts it into a Sentry payload, and sends it off to our Sentry instance. Sentry then consumes the crash, does symbolication using the debug symbols from our symstore, and generates a crash event.


If you are shipping your application on the Windows Store you should consider publishing your dbg.7z files on (for example here is Okular), simply file a sysadmin ticket same as with any other upload to get this published. Once that is done you can submit a change to our symstore to publish the debug symbols, and our importer to import data into Sentry. If you need help just give me a poke.

If you are publishing builds for Windows but are not yet on the Windows Store, you definitely should change that. It increases your application’s reach considerably. Get in touch with the sysadmins to get yourself set up.

Importing crashes into Sentry adds another piece towards higher quality, more actionable crash reports. It’s going to be amazing.

Discuss this blog post on KDE Discuss.

on August 04, 2023 10:56 PM

July 28, 2023

The OpenUK Awards are open for nominations for 2023.

Awards timetable

  • Nominations open 28th July 2023
  • Nominations close midnight UK 19th September 2023 (this will not be extended)
  • Shortlist of up to 3 nominees per category announced 18th October 2023
  • Winners Announced 20th November 2023: Black Tie Awards Ceremony and dinner at House of Lords sponsored by Lord Vaizey, 6-10.30pm, tickets limited 

Self nominations are very welcome. If you fit into the categories, or have a project or company which does, or know anyone else who does, then fill in the form and say why it’s deserved. You might get fame and glory, or at the least a dinner in the House of Lords.

on July 28, 2023 05:04 PM

July 18, 2023

Photo by Taylor Vick (Unsplash)

Linux networking can be confusing due to the wide range of technology stacks and tools in use, in addition to the complexity of the surrounding network environment. The configuration of bridges, bonds, VRFs or routes can be done programmatically, declaratively, manually, or automated with tools like ifupdown, ifupdown2, ifupdown-ng, iproute2, NetworkManager, systemd-networkd and others. Each of these tools uses different formats and locations to store their configuration files. Netplan, a utility for easily configuring networking on a Linux system, is designed to unify and standardise how administrators interact with these underlying technologies. Starting from a YAML description of the required network interfaces and what each should be configured to do, Netplan will generate all the necessary configuration for your chosen tool.

In this article, we will provide an overview of how Ubuntu uses Netplan to manage Linux networking in a unified way. By creating a common interface across two disparate technology stacks, IT administrators benefit from a unified experience across both desktops and servers whilst retaining the unique advantages of the underlying tech.

But first, let’s start with a bit of history and show where we are today.

The history of Netplan in Ubuntu

Starting with Ubuntu 16.10 and driven by the need to express network configuration in a common way across cloud metadata and other installer systems, we had the opportunity to switch to a network stack that integrates better with our dependency-based boot model. We chose systemd-networkd on server installations for its active upstream community and because it was already part of systemd and therefore included in any Ubuntu base installation. It has a much better outlook for the future, using modern development techniques, good test coverage and CI integration, compared to the ifupdown tool we used previously. On desktop installations, we kept using NetworkManager due to its very good integration with the user interface.

Having to manage and configure two separate network stacks, depending on the Ubuntu variant in use, can be confusing, and we wanted to provide a streamlined user experience across any flavour of Ubuntu. Therefore, we introduced Netplan as a control layer above systemd-networkd and NetworkManager. Netplan takes declarative YAML files from /etc/netplan/ as an input and generates corresponding network configuration for the relevant network stack backend in /run/systemd/network/ or /run/NetworkManager/ depending on the system configuration. All while keeping full flexibility to control the underlying network stack in its native way if need be.

Design overview

Who is using Netplan?

Recent versions of Netplan are available and ready to be installed on many distributions, such as Ubuntu, Fedora, Red Hat Enterprise Linux, Debian and Arch Linux.


As stated above, Netplan has been installed by default on Ubuntu systems since 2016 and is therefore being used by millions of users across multiple long-term support versions of Ubuntu (18.04, 20.04, 22.04) on a day-to-day basis. This covers Ubuntu server scenarios primarily, such as bridges, bonding, VLANs, VXLANs, VRFs, IP tunnels or WireGuard tunnels, using systemd-networkd as the backend renderer.

On Ubuntu desktop systems, Netplan can be used manually through its declarative YAML configuration files, and it will translate those into configuration for the NetworkManager stack. Keep reading to get a glimpse of how this will be improved through automation and integration with the desktop stack in the future.


It might not be as obvious, but many people have been using Netplan without knowing about it when configuring a public cloud instance on AWS, Google Cloud or elsewhere through cloud-init. This is because cloud-init’s “Networking Config Version 2” is a passthrough configuration to Netplan, which will then set up the underlying network stack on the given cloud instance. This is why Netplan is also a key package on the Debian distribution, for example, as it’s being used by default on Debian cloud images, too.

Our vision for Linux networking

We know that Linux networking can be a beast, and we want to keep simple things simple, but also allow for custom setups of any complexity. With Netplan, day-to-day networking needs are covered through easily comprehensible and nicely documented YAML files that describe the desired state of the local network interfaces. These are rendered into corresponding configuration files for the relevant network stack and applied at (re-)boot or at runtime, using the “netplan apply” CLI. For example, /etc/netplan/lan.yaml:

  network:
    version: 2
    renderer: networkd
    ethernets:
      eth0:
        dhcp4: true

Having a single source of truth for network configuration is also important for administrators, so they do not need to understand multiple network stacks, but can rely on the declarative data given in /etc/netplan/ to configure a system, independent of the underlying network configuration backend. This is also very helpful to seed the initial network configuration for new Linux installations, for example through installation systems such as Subiquity, Ubuntu’s desktop installer or cloud-init across the public and private clouds.

In addition to describing and applying network configuration, the “netplan status” CLI can be used to query relevant data from the underlying network stack(s), such as systemd-networkd, NetworkManager or iproute2, and present them in a unified way.

Netplan status (Debian)

At the Netplan project we strive for very high test automation and coverage with plenty of unit tests, integration tests and linting steps, across multiple Linux distros, which gives high confidence in also supporting more advanced networking use cases, such as Open vSwitch or SR-IOV network virtualization, in addition to normal wired (static IP, DHCP, routing), wireless (e.g. wwan modems, WPA2/3 connections, WiFi hotspot, controlling the regulatory domain, …) and common server scenarios.

Should there ever be a scenario that is not covered by Netplan natively, it allows for full flexibility to control the underlying network stack directly through systemd override configurations or NetworkManager passthrough settings in addition to having manual configuration side-by-side with interfaces controlled through Netplan.
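As a hedged sketch of such a passthrough (device name and keyfile setting illustrative), native NetworkManager settings can be carried inside the Netplan YAML:

  network:
    version: 2
    renderer: NetworkManager
    ethernets:
      eth0:
        dhcp4: true
        networkmanager:
          passthrough:
            connection.autoconnect-priority: "10"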

The future of Netplan desktop integration

On workstations, the most common scenario is for end users to configure NetworkManager through its user interface tools, instead of driving it through Netplan’s declarative YAML files; those tools write NetworkManager’s native configuration files. To avoid Netplan simply handing over control to NetworkManager on such systems, we’re working on a bidirectional integration between NetworkManager and Netplan to further improve the “single source of truth” use case on Ubuntu desktop installations.

Netplan is shipping a “libnetplan” library that provides an API to access Netplan’s parser and validation internals, which can be used by NetworkManager to write back a network interface configuration. For instance, configuration given through NetworkManager’s UI tools or D-Bus API can be exported to Netplan’s native YAML format in the common location at /etc/netplan/. This way, administrators just need to care about Netplan when managing a fleet of desktop installations. This solution is currently being used in more confined environments, like Ubuntu Core, when using the NetworkManager snap, and we will deliver it to generic Ubuntu desktop systems in 24.04 LTS.

In addition to NetworkManager, libnetplan can also be used to integrate with other tools in the networking space, such as cloud-init for improved validation of user data or installation systems when seeding new Linux images.


Overall, Netplan can be considered to be a good citizen within a network environment that plays hand-in-hand with other networking tools and makes it easy to control modern network stacks, such as systemd-networkd or NetworkManager in a common, streamlined and declarative way. It provides a “single source of truth” to network administrators about the network state, while keeping simple things simple, but allowing for arbitrarily complex custom setups.
If you want to learn more, feel free to follow our activities on GitHub, Launchpad, IRC or our Netplan Developer Diaries blog on Discourse.

on July 18, 2023 09:15 AM

July 02, 2023

There is a great set of articles in the Ubuntu Server guide about how to achieve an unattended installation of Ubuntu Server:

And I followed those to test my minimal autoinstall file as follows.

  #cloud-config
  autoinstall:
    version: 1
    identity:
      hostname: ubuntu-server
      username: ubuntu
      # password=ubuntu
      password: "$6$exDY1mhS4KUYCE/2$zmn9ToZwTKLhCw.b4/b.ZRTIZM30JZ4QrOQ2aOXJ8yk96xpcCof0kxKwuX1kqLG/ygbJ1f8wxED22bTL4F46P0"
    ssh:
      install-server: yes
      allow-pw: no
    apt:
      geoip: false
    kernel:
      flavor: generic  # or hwe

It works flawlessly when booting with the kvm command and appending:

-append 'autoinstall ds=nocloud-net;s=http://_gateway:3003/'

as per the document. However, when it comes to a physical machine installation with the vanilla 22.04 LTS ISO, the installer just stopped at the initial language selection step.

In the end, it was a subtle difference. Without escaping the semicolon, the URL to the autoinstall file is ignored. One missing backslash cost me more than an hour. I provided feedback on the document, and I hope it saves somebody else some time.
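For reference, the escaped kernel command line in the GRUB entry looks something like this (kernel path illustrative):

  linux /casper/vmlinuz autoinstall ds=nocloud-net\;s=http://_gateway:3003/ ---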

The semicolon must be escaped in GRUB
on July 02, 2023 07:29 AM

May 13, 2023

You might remember that in my last post about the Ubuntu debuginfod service I talked about wanting to extend it and make it index and serve source code from packages. I’m excited to announce that this is now a reality since the Ubuntu Lunar (23.04) release.

The feature should work for a lot of packages from the archive, but not all of them. Keep reading to better understand why.

The problem

While debugging a package in Ubuntu, one of the first steps you need to take is to install its source code. There are some problems with this:

  • apt-get source requires dpkg-dev to be installed, which ends up pulling in a lot of other dependencies.
  • GDB needs to be taught how to find the source code for the package being debugged. This can usually be done by using the dir command, but finding the proper path to use is usually not trivial, and you find yourself having to use more “complex” commands like set substitute-path, for example.
  • You have to make sure that the version of the source package is the same as the version of the binary package(s) you want to debug.
  • If you want to debug the libraries that the package links against, you will face the same problems described above for each library.

So yeah, not a trivial/pleasant task after all.

The solution…

Debuginfod can index source code as well as debug symbols. It is smart enough to keep a relationship between the source package and the corresponding binary’s Build-ID, which is what GDB will use when making a request for a specific source file. This means that, just like what happens for debug symbol files, the user does not need to keep track of the source package version.

While indexing source code, debuginfod will also maintain a record of the relative pathname of each source file. No more fiddling with paths inside the debugger to get things working properly.

Last, but not least, if there’s a need for a library source file and if it’s indexed by debuginfod, then it will get downloaded automatically as well.
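On Ubuntu, opting in for a debugging session is just a matter of pointing GDB at the service (program name illustrative):

  $ export DEBUGINFOD_URLS="https://debuginfod.ubuntu.com"
  $ gdb /usr/bin/some-program
  (gdb) start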

… but not a perfect one

In order to make debuginfod happy when indexing source files, I had to patch dpkg and make it always use -fdebug-prefix-map when compiling stuff. This GCC option is used to remap pathnames inside the DWARF, which is needed because in Debian/Ubuntu we build our packages inside chroots and the build directories end up containing a bunch of random cruft (like /build/ayusd-ASDSEA/something/here). So we need to make sure the path prefix (the /build/ayusd-ASDSEA part) is uniform across all packages, and that’s where -fdebug-prefix-map helps.

This means that the package must honour dpkg-buildflags during its build process, otherwise the magic flag won’t be passed and your DWARF will end up with bogus paths. This should not be a big problem, because most of our packages do honour dpkg-buildflags, and those who don’t should be fixed anyway.

… especially if you’re using LTO

Ubuntu enables LTO by default, and unfortunately we are affected by an annoying (and complex) bug that results in those bogus pathnames not being properly remapped. The bug doesn’t affect all packages, but if you see GDB having trouble finding a source file whose full path starts without /usr/src/..., that is a good indication that you’re being affected by this bug. Hopefully we should see some progress in the following weeks.

Your feedback is important to us

If you have any comments, or if you found something strange that looks like a bug in the service, please reach out. You can either send an email to my public inbox (see below) or file a bug against the ubuntu-debuginfod project on Launchpad.

on May 13, 2023 08:43 PM

May 10, 2023

I got a new SSD and did a fresh Ubuntu 23.04 install. What I usually do, is connecting the old disk via USB and copy data over from the old disk to the new SSD.

But my old disk used encrypted ZFS. It took me some time to figure out how to mount it, so here’s what I did.

The old disk gets detected as /dev/sda. There are 2 pools, rpool and bpool in my case. rpool is the one that contains my home directory and also the root directory. Let’s import that pool:

# zpool import -f rpool
# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.81T  1.29T   536G        -         -    21%    71%  1.00x    ONLINE  -

After the pool import, the LUKS-encrypted keystore is now available under /dev/zvol/rpool/keystore. That keystore contains the ZFS encryption key, so let’s luksOpen it:

# cryptsetup open /dev/zvol/rpool/keystore rpool-keystore
Enter passphrase for /dev/zvol/rpool/keystore: 

And now mount the newly created mapper device for the opened crypt device:

# mount /dev/mapper/rpool-keystore /mnt/
# ls /mnt/
lost+found  system.key

So system.key is there. Let’s load it so ZFS can use it and clean up:

# cat /mnt/system.key | sudo zfs load-key -L prompt rpool
# umount /mnt
# cryptsetup close rpool-keystore

With zfs list the different datasets can be listed. To mount the /home/$USERNAME dataset, find the right one, change the mountpoint and mount it (/mnt/home-myuser must be created beforehand):

# zfs list|grep home
rpool/USERDATA/myuser_xq8e3k                                                                        1.22T   478G      986G  /home/myuser
# zfs set mountpoint=/mnt/home-myuser rpool/USERDATA/myuser_xq8e3k
# zfs mount rpool/USERDATA/myuser_xq8e3k
# ls /mnt/home-myuser  # this should show the files from your home now

That’s it. The last steps can be repeated to mount any other ZFS dataset (e.g. the one for /).
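When you are done copying, the steps can be reversed to detach the old disk cleanly (dataset names as above):

# zfs unmount rpool/USERDATA/myuser_xq8e3k
# zfs set mountpoint=/home/myuser rpool/USERDATA/myuser_xq8e3k
# zpool export rpool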

on May 10, 2023 01:25 PM

May 08, 2023

Open to work!

Paul Tagliamonte

I decided to leave my job (Principal Software Engineer) after 4 years. I have no idea what I want to do next, so I’ve been having loads of chats to try and work that out.

I like working in mission focused organizations, working to fix problems across the stack, from interpersonal down to the operating system. I enjoy “going where I’m rare”, places that don’t always get the most attention. At my last job, I most enjoyed working to drive engineering standards for all products across the company, mentoring engineers across all teams and seniority levels, and serving as an advisor for senior leadership as we grew the engineering team from 3 to 150 people.

If you have a role that you think I’d like to hear about, I’d love to hear about it at jobs{} (where the {} is an @ sign).

on May 08, 2023 06:19 PM
This is a long retrospective, organized into 3 parts, about my 2023 hike of the Portuguese Camino de Santiago.
  1. Part 1 is about my preparation, packing and gear
  2. Part 2 covers a retrospective of insights gained along the way, and advice for anyone else considering the Camino
  3. And Part 3 is a roll-up of the daily social media posts and accompanying photographs from my two weeks on the trail
Comments are disabled on Blogger due to relentless spam, but you can reach me on LinkedIn and Twitter with questions.  You're also welcome to follow me on Google Maps, where I contribute frequently and reviewed many of the establishments we visited along this trip.  Enjoy!

Part 1: Preparation, Packing, and Gear

I love a good, long walk -- I've had some brilliant hikes across Scotland, New Zealand, Peru, Switzerland, and of course throughout Texas and the United States National Parks.  I know my limits -- I'm fit enough to easily walk 10-15 miles a day, with a pack, for multiple days in a row.  But I also know that I need to be very comfortable in my shoes, and with my pack, and with the right gear.  So I spent about 3 weeks walking around my hometown of Austin, Texas, with my pack, "training".  I walked about 5-7 miles in my (pretty hilly) neighborhood, almost every day, and did one 10 miler around Lady Bird Lake trail in downtown Austin.  I wouldn't call this "training" per se, but I do think it was pretty valuable acclimation to the mileage and additional weight of a backpack.

All in all, I felt very well prepared throughout the actual hike on the Camino.

More importantly, I was generally pleased with the light weight and balance of my pack.  With hindsight, are there a few things I would pack differently?  You bet.  Here's a spreadsheet I used when packing:

See below for a thorough retrospective on my packing and gear.  
Disclosure: As an Amazon Associate I earn from qualifying purchases.


I decided to upgrade my vintage, 20+ year old Gregory Reality backpack, to a brand new, ultralight Osprey Exos 38L, and that was a great investment!  I kept my fully loaded pack to about 18 lbs with an empty water bladder, or 20 lbs with a fully loaded 2L water bladder.  That weight was perfect for me.  I hardly noticed it at all.  I mean, I couldn't "run", but at no point in the 160 miles did I ever even think about the backpack's weight or comfort.  It was just part of me.  I also brought the over-pack rain shell, which was useful and necessary a couple of times.  For what it's worth, my hiking partner used an Osprey Stratos 36L and he loved his, too.


My Osprey backpack supported a water bladder, and I carried a 2L Camelbak bladder.  It was awesome.  I swear by it.  It was really nice to sip water along the way, any time, without stopping.


In my early Camino prep, I originally figured I'd take my beloved heavy duty leather Hanwag hiking boots.  However, after a little bit of research and reading, it seems that most long-distance through-hikers these days have moved to trail running shoes, of which the Hoka Speedgoat and Altra Lone Peak seem to be by far the most popular.  I tried both, and really liked the Altras better.  (I got a pretty good deal by buying last year's model.)  I treated them with a couple of applications of Scotch Guard for a little bit of extra water resistance.

REI has both (and some others), and you should probably check them out, if you haven't already.  There were plenty of people on the Camino wearing both Hoka and Altra, as well as some Salomons, a few Brooks, and others.  I was quite happy with the Altras, all things said.  I ended up with a few blisters on the 18 mile days, but I think that was more a matter of my socks.


Speaking of socks, I only brought my favorite Bombas ankle socks, which are my all-day, everyday socks at home.  (I'm a huge fan of Bombas, and generally buy direct, as they donate socks for every direct purchase.)  But in retrospect, I probably should have brought purpose-made hiking socks instead.

Alternate Pair of Shoes

Without a doubt, you'll also need a second pair of shoes for the afternoon / evening, after you're done hiking.  I brought my Olukai flip flops (which I can easily walk/hike 6+ miles in, no question).  I liked flip flops because they were light, and could get wet (I used them in the occasional communal / prison-style showers).  My hiking partner used lightweight deck shoes as his 2nd pair and he was happy with that as well.  The key point is just that you'll absolutely need something to switch into, after a long day of hiking and you really need something that doesn't require socks.


I packed 5 sets of socks, underwear, and shirts (3 long sleeve, 2 short sleeve), and 3 sets of pants (two long hiking pants, one pair of shorts).  We did laundry every 4-5 days.  Basically I'd wash 4 sets of clothes, while wearing my last set.  Most accommodation has some form of laundry facilities.  We chose to use the ones that had both washers and dryers (to get everything done in 90 minutes or so), but most people just hang dry their clothes overnight.  Plenty of people travel with just 2 or 3 sets of clothes too, and they do laundry more frequently.


The weather was really, really quite perfect for our hike in March/April.  Just a bit of chill in the air in the morning (mid 40s-50s Fahrenheit), and warm (high 70s Fahrenheit) most afternoons.  As such, I was able to wear long sleeve shirts every day (avoiding some sunscreen on the arms).  My go-to shirts for hiking are these Fjallraven Abisko wool shirts, and the Fjallraven sun hoodie version.  Light, breathable, extraordinarily comfortable.  I also brought one Nike Dri-fit long sleeve zip top, for some variety.


In terms of underwear, I brought 5 pairs of Reebok performance boxer briefs.  And for pants, I brought two pairs of Patagonia Quandry hiking pants, and one pair of Under Armour Match Play golf shorts.  And a simple Fjallraven Canvas belt to hold them up.

Layers, and Outerwear

I also brought 3 layers, every one of which I used almost every day -- a Fjallraven Abisko Trail fleece jacket, a Fjallraven Buck fleece vest, and a Fjallraven Keb Eco rain shell.  If you see a pattern, I'm a huge fan of all things Fjallraven for outdoor adventures.  Just a great brand, great quality, great comfort, lasts forever.

Most mornings (in March and April for us) were chilly (low 40s F), and the afternoons sunny and warm (high 70s F).  I'd start most mornings with 1, 2, or all 3 layers, and then stop about every hour to shed a layer.  I liked the long sleeves and long pants (albeit very thin and breathable), even in the hottest part of the day, for sun protection.  I wore sunscreen on my face and neck every day.

I also brought a pair of Frogg Toggs rain shell pants, which, thankfully, I never actually used, as we had great weather.  I probably could have done without them, but on the whole, I'm glad I brought them, as the weight and space required were negligible.

Hat / Cap / Sunglasses

I also brought a basic baseball cap and Oakley Frogskins Lite sunglasses.  My hiking partner swore by his full brim hiking hat, but I didn't like how the back of my full brim hat brushed the top of my backpack. So, instead I used a Fjallraven knit cap/beanie, which was nice a few mornings, but certainly not necessary at all.  I typically shed it within the first hour and could have done without it easily.

Tech and Electronics

Charging electronics (phone, watch) was far easier than I expected.  I carried a rechargeable power brick that I never really used and, in hindsight, didn't need at all.

My Google Pixel 6a phone and Google Pixel watch batteries (both proactively set to power save mode every day), had more than enough battery every day for each day's hike, even with full GPS tracking enabled, lots of pictures, a fair amount of Google maps for restaurant reviews, Google lens for translations, and Wikipedia for learnings.

I also brought a small, rechargeable Black Diamond headlamp that I totally didn't need either.  We only started hiking before dawn on two days, and even then there were plenty of street lights.  And a phone flashlight would have been plenty enough light.  Next time, I wouldn't bother bringing a headlamp.

I did bring a set of Airpods, planning to listen to music or audiobooks, but surprisingly, I never actually used them for even one minute of music or books!  Rather, I enjoyed the conversation with my hiking partner, and the peace and quiet and sounds of the trail.  That said, they were still handy for calling home and talking to the family, though, so no regrets on bringing them.

Credit cards worked about 80% of the time (mostly NFC tap-to-pay, conveniently), though I did pull cash (about 100-150 euros, three times) with my M1 Finance debit card.

Additional Hiking / Camping Gear

The only important thing I neglected to bring, and actually purchased along the way was a hiking pole.  Of course I have great trekking poles at home, but I was unable to find out definitively if I could carry them onto the plane (we definitely did NOT check our backpacks).  It sounds like the TSA and airline rules against "blunt force weapons" are sometimes (but very inconsistently) applied to hiking poles and walking sticks?  So I did not bring my own, but should have.

As it turns out, I'm sure I totally could have, if I had collapsed it all the way down, and stuffed it entirely inside of the pack, rather than cinching it to the outside where it's visible and accessible.  While it's nice to walk with two poles, one is enough, when packing light.  After about day three, I was missing a walking stick, and I picked up a wooden one (as did my hiking partner), but it was awkward and hurt my wrist.  So I bought one for 15 euros at the first place I saw one for sale.  It was nice enough -- collapsible and spring loaded, but certainly not the highest end equipment.  It does say "Camino de Santiago" on it, so it's a nice souvenir, and it really helped with the up and down hills on the hike.  I had no trouble whatsoever bringing it home on the plane in my carry on backpack, so looking back, I'm sure I could have brought it on very easily.

Regarding Accommodations on the Camino

There's no camping really (maybe just one or two spots), along the Camino, and lodging is readily available, so there's no need for a tent.  If you're flexible (quality and cost, in both directions, up and down), you'll never have a problem finding a place to sleep.  At the lowest end, there are plenty of first-come, first-serve, free (or nearly free, with a nominal "donation" of 10 euros or so), public "Albergues".  These are basically hostels, with 30 or more bunk beds in a room, and communal bathrooms and showers.  It's meager accommodations, usually in an old monastery or similar historic building.  This is the very "traditional" Camino experience.

Similarly, there are also "private" Albergues, which are very similar bunk beds and bath setups, but they do usually take reservations, and are a little newer (maybe cleaner?), and they are also quite affordable (12-20 euros per bed).  We poked our head into 3 or 4 public Albergues, and they were all serviceable, but we chose to stay in a couple of private Albergues, mainly because we could make our reservations a few hours or a day or two ahead of time, and have the peace of mind that we were "booked".  Some private Albergues also have "private rooms", which are a really good deal, if you're traveling in a small group of 2-4.

I was traveling with one other friend, and 9 of the 12 nights, we managed to book our own private room (usually two twin beds, and an en suite shower and toilet) for about 40 euros for the two of us -- which we always opted for, over say a 12 euro apiece bunk bed.  There are a few private apartments and hotels available too -- these are probably in the 50 euro - 80 euro per night range -- still very affordable.  For the region we were in, (which has a pretty decent app), was absolutely the way to go.  It was easy to find availability, prices, features, addresses, ratings, and communicate with the hosts (with translation capabilities built into the messaging app).

The only thing we pre-booked before we left home, was our first night at the Hilton in Porto, and our last nights, at the Marriott in Santiago.  These were obviously much, much higher end accommodations, and we paid much more for those (nearly 200 euro a night).  The fancy digs were nice, but totally unnecessary.

Oh, and maybe two nights along the way, we stayed at "farmhouses".  These were my favorite accommodations by far.  One was at a vineyard, with the traditional home cooked, communal dinner, which we shared with 8 other pilgrims.  Truly unforgettable experiences.  If I were to do the Camino again, I would absolutely seek these out, though you do have to book these in advance (we just got lucky), as they are extremely limited and very popular.

Related to accommodations, I also brought my own lightweight Marmot 55F summer sleeping bag, and a very tiny Summit Aeros Down Inflatable Pillow.  This is a tough one.  Strictly speaking, these were not entirely necessary.  The sleeping bag was a lot of extra weight (1.5 lbs), and every single place we stayed provided sheets, pillows, and pillow cases.  I chose to use my own sleeping bag in the 3 bunk room setups, but I totally could have used the provided sheets.  All that said, having my own sleeping bag and pillow was important peace of mind, in that I knew that I could sleep literally anywhere, as long as I had them.  On a subsequent Camino, though, I don't think I'd bring a sleeping bag at all, and instead would just plan on taking accommodations that provide sheets, or bring a very light sleep sack.

Cooking / Meals

I considered bringing my ultralight MSR camping stove (and buying fuel), but ultimately did not.  That was a good decision, as that would have been completely unnecessary.  There are plenty of cafes and restaurants along the way.  Most lunch and dinner spots have a "pilgrim menu" which is a basic package for 10 euros of a drink, appetizer, entree, and dessert -- so super cheap to eat.  Of course you can order anything you want on the full menu any time.  The wine across Galicia and Portugal is amazing!  The red wines are mostly Tempranillo (or similar) and the whites are Albarino (or similar) from the Douro valley, and delicious.  You'll rarely pay more than 3 euros per glass.  Full bottles of red or white wine are readily available for under 10 euros -- and the top shelf super reserva is rarely more than 20 euros a bottle, and it's usually unbelievable stuff.

I did bring a Snow Peak cup and Snow Peak spork.  The cup was useful -- I used it every day, and I like to have a glass of water by my nightstand at night.  I never used the spork -- it's not needed.  I did however bring a tiny corkscrew, which came in handy a couple of times.


Probably the one thing I considered bringing, but did not, but REALLY should have, was my Aeropress coffee maker.  For no good reason, I took it out of my pack at the very last minute, and I very much regretted it, almost every single day.

Excellent coffee (espressos, cortados, cafe con leche, etc.) is always available at every cafe along the way, and it's really cheap (1 euro typically).  But at the hostels and hotels, the coffee is universally awful.  There's always a kettle that can boil hot water, and there's usually tea.  But almost everywhere has that dissolvable Nescafe instant coffee.  I found it just undrinkable.

We found a drip coffee maker in just 1 of 12 of our lodgings, and a pod coffee maker (like a Keurig) in just 1 of 12 as well (and that was a private apartment / airbnb type thing).  So most mornings meant stumbling out of bed, packing your backpack, maybe choking down some Nescafe, and then bolting down the trail sans caffeine until the first cafe (which was sometimes packed with pilgrims doing the same thing).  The Aeropress would have taken very little space, very little weight, and coffee grounds were easily available at markets and kettles for hot water.  Seems like such a trivial thing, and maybe I'm more coffee-focused than most, but this was probably my only significant packing regret.

Towels / Laundry

I did bring a quick-dry camping towel, which I'm glad I had for those 2-3 communal showers, but almost everywhere else provided towels.  Strictly speaking, this probably isn't 100% necessary, but it was nice to have.

I had a little drawstring collapsible backpack, which I used as my laundry bag.  That was nice, and amounted to a negligible amount of extra weight and space.

First Aid

I also brought a very basic, pocket first-aid kit.  I used a couple of band aids, and popped a couple of Ibuprofen at the end of the longest days.  I also had a sewing kit, which I didn't use, but I did give the sewing needle away to a fellow pilgrim who needed to pop and drain a terribly infected toenail (gross, you can keep the needle, pal).  I popped and drained my own blisters just with my fingernails, which was fine (and provided a ton of relief!), and just wiped those down with alcohol wipes.  Oh, I also took some Dramamine for the bus ride back to Porto.  Thankfully, I didn't need any of the rest, but I had a couple of anti-diarrheal tablets, some antihistamine, and bite and itch cream.  My hiking partner also brought some melatonin, which came in handy on the plane and for that first night of weird jet lag sleep.

Other Things I Should Have Brought

Things I Really Could Have Left Behind

Part 2: Insights, Advice, and Retrospective

My Camino was truly an amazing experience!  If you're even remotely considering it, you'd almost certainly enjoy doing it.

History and Basics

Just some basic Camino history, before I dig in...
  • The Camino de Santiago (or, The Way of St. James) has been a religious pilgrimage for over 1,000 years, traveled by literally millions of people.
  • There are dozens of different popular starting points, though you can start your Camino anywhere you choose.
  • The only requirement for getting your "Compostela" (a parchment certificate that officially recognizes your completion) is that you walk the last 100km to the cathedral of Santiago.
  • The medieval Camino has its roots around the 900s, and was then further popularized by a "guidebook" published in 1140, called the Codex Calixtinus.
  • Also, St. James (the apostle whose remains are believed to be held at the cathedral of Santiago), is the patron saint of Spain, one of the major sponsors of the crusades, and his popularity rose tremendously in medieval times.
  • The trail was mostly forgotten in the 20th century (just a few hundred pilgrims a year), until a guidebook was published in 1957, and since then, there has been a huge resurgence of interest.
  • Of course the 2010 film, The Way with Martin Sheen and Emilio Estevez (which is excellent, by the way), also helped bring the Camino to American audiences and interest has surged.  Now, several hundred thousand pilgrims make the trip every year.
  • Of the dozens of routes, the most popular is the "French Way", which is the Camino depicted in the movie The Way.  It's 422 miles, and takes about 5 weeks.
  • Perhaps the second most popular route is the Portuguese Way, roughly 160 miles and about 2 weeks, which is the path we took.

Our Way

  • We started in Porto, Portugal, and took the "Central Route", which is inland among the hills and vineyards, but there's also the "Coastal Route", which of course hugs the coast (though the weather can be pretty rough, or so we heard).
  • Our Camino Portuguese Central was about 160 miles, and we did it in 12 days of walking, averaging about 13 miles (a half marathon) a day, with our longest days around 18 miles and our shortest about 9 miles.
  • There was some elevation to climb and descend every day, but it was all very reasonable.  I found it about the same difficulty as hiking around the Texas hill country near Austin.  Hilly, but certainly not mountainous.  Nothing like Colorado or Switzerland.  But also, it's not all just flat walking.
  • Most days, we walked for about 3-5 hours, typically starting between 7am and 9am, and finishing around Noon-2pm.
  • There were other pilgrims who left much earlier than us every day (probably either much slower, or going much farther, or both), and some who left later, or walked later.
  • When we really pushed, with no stops, our fastest miles were about 17 minutes per mile (as measured by my Pixel watch and my hiking partner's Garmin), though a much more comfortable pace, taking a few pictures, conversing, and relaxing a bit, was closer to ~20 minutes a mile.
  • Perhaps the most surprising, and maybe most disappointing part about the Camino, is how much of it is on asphalt, concrete, pavement, or cobblestones, and how very little of it is on dirt or rock trails.
  • There are probably some official numbers somewhere, but I'd estimate that less than 10-15% of our whole Camino was spent on trails, and 85-90% was spent walking on paved streets or cobblestones.
  • Also, I'd say that at least 65-70% was "urban", walking among buildings, towns, villages, and only about 30-35% in fields or the woods or within vineyards or farms.
  • That might be fine by you, but I think I was expecting something a little more remote.  My favorite trails of all time are more remote treks through Scotland, New Zealand, Switzerland, along the Appalachians or the Pacific Northwest -- the Camino is most certainly NOT that.
  • But what the Camino is, is a walk through history.  Many of those cobblestones I'm complaining about were laid by the Romans (and probably their slaves) literally 2000 years ago.
  • Most of the Portuguese Camino follows Roman Road XIX (yes, the Romans basically had a numbered interstate system throughout Europe).
  • There are stone mile markers (literally, "milestones"), carved in Latin, dating to the 1st century A.D.
  • We crossed probably 20 stone arch bridges built by the Romans over 1500 years ago, and another 20 stone bridges built (or re-built) in medieval times 1000 years ago.
  • There's just so, so, so much history.  Castles, keeps, cathedrals, chapels, aqueducts, olive trees, grape groves -- many over 1000 years old.  Let that sink in, and the hard asphalt and pavement do melt away.

Other Pilgrims on the Trail

  • Our first few days, the crowds were very light.  We saw just 1 other pilgrim on the first day, and fewer than a dozen per day, for the next 3-4 days.
  • But as of April 1st, things picked up tremendously.
  • Part of it was that, as we got closer to Santiago, the trail got busier (back to the point that you must complete the last 100 km on foot to receive the Compostela).
  • The other consideration was that we kind of accidentally started our hike that would put us into Santiago for Easter weekend.  While that was an accident on our part, thousands of pilgrims traveling for more pious reasons were very deliberately covering the Camino over the course of Holy Week, and planning to land in Santiago specifically for Easter.
  • We only met perhaps 5 or 6 other Americans on the trail (a group of 60-something retirees from Rhode Island, and a small group of retired ladies from California).
  • From most-to-fewer, we met many Portuguese, Spaniards, Brazilians, Germans, and a smattering of French, Brits, Canadians, Taiwanese, Australians, Danes, Czechs, Swiss, and others I'm sure I'm missing.
  • Interestingly, almost no one ever asks your name -- just where you're from.  And from then on, you're mostly referred to by your country (or in our case, states, Texas and Colorado).  It's kind of a nice convention.
  • You end up seeing many of the same people every couple of days, roughly traveling in your cohort from stage to stage.  That's pretty fun.  Easy to engage, and get into a conversation with anyone, but also easy to keep your privacy and distance.
  • The vast majority of people we met on the Camino have done it before, many of them multiple times (like 7 or 8 times or so), and often different routes each time.

Logistics Advice for the End of the Trail

We struggled to find much information about how we were supposed to "complete" our Camino.  Where do we go first?  The Cathedral?  The Pilgrims' Office?  Our hotel?  Where is the 0-km marker?  Thus, we made a few mistakes, and so we'll try to help you out here...

Here's what NOT to do....which is exactly what we did....

  • We got in line to visit the Santiago de Compostela Cathedral itself.
  • We waited about 45 minutes, to enter the cathedral and visit the tomb of St. James and get our passport stamped at the cathedral, which seemed like the sensible thing to do, for pilgrims on the Camino.
  • HOWEVER, the Cathedral does NOT allow backpacks inside.  Thus, we waited 45 minutes to be refused entry because of our backpacks.
  • So each of us took turns going inside (by ourselves, without a pack), while the other waited with the backpacks.
  • Unbelievably, there is NO passport stamp at the Cathedral itself!  Shocking, but true.  We asked all over.
  • The famous front doors are ONLY open during jubilee years (of which 2023 is not).  So instead you enter through the side doors.
  • If you want to see the front doors (which are roped off), you'll need to buy a ticket and a tour (which we did a day later).
  • The tomb of St. James is under the altar, and there's another 10-15 minute queue inside of the Cathedral to see that.

Here's what TO DO...

Rather, here's a much better plan...  Go straight to the Pilgrim's Office, get your Compostela, then check-in to your hotel or accommodations, drop your pack, clean up, and visit the Cathedral afterwards.
  • The traditional end of the Camino is an old, worn stone tile, with the scallop shell, right in the geometric center of the plaza in front of the cathedral.

  • After this, go straight to the Pilgrim's Office, where you'll scan a QR code, and complete a short form to register for your Compostela.

  • It'll ask you a few things -- your name, your nationality, the starting point of your Camino, and your reasons for embarking on this journey.
  • Once you've completed this form (it takes 60 seconds or less), you'll get a number on your phone (basically like pulling a paper ticket at the DMV).
  • In the event that there is a longer wait, there's a beautiful garden within the Pilgrim's complex, which would make for a lovely place to relax while waiting for your number.
  • Then, there's a line that forms inside the building, for the numbers being called.
  • We arrived at the Pilgrim's office at 2pm on Good Friday -- one of the busiest days of the year -- and we waited in line for less than 5 minutes.
  • When your number is called, you'll move into a very busy room that looks just like the DMV, with perhaps a dozen or more booths, each staffed by a helpful person who checks your Camino passport stamps briefly, gives you your "final" stamp, and then prints your compostela.
  • Optionally, you can also add a second "certificate of distance", and purchase a tube for safekeeping your certificates for a couple of euros.

Also, the website will say that there's a "Pilgrim's Mass" every day.  Which is true, except for when it isn't.  There was no Pilgrim's Mass on Good Friday or Holy Saturday (so we didn't get to attend one). 

    Would I do it again?

    Big question...would I do it again?  Complicated answer.

    I can say unequivocally that I enjoyed every minute of it -- the distance, the landscape, the clean air, the weather, the people, the food, the pain, the gain, the history, the bridges, the cobblestones, the churches, the cathedral, the vineyards, the orchards, the farm animals, the rivers, the streams, the waterfalls -- all of it.

    But, at the same time, I just feel that there's so much more to see in the world.  While I was raised Catholic and appreciate the history and solemnity, as mostly an apostate, the religious parts of the pilgrimage are perhaps a little lost on me.  The cathedral is an unbelievable feat of medieval architecture and the history is just mind blowing, but I don't think I get quite as much out of it as someone doing it for the religious act of pilgrimage.

    Moreover, personally, I'm a little more drawn toward the beauties of nature, maybe dotted with some architectural and historical storylines.  With limited opportunities to travel and hike for weeks at a time away from family and work, I think I'd probably tackle some other trails on my list first, before making another Camino.  But, I'd never say never...

    Part 3: Itinerary, Narrative, and Pictures

    March 25, 2023, Day -1: Travel

    Setting out toward Portugal to walk the Camino.  Big thanks to the ladies for taking care of everything back home!

    March 26, 2023, Day 0: Arrival in Porto

    Let's do this!  Made it to Porto, a little later than expected due to strikes in France (merci).  Picked up our passport and toured the Porto Se (Cathedral), c. 1100AD.  Dinner and beers and dessert port wines around town, and we're back at our hotel, ready to set out on our Camino tomorrow.

    March 27, 2023, Day 1 of 13: Porto to Vilarinho, 18.4 miles

    Perfect weather on our first day, sunny and cool in the morning, slightly warm in the afternoon.  This will probably be one of our longest days, as it just took a while to get out of the city and suburbs of Porto.  The walk was nice, but very urban.  Mostly hard, cobblestone streets, lots and lots of vehicle traffic, so not ideal trail conditions.  We stopped at the 10 mile mark for beers and sandwiches, then kept moving.  We peeked in one albergue (public hostel for pilgrims) and one other private one, before finding a good match here in Vilarinho.  It's not very crowded at all on the trail yet.  We've only met 4 other pilgrims on the trail so far -- a young lady from Taiwan, an older couple from France (on their 15th Camino) and a guy from Brazil.  We seem to be very early in the season.

    March 28, 2023, Day 2 of 13: Vilarinho to Barcelos, 18.2 miles

    Another big day, 18.2 miles, some rolling hills, from Vilarinho to Barcelos.  It finally feels like we are out of the city and into the countryside.  Today was about 2/3 pavement, 1/3 trails.  We met about 10 other pellegrinos today -- a Brazilian, a Portuguese, 2 Czechs, 2 Swiss, 2 Danes, and 2 Canadians.  We crossed three different stone bridges built about 1000 years ago (possibly incorporating structures from the Roman era 2000 years ago).  Barcelos is our destination for the night, a classic medieval village and home for weary pilgrims for a millennium.  Like them, we kicked off our shoes and enjoyed cold beers, delicious wine, and good food.  Bom caminho!

    March 29, 2023, Day 3 of 13: Barcelos to Sao Simao, 13.5 miles

    13.5 miles, from Barcelos to a tiny vineyard outside of Sao Simao.  Best day on the Camino so far, by far!  Much shorter day, we walked a little over 13 miles, mostly on dirt trails through farms and groves and vineyards.  We saw a few more pellegrinos today, perhaps a dozen, including a few of the same from the day before.  We've covered a few miles with two different mother/daughter pairs, one Canadian and the other Danish.  All of us are more or less in the same cohort, on roughly the same schedule.  Our first choice of lodging was totally booked and so we walked an extra couple of miles to our second choice and grabbed the last room.  As it turned out, it was an awesome result, as we spent this evening at a hostel within a vineyard, drinking all the wine we could from this vineyard, and eating a communal meal with the other pilgrims here.  We shared a table with our Canadian friends, as well as some Brazilians, Germans, and Portuguese.  Wonderful day, even better night.

    March 30, 2023, Day 4 of 13: Sao Simao to Ponte de Lima, 8.5 miles

    About 8.5 easy miles to Ponte de Lima.  Absolutely wonderful day.  Seems like we're finally done with walking along high traffic motor highways and onto trails through fields, farms, vineyards, and orchards.  We crossed several stone bridges built by the Romans, reinforced and then widened in the middle ages.  These bridges are 1000 to 2000 years old.  Spectacular.  Nice, easy day, cool, fair weather, good breeze, cloudy and not too much sun.  The cork oak trees are beautiful, and some of these olive trees have been growing since Roman times.

    March 31, 2023, Day 5 of 13: Ponte de Lima to Rubiaes, 12.3 miles

    Ponte de Lima to Rubiaes. Best day of walking so far!  A little over 12 miles but the most elevation gain we should see over the entire hike, all in all really not that intense though.  We were on beautiful trails and dirt roads for the vast majority of the day.  The pellegrino traffic has picked up considerably, now seeing a few dozen pilgrims per day, plus mountain bikers.  We've very quickly gone from walking up to our hotels and just getting a room at the end of a long day, to having to have a reservation a day or two in advance.  We met our first Americans all week -- a trio of retirees from Rhode Island.  There were some amazing waterfalls and mountain streams, and lush, moss and fern covered walls, in addition of course to the vineyards, olive trees, orchards, and cork oaks.  The ground was a little moist, but we generally missed the rain again today.  A few more Roman era (1st century AD) bridges and roads and water fountains, and a church built in 1295 that is a national monument in Portugal.

    April 1, 2023, Day 6 of 13: Rubiaes to Tui, 13 miles

    Really, really nice day, about 13 miles, all the way across the border from Portugal into Spain.  We got our first taste of slightly damp weather.  The forecast was 'rain' but it was more like a very light fog or light drizzle.  The last town at the border, Valenca, was a beautiful medieval town with a proper wall and castle and moat.  We walked the ramparts and stumbled on a Renaissance festival and/or Passion of the Christ procession, where I had the best crepe ever, cooked over an open flame.  We're staying in a hostel outside of town, though we walked a few miles back into town to have drinks and dinner.  I played goalie for a couple of kids in a Saturday night futbol match and kept a clean sheet (like 9 saves or so, if I do say so myself)...

    April 2, 2023, Day 7 of 13: Tui to O Porrino, 10.5 miles

    A little over 10 miles from Tui to O Porriño.  Really hard to believe, but the journey is well over halfway done.  We gained a full day by walking extra our first 2 days, so we'll probably spend an extra day at the end in Santiago de Compostela.  Today's walk from Tui was very, very, VERY different.  There were at least a hundred pellegrinos setting out from Tui for Santiago.  The Camino was packed with pilgrims.  Very different.  It seems that the pellegrino traffic picks up considerably in the last 100 km or so.  This was also our first day out walking before sunrise, which made for some great pictures.  In any case, we had a nice short walk today, along an ancient, 2000 year old Roman road, across a couple of Roman era bridges, into the town of O Porriño.  Because our distance was shorter (and the time change moving into Spain), we arrived much earlier and enjoyed the best meals of our trip so far.  We did some laundry, had a nap and a siesta, and then a great evening too.  Hard to believe we are wrapping this up so soon.

    April 3, 2023, Day 8 of 13: O Porrino to Redondela, 9.9 miles

    Just about 10 miles today, from O Porriño to Redondela, and then another 4 or so miles exploring the town and area.  The trail started out somewhat crowded but thinned out a bit mid morning.  We climbed up about a 1,000-foot hill and back down towards the end of today's walk, which afforded some beautiful views of the valley and the water.  Our hostel for the night is right next door to a small, meticulously curated cheese, meat, and bottle shop, where we enjoyed charcuterie plates twice today, plus a delicious Palo Cortado sherry, a tawny port, and a couple of interesting beers.  Very enjoyable day!

    April 4, 2023, Day 9 of 13: Redondela to Pontevedra, 14.2 miles

    From Redondela to Pontevedra, it was a solid 14-mile walk, with a few more hills than we were expecting.  Still, very fair weather and a really beautiful day.  We opted for a couple of slightly longer, slightly more scenic "complementario" routes, hoping to avoid some of the crowds.  Speaking of crowds...it's crowded now.  The Camino is a highway of pilgrims trying to make it into Santiago for Easter.  Of course we walked along the ancient Roman XIX road (yes, the Romans numbered their major roads across Europe like Interstate highways in the US), and across several Roman era bridges, including a beautiful one in Arcade.  We also had a magical experience at a bakery in Arcade recommended by our butcher friend a town or two back.  In Pontevedra, we visited an extremely well curated art museum with several Velázquez, Goya, and other Spanish masters.  But the antiquities really shine, with a couple of gold and silver hoards from the area, dated to almost 3000 years ago.  Lots of history and natural beauty today.

    April 5, Day 10 of 13: Pontevedra to Caldas de Reis, 14.2 miles

    Roughly 14 miles, from Pontevedra to Caldas de Reis, mostly flat with a couple of minor hills, beautiful, fair weather, cold in the morning, warm in the afternoon.  The Camino traffic is in full force from here on in.  Hundreds upon hundreds upon hundreds of pellegrinos now cover the trails.  Young, old, native, foreign, walking with kids, with dogs, with backpacks and without, and a few annoying groups who think that you want to listen to their Bluetooth speaker for 4 hours.  Everyone is on the trail now!  Most of today was spent on nice trails or small alleys and away from highways.  Caldas de Reis is an interesting little town, surrounding a network of hot springs which have been in continuous use since the time of Ptolemy and the Romans.  We dunked our feet in a public hot spring that has served pilgrims for literally a millennium!

    April 6, Day 11 of 13: Caldas de Reis to Padron, 12.2 miles

    Second to last day hiking, we covered a little over 12 miles from Caldas de Reis to Padrón today, mostly flat terrain, and very fair, favorable weather.  Crowds are heavy -- according to the Camino website, about 1500 pellegrinos are arriving per day.  We enjoyed some nice time in the forest, along trails and streams, the usual Roman roads and bridges of antiquity, and plenty of sunshine.  The food and drink of Galicia are most delightful, from sweet pastries for breakfast, to breads and meats and cheeses for lunch, to the many varieties of fermented beverages.  And the locals are just so incredibly friendly.  Every proprietor and shop owner and bartender treats you like you are the single most important person in the world.  In fact, we had drinks this evening at Pepe's place in Padrón, where every patron gets a giant bear hug and a kiss on the cheek from Pepe, after you sign his book.  This is timeless and precious.

    April 7, 2023, Day 12 of 13: Padron to Santiago de Compostela, 17.5 miles

    A long, hard, uphill final day of walking, 17+ miles from Padrón to Santiago de Compostela, our destination.  We left our albergue well before dawn, knowing that we had a tough day ahead of us, and a lot of pilgrims doing the same walk, expecting to arrive in Santiago on Good Friday in time for Easter.  Our earliest start of the whole trip; it was cold and dark when we started, but also quiet and beautiful and less crowded.  We saw both a full moon set and a sun rise, and zipped through trails and small towns.  We stopped once for a brief breakfast and picked up sandwiches to go, which came in handy 10 miles later.  Once the sun came up, it got hot quickly, and the crowds picked up too.  The last few miles into the city were less scenic and more urban, but ticked away pretty quickly.  Upon arriving, we weren't sure of the protocol, so we went straight to the cathedral, where we waited in a line for a half hour and were turned away because backpacks aren't allowed in the church.  So we each took turns, watching one another's packs outside, and doing a quick tour of the church.  Then we went to the pilgrim's office a few blocks away and learned that we didn't even have to visit the cathedral to claim our certificate, the Compostela -- but we did anyway, so... bonus points?  We paid a few euros to get our Latin-inscribed parchment (pretty cool actually), and then checked into the hotel, and cleaned up.  Then we headed back to the cathedral to actually attend the "pilgrim's mass", which happens every day, and they read the names of all the pilgrims who arrived that day.  Except when we got there, we were told that "every day" doesn't include "today" because today's Good Friday, and there's no mass.  And every day doesn't include tomorrow either.  Or Sunday.  So come back on Monday, two days after you leave.  At this point, the religious part is lost on me, but it was a mind-clearing, spiritual journey nonetheless.  Anyway, amazing walk!  Spectacular weather.  Great company.  Delicious food.  Good people.  I'm tired, but refreshed.  Challenging, but rewarding!  Exhausted, but energized.  Appreciative of this experience, but also ready for home with my girls.  I'll have a bit more to say later in retrospect, but for now, "bom caminho!"

    April 8, 2023, Day 13 of 13: Day of Rest in Santiago de Compostela

    This is the first day in over two weeks that we haven't had to pack our backpacks and check out of our lodgings and head to the next town.  Rather, we enjoyed a nice day touring the beautiful town of Santiago de Compostela, starting with the museum at the cathedral, which included 4 levels of history around the construction of the structure, art, and artifacts.  After that, I think I had the most unique tour I've ever experienced, period...  We took a guided tour (albeit entirely in Spanish), whereby we spent almost 90 minutes climbing and crawling around THE ROOF OF THE CATHEDRAL.  Yes, you heard me...  There's a tour where you basically climb out of a window and look at the cathedral from every possible angle, on the damn roof!  This was simply indescribable and incredible.  Maybe a little unnerving and uncomfortable at times, but a heck of a lot of fun and a one of a kind experience, never to be missed.  The roof was renovated in 2021 and this experience has only recently become available.  Tomorrow morning is an early bus ride back to Porto, then I fly back through Amsterdam to my girls, and I'm very much looking forward to being home.  I'll post again a bit more about the logistics of the trip for those considering something like this, and maybe something a little more introspective about what I found along the way.  The way.

    April 9, 2023, Day 14 of 13: Back in Porto for a day

    Just a short bonus post... we spent Easter Sunday commuting by bus from Santiago back to Porto.  It was kind of amazing to watch the 12 little towns where we spent our nights tick by, about one every 15 minutes on the highway.  Three different times the highway touched the Camino, and pellegrinos were obvious with their backpacks and very determined walks.  Porto was abuzz with Easter festivals and activities.  Corey and I had a nice meal and a couple of fantastic ports, which put a nice finish on a great trip.  We even made our way back to the Porto Cathedral, to the start of our journey and the mile marker that says "248 km to Santiago."  To the next pilgrims, bom caminho...

    That's all, folks!  I'll leave you with this: bom caminho, good way, buen camino!


    on May 08, 2023 02:10 AM

    May 05, 2023

    Some time ago, before the world locked down, I pondered that KDE wasn’t very good at getting our apps to our users. We didn’t even have a website that listed our apps with download links. If you were an open source app developer using our tech (Qt and KDE Frameworks), would you come into KDE to build your app, or just start a project on Github and do it yourself? KDE has a community, which means some people to help look over your work and maybe contribute and translate, some promo and branding mindshare, and teams of people in the distros who specialise in packaging our stuff. But successful projects like Krita and Digikam, and indeed my own Plasma release scripts, still have to do a lot on top of what KDE communally gives them.

    So I launched the All About the Apps goal, which was selected, in the hope of getting KDE to support taking our apps to the users more slickly. I didn’t manage to make much progress with the goal, which I will readily take the blame for. After some fighting I managed to get our announcements linking directly to the app stores, but I didn’t manage to make much else slicker.

    What my dream still is would be for apps to have a button that…

    • Bumps the version number in the source
    • Makes the tar and uploads it to a secret place
    • Tells the Linux distros to package it
    • Packaging for Windows/Mac/Android/Snap/Flatpak/Appimage would be in the Git repo and our CI would now build them and upload to the relevant test sites
    • OpenQA style tests would be in the git repo and our CI would now test these packages
    • Another button would make the source and packages public in Microsoft Store/Appimagehub/SnapStore/Flathub/ and somehow tells the Linux distros and send the announce to the Discuss group and start off a blog post for you

    I just released KDE ISO Image Writer (another project I didn’t make much progress with for too many years) and had a chance to see how it all felt.

    There are no nice buttons, and while we have a tool to make the tar and I have a tool to add the release to the AppStream file, there’s no standard tooling to bump version numbers in CMake, add releases to AppStream, or make templates for pre-announcements and announcements.
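The version-bump step at least is small enough to script. Here's a minimal sketch of what such a helper could look like (my own illustration, not an existing KDE tool), assuming the project declares its version with the common `project(Name VERSION x.y.z ...)` pattern in CMakeLists.txt:

```python
import re

def bump_cmake_version(cmake_text: str, new_version: str) -> str:
    """Replace the VERSION argument of the first project() call.

    Assumes the common `project(Name VERSION 1.2.3 ...)` pattern;
    real CMakeLists files may declare versions differently.
    """
    return re.sub(
        r"(project\s*\([^)]*?VERSION\s+)[0-9]+(?:\.[0-9]+)*",
        lambda m: m.group(1) + new_version,
        cmake_text,
        count=1,
        flags=re.IGNORECASE,
    )

# Hypothetical example input, roughly what an app's CMakeLists might contain
before = "project(ISOImageWriter VERSION 0.9.0 LANGUAGES CXX)\n"
after = bump_cmake_version(before, "1.0.0")
```

A real release button would also want to append a `<release>` entry to the AppStream metainfo and tag the repo, but the CMake side really is just a one-line rewrite like this.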

    How’s the packaging and app store situation?

    Windows and Microsoft Store

    I had to go out and buy a laptop for this; there are virtual machines available for free which should work, but I didn’t trust them with the hardware needed here, and they’re time limited, so I’m a bit wary of setting up Craft and then having to do it again when the time runs out. Craft does a lot of the hard work building for Windows and the binary-factory, and elite Craft dev hvonreth is often around to give help.

    Getting access to the Microsoft Store takes a sysadmin request and working out what to ask for then working out what to upload. I uploaded the wrong thing (a .appx file) when it should have been a .appxupload file and that seemed to break the MS Store from accepting it at all. After lots of twiddling and deleting and generally turning it off and on again I got it uploaded and a day later it was rejected with the claim that it crashed. While the app had installed and run fine for me locally using this .appxupload thing to install it locally did indeed cause it to crash. We diagnosed that to the elevated privileges needed and after some Googling it turns out the Microsoft Store doesn’t seem to support this at all. So my dream of having it available to install there has not worked out, but you can get the installer from and use that.

    There’s still only 9 KDE apps on the MS Store at a quick “KDE” search which seems far too few.


    AppImage

    These have been around for decades and KDE has always had fans of this format (it used to be called Klik at one point, e.g. to test KOffice). SUSE devs were big fans at one point. In recent years it’s gained auto-update, daemons to manage the system integration, build tools, support from top apps like Krita and Digikam, and a centralised place to get it in AppimageHub (not to be confused with the other AppimageHub). And yet mass adoption seems as far off as ever.

    There are two ways I found to build it. One is appimage-builder, which was easy enough to pick up and use to make a packaging file that pulls packages from Ubuntu and neon.

    Or you can reuse Craft (used earlier for Windows) to build on Linux for the AppImage. This also allows binary-factory integration, but I don’t seem to have got this working yet. It might also be worth exploring openSUSE’s OBS, which might allow for other platforms.

    I tried to upload it to AppimageHub but that broke the website which needed some back channel chats to fix. Once uploaded it appears shortly, no further bureaucracy needed (which is a bit scary). It doesn’t appear on the KDE Store which seems to be about themes and addons rather than apps. And I put it on

    It’s hard to know how popular AppImage is within KDE; neither of the AppImageHubs seems easy to search and many apps publish their own in various ways. There’s about a dozen (non-Maui) KDE apps with appimages on plus a dozen Maui apps which are developed within KDE and used by the Nitrux distro. I hear complaints that AppImage doesn’t support Wayland, which will limit them.

    Flatpak and Flathub

    This format has lots of good feels and mindshare because it integrates well with the existing open source communities.

    The flatpak-manifest.json file can be added directly to the repo (which I’m very jealous of; when I suggested it for Snaps it was rejected, and that caused me to grump off the whole Goal) and that can be added to binary-factory but also to CI. There’s an active team around to help out. That gets uploaded to a KDE testing repo where you can install and test.

    But to get it out to the users there’s a separate process for Flathub, the main host for Flatpak packages. That takes another week or two of bureaucracy to get published (bureaucracy when publishing software for people to install is necessary and important). There’s also a stats website which suggests it has 300 installs.

    Searching for KDE on Flathub gives over 130 results.


    Snaps

    This works the smoothest, if I say so myself. Add the packaging to the snapcraft repo and it builds on the CI, which actually just sends it off to the Launchpad builders, and it builds for ARM and AMD64. Then you get one of the KDE Snapcraft team (Scarlett, me, Harald) to register it and voila, it uploads to the candidate channel for testing. It needs to be manually moved into the stable release channel, which can either be done by our team or we can share admin rights. The bureaucracy comes when you need to ask for permissions, such as ISO Image Writer needing access to disks; that took a week to be accepted. The packages are built using KDE neon for Qt and KDE Frameworks etc., and we’ve had troubles before when KDE neon moves onto new versions of Qt but the content Snap has stayed on older ones, but we’re working out when to save a spare snapshot of it. The build tool Snapcraft also has a kde-neon extension which just adds in common parts used by KDE snaps, but sometimes that gets out of date too, so we’ve had to work out ways around it.

    The Snapcraft KDE page has about 140 apps. From the admin page I can see ISO Image Writer has 920 installs around the world (not bad for two days old). The store doesn’t seem great at picking up the AppStream meta data so screenshot and icons are often out of date which I’ve brought up with the devs a bunch of times. It’s centralised around a single Canonical owned store which open source/free software fans can find a bad smell but it is what users want.


    Others

    I’ve not looked at F-Droid, Google Play, Chocolatey, or Apple’s App Store. With the probable exception of Apple’s store, we should embrace all of these.

    I couldn’t find any tools to add release data (the files to download) to the AppStream file, which is what ends up on, and that feels like a low-hanging-fruit fix. Building pre-release tars which aren’t available publicly seems tricky to do; we have that for KDE neon but none of the app stores do. Similarly, tools to make templates for release announcements can’t be hard; I do that for Plasma already.
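    For what it’s worth, a tool to splice release data into an AppStream file really is a small script. Here is a rough sketch of the kind of thing I mean (this is hypothetical, not an existing KDE script; the file layout, function name, and demo URL are all illustrative, and the `<artifacts>`/`<artifact>` elements follow the AppStream releases spec):

```shell
# Hypothetical sketch, not an existing KDE tool: add a <release> entry with a
# download artifact right after the opening <releases> tag of an appdata file.
add_release() {
    # usage: add_release <appdata.xml> <version> <date> <tarball-url>
    file=$1 version=$2 date=$3 url=$4
    awk -v v="$version" -v d="$date" -v u="$url" '
        { print }
        /<releases>/ {
            printf "    <release version=\"%s\" date=\"%s\">\n", v, d
            printf "      <artifacts>\n"
            printf "        <artifact type=\"source\"><location>%s</location></artifact>\n", u
            printf "      </artifacts>\n"
            printf "    </release>\n"
        }
    ' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}

# demo on a minimal skeleton file (illustrative URL)
demo=$(mktemp)
printf '  <releases>\n  </releases>\n' > "$demo"
add_release "$demo" 1.0.0 2023-05-05 "https://download.kde.org/example-1.0.0.tar.xz"
cat "$demo"
```

    A real tool would of course want to validate the result (e.g. with appstreamcli), but the core transformation is this trivial.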

    So there’s lots of work still to do to make KDE have slick processes for getting our software out there to the users; it’s a mix of social and technical challenges, and cultural shifts take a long time. Loads of people have put in lots of work to get us to where we are today, but there’s still lots to do. If you’re up for a challenge and want to help, I hope this blog shows the challenges and the potential for fixing them rather than sounding too negative. Let’s keep KDE being All About the Apps!

    on May 05, 2023 12:59 PM

    April 27, 2023

    Today I learned: set -e sucks even more.

    Summary: Just don’t use set -e.

    I’ve never been a fan of “errexit” in shell. You’ve probably seen this as set -e, or set -o errexit or sh -e.

    People write lists of shell commands in a file and want the script to exit on the first one that fails rather than barreling on and causing damage. That seems sane.

    I’ve always strived to write “shell programs” rather than “shell scripts”. The difference being that the program will clean up after itself and give sane error messages. It won’t just exit when mkdir fails and leave the user to understand some message like:

     mkdir: cannot create directory ‘/tmp/work/bcd’: No such file or directory

    I’ve always felt that errexit makes “good error handling” in shell more difficult.

    The bash(1) man page has the following text:

    If a compound command or shell function executes in a context where -e is being ignored, none of the commands executed within the compound command or function body will be affected by the -e setting, even if -e is set and a command returns a failure status. If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes.

    Here’s an example of how painful it can be, and I took way too long today tracking down what was wrong.

    1. A Programmer starts off with a simple script make-lvs and uses set -e.

      #!/bin/bash -ex
      lvm lvcreate --size=1G myvg -n mylv0
      lvm lvcreate --size=1G myvg -n mylv1

      This looks fine for a “shell script”. The -x argument even makes the shell write to standard error the commands it is running. At this point everyone is happy.

    2. Later the programmer looks at the script and realizes that he/she needs more flags to lvm lvcreate. So now the script looks like:

      #!/bin/bash -ex
      lvm lvcreate --ignoremonitoring --yes --activate=y --setactivationskip=n --size=1G --name=mylv0 myvg
      lvm lvcreate --ignoremonitoring --yes --activate=y --setactivationskip=n --size=1G --name=mylv1 myvg 

      I’m happy that the programmer here used long format flags as they are much more self documenting so readers don’t have to (as quickly) open up the lvm man page. It is easier to make sense of that than it is to read ‘lvm lvcreate --ignoremonitoring -y -ay -ky -L1G -n mylv’.

    3. After doing so, they realize that they can make this look a lot nicer, and reduce the copy/paste code with a simple function wrapper.

      #!/bin/bash -e
      lvcreate() {
          echo "Creating lv $2 on vg $1 of size $3"
          lvm lvcreate "--size=$3" --ignoremonitoring --yes --activate=y \
              --setactivationskip=n --name="$2" "$1"
          echo "Created $1/$2"
      }
      lvcreate myvg mylv0 1G
      lvcreate myvg mylv1 1G

      The improvements are great. The complexity of the lvm lvcreate is abstracted away nicely. They’ve even dropped the vile set -x in favor of more human friendly messages.

      Output of a failing lvm command looks like this:

      $ make-lvs; echo "exited with $?"
      Creating lv mylv0 on vg myvg of size 1G
      out of space
      exited with 1
    4. The next improvement is where sanity goes completely out the window. [I realize you were questioning my sanity long ago due to my pursuit of shell scripting perfection].

      The programmer tries to add reasonable ‘FATAL’ messages that you might find in log messages of other programming languages.

       #!/bin/bash -e
       lvcreate() {
           echo "Creating lv $2 on vg $1 of size $3"
           lvm lvcreate "--size=$3" --ignoremonitoring --yes --activate=y \
               --setactivationskip=n --name="$2" "$1"
           echo "Created $1/$2"
       }
       fail() { echo "FATAL:" "$@" 1>&2; exit 1; }
       lvcreate myvg mylv0 1G || fail "Failed to create mylv0"
       if ! lvcreate myvg mylv1 2G; then
           fail "Failed to create mylv1"
       fi
       echo "Success"

      Can you guess what is going to happen here?

      If the lvm command fails (perhaps the vg is out of space) then the output of this script will look like:

       $ make-lvs; echo exited with $?
        Creating lv mylv0 on vg myvg of size 1G
        error: out of space
        Created myvg/mylv0
        Creating lv mylv1 on vg myvg of size 2G
        error: out of space
        Created myvg/mylv1
        Success
       exited with 0

    The attempt to handle the failure of the lvcreate function with || and with if ! made each call part of a compound command. A compound command disables the error handling and shell exit that would have come from -e when the inner lvm command failed.

    Above I’ve demonstrated with bash, but this is actually posix behavior, and you can just as well test the function with sh as well.
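    That claim is easy to check. Here is a minimal distillation of the problem (mine, not one of the scripts above) that you can run with any POSIX sh:

```shell
# Minimal reproducer: under 'set -e', a failing command inside a function
# normally aborts the script -- but not when the function is called as part
# of a compound command like 'if !' or '||'.
set -e
f() {
    false                          # would abort the script if f were called plainly
    echo "still running after false"
}
# Because f is the operand of '! ', -e is suppressed for its entire body:
# 'false' does not abort, the final echo runs, and f returns 0, so the
# "f failed" branch never executes.
out=$(if ! f; then echo "f failed"; fi)
echo "$out"
```

    Swap the command substitution for a plain `f` call and the script dies at `false` as expected, which is exactly the inconsistency that makes errexit so hard to reason about.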


    If you’re interested in further reading, you can see this topic on the BashFAQ. I agree with GreyCat and geirha: “don’t use set -e. Add your own error checking instead.”

    If you’re still here, the following is the version of the script that I’d like to see. Of course there are other improvements that can be made, but I’m happy with it.

       info() { echo "$@"; }
       stderr() { echo "$@" 1>&2; }
       fail() { stderr "FATAL:" "$@"; exit 1; }
       lvcreate() {
           local vg="$1" lv="$2" size="$3"
           info "Creating $vg/$lv size $size"
           lvm lvcreate \
               --ignoremonitoring --yes --activate=y --setactivationskip=n \
               --size="$size" --name="$lv" "$vg" || {
                   stderr "failed ($?) to create $vg/$lv size $size"
                   return 1
           }
           info "Created $vg/$lv"
       }
       # demonstrate both 'command ||' and 'if ! command; then' styles.
       lvcreate myvg mylv0 1G || fail "Failed to create mylv0"
       if ! lvcreate myvg mylv1 1G; then
           fail "Failed to create mylv1"
       fi
       info "Success"
    on April 27, 2023 12:00 AM

    April 25, 2023

    Rust is a hugely popular compiled programming language, and accelerating it had been an important goal for Firebuild for some time.

    Firebuild’s v0.8.0 release finally added Rust support in addition to numerous other improvements, including support for Doxygen, Intel’s Fortran compiler and restored javac and javadoc acceleration.

    Firebuild’s Rust + Cargo support

    Firebuild treats programs as black boxes, intercepting C standard library calls and system calls. It shortcuts the program invocations that predictably generate the same outputs because the program itself is known to be deterministic and all inputs are known in advance. Rust’s compiler, rustc, is deterministic in itself, and simple rustc invocations were already accelerated, but parallel builds driven by Cargo needed a few enhancements in Firebuild.

    Cargo’s jobserver

    Cargo uses the Rust variant of the GNU Make’s jobserver to control the parallelism in a build. The jobserver creates a file descriptor from which descendant processes can read tokens and are allowed to run one extra thread or parallel process per token received. After the extra threads or processes are finished the tokens must be returned by writing to the other file descriptor the jobserver created. The jobserver’s file descriptors are shared with the descendant processes via environment variables:

    # rustc's environment variables
    CARGO_MAKEFLAGS="-j --jobserver-fds=4,5 --jobserver-auth=4,5"

    Since getting tokens from the jobserver involves reading them as nondeterministic bytes from an inherited file descriptor, this is clearly an operation that depends on input not known in advance. Firebuild needs to make an exception and ignore jobserver-related reads and writes, since they are not meant to change the build results. However, there are programs that do not care about jobservers at all: they happily close the inherited file descriptors and open new ones with the same id, to use them for entirely different purposes. One such program is the widely used ./configure script, so the case is far from theoretical.
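    To make the token protocol concrete, here is a self-contained shell sketch of a jobserver-style token pipe (my illustration, not Firebuild or Cargo code; fd 4 is the read end and fd 5 the write end, matching the --jobserver-fds=4,5 example above):

```shell
# Illustrative demo of the jobserver token protocol.
fifo=$(mktemp -u)
mkfifo "$fifo"
exec 4<>"$fifo" 5>&4      # open read-write so neither end blocks
rm "$fifo"

printf '++' >&5           # parent seeds two tokens: two extra jobs allowed

# a client reads one token byte before starting an extra job...
token=$(dd bs=1 count=1 <&4 2>/dev/null)
# ...runs the job, then writes the token back so other clients can proceed
printf '%s' "$token" >&5
echo "acquired and returned token: $token"
```

    The token bytes themselves carry no meaning, which is precisely why Firebuild can safely ignore these reads and writes when it knows a program uses the fds only for the jobserver.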

    To stay on the safe side, firebuild ignores jobserver fd usage only in programs which are known to use the jobserver properly. The list of programs is now configurable in /etc/firebuild.conf, and since rustc is on the list by default, parallel Rust builds are accelerated out of the box!

    Writable dependency dir

    The other issue that prevented highly accelerated Rust builds was rustc’s -L dependency=<dir> parameter. This directory is populated in a not fully deterministic order in parallel builds. Firebuild, on the other hand, hashes directory listings of open()-ed directories, treating them as inputs, on the assumption that the directory content will influence the intercepted programs’ outputs. As rustc programs started in parallel scanned the dependency directory in different states, depending on which other Rust compilations had already finished, Firebuild had to store the full directory content as an input for each rustc cache entry, resulting in a low hit rate when rustc was started again with otherwise identical inputs.

    The solution here is ignoring rustc’s scanning of the dependency directory, because the dependencies actually used are still treated as inputs and are checked when shortcutting rustc. With that implemented in firebuild too, librsvg’s build, which uses Rust and Cargo, can be accelerated by more than 90%, even on a system with 12 cores/24 threads:

    Firebuild accelerating librsvg’s Rust + Cargo build from 38s to 2.8s on a Ryzen 5900X (12C/24T) system

    On the way to accelerate anything

    Firebuild’s latest release incorporated more than 100 changes just from the last two months. They unlocked acceleration of Rust builds with Cargo, fixed Firebuild to work with the latest Java update that slightly changed its behavior, started accelerating Intel’s Fortran compiler in addition to accelerating gfortran that was already supported and included many smaller changes improving the acceleration of other compilers and tools. If your favorite toolchain is not mentioned, there is still a good chance that it is already supported. Give Firebuild a try and tell us about your experience!

    Update 1: Comparison to sccache came up in the reddit topic about Firebuild’s Rust acceleration, so by popular demand this is how sccache performs on the same project:

    Firebuild 0.8.0 vs. sccache 0.4.2 accelerating librsvg’s Rust + Cargo build

    All builds took place on the same Ryzen 5900X system with 12 cores / 24 threads in LXC containers limited to using 1-12 virtual CPUs. A warm-up build took place before the vanilla (without any instrumentation) build to download and compile the dependency crates to measure only the project’s build time. A git clean command cleared all the build artifacts from the project directory before each build and ./ was run to measure only clean rebuilds (without autotools). See test configuration in the Firebuild performance test repository for more details and easy reproduction.

    Firebuild had lower overhead than sccache (2.83% vs. 6.10% on 1 CPU and 7.71% vs. 22.05% on 12 CPUs) and made the accelerated build finish much faster (2.26% vs. 19.41% of vanilla build’s time on 1 CPU and 7.5% vs. 27.4% of vanilla build’s time on 12 CPUs).

    on April 25, 2023 09:38 PM

    April 20, 2023

    The Xubuntu team is happy to announce the immediate release of Xubuntu 23.04.

    Xubuntu 23.04, codenamed Lunar Lobster, is a regular release and will be supported for 9 months, until January 2024.

    Xubuntu 23.04, featuring the latest updates from Xfce 4.18 and GNOME 44.

    Xubuntu 23.04 features the latest Xfce 4.18. Xfce 4.18 delivers a stable desktop environment with a number of performance improvements and new features to enjoy. In particular, the Thunar file manager benefits from a new image preview feature, undo and redo functionality, file highlights, and recursive search. Check out the Xfce 4.18 tour for more details!

    Xubuntu 23.04 also welcomes Xubuntu Minimal as an official subproject. Xubuntu Minimal is a slimmed down version of Xubuntu that only includes the bare essentials: the desktop, a few Xfce components, and the Xubuntu look and feel. Longtime Xubuntu fans may better know this as Xubuntu Core. After nearly eight years of being a supported, but community-built project, we’re happy to finally publish downloads along with the main Xubuntu version. Many thanks to the community for keeping the dream alive all these years!

    The final release images are available as torrents and direct downloads from

    As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

    We’d like to thank everybody who contributed to this release of Xubuntu!

    Highlights and Known Issues


    • Xfce 4.18, released in December 2022, is included in Xubuntu 23.04.
    • Xubuntu Minimal is included as an officially supported subproject.
    • Pipewire (and wireplumber) are now included in Xubuntu.

    Known Issues

    • The shutdown prompt may not be displayed at the end of the installation. Instead you might just see a Xubuntu logo, a black screen with an underscore in the upper left hand corner, or just a black screen. Press Enter and the system will reboot into the installed environment. (LP: #1944519)
    • The screensaver unlock dialog crashes after unlocking. The session can still be locked and unlocked after this crash. We’re working on a fix and hope to publish it in the next few weeks. (LP: #2012795)
    • Xorg crashes and the user is logged out after logging in or switching users on some virtual machines, including GNOME Boxes. (LP: #1861609)

    For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

    The main Ubuntu Release Notes cover many of the other packages we carry and more generic issues.


    For support with the release, navigate to Help & Support for a complete list of methods to get help.

    on April 20, 2023 02:41 AM

    April 04, 2023

    So, Dungeons & Dragons: Honour Among Thieves, which I have just watched. I have some thoughts. Spoilers from here on out!

    Theatrical release poster for Honour Among Thieves which depicts a big D&D logo in flames (a dragon curled into the form of an ampersand and breathing fire)

    Up front I shall say: that was OK. Not amazing, but not bad either. It could have been cringy, or worthy, and it was not. It struck a reasonable balance between being overly puffed up with a sense of epic self-importance (which it avoided) and being campy and ridiculous and mocking all of us who are invested in the game (which it also avoided). So, a tentative thumbs-up, I suppose. That’s the headline review.

    But there is more to be considered in the movie. I do rather like that for those of us who play D&D, pretty much everything in the film was recognisable as an actual rules-compliant thing, without making a big deal about it. I’m sure there are rules lawyers quibbling about the detail (“blah blah wildshape into an owlbear”, “blah blah if she can cast time stop why does she need some crap adventurers to help”) but that’s all fine. It’s a film, not a rulebook.

    I liked how Honour Among Thieves is recognisably using canon from an existing D&D land, Faerûn, but without making it important; someone who doesn’t know this stuff will happily pass over the names of Szass Tam or Neverwinter or Elminster or Mordenkainen as irrelevant world-building, but that’s in there for those of us who know those names. It’s the good sort of fanservice; the sort that doesn’t ruin things if you’re not a fan.

    (Side notes: Simon is an Aumar? And more importantly, he’s Simon the Sorcerer? Is that a sly reference to the Simon the Sorcerer? Nice, if so. Also, I’m sure there are one billion little references that I didn’t catch but might on a second or third or tenth viewing, and also sure that there are one billion web pages categorising them all in exhaustive detail. I liked the different forms of Bigby’s Hand. But what happened to the random in the gelatinous cube?)

    And Chris Pine is nowhere near as funny as he thinks he is. Admittedly, he’s playing a character, and obviously Edgin the character’s vibe is that Edgin is not as funny as he thinks he is, but even given that, it felt like half the jokes were delivered badly and flatly. Marvel films get the comedy right; this film seemed a bit mocking of the concept, and didn’t work for me at all.

    I was a bit disappointed in the story in Honour Among Thieves, though. The characters are shallow, as is the tale; there’s barely any emotional investment in any of it. We’re supposed, presumably, to identify with Simon’s struggles to attune to the helmet and root for him, or with the unexpectedness of Holga’s death and Edgin and Kira’s emotions, but… I didn’t. None of it was developed enough; none of it made me feel for the characters and empathise with them. (OK, small tear at Holga’s death scene. But I’m easily emotionally manipulated by films. No problem with that.) Similarly, I was a bit annoyed at how flat and undeveloped the characters were at first; the paladin Xenk delivering the line about Edgin re-becoming a Harper with zero gravitas, and the return of the money to the people being nowhere near as epically presented as it could have been.

    But then I started thinking, and I realised… this is a D&D campaign!

    That’s not a complaint at all. The film is very much like an actual D&D game! When playing, we do all strive for epic moves and fail to deliver them with the gravitas that a film would, because we’re not pro actors. NPCs do give up the info you want after unrealistically brief persuasion, because we want to get through that quick and we rolled an 18. The plans are half-baked but with brilliant ideas (the portal painting was great). That’s D&D! For real!

    You know how when someone else is describing a fun #dnd game and the story doesn’t resonate all that strongly with you? This is partially because the person telling you is generally not an expert storyteller, but mostly because you weren’t there. You didn’t experience it happening, so you missed the good bits. The jokes, the small epic moments, the drama, the bombast.

    That’s what D&D: Honour Among Thieves is. It’s someone telling you about their D&D campaign.

    It’s possible to rise above this, if you want to and you’re really good. Dragonlance is someone telling you about their D&D campaign, for example. Critical Role can pull off the epic and the tragic and the hilarious in ways that fall flat when others try (because they’re all very good actors with infinite charisma). But I think it’s OK to not necessarily try for that. Our games are fun, even when not as dramatic or funny as films. Honour Among Thieves is the same.

    I don’t know if there’s a market for more. I don’t know how many people want to hear a secondhand story about someone else’s D&D campaign that cost $150m. This is why I only gave it a tentative thumbs-up. But… I believe that the film-makers’ attempt to make Honour Among Thieves be like actual D&D is deliberate, and I admire that.

    This game of ours is epic and silly and amateurish and glorious all at once, and I’m happy with that. And with a film that reflects it.

    on April 04, 2023 06:18 PM

    April 02, 2023

    Enable Color Emoji on Xubuntu

    Many of us are exiting another drab, grey winter season. As Spring ramps up and the colors of the world awaken all around us, maybe now is the time to make your Xubuntu desktop just a bit more colorful. Since Xubuntu uses GTK, you can quickly spice up your writing with emoji by using the Ctrl + . keyboard shortcut.

    Xubuntu supports emoji out of the box, albeit of a less colorful variety.

    Oh, that’s disappointing. We have emoji support, but it’s all monochrome. While expressive, without color they lack life and don’t always convey the proper meaning. The good news is that there is a quick and easy fix.

    Install Color Emoji Support

    The monochrome emoji come from the fonts-symbola package that’s included with Xubuntu. If you install the fonts-noto-color-emoji package, the emoji picker will automatically upgrade to the full-color set. In Xubuntu, you can use the following command to install this package.

    sudo apt install fonts-noto-color-emoji

    Afterwards, restart any running applications and the emoji picker will be refreshed; no system restart required.

    Color emoji are much more interesting to look at and can be easier to understand.

    Affected Applications

    • GTK 3 and 4 graphical apps support the on-demand emoji picker (Ctrl + .) and embedded color emoji.
    • GTK 3 and 4 terminal apps display color emoji.
    • KDE apps, from what I can tell, do not take advantage of the color emoji without some additional configuration. If anybody knows what that is, I’d love to find out.
    • Firefox and Thunderbird will display the color emoji in most instances. Chromium browsers tend to have better support.

    Bonus: Use emoji in any app

    Once you start using emoji, you might be disappointed at the number of apps you use that are not native GTK and therefore do not support the keyboard shortcut. Don’t despair; there is another solution: Emote, available in the Snap Store and GNOME Software. Once you install and launch Emote, a new keyboard shortcut is enabled. Type Ctrl + Alt + E to show the app. Click your emoji and it will be inserted into your app.

    Emote adds a handy interface for adding emoji in any app.

    Have fun with Emoji in Xubuntu! 😉

    on April 02, 2023 12:08 PM