December 10, 2023

KDE PIM Kaddressbook snap

KDE Snaps:

This week’s big accomplishment is KDE PIM snaps! I have successfully added Akonadi, shipped via an akonadi content snap and run as a service. Kaddressbook is our first PIM snap with this setup and it works flawlessly! It is available in the snap store. I have a pile of MRs awaiting approvals, so keep an eye out for the rest of PIM in the next day.

KDE Applications 23.08.4 has been released and is available in the snap store.

Krita 5.2.2 has been released.

I have created a new kde-qt6 snap as the qt-framework snap has not been updated and the maintainer is unreachable. It is in edge and I will be rebuilding our kf6 snap with this one.

I am debugging an issue with the latest Labplot release.

KDE neon:

This week I helped with frameworks release 5.113 and KDE applications 23.08.4.

I also continued the ongoing work of turning red Unstable builds green as the porting to Qt 6 continues.

Debian:

Continuing my ongoing effort to learn packaging across programming languages, this week it was Rust packaging: I started on Rustic https://github.com/rustic-rs/rustic. Unfortunately, it was a bit of wasted time, as it depends on a feature of tracing-subscriber that depends on matchers, which has a grave bug, so it remains disabled.

Personal:

I do have an interview tomorrow! And it looks like the ‘project’ may go through after the new year. So things are looking up. Unfortunately, I must still ask: if you have any spare change, please consider a donation. The phone company decided to take an extra $200.00 I didn’t have to spare, and while I resolved it, they refused a refund and instead gave me a credit towards next month’s bill, which doesn’t help me now. Thank you for your consideration.

https://gofund.me/b74e4c6f

on December 10, 2023 01:33 PM
The Lubuntu Team has been hard at work already this development cycle polishing the Lubuntu desktop in time for our upcoming Long-Term Support release, 24.04 (codenamed Noble Numbat). We have pioneered groundbreaking features and achieved remarkable stability in crucial components. These enhancements are not just technical milestones; they're transformative changes you'll experience when you install […]
on December 10, 2023 03:26 AM

December 07, 2023

Canonical has been working with our testing lab partner, atsec information security, to prepare the cryptographic modules in Ubuntu 22.04 LTS (Jammy Jellyfish) for certification with NIST under the new FIPS 140-3 standard. The modules passed all of atsec’s algorithm validation tests and are in the queue awaiting NIST’s approval. We can’t predict when the FIPS modules will eventually be processed, but NIST updates the list of modules in their queue on a daily basis – and you can access the preview modules now.

FIPS 140-3 is a NIST standard for ensuring that cryptography has been implemented correctly, protecting users against common pitfalls such as misconfigurations or weak algorithms. All US Government departments, federal agencies, contractors and military groups are required to use FIPS validated crypto modules, and a number of industries have also adopted the FIPS 140 standard as a security best practice. A FIPS-compliant technology stack is therefore essential in these sectors, and Ubuntu provides the building blocks for a modern and innovative open source solution.

FIPS Mode on Ubuntu

Ubuntu is a general purpose operating system which serves as a platform for millions of users around the world to build upon, and so we have chosen a set of libraries and utilities that have the widest usage and converted them to FIPS mode. We have disabled various disallowed algorithms and ciphers from the libraries, and made sure that they work by default in a FIPS compatible mode of operation. This means that you can easily comply with FIPS requirements by installing these modules.

We have converted these packages to FIPS mode:

  • Linux kernel v5.15 – this provides a kernel cryptographic API as well as a validated source of entropy
  • OpenSSL v3.0.2 – the most popular general purpose crypto library
  • Libgcrypt v1.9.4 – another general purpose library based on code from GnuPG
  • GnuTLS v3.7.3 – a secure communications library for protocols such as TLS
  • Strongswan v5.9.5 – an IPSec VPN client

Pick your FIPS: Updates, Preview or Strict

When it comes to FIPS we inevitably face the dilemma of certified versions versus security patching: while regular security updates are essential for maintaining a secure system, NIST-certified FIPS modules are standardised at a fixed point in time and immediately start falling behind with security updates.

In order to address the concerns about having our customers use FIPS certified modules that contain vulnerabilities, we provide an alternative path: fips-updates. This is where we apply the necessary security patches to the FIPS modules and assert that we have not altered the FIPS cryptographic functionality.

We strongly recommend that you choose fips-updates in the Pro client and receive the security updates – the vast majority of our customers select this option.

Once the modules are NIST certified they will become available in the strictly compliant Pro channel called fips. You should only use these if you have the most stringent auditing requirements for certified modules, as they will almost certainly contain known security vulnerabilities by the time they have come through the long certification process.

For the Ubuntu 22.04 LTS release we are providing a new, third channel for accessing the modules: fips-preview. Some compliance schemes such as FedRAMP require you to only deploy strictly FIPS-certified modules, except when a vendor has published a new version fixing a security vulnerability, in which case you can deploy the newly-patched version as long as the module is in NIST’s recertification queue.

It is worth noting that fips-preview will still generally not provide comprehensive and up-to-date security patched modules, whereas with fips-updates we can apply fixes to the modules right away. Submitting modules to NIST for recertification necessarily introduces a lag due to bureaucracy, paperwork and testing costs, and we can’t provide any guarantees when modules will be repackaged for recertification.

How to get access

You can get access to the FIPS modules by using the Pro client command-line tool, which is built into all recent versions of Ubuntu (it used to be known as ubuntu-advantage, or “ua”). As FIPS is a Pro feature, you’ll need to first get a Pro token. This is as simple as signing up with an email address (we promise not to spam you) and attaching the token to your client.

Armed with your token, first ensure the system is fully up to date:

$ sudo apt update && sudo apt -y upgrade

Next, attach your Pro token (paste in the actual token):

$ sudo pro attach [C11AAAA1A1AAAAAAA1AAAAA11AA1AA]

Now that the token is attached, you can see two options for enabling the FIPS modules: fips-preview and fips-updates. Enable fips-updates using the Pro client:

$ sudo pro enable fips-updates

This command configures the system into FIPS mode and installs the relevant modules.

To check what has been enabled, use the Pro client:

$ sudo pro status
SERVICE          ENTITLED  STATUS       DESCRIPTION
anbox-cloud      yes       disabled     Scalable Android in the cloud
esm-apps         yes       enabled      Expanded Security Maintenance for Applications
esm-infra        yes       enabled      Expanded Security Maintenance for Infrastructure
fips-updates     yes       enabled      FIPS compliant crypto packages with stable security updates
livepatch        yes       warning      Current kernel is not supported
realtime-kernel* yes       disabled     Ubuntu kernel with PREEMPT_RT patches integrated
usg              yes       disabled     Security compliance and audit tools

You can verify that the FIPS kernel is running:

$ uname -a
Linux jammy 5.15.0-73-fips #80+fips1-Ubuntu SMP Thu Jun 1 20:57:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

In FIPS mode, the kernel provides a system-wide flag that applications and users can check:

$ cat /proc/sys/crypto/fips_enabled
1

Path to full certification

When the modules are fully certified, they will become available in the strict fips channel within the Pro client. The fips-updates and fips-preview channels will remain as they are, so if you are using these to test the pre-certified modules, nothing will change from the perspective of the Pro client when NIST issues the certificates, apart from the existential knowledge that the modules have been approved.

Reporting problems

If you encounter any issues or difficulties when integrating the preview modules, please do let us know via Launchpad. For example, you can raise a bug against OpenSSL here. Alternatively, if you have a support contract with us then get in touch via the Support Portal.

FAQs

What is Ubuntu Pro?

Ubuntu Pro is a set of security and compliance features built on top of the regular Ubuntu, and the FIPS packages are part of Ubuntu Pro. We have a detailed FAQ all about Ubuntu Pro here.

Will FIPS work with the latest hardware?

Canonical publishes Long Term Support (LTS) releases every 2 years, but for each interim release (every 6 months) we also make the latest kernel package available for the most recent LTS release as a Hardware Enablement (HWE) kernel, allowing customers to benefit from the latest hardware support whilst still using the LTS release. By default, Desktop installations will use the latest HWE kernel, while Server installations will stick to the generic LTS kernel. The 22.04 LTS FIPS kernel is derived from the generic version 5.15. If the 5.15 kernel works on your hardware, the FIPS kernel should also work.

Check your kernel version using uname -a. To downgrade to the generic version use sudo apt install --install-recommends linux-generic and check that the system behaves correctly.

Can I use FIPS in containers?

Yes, you can deploy FIPS modules in containers and run them in FIPS mode, provided that the host system has a FIPS kernel of the same release. You can learn more in this blog post. The reason that the kernel should be the same release as the modules is that our FIPS userspace libraries get their random numbers (the entropy) from the kernel, and the modules are certified to work in tandem.
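As a quick illustration (our own, assuming Docker is installed and the host has already been put into FIPS mode as described above): because containers share the host’s kernel, a container will see the same system-wide flag we checked earlier:

$ docker run --rm ubuntu:22.04 cat /proc/sys/crypto/fips_enabled
1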

Why isn’t SSH included in the modules?

We provide a version of SSH (both client and server) that links to the FIPS OpenSSL library, so that the SSH packages don’t need to be certified individually. This means that we can provide security updates for SSH without being constrained by the FIPS process. These SSH packages are modified versions of the regular Ubuntu SSH packages, and the Pro client will seamlessly ensure that the right packages are installed if required.
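One informal way to see this linkage (a generic dynamic-linker check, not an official verification procedure) is to inspect which libcrypto the SSH client loads:

$ ldd /usr/bin/ssh | grep libcrypto

On a FIPS-enabled system this should point at the system OpenSSL library provided by the FIPS module; the exact path and version printed will vary.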

Conclusion

We encourage everyone who has a need for FIPS cryptography to enable the preview FIPS modules in Ubuntu 22.04 LTS and take them for a test run. If you have any questions or feedback about the modules, or would like to know more about Ubuntu Pro in general and how Canonical can support your security and compliance requirements, please get in touch.

Further reading

How New Mexico State University accelerates compliant federal research with Ubuntu

Manage FIPS-enabled Linux machines at scale with Landscape 23.03

Managing security vulnerabilities and compliance for U.S. Government with Ubuntu Pro

Docker container security: demystifying FIPS-enabled containers with Ubuntu Pro

on December 07, 2023 05:56 PM

E276 O Abade De Bação

Podcast Ubuntu Portugal

This time we had a visit from André Bação, who spent the evening with us. Besides a copious dose of regional wines and smoked meats, he brought us bulging capacitors with little legs, news about BSides and security conferences, why to use Regolith, Home Assistant with radar and a keen nose, crazy folks who build “vintage” computers piece by piece - and we also talked about GNOME extensions for pretty, sticky tiling, Ubuntu Touch and its alternatives, and how Diogo has been breathing volatile organic compounds while being crushed by a giant agenda of really good events.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of it for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay whatever you want. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will be supporting us as well.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on December 07, 2023 12:00 AM

December 06, 2023

It’s a wrap – Canonical AI Roadshow 2023 has come to an end. From Brazil to the United Arab Emirates, from Europe to the US, we’ve spent an amazing 10 weeks talking with people all over the world about how to innovate at speed with open source artificial intelligence (AI), and how to make enterprise AI use cases a reality.

Now that our globetrotting is over and the winter break is around the corner, let’s look back at some of the big news we shared and some of the lessons we learned during Canonical AI Roadshow 2023.

Charmed MLflow is here

In September 2023, right at the beginning of the roadshow, we announced the general availability of Charmed MLflow, Canonical’s distribution of the upstream project, as part of our MLOps portfolio. Charmed MLflow can be deployed on a laptop within minutes, facilitating quick experimentation. It is fully tested on Ubuntu and can be used on other operating systems through Canonical’s Multipass or Windows Subsystem for Linux (WSL). It has all the features of the upstream project, as well as additional enterprise-grade capabilities, such as:

  • Simplified deployment and upgrades: the time to deployment is less than 5 minutes, and users can also upgrade their tools seamlessly.
  • Automated security scanning: the bundle is scanned at a regular cadence.
  • Security patching: Charmed MLflow follows Canonical’s process and procedure for security patching. Vulnerabilities are prioritised based on severity, the presence of patches in the upstream project and the risk of exploitation.
  • Maintained images: All Charmed MLflow images are actively maintained.
  • Comprehensive testing: Charmed MLflow is thoroughly tested on multiple platforms, including public cloud, local workstations, on-premises deployments, and various CNCF-compliant Kubernetes distributions.
  • Tools integration: Charmed MLflow is integrated with leading open source tools such as Kubeflow or Spark.

Charmed Spark is here

Our promise to offer secure open source software goes beyond MLOps. In Dubai at Gitex Global 2023, we also released Charmed Spark, which provides users with everything they need to run Apache Spark on Kubernetes. It is suitable for use in diverse data processing applications including predictive analytics, data warehousing, machine learning data preparation and extract-transform-load (ETL). Canonical Charmed Spark accelerates data engineering across public clouds and private data centres alike and comes with a comprehensive support and security maintenance offering, so teams can work with complete peace of mind.

Sustainable AI with open source

While AI is at the forefront of a revolution across industries and in the way we work, it is also a topic that raises numerous questions regarding long-term environmental impact. Google revealed that AI contributes to 10-15% of their electricity usage (source), and there is a growing concern over the CO2 footprint that such technologies have around the globe. From optimising computing power to using more open source software, throughout the Roadshow we learned how organisations are taking steps to build sustainably using the new technologies.

Open source tools optimise energy consumption by enabling organisations to spend less time on training models from scratch, as well as developing software used to run AI at scale. In this way, environmental responsibility goes hand-in-hand with faster project delivery, so organisations have double the incentive to follow a sustainable approach with open source tools, models, datasets and even frameworks.

Responsible AI

As the adoption of artificial intelligence grows within enterprises, there is also a need for more guidance on the market. Initiatives such as the European Artificial Intelligence Act address this gap and put forward proposals for a more responsible approach towards AI. Data security, artifact ownership, and practices for sharing the same infrastructure are just some of the topics that the industry needs more answers about. The European AI Forum, EY, the Rosenberg Institute and Activision Blizzard are just some of the organisations that addressed responsible AI during World AI Summit 2023 and discussed how to build trust around generative AI. Public sector players aren’t shying away either, with organisations such as the Dutch Authority for Digital Infrastructure taking up the topic and proposing a “European approach to artificial intelligence”.

Run AI at scale

One big challenge that organisations face is moving projects beyond experimentation and into production. Running AI at scale calls for new capabilities such as model monitoring, infrastructure monitoring, pipeline automation and model serving. At the same time, the hardware needs to be adjusted so that it stays time-efficient and cost-effective.

Michael Balint, Senior Manager, Product Architecture at NVIDIA, and Maciej Mazur, Principal AI Field Engineer at Canonical, held a hands-on workshop focused on building an LLM factory during World AI Summit. They highlighted a cohesive solution that runs on NVIDIA DGX as well as other hardware, built with open source tools and libraries such as NVIDIA NeMo, Charmed Kubeflow and NVIDIA Triton.

A global roadshow


That’s all for the Canonical AI Roadshow 2023. We had a great time discussing the latest trends in generative AI, showcasing how Canonical technology can speed up companies’ AI journeys, and spotlighting our MLOps and Data Fabric solutions. But rest assured, there’s still plenty more to come – both for Canonical and the AI industry at large – so stay tuned for what’s next. 

Further reading

Generative AI explained

Building a comprehensive toolkit for machine learning

ML observability: what, why, how

on December 06, 2023 11:10 AM

December 04, 2023

Welcome to the Ubuntu Weekly Newsletter, Issue 816 for the week of November 26 – December 2, 2023. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu Summit 2023 Reflections
  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • LoCo Events
  • CFP for the FOSDEM 2024 Distros Devroom (closes December 5th)
  • Mir Release 2.16.0
  • Call for testing Ubuntu Frame, Mir Kiosk
  • Intel® TDX 1.0 technology preview available on Ubuntu 23.10
  • Miriway – bringing Wayland to your desktop
  • Other Community News
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for 20.04, 22.04, 23.04 and 23.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on December 04, 2023 09:04 PM

December 01, 2023

While the winter sets in, I have been mostly busy following up on job leads and applying to anything and everything. Something has to stick… I am trying! Even out of industry.

Debian:

This week’s main focus was getting involved in and familiar with Debian Rust packaging. It is really quite different from other teams! I was successful, and the MR is https://salsa.debian.org/rust-team/debcargo-conf/-/merge_requests/566 - if anyone on the Rust team can take a gander, I will upload when merged. I will get a few more under my belt next week now that I understand the process.

KDE neon:

Unfortunately, I did not have much time for neon, but I did get some red builds fixed and started uploading the new signing key for mauikit* applications.

KDE snaps:

A big thank you to Josh and Albert for merging all my MRs; I have finished the 23.08.3 releases to stable.

I released a new Krita 5.2.1 snap with some runtime fixes; please update.

Still no source of income so I must ask, if you have any spare change, please consider a donation.

Thank you, Scarlett

https://gofund.me/b74e4c6f

on December 01, 2023 06:05 PM

November 30, 2023

Every so often I have to make a new virtual machine for some specific use case. Perhaps I need a newer version of Ubuntu than the one I’m running on my hardware in order to build some software, and containerization just isn’t working. Or maybe I need to test an app that I made modifications to in a fresh environment. In these instances, it can be quite helpful to be able to spin up these virtual machines quickly, and only install the bare minimum software you need for your use case.

One common strategy when making a minimal or specially customized install is to use a server distro (like Ubuntu Server for instance) as the base and then install other things on top of it. This sorta works, but it’s less than ideal for a couple reasons:

  • Server distros are not the same as minimal distros. They may provide or offer software and configurations that are intended for a server use case. For instance, the ubuntu-server metapackage in Ubuntu depends on software intended for RAID array configuration and logical volume management, and it recommends software that enables LXD virtual machine related features. Chances are you don’t need or want this sort of thing.

  • They can be time-consuming to set up. You have to go through the whole server install procedure, possibly having to configure or reconfigure things that are pointless for your use case, just to get the distro to install. Then you have to log in and customize it, adding an extra step.

If you’re able to use Debian as your distro, these problems aren’t so bad since Debian is sort of like Arch Linux - there’s a minimal base that you build on to turn it into a desktop or server. But for Ubuntu, there’s desktop images (not usually what you want), server images (not usually what you want), cloud images (might be usable but could be tricky), and Ubuntu Core images (definitely not what you want for most use cases). So how exactly do you make a minimal Ubuntu VM?

As hinted at above, a cloud image might work, but we’re going to use a different solution here. As it turns out, you don’t actually have to use a prebuilt image or installer to install Ubuntu. Similar to the installation procedure Arch Linux provides, you can install Ubuntu manually, giving you very good control over what goes into your VM and how it’s configured.

This guide is going to be focused on doing a manual installation of Ubuntu into a VM, using debootstrap to install the initial minimal system. You can use this same technique to install Ubuntu onto physical hardware by just booting from a live USB and then using this technique on your hardware’s physical disk(s). However we’re going to be primarily focused on using a VM right now. Also, the virtualization software we’re going to be working with is QEMU. If you’re using a different hypervisor like VMware, VirtualBox, or Hyper-V, you can make a new VM and then install Ubuntu manually into it the same way you would install Ubuntu onto physical hardware using this technique. QEMU, however, provides special tools that make this procedure easier, and QEMU is more flexible than other virtualization software in my experience. You can install it by running sudo apt install qemu-system-x86 on your host system.

With that laid out, let us begin.

Open a terminal on your physical machine, and make a directory for your new VM to reside in. I’ll use “~/VMs/Ubuntu” here.

mkdir ~/VMs/Ubuntu
cd ~/VMs/Ubuntu

Next, let’s make a virtual disk image for the VM using the qemu-img utility.

qemu-img create -f qcow2 ubuntu.img 32G

This will make a 32 GiB disk image - feel free to customize the size or filename as you see fit. The -f parameter at the beginning specifies the VM disk image format. QCOW2 is usually a good option since the image will start out small and then get bigger as necessary. However, if you’re already using a copy-on-write filesystem like BTRFS or ZFS, you might want to use -f raw rather than -f qcow2 - this will make a raw disk image file and avoid the overhead of the QCOW2 file format.
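For example, the raw equivalent of the command above would be:

qemu-img create -f raw ubuntu.img 32G

The resulting file is sparse, so it won’t immediately consume 32 GiB of actual disk space.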

Now we need to attach the disk image to the host machine as a device. I usually do this with qemu-nbd, which can attach a QEMU-compatible disk image to your physical system as a network block device. These devices look and work just like physical disks, which makes them extremely handy for modifying the contents of a disk image.

qemu-nbd requires that the nbd kernel module be loaded, and at least on Ubuntu, it’s not loaded by default, so we need to load it before we can attach the disk image to our host machine.

sudo modprobe nbd
sudo qemu-nbd -f qcow2 -c /dev/nbd0 ./ubuntu.img

This will make our ubuntu.img file available through the /dev/nbd0 device. Make sure to specify the format via the -f switch, especially if you’re using a raw disk image. QEMU will keep you from writing a new partition table to the disk image if you give it a raw disk image without telling it directly that the disk image is raw.
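It doesn’t hurt to confirm the attachment worked before continuing (a quick sanity check of my own, not part of the original steps):

lsblk /dev/nbd0

This should show nbd0 as a 32G disk, with no partitions listed yet since the image is still blank.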

Once your disk image is attached, we can partition it and format it just like a real disk. For simplicity’s sake, we’ll give the drive an MBR partition table, create a single partition enclosing all of the disk’s space, then format the partition as ext4.

sudo fdisk /dev/nbd0
n
p
1


w
sudo mkfs.ext4 /dev/nbd0p1

(The two blank lines are intentional - they just accept the default options for the partition’s first and last sector, which makes a partition that encloses all available space on the disk.)
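If you’d rather script this step than answer fdisk’s prompts, sfdisk can create the same layout non-interactively. This is my own suggested alternative, assuming a blank disk and a single MBR partition; the lone ‘;’ means “one partition, default start, default size, default type”:

echo ';' | sudo sfdisk /dev/nbd0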

Now we can mount the new partition.

mkdir vdisk
sudo mount /dev/nbd0p1 ./vdisk

Now it’s time to install the minimal Ubuntu system. You’ll need to know the first part of the codename for the Ubuntu version you intend to install. The codenames for Ubuntu releases are an adjective followed by the name of an animal, like “Jammy Jellyfish”. The first word (“Jammy” in this instance) is the one you need. These codenames are easy to look up online. Here’s the codenames for the currently supported LTS versions of Ubuntu, as well as the codename for the current development release:

+-------------------+-------+
| 20.04             | Focal |
+-------------------+-------+
| 22.04             | Jammy |
+-------------------+-------+
| 24.04 Development | Noble |
+-------------------+-------+

To install the initial minimal Ubuntu system, we’ll use the debootstrap utility. This utility will download and install the bare minimum packages needed to have a functional Ubuntu system. Keep in mind that the Ubuntu installation this tool makes is really minimal - it doesn’t even come with a bootloader or Linux kernel. We’ll need to make quite a few changes to this installation before it’s ready for use in a VM.

Assuming we’re installing Ubuntu 22.04 LTS into our VM, the command to use is:

sudo debootstrap jammy ./vdisk

After a few minutes, our new system should be downloaded and installed. (Note that debootstrap does require root privileges.)

Now we’re ready to customize the VM! To do this, we’ll use a utility called chroot - this utility allows us to “enter” an installed Linux system, so we can modify it without having to boot it. (This is done by changing the root directory (from the perspective of the chroot process) to whatever directory you specify, then launching a shell or program inside the specified directory. The shell or program will see its root directory as being the directory you specified, and voila, it’s as if we’re “inside” the installed system without having to boot it. This is a very weak form of containerization and shouldn’t be relied on for security, but it’s perfect for what we’re doing.)

There’s one thing we have to account for before chrooting into our new Ubuntu installation. Some commands we need to run will assume that certain special directories are mounted properly - in particular, /proc should point to a procfs filesystem, /sys should point to a sysfs filesystem, /dev needs to contain all of the device files of our system, and /dev/pts needs to contain the device files for pseudoterminals (you don’t have to know what any of that means, just know that those four directories are important and have to be set up properly). If these directories are not properly mounted, some tools will behave strangely or not work at all. The easiest way to solve this problem is with bind mounts. These basically tell Linux to make the contents of one directory visible in some other directory too. (These are sort of like symlinks, but they work differently - a symlink says “I’m a link to something, go over here to see what I contain”, whereas a bind mount says “make this directory’s contents visible over here too”. The differences are subtle but important - a symlink can’t make files outside of a chroot visible inside the chroot. A bind mount, however, can.)

So let’s bind mount the needed directories from our system into the chroot:

sudo mount --bind /dev ./vdisk/dev
sudo mount --bind /proc ./vdisk/proc
sudo mount --bind /sys ./vdisk/sys
sudo mount --bind /dev/pts ./vdisk/dev/pts

And now we can chroot in!

sudo chroot ./vdisk

Run ping -c1 8.8.8.8 just to make sure that Internet access is working - if it’s not, you may need to copy the host’s /etc/resolv.conf file into the VM. However, you probably won’t have to do this. Assuming Internet is working, we can now start customizing things.

By default, debootstrap only enables the “main” repository of Ubuntu. This repository only contains free-and-open-source software that is supported by Canonical. This does *not* include most of the software available in Ubuntu - most of it is in the “universe”, “restricted”, and “multiverse” repositories. If you really know what you’re doing, you can leave some of these repositories out, but I would highly recommend you enable them. Also, only the “release” pocket is enabled by default - this pocket includes all of the software that came with your chosen version of Ubuntu when it was first released, but it doesn’t include bug fixes, security updates, or newer versions of software. All those are in the “updates”, “security”, and “backports” pockets.

To fix this, run the following block of code, adjusted for your release of Ubuntu:

tee /etc/apt/sources.list << ENDSOURCESLIST
deb http://archive.ubuntu.com/ubuntu jammy main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-updates main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-security main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-backports main universe restricted multiverse
ENDSOURCESLIST

Replace “jammy” with the codename corresponding to your chosen release of Ubuntu. Once you’ve run this, run cat /etc/apt/sources.list to make sure the file looks right, then run apt update to refresh your software database with the newly enabled repositories. Once that’s done, run apt full-upgrade to update any software in the base installation that’s out-of-date.

What exactly you install at this point is up to you, but here’s my list of recommendations:

  • linux-generic. Highly recommended. This provides the Linux kernel. Without it, you’re going to have significant trouble booting. You can replace this with a different kernel metapackage if you want to for some reason (like linux-lowlatency).

  • grub-pc. Highly recommended. This is the bootloader. You might be able to replace this with an alternative bootloader like systemd-boot.

  • vim (or some other decent text editor that runs in a terminal). Highly recommended. The minimal install of Ubuntu doesn’t come with a good text editor, and you’ll really want one of those most likely.

  • network-manager. Highly recommended. If you don’t install this or some other network manager, you won’t have Internet access. You can replace this with an alternative network manager if you’d like.

  • tmux. Recommended. Unless you’re going to install a graphical environment, you’ll probably want a terminal multiplexer so you don’t have to juggle TTYs (which is especially painful in QEMU).

  • openssh-server. Optional. This is handy since it lets you use your terminal emulator of choice on your physical machine to interface with the virtual machine. You won’t be stuck using a rather clumsy and slow TTY in a QEMU display.

  • pulseaudio. Very optional. Provides sound support within the VM.

  • icewm + xserver-xorg + xinit + xterm. Very optional. If you need or want a graphical environment, this should provide you with a fairly minimal and fast one. You’ll still log in at a TTY, but you can use startx to start a desktop.

Add whatever software you want to this list, remove whatever you don’t want, and then install it all with this command:

apt install listOfPackages

Replace “listOfPackages” with the actual list of packages you want to install. For instance, if I were to install everything in the above list except openssh-server, I would use:

apt install linux-generic grub-pc vim network-manager tmux icewm xserver-xorg xinit xterm

At this point our software is installed, but the VM still needs a few more things to get it going.

  • We need to install and configure the bootloader.

  • We need an /etc/fstab file, or the system will boot with the drive mounted read-only.

  • We should probably make a non-root user with sudo access.

  • There’s a file in Ubuntu that will prevent Internet access from working. We should delete it now.

The bootloader is pretty easy to install and configure. Just run:

grub-install /dev/nbd0
update-grub

(We’re already root inside the chroot, so sudo isn’t needed here.)

For /etc/fstab, there are a few options. One particularly good one is to label the partition we installed Ubuntu into using e2label, then use that label as the ID of the drive we want to mount as root. That can be done like this:

e2label /dev/nbd0p1 ubuntu-inst
echo "LABEL=ubuntu-inst / ext4 defaults 0 1" > /etc/fstab

Making a user account is fairly easy:

adduser user # follow the prompts to create the user
adduser user sudo

And lastly, we should remove the Internet blocker file. I don’t understand why exactly this file exists in Ubuntu, but it does, and it causes problems for me when I make a minimal VM in this way. Removing it fixes the problem.

rm /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf

And that’s it! Now we can exit the chroot, unmount everything, and detach the disk image from our host machine.

exit
sudo umount ./vdisk/dev/pts
sudo umount ./vdisk/dev
sudo umount ./vdisk/proc
sudo umount ./vdisk/sys
sudo umount ./vdisk
sudo qemu-nbd -d /dev/nbd0

Now we can try and boot the VM. But before doing that, it’s probably a good idea to make a VM launcher script. Run vim ./startVM.sh (replacing “vim” with your text editor of choice), then type the following contents into the file:

#!/bin/bash
qemu-system-x86_64 -enable-kvm -machine q35 -m 4G -smp 2 -vga qxl -display sdl -monitor stdio -device intel-hda -device hda-duplex -usb -device usb-tablet -drive file=./ubuntu.img,format=qcow2,if=virtio

Refer to the qemu-system-x86_64 manpage or QEMU Invocation documentation page at https://www.qemu.org/docs/master/system/invocation.html for more info on what all these options do. Basically this gives you a VM with 4 GB RAM, 2 CPU cores, decent graphics (not 3d accelerated but not as bad as plain VGA), and audio support. You can tweak the amount of RAM and number of CPU cores by changing the -m and -smp parameters respectively. You’ll have access to the QEMU monitor through whatever terminal you run the launcher script in, allowing you to do things like switch to a different TTY, insert and remove devices and storage media on the fly, and things like that.

Finally, it’s time to see if it works.

chmod +x ./startVM.sh
./startVM.sh

If all goes well, the VM should boot and you should be able to log in! If you installed IceWM and its accompanying software like mentioned earlier, try running startx once you log in. This should pop open a functional IceWM desktop.

Some other things you should test once you’re logged in:

  • Do you have Internet access? ping -c1 8.8.8.8 can be used to test. If you don’t have Internet, run sudo nmtui in a terminal and add a new Ethernet network within the VM, then try activating it. If you get an error about the Ethernet device being strictly unmanaged, you probably forgot to remove the /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf file mentioned earlier.

  • Can you write anything to the drive? Try running touch test to make sure. If you can’t, you probably forgot to create the /etc/fstab file.

If either of these things don’t work, you can power off the VM, then re-attach the VM’s virtual disk to your host machine, mount it, and chroot in like this:

sudo qemu-nbd -f qcow2 -c /dev/nbd0 ./ubuntu.img
sudo mount /dev/nbd0p1 ./vdisk
sudo chroot vdisk

Since all you’ll be doing is writing or removing a file, you don’t need to bind mount all the special directories we had to work with earlier.

Once you’re done fixing whatever is wrong, you can exit the VM, unmount and detach its disk, and then try to boot it again like this:

exit
sudo umount vdisk
sudo qemu-nbd -d /dev/nbd0
./startVM.sh

You now have a fully functional, minimal VM! Some extra tips that you may find handy:

  • If you choose to install an SSH server into your VM, you can use the “hostfwd” setting in QEMU to forward a port on your local machine to port 22 within the VM. This will allow you to SSH into the VM. Add a parameter like -nic user,hostfwd=tcp:127.0.0.1:2222-:22 to your QEMU command in the “startVM.sh” script. This will forward port 2222 of your host machine to port 22 of the VM. Then you can SSH into the VM by running ssh user@127.0.0.1 -p 2222. The “hostfwd” QEMU feature is documented at https://www.qemu.org/docs/master/system/invocation.html - just search the page for “hostfwd” to find it.

  • If you intend to use the VM through SSH only and don’t want a QEMU window at all, remove the following three parameters from the QEMU command in “startVM.sh”:

    • -vga qxl

    • -display sdl

    • -monitor stdio

    Then add the following switch:

    • -nographic

    This will disable the graphical QEMU window entirely and provide no video hardware to the VM.

  • You can disable sound support by removing the following switches from the QEMU command in “startVM.sh”:

    • -device intel-hda

    • -device hda-duplex

There’s lots more you can do with QEMU and manual Ubuntu installations like this, but I think this should give you a good start. Hope you find this useful! God bless.


on November 30, 2023 10:34 PM

E275 Vende-Se NFT, Como Novo

Podcast Ubuntu Portugal

We’re on cloud nine, because this week we had a very special guest, for the second time: Joana Simões - who came to talk to us about “open washing”, dilemmas with AI, the Other Summit Competing with the Ubuntu Summit, and much more. With an Old Sea Dog and Diogo Cloudstantino, we whispered about NFT mishaps, kindly and villainous CEOs, weird salamanders, flying unicorns, frequencies at 562.30 Hz that rattle your head, MicroK8s, picnics with the CNCF, Kubernetes and sticky corners!

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of it for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay whatever you want. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will be supporting us as well.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on November 30, 2023 12:00 AM

November 29, 2023

KDiagram 3.0.0

Jonathan Riddell

KDiagram is two powerful libraries (KChart, KGantt) for creating business diagrams.

Version 3.0.0 is now available for packaging.

It moves KDiagram to use Qt 6. It is co-installable with previous Qt 5 versions and distros may want to package both alongside each other for app compatibility.

URL: https://download.kde.org/stable/kdiagram/3.0.0/
SHA256: 6d5f53dfdd019018151c0193a01eed36df10111a92c7c06ed7d631535e943c21

Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell <jr@jriddell.org>
https://jriddell.org/esk-riddell.gpg

on November 29, 2023 04:05 PM

KWeatherCore 0.8.0

Jonathan Riddell

KWeatherCore is a library to facilitate retrieval of weather information including forecasts and alerts.

0.8.0 is available for packaging now

URL: https://download.kde.org/stable/kweathercore/0.8.0/
SHA256: 9bcac13daf98705e2f0d5b06b21a1a8694962078fce1bf620dbbc364873a0efe
Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell <jr@jriddell.org>
https://jriddell.org/esk-riddell.gpg

This release moves the library to use Qt 6. It is not compatible with older Qt 5 versions of the library so should only be packaged when KWeather is released or in testing archives.

on November 29, 2023 03:56 PM

November 27, 2023

Announcing Incus 0.3

Stéphane Graber

Another month, another Incus release!
Incus 0.3 is now out, featuring OpenFGA support, a lot of improvements to our migration tool and support for hot-plug/hot-remove of shared paths in virtual machines.

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

Finally just a quick reminder that my company is now offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship.
You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

on November 27, 2023 09:34 PM

Welcome to the Ubuntu Weekly Newsletter, Issue 815 for the week of November 19 – 25, 2023. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu Stats
  • Hot in Support
  • Ubuntu Meetup/Workshop in Africa – A Resounding Success!
  • UbuCon Asia 2023 & Ubuntu Summit 2023 Recap Seminar
  • Ubuntu 23.10 Release Party + InstallFest Review
  • Ubuntu 23.10 Release Party + InstallFest in Busan is successfully completed!
  • LoCo Events
  • Ubuntu Cloud News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for 20.04, 22.04, 23.04 and 23.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on November 27, 2023 09:27 PM

November 26, 2023

As has become a bit of a tradition by now, I’ll be attending FOSDEM 2024 in Brussels, Belgium on the weekend of the 3-4th of February 2024.

I’m once again one of the organizers of the containers devroom, a devroom we’ve been running for over 5 years now. On top of that, I will also help organize the kernel devroom. This is going to be our second year for this devroom after a very successful first year in 2023!

The CFPs for both devrooms are currently still open, with a submission deadline of December 10th.

If you have anything that’s containers or kernel related, please send it, we have a variety of time slot lengths to accommodate anything from a short demo to a full size talk.

But those are just two of a lot of different devrooms running over the weekend, you can find a full list here along with all the CFP links.

See you in Brussels!

PS: A good chunk of the LXC/Incus team is going to be attending, so let us know if you want to chat and we’ll try to find some time!

on November 26, 2023 03:00 PM

November 25, 2023

In 2020 I reviewed LiveCD memory usage.

I was hoping to review either Wayland-only or immutable-only distros (think ostree/flatpak/snaps etc.), but for various reasons, on my setup that would just be a GNOME comparison, and that’s just not as interesting. There are just too many distros/variants for me to do a full follow-up.

Lubuntu has previously always been the winner, so let's just see how Lubuntu 23.10 is doing today.

Previously, in 2020, Lubuntu needed 585 MB to be able to run something from a live CD. With a fresh install today, Lubuntu can still launch QTerminal with just 540 MB of RAM (not apples to apples, but still)! And that’s without the zram it had last time.

I decided to try removing some parts of the base system to see the cost of each component (to roughly 10 MB accuracy). I disabled networking to try to make it a fairer comparison.

  • Snapd - 30 MiB
  • Printing - cups foomatic - 10 MiB
  • rsyslog/crons - 10 MiB

Rsyslog impact

Of the three above, it felt like rsyslog (and cron) are redundant in modern Linux with systemd. So I tried hammering the log system to see if we could cause a slowdown, by having a service echo lots of gibberish every 0.1 seconds.
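The post doesn’t include the exact service used, but a rough shell equivalent of such a gibberish generator might look like this (hypothetical; the rate and payload size are guesses at the setup described):

while true; do logger "$(tr -dc 'a-z0-9' < /dev/urandom | head -c 200)"; sleep 0.1; done

Each iteration sends a 200-character random line to the system log, roughly ten times per second.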

After an hour of uptime, this is how much space was used:

  • syslog 575M
  • journal at 1008M

CPU usage on a fresh boot afterwards:

With Rsyslog

  • gibberish service was at 1% CPU usage
  • rsyslog was at 2-3%
  • journal was at ~4%

Without Rsyslog

  • gibberish service was at 1% CPU usage
  • journal was at 1-3%

That's a pretty extreme case, but does show some impact of rsyslog, which in most desktop settings is redundant anyway.

Testing notes:

  • 2 CPUs (Copy host config)
  • Lubuntu 23.10 install
  • no swap file
  • ext4, no encryption
  • login automatically
  • Used Virt-manager and the only change from defaults was enabling UEFI
on November 25, 2023 02:42 AM

November 22, 2023

Launchpad has supported building for riscv64 for a while, since it was a requirement to get Ubuntu’s riscv64 port going. We don’t actually have riscv64 hardware in our datacentre, since we’d need server-class hardware with the hypervisor extension and that’s still in its infancy; instead, we do full-system emulation of riscv64 on beefy amd64 hardware using qemu. This has worked well enough for a while, although it isn’t exactly fast.

The biggest problem with our setup wasn’t so much performance, though; it was that we were just using a bunch of manually-provisioned virtual machines, and they weren’t being reset to a clean state between builds. As a result, it would have been possible for a malicious build to compromise future builds on the same builder: it would only need a chroot or container escape. This violated our standard security model for builders, in which each build runs in an isolated ephemeral VM, and each VM is destroyed and restarted from a clean image at the end of every build. As a result, we had to limit the set of people who were allowed to have riscv64 builds on Launchpad, and we had to restrict things like snap recipes to only use very tightly-pinned parts from elsewhere on the internet (pinning is often a good idea anyway, but at an infrastructural level it isn’t something we need to require on other architectures).

We’ve wanted to bring this onto the same footing as our other architectures for some time. In Canonical’s most recent product development cycle, we worked with the OpenStack team to get riscv64 emulation support into nova, and installed a backport of this on our newest internal cloud region. This almost took care of the problem. However, Launchpad builder images start out as standard Ubuntu cloud images, which on riscv64 are only available from Ubuntu 22.04 LTS onwards; in testing 22.04-based VMs on other relatively slow architectures we already knew that we were seeing some mysterious hangs in snap recipe builds. Figuring this out blocked us for some time, and involved some pretty intensive debugging of the “strace absolutely everything in sight and see if anything sensible falls out” variety. We eventually narrowed this down to a LXD bug and were at least able to provide a workaround, at which point bringing up new builders was easy.

As a result, you can now enable riscv64 builds for yourself in your PPAs or snap recipes. Visit the PPA and follow the “Change details” link, or visit the snap recipe and follow the “Edit snap package” link; you’ll see a list of checkboxes under “Processors”, and you can enable or disable any that aren’t greyed out, including riscv64. This now means that all Ubuntu architectures are fully virtualized and unrestricted in Launchpad, making it easier for developers to experiment.

on November 22, 2023 02:00 PM

November 20, 2023

Access tokens can be used to access repositories on behalf of someone. They have scope limitations, optional expiry dates, and can be revoked at any time. They are a stricter and safer alternative to using real user authentication when needing to automate pushing and/or pulling from your git repositories.

This is a concept that has existed in Launchpad for a while now. If you have the right permissions in a git repository, you might have seen a “Manage Access Tokens” button in your repository’s page in the past.

These tokens can be extremely useful. But if you have multiple git repositories within a project, it can be a bit of a nuisance to create and manage access tokens for each repository.

So what’s new? We’ve now introduced project-scoped access tokens. These tokens reduce the trouble of creating and maintaining tokens for larger projects. A project access token will work as authentication for any git repository within that project.

Let’s say user A wants to run something in a remote server that requires pulling multiple git repositories from a project. User A can create a project access token, and restrict it to “repository pull” scope only. This token will then be valid authentication to pull from any repository within that project. And user A will be able to revoke that token once it’s no longer needed, keeping their real user authentication safe.
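As a rough sketch of the workflow (illustrative only - the exact clone URL and credential format are described in the documentation referenced below), the token acts as the password in ordinary HTTPS operations:

git clone https://<username>:<token>@git.launchpad.net/~team/project/+git/repo-one
git clone https://<username>:<token>@git.launchpad.net/~team/project/+git/repo-two

The same project-scoped token authenticates both pulls, and revoking it later cuts off both at once.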

The same token will be invalid for pushing, or for accessing repositories within other projects. Also note that this is used for ‘authentication’, not ‘authorization’ – if the user doesn’t have access to a given git repository, their access token will not grant them permissions.

Anyone with permissions to edit a project will be able to create an access token, either through the UI or the API, using the same method as to create access tokens for git repositories. See the Generating Access Tokens section in our documentation for instructions and other information.

This feature was implemented on request by our colleagues from the ROS team. We would love to get some feedback on whether this also covers your use case. Please let us know.

on November 20, 2023 09:31 AM

November 19, 2023

In this article I will show you how to start your current operating system inside a virtual machine. That is: launching the operating system (with all your settings, files, and everything), inside a virtual machine, while you’re using it.

This article was written for Ubuntu, but it can be easily adapted to other distributions, and with appropriate care it can be adapted to non-Linux kernels and operating systems as well.

Motivation

Before we start, why would a sane person want to do this in the first place? Well, here’s why I did it:

  • To test changes that affect Secure Boot without a reboot.

    Recently I was doing some experiments with Secure Boot and the Trusted Platform Module (TPM) on a new laptop, and I got frustrated by how time consuming it was to test changes to the boot chain. Every time I modified a file involved during boot, I would need to reboot, then log in, then re-open my terminal windows and files to make more modifications… Plus, whenever I screwed up, I would need to manually recover my system, which would be even more time consuming.

    I thought that I could speed up my experiments by using a virtual machine instead.

  • To predict the future TPM state (in particular, the values of PCRs 4, 5, 8, and 9) after a change, without a reboot.

    I wanted to predict the values of my TPM PCR banks after making changes to the bootloader, kernel, and initrd. Writing a script to calculate the PCR values automatically is in principle not that hard (and I actually did it before, in a different context), but I wanted a robust, generic solution that would work on most systems and in most situations, and emulation was the natural choice.

  • And, of course, just for the fun of it!

To be honest, I’m not a big fan of Secure Boot. The reason why I’ve been working on it is simply that it’s the standard nowadays and so I have to stick with it. Also, there are no real alternatives out there to achieve the same goals. I’ll write an article about Secure Boot in the future to explain the reasons why I don’t like it, and how to make it work better, but that’s another story…

Procedure

The procedure that I’m going to describe has 3 main steps:

  1. create a copy of your drive
  2. emulate a TPM device using swtpm
  3. emulate the system with QEMU

I’ve tested this procedure on Ubuntu 23.04 (Lunar) and 23.10 (Mantic), but it should work on any Linux distribution with minimal adjustments. The general approach can be used for any operating system, as long as appropriate replacements for QEMU and swtpm exist.

Prerequisites

Before we can start, we need to install:

  • QEMU: a virtual machine emulator
  • swtpm: a TPM emulator
  • OVMF: a UEFI firmware implementation

On a recent version of Ubuntu, these can be installed with:

sudo apt install qemu-system-x86 ovmf swtpm

Note that OVMF only supports the x86_64 architecture, so we can only emulate that. If you run a different architecture, you’ll need to find another UEFI implementation that is not OVMF (but I’m not aware of any freely available ones).

Create a copy of your drive

We can decide to either:

  • Choice #1: run only the components involved early at boot (shim, bootloader, kernel, initrd). This is useful if you, like me, only need to test those components and how they affect Secure Boot and the TPM, and don’t really care about the rest (the init process, login manager, …).

  • Choice #2: run the entire operating system. This can give you a fully usable operating system running inside the virtual machine, but may also result in some instability inside the guest (because we’re giving it a filesystem that is in use), and may also lead to some data loss if we’re not careful and make typos. Use with care!

Choice #1: Early boot components only

If we’re interested in the early boot components only, then we need to copy the following from our drive: the GPT partition table, the EFI partition, and the /boot partition (if we have one). Usually all three of these pieces are at the “start” of the drive, but this is not always the case.

To figure out where the partitions are located, run:

sudo parted -l

On my system, this is the output:

Model: WD_BLACK SN750 2TB (nvme)
Disk /dev/nvme0n1: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  525MB   524MB   fat32              boot, esp
 2      525MB   1599MB  1074MB  ext4
 3      1599MB  2000GB  1999GB                     lvm

In my case, partition number 1 is the EFI partition, and partition number 2 is the /boot partition. If you’re not sure which partitions to look for, run mount | grep -e /boot -e /efi. Note that, on some distributions (most notably the ones that use systemd-boot), a /boot partition may not exist, so you can leave that out in that case.

Anyway, in my case, I need to copy the first 1599 MB of my drive, because that’s where the data I’m interested in ends: those first 1599 MB contain the GPT partition table (which is always at the start of the drive), the EFI partition, and the /boot partition.
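
If you’d rather not read the numbers off by hand, the end offset of the last partition of interest can also be extracted from parted’s machine-readable output. Here’s a minimal sketch, assuming the same drive and that partition number 2 is the last one we need (the MiB unit lines up with dd’s bs=1M):

# print the end offset (in MiB) of partition number 2
sudo parted -m /dev/nvme0n1 unit MiB print \
  | awk -F: '$1 == "2" { sub(/MiB.*/, "", $3); print $3 }'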

Now that we have identified how many bytes to copy, we can copy them to a file named drive.img with dd (maybe after running sync to make sure that all changes have been committed):

# replace '/dev/nvme0n1' with your main drive (which may be '/dev/sda' instead),
# and 'count' with the number of MBs to copy
sync && sudo -g disk dd if=/dev/nvme0n1 of=drive.img bs=1M count=1599 conv=sparse
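
As an optional sanity check, parted can read the partition table straight out of the image file. Expect a warning about the missing backup GPT table: we truncated the drive, so that’s normal here:

parted drive.img unit MiB print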

Choice #2: Entire system

If we want to run our entire system in a virtual machine, then I would recommend creating a QEMU copy-on-write (COW) file:

# replace '/dev/nvme0n1' with your main drive (which may be '/dev/sda' instead)
sudo -g disk qemu-img create -f qcow2 -b /dev/nvme0n1 -F raw drive.qcow2

This will create a new copy-on-write image using /dev/nvme0n1 as its “backing storage”. Be very careful when running this command: you don’t want to mess up the order of the arguments, or you might end up writing to your storage device (leading to data loss)!

The advantage of using a copy-on-write file, as opposed to copying the whole drive, is that this is much faster. Also, if we had to copy the entire drive, we might not even have enough space for it (even when using sparse files).

The big drawback of using a copy-on-write file is that, because our main drive likely contains filesystems that are mounted read-write, any modification to the filesystems on the host may be perceived as data corruption on the guest, and that in turn may cause all sort of bad consequences inside the guest, including kernel panics.

Another drawback is that, with this solution, later we will need to give QEMU permission to read our drive, and if we’re not careful enough with the commands we type (e.g. we swap the order of some arguments, or make some typos), we may potentially end up writing to the drive instead.

Emulate a TPM device using swtpm

There are various ways to run the swtpm emulator. Here I will use the “vTPM proxy” way, which is not the easiest, but has the advantage that the emulated device will look like a real TPM device not only to the guest, but also to the host, so that we can inspect its PCR banks (among other things) from the host using familiar tools like tpm2_pcrread.

First, enable the tpm_vtpm_proxy module (which is not enabled by default on Ubuntu):

sudo modprobe tpm_vtpm_proxy

If that worked, we should have a /dev/vtpmx device. We can verify its presence with:

ls /dev/vtpmx

swtpm in “vTPM proxy” mode will interact with /dev/vtpmx, but in order to do so it needs the sys_admin capability. On Ubuntu, swtpm ships with this capability explicitly disabled by AppArmor, but we can enable it with:

sudo sh -c "echo '  capability sys_admin,' > /etc/apparmor.d/local/usr.bin.swtpm"
sudo systemctl reload apparmor

Now that /dev/vtpmx is present, and swtpm can talk to it, we can run swtpm in “vTPM proxy” mode:

sudo mkdir /tmp/swtpm-state
sudo swtpm chardev --tpmstate dir=/tmp/swtpm-state --vtpm-proxy --tpm2

Upon start, swtpm should create a new /dev/tpmN device and print its name on the terminal. On my system, I already have a real TPM on /dev/tpm0, and therefore swtpm allocates /dev/tpm1.
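
If you’re unsure which device swtpm created, listing the TPM character devices before and after starting it makes the new one easy to spot:

ls -l /dev/tpm*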

The emulated TPM device will need to be readable and writable by QEMU, but by default it is accessible only by root, so either we run QEMU as root (not recommended), or we relax the permissions on the device:

# replace '/dev/tpm1' with the device created by swtpm
sudo chmod a+rw /dev/tpm1

Make sure not to accidentally change the permissions of your real TPM device!

Emulate the system with QEMU

Inside the QEMU emulator, we will run the OVMF UEFI firmware. On Ubuntu, the firmware comes in 2 flavors:

  • with Secure Boot enabled (/usr/share/OVMF/OVMF_CODE_4M.ms.fd), and
  • with Secure Boot disabled (/usr/share/OVMF/OVMF_CODE_4M.fd)

(There are actually even more flavors, see this AskUbuntu question for the details.)

In the commands that follow I’m going to use the Secure Boot flavor, but if you need to disable Secure Boot in your guest, just replace .ms.fd with .fd in all the commands below.

To use OVMF, first we need to copy the EFI variables to a file that can be read & written by QEMU:

cp /usr/share/OVMF/OVMF_VARS_4M.ms.fd /tmp/

This file (/tmp/OVMF_VARS_4M.ms.fd) will be the equivalent of the EFI flash storage, and it’s where OVMF will read and store its configuration, which is why we need to make a copy of it (to avoid modifications to the original file).

Now we’re ready to run QEMU:

  • If you copied only the early boot files (choice #1):

    # replace '/dev/tpm1' with the device created by swtpm
    qemu-system-x86_64 \
      -accel kvm \
      -machine q35,smm=on \
      -cpu host \
      -smp cores=4,threads=1 \
      -m 4096 \
      -vga virtio \
      -drive if=pflash,unit=0,format=raw,file=/usr/share/OVMF/OVMF_CODE_4M.ms.fd,readonly=on \
      -drive if=pflash,unit=1,format=raw,file=/tmp/OVMF_VARS_4M.ms.fd \
      -drive if=virtio,format=raw,file=drive.img \
      -tpmdev passthrough,id=tpm0,path=/dev/tpm1,cancel-path=/dev/null \
      -device tpm-tis,tpmdev=tpm0
    
  • If you have a copy-on-write file for the entire system (choice #2):

    # replace '/dev/tpm1' with the device created by swtpm
    sudo -g disk qemu-system-x86_64 \
      -accel kvm \
      -machine q35,smm=on \
      -cpu host \
      -smp cores=4,threads=1 \
      -m 4096 \
      -vga virtio \
      -drive if=pflash,unit=0,format=raw,file=/usr/share/OVMF/OVMF_CODE_4M.ms.fd,readonly=on \
      -drive if=pflash,unit=1,format=raw,file=/tmp/OVMF_VARS_4M.ms.fd \
      -drive if=virtio,format=qcow2,file=drive.qcow2 \
      -tpmdev passthrough,id=tpm0,path=/dev/tpm1,cancel-path=/dev/null \
      -device tpm-tis,tpmdev=tpm0
    

    Note that this last command makes QEMU run as the disk group: on Ubuntu, this group has the permission to read and write all storage devices, so be careful when running this command, or you risk losing your files forever! If you want to add more safety, you may consider using an ACL to give the user running QEMU read-only permission to your backing storage.
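
    For example, here’s a minimal sketch of the ACL approach, assuming your username is alice (both the username and the device are placeholders to replace):

    # give the user read-only access to the backing device
    sudo setfacl -m u:alice:r /dev/nvme0n1
    # inspect the result
    getfacl /dev/nvme0n1

    Since QEMU only needs to read the backing storage (writes go to the copy-on-write file), this lets you drop the sudo -g disk prefix entirely.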

In either case, after launching QEMU, our operating system should boot… while running inside itself!

In some circumstances though it may happen that the wrong operating system is booted, or that you end up at the EFI setup screen. This can happen if your system is not configured to boot from the “first” EFI entry listed in the EFI partition. Because the boot order is not recorded anywhere on the storage device (it’s recorded in the EFI flash memory), of course OVMF won’t know which operating system you intended to boot, and will just attempt to launch the first one it finds. You can use the EFI setup screen provided by OVMF to change the boot order in the way you like. After that, changes will be saved into the /tmp/OVMF_VARS_4M.ms.fd file on the host: you should keep a copy of that file so that, next time you launch QEMU, you’ll boot directly into your operating system.

Reading PCR banks after boot

Once our operating system has launched inside QEMU, and after the boot process is complete, the PCR banks will be filled and recorded by swtpm.

If we choose to copy only the early boot files (choice #1), then of course our operating system won’t be fully booted: it’ll likely hang waiting for the root filesystem to appear, and may eventually drop to the initrd shell. None of that really matters if all we want is to see the PCR values stored by the bootloader.

Before we can extract those PCR values, we first need to stop QEMU (Ctrl-C is fine), and then we can read them with tpm2_pcrread:

# replace '/dev/tpm1' with the device created by swtpm
tpm2_pcrread -T device:/dev/tpm1

Using the method described in this article, PCRs 4, 5, 8, and 9 inside the emulated TPM should match the PCRs in our real TPM. And here comes an interesting application of this method: if we upgrade our bootloader or kernel, and we want to know the future PCR values that our system will have after reboot, we can simply follow this procedure and obtain those PCR values without shutting down our system! This can be especially useful if we use TPM sealing: we can reseal our secrets and make them unsealable at the next reboot without trouble.
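
To compare just the interesting banks side by side, tpm2_pcrread accepts a PCR selection list. A quick sketch, assuming the real TPM is /dev/tpm0:

# PCRs 4, 5, 8 and 9 from the emulated TPM...
tpm2_pcrread -T device:/dev/tpm1 sha256:4,5,8,9
# ...and the same PCRs from the real TPM, for comparison
sudo tpm2_pcrread -T device:/dev/tpm0 sha256:4,5,8,9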

Restarting the virtual machine

If we want to restart the guest inside the virtual machine, and obtain a consistent TPM state every time, we should start from a “clean” state every time, which means:

  1. restart swtpm
  2. recreate the drive.img or drive.qcow2 file
  3. launch QEMU again

If we don’t restart swtpm, the virtual TPM state (and in particular the PCR banks) won’t be cleared, and new PCR measurements will simply be added on top of the existing state. If we don’t recreate the drive file, it’s possible that some modifications to the filesystems will have an impact on the future PCR measurements.

We don’t necessarily need to recreate the /tmp/OVMF_VARS_4M.ms.fd file every time. In fact, if you need to modify any EFI setting to make your system bootable, you might want to preserve it so that you don’t need to change EFI settings at every boot.
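
Putting the three steps together, a reset cycle for choice #1 might look like this rough sketch (same paths and device names as above):

# 1. restart swtpm (stopping it clears the volatile PCR banks)
sudo pkill swtpm
sudo swtpm chardev --tpmstate dir=/tmp/swtpm-state --vtpm-proxy --tpm2 &
# 2. recreate the drive image
sync && sudo -g disk dd if=/dev/nvme0n1 of=drive.img bs=1M count=1599 conv=sparse
# 3. launch QEMU again with the same command as before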

Automating the entire process

I’m (very slowly) working on turning this entire procedure into a script, so that everything can be automated. Once I find some time I’ll finish the script and publish it, so if you liked this article, stay tuned, and let me know if you have any comment/suggestion/improvement/critique!

on November 19, 2023 04:33 PM

November 16, 2023

When users first download Lubuntu, they are presented with two options: Install the latest Long-Term Support release, providing them with a rock-solid and stable base (we assume most users choose this option). Install the latest interim release, providing the latest base with the latest LXQt release. As we have mentioned in previous announcements, Kubuntu and […]
on November 16, 2023 10:10 PM



Ubuntu systems typically have up to 3 kernels installed before they are auto-removed by apt on classic installs. Historically, the installation was optimized for metered download size only. However, kernel size growth and usage no longer warrant such optimizations. During the 23.10 Mantic Minotaur cycle, I led a coordinated effort across multiple teams to implement lots of optimizations that together achieved unprecedented install footprint improvements.

Given a typical install of 3 generic kernel ABIs in the default configuration on a regular-sized VM (2 CPU cores, 8 GB of RAM), the following metrics are achieved in Ubuntu 23.10 versus Ubuntu 22.04 LTS:

  • 2x less disk space used (1,417MB vs 2,940MB, including initrd)

  • 3x less peak RAM usage for the initrd boot (68MB vs 204MB)

  • 0.5x increase in download size (949MB vs 600MB)

  • 2.5x faster initrd generation (4.5s vs 11.3s)

  • approximately the same total time (103s vs 98s, hardware dependent)


For minimal cloud images that install neither linux-firmware nor linux-modules-extra, the numbers are:

  • 1.3x less disk space used (548MB vs 742MB)

  • 2.2x less peak RAM usage for initrd boot (27MB vs 62MB)

  • 0.4x increase in download size (207MB vs 146MB)


Hopefully, the compromise on download size, relative to the disk space and initrd savings, is a win for the majority of platforms and use cases. For users on extremely expensive and metered connections, the best saving is likely to receive air-gapped updates or to skip updates.


This was achieved by precompressing kernel modules & firmware files with the maximum level of Zstd compression at package build time; making the actual .deb files uncompressed; assembling the initrd using split cpio archives - uncompressed for the pre-compressed files, whilst compressing only the userspace portions of the initrd; enabling in-kernel module decompression support with a matching kmod; fixing bugs in all of the above; and landing all of these things in time for the feature freeze - whilst leveraging the experience and some of the design choices we have already been shipping on Ubuntu Core. Some of these changes are backported to Jammy, but only enough to support smooth upgrades to Mantic and later. The complete gains can only be experienced on Mantic and later.
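
If you are curious, the result is easy to inspect on a Mantic system. A quick sketch (exact module paths vary by kernel flavour and version):

# kernel modules now ship pre-compressed with Zstd
ls /lib/modules/$(uname -r)/kernel/fs/btrfs/    # expect btrfs.ko.zst
# in-kernel decompression support shows up in the kernel config
grep -E 'MODULE_(COMPRESS|DECOMPRESS)' /boot/config-$(uname -r)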


The discovered bugs in the kernel module loading code likely affect systems that use the LoadPin LSM with in-kernel module decompression, as used on ChromeOS systems. Hopefully, Kees Cook or other ChromeOS developers pick up the kernel fixes from the stable trees. Or, you know, just use Ubuntu kernels, as they get fixes and features like these first.


The team that designed and delivered these changes is large: Benjamin Drung, Andrea Righi, Juerg Haefliger, Julian Andres Klode, Steve Langasek, Michael Hudson-Doyle, Robert Kratky, Adrien Nader, Tim Gardner, Roxana Nicolescu - and myself, Dimitri John Ledkov, ensuring the most optimal solution is implemented, everything lands on time, and even implementing portions of the final solution.


Hi, it's me. I am a Staff Engineer at Canonical, and we are hiring: https://canonical.com/careers.


Lots of additional technical details and benchmarks on a huge range of diverse hardware and architectures, plus bikeshedding of all the things, can be found in the Kernel section on Ubuntu Discourse. Please post questions and comments there.



on November 16, 2023 10:45 AM

A lot of time has passed since my previous post on my work to make dhcpcd the drop-in replacement for the deprecated ISC dhclient, a.k.a. isc-dhcp-client. Current status:

  • Upstream now regularly produces releases and with a smaller delta than before. This makes it easier to track possible breakage.
  • Debian packaging has essentially remained unchanged. A few Recommends were shuffled, but that's about it.
  • The only remaining bug is fixing the build for Hurd. Patches are welcome. Once that is fixed, bumping dhcpcd-base's priority to important is all that's left.
on November 16, 2023 09:38 AM

November 12, 2023

Ubuntu 23.10 “Mantic Minotaur” Desktop, showing network settings

We released Ubuntu 23.10 ‘Mantic Minotaur’ on 12 October 2023, shipping its proven and trusted network stack based on Netplan. Netplan has been the default tool for configuring Linux networking on Ubuntu since 2016. In the past, it was primarily used to control the Server and Cloud variants of Ubuntu, while on Desktop systems it would hand over control to NetworkManager. In Ubuntu 23.10 this disparity in how the network stack is controlled on different Ubuntu platforms was closed by integrating NetworkManager with the underlying Netplan stack.

Netplan could already be used to describe network connections on Desktop systems managed by NetworkManager. But network connections created or modified through NetworkManager would not be known to Netplan, so it was a one-way street. Activating the bidirectional NetworkManager-Netplan integration allows for any configuration change made through NetworkManager to be propagated back into Netplan. Changes made in Netplan itself will still be visible in NetworkManager, as before. This way, Netplan can be considered the “single source of truth” for network configuration across all variants of Ubuntu, with the network configuration stored in /etc/netplan/, using Netplan’s common and declarative YAML format.

Netplan Desktop integration

On workstations, the most common scenario is for users to configure networking through NetworkManager’s graphical interface, instead of driving it through Netplan’s declarative YAML files. Netplan ships a “libnetplan” library that provides an API to access Netplan’s parser and validation internals, which is now used by NetworkManager to store any network interface configuration changes in Netplan. For instance, network configuration defined through NetworkManager’s graphical UI or D-Bus API will be exported to Netplan’s native YAML format in the common location at /etc/netplan/. This way, the only thing administrators need to care about when managing a fleet of Desktop installations is Netplan. Furthermore, programmatic access to all network configuration is now easily accessible to other system components integrating with Netplan, such as snapd. This solution has already been used in more confined environments, such as Ubuntu Core, and is now enabled by default on Ubuntu 23.10 Desktop.

Migration of existing connection profiles

On installation of the NetworkManager package (network-manager >= 1.44.2-1ubuntu1) in Ubuntu 23.10, all your existing connection profiles from /etc/NetworkManager/system-connections/ will automatically and transparently be migrated to Netplan’s declarative YAML format and stored in its common configuration directory /etc/netplan/.

The same migration will happen in the background whenever you add or modify any connection profile through the NetworkManager user interface, integrated with GNOME Shell. From this point on, Netplan will be aware of your entire network configuration and you can query it using its CLI tools, such as “sudo netplan get” or “sudo netplan status” without interrupting traditional NetworkManager workflows (UI, nmcli, nmtui, D-Bus APIs). You can observe this migration on the apt-get command line, watching out for logs like the following:

Setting up network-manager (1.44.2-1ubuntu1.1) ...
Migrating HomeNet (9d087126-ae71-4992-9e0a-18c5ea92a4ed) to /etc/netplan
Migrating eduroam (37d643bb-d81d-4186-9402-7b47632c59b1) to /etc/netplan
Migrating DebConf (f862be9c-fb06-4c0f-862f-c8e210ca4941) to /etc/netplan
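
Afterwards, the migrated configuration can be inspected through Netplan itself. A quick sketch (the UUID-based filename is illustrative, following the pattern from the log above):

sudo netplan get    # dump the full merged network configuration
sudo cat /etc/netplan/90-NM-9d087126-ae71-4992-9e0a-18c5ea92a4ed.yaml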

In order to prepare for a smooth transition, NetworkManager tests were integrated into Netplan’s continuous integration pipeline at the upstream GitHub repository. Furthermore, we implemented a passthrough method of handling unknown or new settings that cannot yet be fully covered by Netplan, making Netplan future-proof for any upcoming NetworkManager release.

The future of Netplan

Netplan has established itself as the proven network stack across all variants of Ubuntu – Desktop, Server, Cloud, or Embedded. It has been the default stack across many Ubuntu LTS releases, serving millions of users over the years. With the bidirectional integration between NetworkManager and Netplan the final piece of the puzzle is implemented to consider Netplan the “single source of truth” for network configuration on Ubuntu. With Debian choosing Netplan to be the default network stack for their cloud images, it is also gaining traction outside the Ubuntu ecosystem and growing into the wider open source community.

Within the development cycle for Ubuntu 24.04 LTS, we will polish the Netplan codebase to be ready for a 1.0 release, coming with certain guarantees on API and ABI stability, so that other distributions and 3rd party integrations can rely on Netplan’s interfaces. First steps in that direction have already been taken, as the Netplan team reached out to the Debian community at DebConf 2023 in Kochi, India to evaluate possible synergies.

Conclusion

Netplan can be used transparently to control a workstation’s network configuration and plays hand-in-hand with many desktop environments through its tight integration with NetworkManager. It allows for easy network monitoring, using common graphical interfaces and provides a “single source of truth” to network administrators, allowing for configuration of Ubuntu Desktop fleets in a streamlined and declarative way. You can try this new functionality hands-on by following the “Access Desktop NetworkManager settings through Netplan” tutorial.


If you want to learn more, feel free to follow our activities on Netplan.io, GitHub, Launchpad, IRC or our Netplan Developer Diaries blog on discourse.

on November 12, 2023 03:00 PM

November 11, 2023

AppStream 1.0 released!

Matthias Klumpp

Today, 12 years after the meeting where AppStream was first discussed, and 11 years after I released a prototype implementation, I am excited to announce AppStream 1.0! 🎉🎉🎊

Check it out on GitHub, or get the release tarball or read the documentation or release notes! 😁

Some nostalgic memories

I was not in the original AppStream meeting, since in 2011 I was extremely busy with finals preparations and ball organization in high school, but I still vividly remember sitting at school in the students’ lounge during a break and trying to catch the really choppy live stream from the meeting on my borrowed laptop (a futile exercise, I watched parts of the blurry recording later).

I was extremely passionate about getting software deployment to work better on Linux and to improve the overall user experience, and spent many hours on the PackageKit IRC channel discussing things with many amazing people like Richard Hughes, Daniel Nicoletti, Sebastian Heinlein and others.

At the time I was writing a software deployment tool called Listaller – this was before Linux containers were a thing, and building it was very tough due to technical and personal limitations (I had just learned C!). Then in university, when I intended to recreate this tool, but for real and better this time as a new project called Limba, I needed a way to provide metadata for it, and AppStream fit right in! Meanwhile, Richard Hughes was tackling the UI side of things while creating GNOME Software and needed a solution as well. So I implemented a prototype and together we pretty much reshaped the early specification from the original meeting into what would become modern AppStream.

Back then I saw AppStream as a necessary side-project for my actual project, and didn’t even consider myself the maintainer of it for quite a while (I hadn’t been at the meeting, after all). All those years ago I had no idea that ultimately I was developing AppStream not for Limba, but for a new thing that would show up later, with an even more modern design, called Flatpak. I also had no idea how incredibly complex AppStream would become, how many features it would have, how much more maintenance work it would be – and also not how ubiquitous it would become.

The modern Linux desktop uses AppStream everywhere now, it is supported by all major distributions, used by Flatpak for metadata, used for firmware metadata via Richard’s fwupd/LVFS, runs on every Steam Deck, can be found in cars and possibly many places I do not know yet.

What is new in 1.0?

API breaks

The most important thing that’s new with the 1.0 release is a bunch of incompatible changes. For the shared libraries, all deprecated API elements have been removed and a bunch of other changes have been made to improve the overall API and especially make it more binding-friendly. That doesn’t mean the API is completely new and nothing looks like it did before, though: when possible, the previous API design was kept, and some changes that would have been too disruptive were not made. Regardless of that, you will have to port your AppStream-using applications. For some larger ones I already submitted patches to build with both AppStream versions, the 0.16.x stable series as well as 1.0+.

For the XML specification, some older compatibility for XML that had no or very few users has been removed as well. This affects for example release elements that reference downloadable data without an artifact block, which has not been supported for a while. For all of these, I checked to remove only things that had close to no users and that were a significant maintenance burden. So as a rule of thumb: If your XML validated with no warnings with the 0.16.x branch of AppStream, it will still be 100% valid with the 1.0 release.

Another notable change is that the generated output of AppStream 1.0 will always be 1.0 compliant; you cannot make it generate data for versions below that (this greatly reduces the maintenance cost of the project).

Developer element

For a long time, you could set the developer name using the top-level developer_name tag. With AppStream 1.0, this has changed a bit. There is now a developer tag with a name child (which can be translated unless the translate="no" attribute is set on it). This allows for future extensibility, and also allows setting a machine-readable id attribute in the developer element. This permits software centers to group software by developer more easily, without having to use heuristics. If we decide to extend the per-app developer information in future, this is also now possible. Do not worry, though: the developer_name tag is still read, so there is no high pressure to update. The old 0.16.x stable series also has this feature backported, so it can be available everywhere. Check out the developer tag specification for more details.

Scale factor for screenshots

Screenshot images can now have a scale attribute, to indicate an (integer) scaling factor to apply. This feature was a breaking change and therefore we could not have it for the longest time, but it is now available. Please wait a bit for AppStream 1.0 to become deployed more widespread though, as using it with older AppStream versions may lead to issues in some cases. Check out the screenshots tag specification for more details.

Screenshot environments

It is now possible to indicate the environment a screenshot was recorded in (GNOME, GNOME Dark, KDE Plasma, Windows, etc.) via an environment attribute on the respective screenshot tag. This was also a breaking change, so use it carefully for now! If projects want to, they can use this feature to supply dedicated screenshots depending on the environment the application page is displayed in. Check out the screenshots tag specification for more details.

References tag

This is a feature more important for the scientific community and scientific applications. Using the references tag, you can associate the AppStream component with a DOI (Digital Object Identifier) or provide a link to a CFF file with citation information. It also allows linking to other scientific registries. Check out the references tag specification for more details.

Release tags

Releases can have tags now, just like components. This is generally not a feature that I expect to be used much, but in certain instances it can become useful with a cooperating software center, for example to tag certain releases as long-term supported versions.

Multi-platform support

Thanks to the interest and work of many volunteers, AppStream (mostly) runs on FreeBSD now, a NetBSD port exists, support for macOS was written and a Windows port is on its way! Thank you to everyone working on this 🙂

Better compatibility checks

For a long time I thought that the AppStream library should just be a thin layer above the XML, and that software centers should implement a lot of the actual logic. This has not been the case for a while, but there were still a lot of complex AppStream features that were hard for software centers to implement, and where it makes sense to have one implementation that projects can just use.

The validation of component relations is one such thing. This was implemented in 0.16.x as well, but 1.0 vastly improves upon the compatibility checks, so you can now just call as_component_check_relations and retrieve a detailed report on whether the current component will run well on the system. Besides better API for software developers, the appstreamcli utility also has much improved support for relation checks, and I wrote about these changes in a previous post. Check it out!

With these changes, I hope this feature will be used much more, and beyond just drivers and firmware.

So much more!

The changelog for the 1.0 release is huge, and there are many papercuts resolved and changes made that I did not talk about here, like us using gi-docgen (instead of gtkdoc) now for nice API documentation, or the many improvements that went into better binding support, or better search, or just plain bugfixes.

Outlook

I expect the transition to 1.0 to take a bit of time. AppStream has not broken its API for many, many years (since 2016), so a bunch of places need to be touched even if the changes themselves are minor in many cases. In hindsight, I should have also released 1.0 much sooner and it should not have become such a mega-release, but that was mainly due to time constraints.

So, what’s in it for the future? Contrary to what I thought, AppStream does not really seem to ever be “done” and feature complete; there is always something to improve, and people come up with new use cases all the time. So, expect more of the same in future: bugfixes, validator improvements, documentation improvements, better tools and the occasional new feature.

Onwards to 1.0.1! 😁

on November 11, 2023 07:48 PM

November 07, 2023

Last week, I wrote about my somewhat last-minute plans to attend the 2023 Ubuntu Summit in Riga, Latvia. The event is now over, and I’m back home collating my thoughts about the weekend.

The tl;dr: It was a great, well-organised and run event with interesting speakers.

Here’s my “trip report”.

Logistics

The event was held at the Radisson Blu Latvija. Many of the Canonical staff stayed at the Radisson, while most (perhaps all) of the non-Canonical attendees were a short walk away at the Tallink Hotel.

Everything kicked off with a “Welcome” session at 14:00 on Friday. That may seem like a weird time to start an event, but it’s squashed on the weekend between an internal Canonical Product Sprint and an Engineering Sprint.

The conference rooms were spread across a couple of floors, with decent signage, and plenty of displays showing the schedule. It wasn’t hard to plan your day, and make sure you were in the right place for each talk.

The talks were live streamed, and as I understand it, also recorded. So remote participants could watch the sessions, and for anyone who missed them, they should be online soon.

Coffee, cold drinks, snacks, cakes and fruit were refreshed through the day to keep everyone topped up. A buffet lunch was provided on Saturday and Sunday.

A “Gaming” night was organised for the Saturday evening. There was also a party after the event finished, on the Sunday.

A bridged Telegram/Matrix chat was used during the event to enable everyone to co-ordinate meeting up, alert people of important things, or just invite friends for beer. Post-event it was also used for people to post travel updates, and let everyone know when they got home safely.

An email was sent out early on at the start of each day, to give everyone a heads-up on the main things happening that day, and provide information about social events.

There were two styles of lanyard from which to hang your name badge. One was coloured differently to indicate the individual did not wish to be photographed. I saw similar at Akademy back in 2018, and appreciate this option.

Sessions

There was one main room with a large stage used for plenary and keynote style talks, two smaller rooms for talks and two further workshop rooms. It was sometimes a squeeze in the smaller rooms when a talk was popular, but it was rarely ‘standing room only’.

The presentation equipment that was provided worked well, for the most part. A few minor display issues, and microphone glitches occurred, but overall I could hear and see everything I was expected to experience.

There was also a large open area with standing tables, where people could hang out between sessions, and noodle around with things - more on that later. A few sessions which left an impression on me are detailed below, with a conclusion at the end.

Ubuntu Asahi

Tobias Heider (Canonical) was on stage, with a remote Hector Martin (Asahi Linux) via video link. They presented some technical slides about the MacOS boot process, and how Asahi is able to be installed on Apple Silicon devices. I personally found this interesting, understandable, and accessible. Hector speaks fast, but clearly, and covered plenty of ground in the time they had.

Tobias then took over to talk about some of the specifics of the Ubuntu Asahi build, how to install it, and some of the future plans. I was so interested and inspired that I immediately installed Ubuntu Asahi on my M1 Apple MacBook Air. More on that experience in a future blog post.

MoonRay

This was a great talk about the process of open sourcing a component of the video production pipeline. While that sounds potentially dull, it wasn’t, partly thanks to plenty of cute rendered DreamWorks characters in the presentation, along with short video clips. We got a quick primer on rendering scenes, then moved into the production pipeline and finally to MoonRay. Hearing how and why a large movie production house like DreamWorks would open source a core part of the pipeline was fascinating. We even got to see Bilby at the end.

Ubuntu Core Desktop

Oliver Smith and Ken VanDine presented Ubuntu Core Desktop Preview, from a laptop running the Core Desktop. I talked a little about this in Ubuntu Core Snapdeck.

It’s essentially the Ubuntu desktop packaged up as a bunch of snap packages. Very much like Fedora Silverblue, or SteamOS on the SteamDeck, Ubuntu Core Desktop is an “immutable” system.

It was interesting to see the current blockers to release. It’s already quite usable, but they’re not quite ready to share images of Ubuntu Core Desktop. Not that they’re hard to find if you’re keen!

Framework

This was one of my favourite talks. Daniel Schaefer talked about the Framework laptops, and the design decisions made during their development. The talk straddled the intersection of hardware, firmware and software, which tickles me. I was also pleased to see Daniel fiddle with parts of the laptop while giving a talk from it. Demonstrating the replaceable magnetically attached display bezel, and replacing the keyboard while using the laptop, is a great demo and a fun sight.

Security

Mark Esler, from the Ubuntu Security Team gave a great overview of security best practices. They had specific, and in some cases simple, actionable things developers can do to improve their application security. We had a brief discussion afterwards about snap application security, which I’ll cover in a future post.

Discord

Some of the team behind the Ubuntu Discord presented stats about the sizable community that use Discord. They also went through their process for ensuring a friendly environment for support.

Hallway track

At all these kinds of events the so-called ‘Hallway track’ is just as important as the scheduled sessions. There were opportunities to catch-up with old friends, meet new people I’d only seen online before, and play with technology.

Some highlights for me on the hallway track include:

Kind words

Quite a few people approached and introduced themselves to me over the weekend. It was a great opportunity to meet people I’ve not seen before, only met online, or not seen since an Ubuntu Developer Summit long ago.

A few introduced themselves and then thanked me, as I’d inspired them to get involved in Linux or Ubuntu as part of their career. It was very humbling to think those years had a positive impact on people’s lives, so I greatly appreciated their comments.

UBports

Previously known as Ubuntu Touch, the UBports project had a stand to exhibit the status of the project to bring the converged desktop to devices. I have a great fondness for the UBports project, having worked on the core apps for Ubuntu Touch. It always puts a smile on my face to see the Music, Terminal, Clock, Calendar and other apps I worked on, still in use on UBports today.

I dug out my OnePlus 5 when I got home, and might give UBports another play in a spare moment.

Raspberry Pi 5

Dave Jones from Canonical had a Raspberry Pi 5 which he’d hooked up to a TV, keyboard and mouse, and was running Ubuntu Desktop. I’d not seen a Pi running the Ubuntu desktop so fluidly before, so I had a play with it. We installed a bunch of snaps from the store, to put the device through its paces, and see if any had problems on the new Pi. The collective brains of myself, Dave, Ogra and Martin solved a bug or two and sent the results over the network to my laptop to be pushed to Launchpad.

Gaming Night

A large space was set aside for gaming night on the Saturday evening. Most people left the event, found food, then came back to ‘game’. There were board games, cards, computers and consoles. A fair number of people were not actually gaming, but coding and just chatting. It was quite nice to have a bit of space to just chill out and get on with whatever you like.

One part which amused me greatly was Ken VanDine and Dave Jones attempting to get the aforementioned Ubuntu Core Desktop Preview working on the new Raspberry Pi 5. They had the Pi, cables, keyboard and mouse, but no display. There were, however, projectors around the room. Unfortunately the HDMI sockets were nowhere near the actual projection screen. So we witnessed Dave, Ken and others burning calories walking back and forth to see terminal output, then calling out commands across the loud room to the Pi operator.

This went on for some time until I pointed out to Ken that Martin had a portable display in his bag. I probably should have thought about that beforehand. Then someone else saved the day by walking in with a TV they’d acquired from somewhere. I’ve never seen so many nerds sat around a Raspberry Pi, reading logs from a TV screen. It’s perfectly normal at events like this, of course.

After party

Once the event was over, we all decamped to Digital Art House to relax over a beer or five. There were displays and projectors all around the venue, showing Ubuntu wallpapers, and the artworks of Sylvia Ritter.

Conclusion

I think the organising committee nailed it with this event. The number of rooms and tracks was about right. There was a good mix of talks. Some were technical, and Ubuntu related, others were just generally interesting. The infrastructure worked and felt professionally run.

I had an opportunity to meet a ton of people I’ve never met, but have spoken to online for years. I also got to meet and talk with some of the new people at Canonical, of which, there are many.

I’d certainly go again if I had the opportunity. Perhaps I’ll come up with something to talk about, I’ve got a year to prepare!

on November 07, 2023 11:00 AM

November 05, 2023

Ubuntu Summit 2023

Ross Gammon

UbuntuSummit2023

I am currently attending the Ubuntu Summit 2023 in Riga, Latvia. This is the first time I have deliberately attended an Ubuntu event. Back in 2013, I accidentally walked through what I believe was the last Ubuntu Developer Summit, while showing some friends around the Bella Sky hotel in Copenhagen.

This time I was asked by Erich Eickmeyer if I would like to join him as a member of the Ubuntu Studio team. It has been fantastic to meet him and Eylul Dogruel from the Ubuntu Studio team. It was also fantastic to meet or see in person other members of the Linux Audio community, and other Ubuntu and Canonical people that have helped me with my Ubuntu contributions along the way.

Here are the talks I attended and meetings I had related to Ubuntu Studio:

  • 50 things you did not know you could do with Ardour, Dr Robin Gareus (Ardour, Linux Audio)
  • Making a standalone effects pedal system based on embedded Linux, Filipe Coelho
  • Live Mixing with PipeWire and Ardour/Harrison Mixbus, Erich Eickmeyer (Ubuntu / Ubuntu Studio)
  • Art and ownership – the confusing problem of owning a visual idea, Eylul Dogruel (Ubuntu Studio)
  • Ubuntu Flavour Sync meeting, Aaron Prisk (Canonical), Ana Sereijo (Canonical), Daniel Bungert (Canonical), Mauro Gaspari (Canonical), Michael Hudson-Doyle (Canonical), Oliver Smith (Canonical), Tim Holmes-Mitra (Canonical)

I believe the talks will be uploaded to YouTube at some point, so look out for them!
on November 05, 2023 04:01 PM

November 03, 2023

At the Ubuntu Summit in Latvia, Canonical have just announced their plans for the Ubuntu Core Desktop. I recently played with a preview of it, for fun. Here’s a nearby computer running it right now.

Ubuntu Core Desktop Development Preview on a SteamDeck

Ubuntu Core is “a secure, application-centric IoT OS for embedded devices”. It’s been around a while now, powering IoT devices, kiosks, routers, set-top boxes and other appliances.

Ubuntu Core Desktop is an immutable, secure and modular desktop operating system. It’s (apparently) coming to a desktop near you next year.

In case you weren’t aware, the SteamDeck is a portable desktop PC running a Linux distribution from Valve called “SteamOS”.

As a tinkerer, I thought “I wonder what Ubuntu Core on the SteamDeck looks like”. So I went grubbing around in GitHub projects to find something to play with.

I’m not about to fully replace SteamOS on my SteamDeck, of course, at least, not yet. This was just a bit of fun, to see if it worked. I’m told by the team that I’m likely the only person who has tried this so far.

Nobody at Canonical asked me to do this, and I didn’t get special access to the image. I just stumbled around until I found it, and started playing. You know, for fun.

Also, obviously I don’t speak for Canonical, these are my own thoughts. This also isn’t a how-to guide, or a recommendation that you should use this. It isn’t ready for prime time yet.

Snaps all the way down

Let’s get this out of the way up front. Ubuntu Core images are all about snaps. The kernel, applications, and even the desktop itself is a snap. Everything is a snap. Snap, snappity, snap snap! 🐊

So it has the features that come with being snap-based. Applications can be automatically updated, reverted, multiple parallel versions installed. Snaps are strictly confined using container primitives, seccomp and AppArmor.

This is not too dissimilar to the way many SteamDeck users add applications to the immutable SteamOS install. On SteamOS they use Flatpak, whereas on Ubuntu Core, Snap is used.

They achieve much the same goal though. A secure, easily updated and managed desktop OS.

Not ready yet

The image is currently described as “Ubuntu Core Desktop Development Preview”.

Indeed the wallpaper makes this very clear. Here be dragons. 🐉

Ubuntu Core Desktop Development Preview wallpaper

This is not ready for daily production use as a primary OS, but I’m sure some nerds like me will be running it soon enough. It’s fun to play with stuff like this, and get a glimpse of what the future of Ubuntu desktop might be like.

I was pleasantly surprised that the developer preview exceeded my expectations. Here’s what I discovered.

Installation

I didn’t want to destroy the SteamOS install on my SteamDeck - I quite like playing games on the device. So I put the Ubuntu Core image on a USB stick, and ran it from that. The current image doesn’t have an ‘installer’ as such.

On first boot, you’re greeted with an Ubuntu Core logo while the various snaps are set up and configured. Once that completes, a first-run wizard pops up to walk through the initial setup.

Initial setup

This is the usual configuration steps to setup keyboard, locale, first user and so on.

Pre-installed applications

Once installed, everything was pretty familiar.

There’s a browser - Firefox, and a small set of default GNOME applications such as Eye of GNOME, Evince, GNOME Calculator, Characters, Clocks, Logs, Weather, Font Viewer and Text Editor. There’s also a graphical Ubuntu App Centre (more on that in a moment).

There’s also three terminal applications.

  • GNOME Terminal - which is a little bit useless because it’s strictly confined.

  • Console - also GNOME Terminal, but is unconfined, so can be used for system administration tasks like installing software.

  • Workshops - which provides a Toolbox / Distrobox like experience for launching LXD containers running Ubuntu or another Linux distribution. The neat part about this is there’s full GPU passthrough to the containers.

So on a suitably equipped desktop with an nVidia GPU, it’s possible to run CUDA workloads inside a container on top of Ubuntu Core.

Automatic updates

When I initially played with this a week or two back, I noticed that the core image shipped with a build of GNOME 42.

GNOME 42

One major feature of snaps is their ability to do automatic updates in the background. At some point between October 19th and today, an update brought me GNOME 45!

GNOME 45

I doubt that a final product will jump users unceremoniously from one major desktop release to another, but this is a preview remember, so interesting, exciting and frightening things happen.

Installing apps

The “traditional” (read: deb-based) Ubuntu Desktop recently shipped with a new software store front. This application, built using Flutter, makes it easy to find and install snaps on the desktop.

I tested this process by installing Steam, given this is a SteamDeck!

Installing Steam

This process was uneventful and smooth. Installing additional apps on the Ubuntu core desktop preview works as expected. However, so-called “classic” (unconfined) snaps are not yet installable. So applications like VSCode, Sublime Text and Blender can’t currently be easily installed.

Kernel switcheroo

Did I mention everything is a snap? This includes the Linux kernel. That means it’s possible to switch to a completely different kernel, trivially, with one snap refresh command.

Switching kernel

It’s just as simple to snap revert back to the previous kernel, or to try kernels specifically optimised for particular hardware or use cases, such as gaming, or resource-constrained computers.
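
For the curious, the commands look something like this rough sketch (the channel name is illustrative, and the kernel snap on Ubuntu Core images is typically called pc-kernel):

# switch the kernel snap to a different channel
sudo snap refresh pc-kernel --channel=22/stable
# roll back to the previously installed kernel revision
sudo snap revert pc-kernel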

Steam snap

The snap of Steam has been around for a while now, to install on the traditional Linux desktop. As a snap, it’s installable on this core desktop preview too.

The Steam snap also bundles some additional tools you might find on the SteamOS shipped on the SteamDeck, like MangoHUD.

Launching Steam on Ubuntu Core on the SteamDeck works just like it does on a traditional desktop. The SteamDeck is a desktop PC at its heart, after all.

Here’s a few screenshots, but this isn’t super remarkable, but neat nonetheless. The controller works, and the games I tested run fine. I didn’t install anything huge like GTA5, because this was all running off a USB stick. Ain’t nobody got time for that.

Steam

I didn’t try using the new Steam UI as seen on the SteamOS official builds. But I imagine it’s possible to get that working.

Steam

Audio doesn’t work in the Ubuntu Core image on the SteamDeck for me, so the whole game playing experience is a little impacted by that.

Steam

Steam

As you can see, this doesn’t really look any different to running a traditional desktop Linux distribution.

Steam

Steam

Unworking things

Not everything is smooth - this is a developer preview remember! I have fed back these things to the team - over beer, last night. I’m happy to help them debug these issues.

On my SteamDeck, I had no audio, at all. I suspect this is likely due to something missing in the Ubuntu kernel. As shown above, I did try a different, newer kernel, to no avail.

Bluetooth also didn’t work. In GNOME Settings, pressing the bluetooth enable button just toggled it back off again. I didn’t investigate this deeply, but will certainly file a bug and provide logs to the team.

Running snap refresh in the console doesn’t finish when there’s an update to the desktop itself. I suspect this is a byproduct of Ubuntu Core usually being an unattended IoT device, which would normally reboot automatically when these packages are updated. You clearly don’t want a desktop to do random reboots after updates, so that behaviour seems to be suppressed.

I’ve not commented at all on performance, because it would be a little unfair, given this is a preview. That’s not to say it’s slow, but I am running it on a USB stick, not the internal NVMe drive. It’s certainly more than usable, but I haven’t run any performance benchmarks yet.

The future

While the SteamDeck is a desktop “PC”, it’s a little quirky. There’s no keyboard, it has only one USB port and a weird audio chipset, and the display initially boots rotated by 90 degrees. It’s not really the target for this image.

I would expect this Ubuntu Core Developer Preview to be more usable on a traditional laptop or desktop computer. I haven’t tried that, but I know others have. Over time, more people will need to play with this platform, to find the sharp edges, and resolve the critical bugs before this ships for general use.

I can envisage a future where laptops from well-known vendors ship with Ubuntu Core Desktop by default. These might target developers initially, but I suspect eventually ’normie’ users will use Ubuntu Core Desktop.

It’s pretty far along already though. For some desktop use cases this is perfectly usable today, just probably not on your primary or only computer. In five months, when the next Ubuntu release comes out, I think it could be a very compelling daily driver.

Worth keeping an eye on this!

on November 03, 2023 11:00 AM

November 02, 2023

From journal to .json

In order to convert your journal log to JSON so it is easily parseable, jq offers an option (-s / --slurp) that runs the filter only once over the entire input:

    Instead of running the filter for each JSON object in the input, read the entire input stream into a large array and
    run the filter just once.

This allows you to save the file directly, ready to be processed by your favorite tools, here’s what I used:

journalctl -u postfix.service --since yesterday -g "denied" --output json | jq -s "." > data/log.json
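
Each element of the resulting array is one journal entry, and the Postfix log text lives in its MESSAGE field. For example, to peek at the first entry:

jq -r '.[0].MESSAGE' data/log.json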

Enter Perl

Now, because I’ve been using Perl for ${deity} knows how long (I wrote my first Perl script at the end of the 90’s), it is naturally my language of choice for quick things where my knowledge of bash isn’t going to cut it.

First I want to load my file. I’m going to rely on Mojo for this, specifically Mojo::Collection and Mojo::JSON, as I’m familiar with both; also, if I want to dig a bit into what’s inside my collections, I can always do:

use Mojo::Util qw(dumper);

say dumper $collection->to_array;

But I digress, back to business

The real stuff

This piece of code filters, for me, what it reads from a file (I’m doing $_ = path(shift); for convenience, with path imported from Mojo::File):

use Mojo::File qw(path);
use Mojo::JSON qw(decode_json);
use Mojo::Collection;

my $file = Mojo::Collection->new(decode_json($_->slurp))->flatten;

# Filter using Mojo::Collection::grep to get a new collection with the data I'm interested in
my $filtered = $file->grep(sub {
        $_->{MESSAGE} =~ /denied/
    });

Now that I have the elements in a single array (of course, if I were looking at a file over a gigabyte, I’d likely put this inside some sort of database; PostgreSQL, for instance, has excellent JSON support), it’s time to do something with it:

# Get anything that looks like a hostname, and capture the IP address.
# Example: NOQUEUE: reject: RCPT from ns2.pads.ufrj.br[146.164.48.5]: 554 5.7.1 <relaytest@antispam-ufrj.pads.ufrj.br>:
# I want to have the IP in a named group so I can later reference it with `$+{'IP'}`
my $regexp = qr{(.*[a-zA-Z0-9-._]+)\[(?<IP>.*)\]>?:.*};

Ideally (and for the future) I might want to filter in a different way, and capture different things, but you get the idea. Today, however, we only want to know which IP addresses were rejected while testing the changes to our Postfix configuration:

$filtered->each(sub{
        if ($_->{'MESSAGE'} =~ /$regexp/ ){
            say "bash ./unban ".$+{'IP'};
        } else {
            warn "CANNOT GET IP: ".$_->{"MESSAGE"};
        }
    });

I have another script that does the unban, but for now I’m ok with copy&pasting :)

The full script is at foursixnine/stunning-octo-chainsaw; pull requests are welcome. Maybe in the future I’ll move this to its own thing, but for now, that’s all folks.
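
For reference, an end-to-end run might look like the sketch below (the script name filter-denied.pl is hypothetical here, and piping to bash executes the generated unban commands, so review the output first):

journalctl -u postfix.service --since yesterday -g "denied" --output json | jq -s "." > data/log.json
perl filter-denied.pl data/log.json | bash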


on November 02, 2023 12:00 AM

October 24, 2023

In consideration of some ongoing issues, we thought we’d give some insight into a few things going on right now.

Upgrades to 23.10

Unfortunately, as of this writing, upgrades to 23.10 have not yet been enabled due to a blocking bug in the release upgrader. The fix is in place, but must be manually verified by the team in charge of the release upgrader only after the new development cycle opens up for 24.04. For more details, please see the Ubuntu Discourse.

Closure of Matrix Rooms

Due to the disabling of Libera’s Matrix-to-IRC bridge, we had to make the hard decision to close the Matrix rooms, since we want to keep our support and communication rooms unified. While we would like to return to Matrix someday, this is on hold for now. There are plans in the larger Ubuntu community, in cooperation with Canonical, to unify all communication platforms between the community and Canonical, so stay tuned for that.

Closure of the Ubuntu Studio Café (offtopic chat)

What started out as the #ubuntustudio-offtopic channel on IRC, the Ubuntu Studio Café was intended to be a place where the community could simply socialize and chat about whatever they wanted, so long as the IRC Guidelines and the Ubuntu Code of Conduct were still followed. However, we started to realize that people became confused between this channel and the support channel, #ubuntustudio, and would often use them interchangeably. While support wasn’t allowed in the offtopic channel, people were asking for support there, and often going offtopic (chatting about things other than support) in the support channel.

Additionally, neither channel sees much traffic, with the support channel understandably seeing most of the traffic.

Therefore, after a discussion on the Ubuntu Studio Users Mailing List, it has been decided to close #ubuntustudio-offtopic and combine much of its function with the main support channel, making the support channel an on-topic support and discussion channel, as long as the discussion is related to Ubuntu Studio and creativity, meaning using its tools and helping each other use the included tools. For offtopic non-Ubuntu Studio discussion, the #ubuntu-offtopic channel exists and anyone is welcome.

This closure will occur this Friday, October 27th, 2023.

More on Backports

As stated in the Ubuntu 23.10 Release Announcement, the Ubuntu Studio Backports PPA is in the process of being sunset in favor of using the official Ubuntu Backports repository. However, the Backports repository only works for LTS releases and for good reason. There are a few requirements for backporting:

  • It must be an application which already exists in the Ubuntu repositories
  • It must be an application which would not otherwise qualify for a simple bugfix (that would instead make it a Stable Release Update). In other words, it must have new features.
  • It must not rely on new libraries or new versions of libraries.
  • It must exist within a later supported release or the development release of Ubuntu.

If you have a suggestion for an application to backport that meets those requirements, feel free to join and email the Ubuntu Studio Users Mailing List with your suggestion, with the tag “[BPO]” at the beginning of the subject line. Backports to 22.04 LTS will close with the release of 24.04 LTS, at which time backports to 24.04 LTS will open. Additionally, suggestions must pertain to Ubuntu Studio, and should preferably be applications included with Ubuntu Studio. Suggestions can be rejected at the Project Leader’s discretion.

We are also considering sunsetting the Ardour Backports PPA in favor of only backporting Ardour’s point releases. For major upgrades, we recommend subscribing to Ardour’s official releases at ardour.org for as little as $1 USD per month.


on October 24, 2023 08:44 PM

October 18, 2023

We have had many requests to make Plasma 5.27 available in our backports PPA for Jammy Jellyfish 22.04. However, for technical reasons this would have broken upgrades to Kinetic 22.10 while that upgrade path existed. Now that Kinetic is end of life, it is possible to allow opt-in backports of Plasma 5.27 for 22.04.

As with the previous backport of Plasma 5.25, 5.27 is provided in the backports-extra PPA.

This PPA is intended to be used in combination with our standard backports PPA, but should also work standalone.
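
For example, to enable the standard backports PPA and this one together, something like the following should work (a sketch; the standalone one-liner is further below):

sudo add-apt-repository ppa:kubuntu-ppa/backports
sudo add-apt-repository ppa:kubuntu-ppa/backports-extra
sudo apt full-upgrade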

As usual with our PPAs, there is the caveat that the PPA may receive additional updates and new releases of KDE Plasma, Gear (Apps) and Frameworks, plus other apps and required libraries. Users should always review proposed updates to decide whether they wish to receive them.

While we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a more tested Plasma release on the 22.04 base may find it advisable to stay with Plasma 5.24 as included in the original 22.04 (Jammy) release.

To add the PPA and upgrade, do:

sudo add-apt-repository ppa:kubuntu-ppa/backports-extra && sudo apt full-upgrade -y

We hope keen adopters enjoy using Plasma 5.27!

on October 18, 2023 04:45 PM

October 17, 2023


The Kubuntu Team is happy to announce that Kubuntu 23.10 has been released, featuring the beautiful KDE Plasma 5.27: simple by default, powerful when needed.

Codenamed “Mantic Minotaur”, Kubuntu 23.10 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

Under the hood, there have been updates to many core packages, including a new 6.5-based kernel, KDE Frameworks 5.110, KDE Plasma 5.27 and KDE Gear 23.08.

KDE Plasma desktop 5.27.8 on Kubuntu 23.10

Kubuntu has seen many updates for other applications, both in our default install, and installable from the Ubuntu archive.

Haruna, Krita, KDevelop, Yakuake, and many, many more applications have been updated.

Applications for core day-to-day usage, such as Firefox and LibreOffice, are included and updated.

For a list of other application updates, and known bugs be sure to read our release notes.

Download Kubuntu 23.10, or learn how to upgrade from 23.04.

Note: For upgrades from 23.04, there may be a delay of a few hours to days between the official release announcement and the Ubuntu Release Team enabling upgrades.

on October 17, 2023 12:50 AM

October 12, 2023

The Xubuntu team is happy to announce the immediate release of Xubuntu 23.10.

Xubuntu 23.10, codenamed Mantic Minotaur, is a regular release and will be supported for 9 months, until July 2024.

Xubuntu 23.10, featuring the latest updates from Xfce 4.18 and GNOME 45.

Xubuntu 23.10 features the latest updates from Xfce 4.18, GNOME 45, and MATE 1.26. With a focus on stability, memory management, and hardware support, Xubuntu 23.10 should perform well on your device. Enjoy frictionless Bluetooth headphone connections and out-of-the-box touchpad support. Read Sean’s What’s New in Xubuntu 23.10 post for an in-depth review of the latest updates.

The final release images for Xubuntu Desktop and Xubuntu Minimal are available as torrents and direct downloads from xubuntu.org/download/.

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

We’d like to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues

Highlights

  • Improved hardware support for Bluetooth headphones and touchpads
  • Color emoji is now included and supported in Firefox, Thunderbird, and newer Gtk-based apps
  • Significantly improved screensaver integration and stability

Known Issues

  • The shutdown prompt may not be displayed at the end of the installation. Instead you might just see a Xubuntu logo, a black screen with an underscore in the upper left hand corner, or just a black screen. Press Enter and the system will reboot into the installed environment. (LP: #1944519)
  • Xorg crashes and the user is logged out after logging in or switching users on some virtual machines, including GNOME Boxes. (LP: #1861609)
  • You may experience choppy audio or poor system performance while playing audio, but only in some virtual machines (observed in VMware and VirtualBox)

For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

The main Ubuntu Release Notes cover many of the other packages we carry and more generic issues.

Support

For support with the release, navigate to Help & Support for a complete list of methods to get help.

on October 12, 2023 05:18 PM

The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 23.10, code-named “Mantic Minotaur”. This marks Ubuntu Studio’s 33rd release. This release is a regular release and as such, it is supported for 9 months (until July 2024).

Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a more-complete list of changes and known issues.

You can download Ubuntu Studio 23.10 from our download page.

Upgrading

Instructions for upgrading are included in the release notes.

New This Release

Ubiquity System Installer Returns

In the days when we used Xfce as our default desktop environment, our default system installer was Ubiquity, the one developed and used by most Ubuntu flavors. However, when we switched to KDE’s Plasma Desktop as our default desktop environment, we wished to use the Qt version, but found it was hardcoded for Kubuntu’s branding. With that, we partnered with Lubuntu, which also uses a Qt-based desktop called LXQt, to use the Qt-based Calamares installer.

Unfortunately, we found that, despite the customization and user experience it provided, it lacked a few features Ubiquity provides:

  • A language chooser at boot, which changes the language for the live session and installer alike
  • An OEM mode for computer manufacturers
  • The ability to install without running the live session

This release, we worked hard to make sure the first time you boot Ubuntu Studio 23.10 is a pleasing one and that everything is themed correctly. This included pushing changes to Ubiquity itself so that the GTK version of Ubiquity would work with our default theming, as well as fixing a bug that affected other Ubuntu flavors with dark themes.

For the next release, we hope to switch to the same installer Ubuntu Desktop (GNOME) and Ubuntu Budgie are using, which is a graphical frontend for the text-based Subiquity installer used in Ubuntu Server. We believe that unifying under a single codebase makes maintenance easier and creates a higher-quality experience for everyone.

PipeWire Continues to Get Stronger


PipeWire has gained numerous improvements, including fixes for prosumer and professional audio. The JACK compatibility now performs in real-time, and some FireWire features have now been implemented.

We also include a new utility, the Ubuntu Studio Audio Configuration utility, found in the Ubuntu Studio Information menu. This can be used to configure the default PipeWire quantum value, enable or disable the PipeWire-JACK compatibility layer (handy for using JACK by itself), or switch to the classic PulseAudio-JACK setup that was the default prior to 23.04. Click here for more information about that utility.

In the Repositories: QPrompt

QPrompt is a Qt-based teleprompter application now available in the repositories. While a snap version does exist and is installable from Discover, the version in the repositories is more up-to-date. Since Discover does not see the version in the repositories, it needs to be installed from the command line by issuing the following command:

sudo apt install qprompt

Backports PPA is Now Deprecated

As stated in the release announcement of Ubuntu Studio 23.04, that was the last release for which packages would be backported to the Ubuntu Studio Backports PPA. As of this release, we will no longer backport packages from the development release to regular releases, and the Backports PPA items have accordingly been removed from Ubuntu Studio Installer. This simply means a longer wait for new software; for regular releases, that interval will be six months.

In the future, we plan to utilize the official Ubuntu Backports Repository and backport newer package versions there, except for Ardour: whole new versions of Ardour will still be backported to our Ardour Backports PPA. The Ubuntu Backports Repository only supports LTS releases. Therefore, we plan to backport only to 22.04 LTS and, in the future, only to 24.04 LTS using the backports repository once it is released.

Furthermore, as of today, backports to the Backports PPA for 23.04 have stopped, and software for 23.04 in that PPA will be deleted when 23.04 goes End-Of-Life (EOL) in January 2024. If you would like newer software and are running 23.04, we encourage you to upgrade when you receive the notification.

Plasma Desktop Backports

Since we share the desktop environment with Kubuntu, simply adding the Kubuntu Backports PPA will keep the desktop environment and its components up to date with the latest versions:

  • sudo add-apt-repository ppa:kubuntu-ppa/backports
  • sudo apt upgrade

More Updates

There are many more updates not covered here but are mentioned in the Release Notes. We highly recommend reading those release notes so you know what has been updated and know any known issues that you may encounter.

Get Involved!

A wonderful way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!

Our project leader, Erich Eickmeyer, is now working on Ubuntu Studio part-time, and is hoping that the users of Ubuntu Studio can give enough to generate a monthly part-time income. Your donations are appreciated! If other distributions can do it, surely we can! See the sidebar for ways to give!

Special Thanks

Huge special thanks for this release go to:

  • Eylul Dogruel: Artwork, Graphics Design
  • Ross Gammon: Upstream Debian Developer, Testing, Email Support
  • Sebastien Ramacher: Upstream Debian Developer
  • Dennis Braun: Upstream Debian Maintainer
  • Rik Mills: Kubuntu Council Member, help with Plasma desktop
  • Len Ovens: Studio Controls
  • Mauro Gaspari: Tutorials, Promotion, and Documentation, Testing
  • Krytarik Raido: IRC Moderator, Mailing List Moderator
  • Erich Eickmeyer: Project Leader, Packaging, Development, Direction, Treasurer

Frequently Asked Questions

Q: Does Ubuntu Studio contain snaps?
A: Yes. Mozilla’s distribution agreement with Canonical changed about a year ago, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.

Additionally, FreeShow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be packaged as a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system that is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded to the snap store. Therefore, for FreeShow to be included in Ubuntu Studio, it had to be packaged as a snap.

Q: Will you ever make an ISO image with {my favorite desktop environment}?
A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio. Please note that this process does not convert that flavor to Ubuntu Studio but adds its tools, features, and benefits to the existing flavor installation.

Q: What if I don’t want all these packages installed on my machine?
A: Simply use the Ubuntu Studio Installer to remove the features of Ubuntu Studio you don’t want or need by unchecking the boxes!

on October 12, 2023 04:57 PM

We are pleased to announce the release of the next version of our distro, the 23.10 release. This is a standard release, supported for 9 months, packed full of all sorts of new capabilities. If you want a well-tested release with longer-term support, our 22.04 LTS version is supported for 3 years. The new release has many core updates as well as v10.8 of Budgie itself. We also inherit...

Source

on October 12, 2023 02:45 PM

October 10, 2023

APT currently knows about three types of upgrades:

  • upgrade without new packages (apt-get upgrade)
  • upgrade with new packages (apt upgrade)
  • upgrade with new packages and deletions (apt{,-get} {dist,full}-upgrade)

All of these upgrade types are necessary to deal with upgrades within a distribution release. Yes, sometimes even removals may be needed because bug fixes require adding a Conflicts somewhere.
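
As an illustrative example (made-up package names, not a real control file), a fixed library might force out a broken add-on with a versioned Conflicts, and installing that fix then requires a removal that only the dist/full upgrade types may perform:

Package: libfoo1
Conflicts: foo-plugin-bar (<< 2.0-1)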

In Ubuntu we have a fourth type of upgrade, handled by a separate tool: release upgrades. ubuntu-release-upgrader changes your sources.list, and applies various quirks to the upgrade.

In this post, I want to look not at the quirk aspects but discuss how dependency solving should differ between intra-release and inter-release upgrades.

Previous solver projects (such as Mancoosi) operated under the assumption that minimizing the number of changes performed should ultimately be the main goal of a solver. This makes sense, as every change carries risk. However, it ignores a different risk, which especially applies when upgrading from one distribution release to a newer one: increasing divergence from the norm.

Consider a person who installs foo in Debian 12. foo depends on a | b, so a will be automatically installed to satisfy the dependency. A release later, a has some known issues and b is preferred; the dependency now reads b | a.

A classic solver would continue to keep a installed because it was installed before, leading upgraded installs to have foo, a installed whereas new systems have foo, b installed. As systems get upgraded over and over, they continue to diverge further and further from new installs to the point that it adds substantial support effort.

My proposal for the new APT solver is that when we perform release upgrades, we forget which packages were previously automatically installed. We effectively perform a normalization: all systems with the same set of manually installed packages will end up with the same set of automatically installed packages. Consider the solving starting with an empty set and then installing the latest version of each previously manually installed package: it will now see that foo depends on b | a and install b (and a will be removed later on, as it’s not part of the solution).
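
Here is a toy sketch of that normalization in Python. It is purely illustrative: APT’s real solver is vastly more involved, and the depends table and normalize helper are made up for this example.

depends = {"foo": [["b", "a"]]}  # foo Depends: b | a, in the new preference order

def normalize(manually_installed):
    """The install set a fresh system with these manual packages would get."""
    install = set(manually_installed)
    for pkg in manually_installed:
        for alternatives in depends.get(pkg, []):
            # keep an alternative that is already present, else pick the
            # preferred one; real solving recurses into dependencies,
            # this toy stays one level deep
            if not any(alt in install for alt in alternatives):
                install.add(alternatives[0])
    return install

# An upgraded system that previously ended up with {foo, a} now converges
# on the same set as a fresh install:
print(sorted(normalize({"foo"})))  # ['b', 'foo']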

Another case of divergence is Suggests handling. Consider that foo also Suggests s. You now install another package bar that depends on s, hence s gets installed. Upon removing bar, s is not removed automatically, because foo still suggests it (and you may have grown used to foo’s integration of s). This is because apt considers Suggests to be important: they won’t be automatically installed, but they also won’t be automatically removed.

In Ubuntu, we unset that policy on release upgrades to normalize the systems. The reasoning for that is simple: While you may have grown to use s as part of foo during the release, an upgrade to the next release already is big enough that removing s is going to have less of an impact - breakage of workflows is expected between release upgrades.
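
(For the curious: the policy knob behind this is APT’s APT::AutoRemove::SuggestsImportant option. On a normal system you can see its effect with a one-off invocation like the one below, which lets autoremove drop packages that are only being kept around by a Suggests. This is illustrative, not literally what ubuntu-release-upgrader runs.)

sudo apt -o APT::AutoRemove::SuggestsImportant=false autoremove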

I believe that apt release-upgrade will benefit from both of these design choices, and in the end it boils down to a simple mantra:

  • On upgrades within a release, minimize changes.
  • On upgrades between releases, minimize divergence from fresh installs.
on October 10, 2023 05:22 PM
What's New in Xubuntu 23.10

Xubuntu 23.10, codenamed "Mantic Minotaur", is due to climb out of the development labyrinth on Thursday, October 12, 2023. It features the latest apps from Xfce 4.18, GNOME 45, and MATE 1.26. There aren't many exciting new features this time around. Instead, the overall theme of this release is stability, better memory management, and improved support for UI scaling.

In case you're a Xubuntu regular or somebody with a growing interest, I've documented the purpose and highlights for each updated graphical app below... except Firefox, Thunderbird, and LibreOffice (those apps deserve their own separate changelogs). Enjoy!

Xubuntu Updates

The Xubuntu 23.10 desktop, featuring the latest wallpaper by Xubuntu's own Pasi Lallinaho

Known issues fixed since 23.04 "Lunar Lobster"

The following issues were reported in a previous release and are no longer reproducible in 23.10. Hooray!

  • OEM installation uses the wrong slideshow (LP: #1842047)
  • Screensaver crashes shortly after unlocking (LP: #2012795)
  • Password required twice when switching users (LP: #1874178)

Improved hardware support

  • Bluetooth headphones are now better supported under PipeWire. We caught up with the other flavors and added the missing key package: libspa-0.2-bluetooth
  • The Apple Magic Trackpad 2 and other modern touch input devices are now supported. We removed the conflicting and unsupported xserver-xorg-input-synaptics package to allow libinput to take control.

Appearance updates

Color emoji are now supported in Xubuntu 23.10
  • elementary-xfce 0.18 features numerous refreshed icons in addition to a handful of removed, deprecated icons.
  • Greybird 3.23.3 is a minor update that delivers improved support for Gtk 3 and 4 applications.
  • Color emoji are now supported and used in Firefox, Thunderbird, and Gtk 3/4 applications. To enter emoji on Gtk applications (such as Mousepad), use the Ctrl + . keyboard shortcut to show the emoji picker. Some text areas will also allow you to bring up the emoji picker from the right-click menu.
  • When changing your Gtk (interface) theme, a matching Xfwm (window manager) theme is now automatically selected.
  • Past Xubuntu wallpapers can now be easily installed from the repositories! Additionally, wallpapers from before the 22.10 release have been removed from the default installation.

GNOME Apps

Disk Usage Analyzer (44.0 to 45.0)

Baobab 45.0 features a visual refresh to match the other GNOME 45 apps

Disk Usage Analyzer (baobab) provides a graphical representation of disk usage for local and remote volumes. The 45.0 release features GNOME 45's latest libadwaita widgets and design conventions.

Disks (44.0 to 45.0)

Disks 45.0 received a minimal bug-fixing update

Disks (gnome-disk-utility) is an easy-to-use disk management utility that can inspect, format, partition, image, configure, and benchmark disks. Version 45.0 received a minimal update, silencing some warnings thrown in the benchmark dialog.

Fonts (44.0 to 45.0)

Fonts 45.0 features the latest GNOME 45 design trends

Fonts (gnome-font-viewer) is a font management application that allows you to view installed fonts and install new ones for yourself or for all users on your computer. Version 45.0 features a graphical refresh to the new GNOME 45 styles.

Software (44.0 to 45.0)

Software 45.0 is mostly a bugfix release with some usability enhancements

Software (gnome-software) allows you to find and install apps. It includes plugins for installing Debian, Snap, and Flatpak packages (Flatpak not included in 23.10). The 45.0 release benefits from a number of bug fixes, performance improvements, and usability improvements. Flatpak users are now offered the option to clear app storage when removing an app.

Rhythmbox (3.4.6 to 3.4.7)

Rhythmbox 3.4.7 includes some bug fixes while also removing party mode

Rhythmbox is a music player, online radio, and music management application. The 3.4.7 update drops party mode and includes a handful of improvements. User-facing improvements include:

  • Imported playlists will now retain the playlist file name
  • Subscribing to a podcast will no longer cause the app to crash
  • WMA-format audio files will now play automatically when clicked

Xfce Apps

Dictionary (0.8.4 to 0.8.5)

Dictionary 0.8.5 fixes some bugs while also applying some light UI enhancements

Dictionary (xfce4-dict) allows you to search dictionary services including dictionary servers like dict.org, web servers in a browser, or local spell check programs. Version 0.8.5 includes some minor updates, including a switch to symbolic icons, properly escaping markup in server information, and reducing unused code.

Mousepad (0.5.10 to 0.6.1)

Mousepad 0.6.1 adds a new search setting and fixes a variety of bugs

Mousepad is an easy-to-use and fast text editor. The 0.6.1 release includes some useful updates. A new "match whole word" toggle has been added to the search toolbar. File modification state is now tracked more reliably. Multi-window sessions are now properly restored at startup. Improvements to the menu bar and search labels round out this release.

Notifications (0.7.3 to 0.8.2)

Notifications 0.8.2 incorporates numerous bug fixes and improves logging support

The Xfce Notify Daemon (xfce4-notifyd) enables sending application and system notifications in Xfce. Version 0.8.2 received a massive number of bug fixes in addition to some new features:

  • "Mark All Read" button added to the settings and panel plugin
  • Individual log entries can now be deleted or marked read
  • Option to show only unread notifications in the plugin menu
  • Option to ignore app-specified notification timeouts

Power Manager (4.18.1 to 4.18.2)

Power Manager 4.18.2 improves memory management and screensaver integration

Power Manager (xfce4-power-manager) manages the power settings of the computer, monitor, and connected devices. Included in 4.18.2 are syncing the lock-on-sleep setting with the Xfce Screensaver, fixes to a handful of memory management issues, and some stability improvements.

Ristretto (0.12.4 to 0.13.1)

Ristretto 0.13.1 (finally) introduces printing support

Ristretto is a fast and lightweight image viewer for Xfce. The latest 0.13.1 release introduces a long-awaited feature: printing support! It also improves thumbnailing and looks better when scaling the UI beyond 1x.

Screensaver (4.16.0 to 4.18.2)

Screensaver 4.18.2 fixes several stability and usability issues

Screensaver (xfce4-screensaver) is a simple screen saver and locker app for Xfce. Version 4.18.2 fixes some crashes and memory management issues seen in Xubuntu 23.04, correctly integrates with LightDM (no more double password entry when switching users), and correctly inhibits sleep when expected. Screensaver works in conjunction with Power Manager to secure your desktop session.

Screenshooter (1.10.3 to 1.10.4)

Screenshooter 1.10.4 improves usability and adds two new file types

Screenshooter (xfce4-screenshooter) allows you to capture your entire screen, an active window, or a selected region. The 1.10.4 update introduces support for AVIF and JPEG XL files, better handles unwritable directories, and remembers preferences between sessions.

Thunar (4.18.4 to 4.18.7)

Thunar 4.18.7 features some bug fixes and performance improvements

Thunar is a fast and feature-full file manager for Xfce. Version 4.18.7 fixes a number of bugs and performance issues, resulting in a more stable and responsive file manager.

Thunar Media Tags Plugin (0.3.0 to 0.4.0)

The lesser-known Media Tags plugin received minor technical updates

Thunar Media Tags Plugin extends Thunar's support for media files, adding a tag-based bulk rename option, an audio tag editor, and an additional page to the file properties dialog. Version 0.4.0 received only minor updates, updating some backend libraries to newer versions.

Xfburn (0.6.2 to 0.7.0)

Xfburn 0.7.0 continues to receive updates as one of the best-supported burning apps around

Xfburn is an easy-to-use disc burning software for Xfce. Version 0.7.0 includes numerous usability bug fixes (missing icons, missing progress dialogs, multi-item selection) and adds supported MIME types to open blank and audio CDs from other apps.

Xfce Panel (4.18.2 to 4.18.4)

Panel 4.18.4 fixes memory management issues and improves scaling

Xfce Panel (xfce4-panel) is a key component of the Xfce desktop environment, featuring application launchers and various useful plugins. The 4.18.4 release fixes memory management issues, improves icon scaling at different resolutions, and updates icons when their status changes (e.g. for symbolic colored icons).

Panel Profiles (1.0.13 to 1.0.14)

Panel Profiles 1.0.14 significantly improves file handling

Panel Profiles (xfce4-panel-profiles) allows you to manage and share Xfce panel layouts. Version 1.0.14 introduces saving and restoration of RC files, ensures unique and consistent profile and file names, and fixes file handling issues.

Panel Plugins

Clipman Plugin (1.6.2 to 1.6.4)

Clipman 1.6.4 improves memory management and tidies up the UI

Clipman (xfce4-clipman-plugin) is a clipboard manager for Xfce. Once activated (it's disabled by default in Xubuntu), it will automatically store your clipboard history for easy later retrieval. Version 1.6.4 improves memory management, polishes up the UX with the addition of some new icons and better layout, and fixes icon display when the UI is scaled beyond 1x.

CPU Graph Plugin (1.2.7 to 1.2.8)

CPU Graph 1.2.8 makes more information readily available from the panel

CPU Graph (xfce4-cpugraph-plugin) adds a graphical representation of your CPU load to the Xfce panel. Version 1.2.8 now displays detailed CPU load information and features an improved tooltip.

Indicator Plugin (2.4.1 to 2.4.2)

Indicator Plugin 2.4.2 removes the downstream Xubuntu delta, guaranteeing better support

Indicator Plugin (xfce4-indicator-plugin) adds support for Ayatana indicators to the Xfce panel. While many applications have moved to the (also-supported) KStatusNotifierItem format, some older apps still utilize the classic indicator libraries. The 2.4.2 update sees the upstream panel plugin migrate to the Ayatana indicators, a patch that Xubuntu and Debian have carried for a while.

Mailwatch Plugin (1.3.0 to 1.3.1)

Mailwatch 1.3.1 improves logging and UI scaling support

Mailwatch Plugin (xfce4-mailwatch-plugin) is a mailbox watching applet for the Xfce panel. The 1.3.1 release fixes blurry icons when using UI scaling, adds a new "View Log" menu item, and updates the log when an update is manually requested.

Netload Plugin (1.4.0 to 1.4.1)

Netload 1.4.1 shows your network utilization, with correct units, in the panel

Netload Plugin (xfce4-netload-plugin) shows your current network usage in the panel. The 1.4.1 update fixes some memory management issues, uses the correct units, and adds a new option to set the decimal precision ("digits number").

PulseAudio Plugin (xfce4-pulseaudio-plugin 0.4.5 to 0.4.8)

PulseAudio Plugin 0.4.8 greatly improves device and MPRIS (media player) support

PulseAudio Plugin (xfce4-pulseaudio-plugin) shows your current volume levels in the panel and makes it easy to adjust your volume, switch devices, and control media playback. Version 0.4.8 fixes a bug with changing devices, eliminates flickering in the microphone icon, adds scrolling to the microphone icon to adjust recording volume, and includes a bevy of other improvements for MPRIS and device handling. And yes, it works with PipeWire.

Verve Plugin (2.0.1 to 2.0.3)

Verve 2.0.3 plays better with the updates the panel has received in recent years

Verve Plugin (xfce4-verve-plugin) is a command line plugin for the Xfce panel. It allows you to run commands, jump to folder locations, or open URLs in your browser. The 2.0.3 release features a port to PCRE2, better handling for focus-out events, and a fix for a crash when used with the panel's autohide functionality.

Weather Plugin (0.11.0 to 0.11.1)

Weather 0.11.1 makes it easier to configure the plugin and improves UI scaling

Weather Plugin (xfce4-weather-plugin) shows your local weather conditions in the panel. Click the panel icon to reveal your forecast for the next few days. Version 0.11.1 fixes a bug where the temperature would read as -0C, fixes logo and icon display with UI scaling beyond 1x, and makes configuration easier.

Whisker Menu Plugin (2.7.2 to 2.8.0)

Whisker Menu 2.8.0 improves power user support with more menu popup options

Whisker Menu Plugin (xfce4-whiskermenu-plugin) is a modern application launcher for Xfce. While the standard menu is reminiscent of Windows 2000 and earlier, the Whisker Menu has features you'd find in Windows Vista or later. Version 2.8.0 fixes breakage with AccountsService, adds support for showing specific menu instances (when using multiple plugins in multiple panels), and adds support for showing the menu at the center of the screen.

Download Xubuntu 23.10

Ready to take Xubuntu 23.10 for a spin? Well, we're not! However, if you want to test the Release Candidate (RC) image, you can find the download information and where to report your test results on iso.qa.ubuntu.com.

See you Thursday when this release is ready to roll!

on October 10, 2023 11:49 AM

At the moment I am hard at work putting together the final bits for the AppStream 1.0 release (hopefully to be released this month). The new release comes with many new features, an improved developer API and removal of most deprecated things (so it carefully breaks compatibility with very old data and the previous C API). One of the tasks for the upcoming 1.0 release was #481, asking about a formal way to distinguish Linux phone applications from desktop applications.

AppStream infamously does not support any “is-for-phone” label for software components; instead, the decision whether something is compatible with a device is based on the device’s capabilities and the component’s requirements. This allows truly adaptive applications to describe their requirements correctly, and does not lock us into “form factors” going into the future, as there are many, and the feature range between a phone, a tablet and a tiny laptop is quite fluid.

Of course the “match to current device capabilities” check does not work if you are a website ranking phone compatibility. It also does not really work if you are a developer and want to know which devices your component / application will actually be considered compatible with. One goal for AppStream 1.0 is to have its library provide more complete building blocks to software centers. Instead of just a “here’s the data, interpret it according to the specification” API, libappstream now interprets the specification for the application and provides API to handle most common operations – like checking device compatibility. For developers, AppStream also now implements a few “virtual chassis configurations”, to roughly gauge which configurations a component may be compatible with.

To test the new code, I ran it against the large Debian and Flatpak repositories to check which applications are considered compatible with what chassis/device type already. The result was fairly disastrous, with many applications not specifying compatibility correctly (many do, but it’s by far not the norm!). Which brings me to the actual topic of this blog post: very few seem to really know how to mark an application compatible with certain screen sizes and inputs! This is most certainly a matter of incomplete guides and a lack of good templates, so maybe this post can help with that a bit:

The ultimate cheat-sheet to mark your app “chassis-type” compatible

As a quick reminder, compatibility is indicated using AppStream’s relations system: A requires relation indicates that the application will not run at all, or will run terribly, if the requirement is not met; in that case, it should not be installable on a system. A recommends relation means that it would be advantageous to have the recommended items, but it’s not essential to run the application (it may run with a degraded experience without the recommended things though). And a supports relation means a given interface/device/control/etc. is supported by this application, but the application may work completely fine without it.

I have a desktop-only application

A desktop-only application is characterized by needing a larger screen to fit the application, and requiring a physical keyboard and accurate mouse input. This type is assumed by default if no capabilities are set for an application, but it’s better to be explicit. This is the metadata you need:

<component type="desktop-application">
  <id>org.example.desktopapp</id>
  <name>DesktopApp</name>
  [...]
  <requires>
    <display_length>768</display_length>

    <control>keyboard</control>
    <control>pointing</control>
  </requires>
  [...]
</component>

With this requires relation, you require a small-desktop sized screen (at least 768 device-independent pixels (dp) on its smallest edge) and require a keyboard and mouse to be present / connectable. Of course, if your application needs more minimum space, adjust the requirement accordingly. Note that if the requirement is not met, your application may not be offered for installation.

Note: Device-independent / logical pixels

One logical pixel (= device independent pixel) roughly corresponds to the visual angle of one pixel on a device with a pixel density of 96 dpi (for historical X11 reasons) and a distance from the observer of about 52 cm, making the physical pixel about 0.26 mm in size. When using logical pixels as unit, they might not always map to exact physical lengths as their exact size is defined by the device providing the display. They do however accurately depict the maximum amount of pixels that can be drawn in the depicted direction on the device’s display space. AppStream always uses logical pixels when measuring lengths in pixels.
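
As a rough worked example (illustrative numbers, not from the spec): a phone whose panel is 1080 physical pixels across its short edge, running at a 3× scale factor, exposes 1080 / 3 = 360 logical pixels on that edge, which is why 360 dp shows up below as a phone-sized baseline.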

I have an application that works on mobile and on desktop / an adaptive app

Adaptive applications have fewer hard requirements, but a wide range of support for controls and screen sizes. For example, they support touch input, unlike desktop apps. An example MetaInfo snippet for these kinds of apps may look like this:

<component type="desktop-application">
  <id>org.example.adaptive_app</id>
  <name>AdaptiveApp</name>
  [...]

  <requires>
    <display_length>360</display_length>
  </requires>

  <supports>
    <control>keyboard</control>
    <control>pointing</control>
    <control>touch</control>
  </supports>
  [...]
</component>

Unlike the pure desktop application, this adaptive application requires a much smaller lowest display edge length, and also supports touch input, in addition to keyboard and mouse/touchpad precision input.

I have a pure phone/tablet app

Making an application a pure phone application is tricky: we need to mark it as compatible with phones only, while not completely preventing its installation on non-phone devices (even if its UI is horrible there, you may want to test the app, and software centers may allow its installation when requested explicitly even if they don’t show it by default). This is how to achieve that result:

<component type="desktop-application">
  <id>org.example.phoneapp</id>
  <name>PhoneApp</name>
  [...]

  <requires>
    <display_length>360</display_length>
  </requires>

  <recommends>
    <display_length compare="lt">1280</display_length>
    <control>touch</control>
  </recommends>
  [...]
</component>

We require a phone-sized display minimum edge size (adjust to a value that is fit for your app!), but then also recommend the screen to have a smaller edge size than a larger tablet/laptop, while also recommending touch input and not listing any support for keyboard and mouse.

Please note that this blog post is of course not a comprehensive guide, so if you want to dive deeper into what you can do with requires/recommends/suggests/supports, you may want to have a look at the relations tags described in the AppStream specification.

Validation

It is still easy to make mistakes with the system requirements metadata, which is why AppStream 1.0 will provide more commands to check MetaInfo files for system compatibility. Current pre-1.0 AppStream versions already have an is-satisfied command to check if the application is compatible with the currently running operating system:

:~$ appstreamcli is-satisfied ./org.example.adaptive_app.metainfo.xml
Relation check for: */*/*/org.example.adaptive_app/*

Requirements:
 • Unable to check display size: Can not read information without GUI toolkit access.
Recommendations:
 • No recommended items are set for this software.
Supported:
 ✔ Physical keyboard found.
 ✔ Pointing device (e.g. a mouse or touchpad) found.
 • This software supports touch input.

In addition to this command, AppStream 1.0 will introduce a new one as well: check-syscompat. This command will check the component against libappstream’s mock system configurations that define a “most common” (whatever that is at the time) configuration for a respective chassis type.

If you pass the --details flag, you can even get an explanation why the component was considered or not considered for a specific chassis type:

:~$ appstreamcli check-syscompat --details ./org.example.phoneapp.metainfo.xml
Chassis compatibility check for: */*/*/org.example.phoneapp/*

Desktop:
  Incompatible
 • recommends: This software recommends a display with its shortest edge
   being << 1280 px in size, but the display of this device has 1280 px.
 • recommends: This software recommends a touch input device.

Laptop:
  Incompatible
 • recommends: This software recommends a display with its shortest edge 
   being << 1280 px in size, but the display of this device has 1280 px.
 • recommends: This software recommends a touch input device.

Server:
  Incompatible
 • requires: This software needs a display for graphical content.
 • recommends: This software needs a display for graphical content.
 • recommends: This software recommends a touch input device.

Tablet:
 ✔ Compatible (100%)

Handset:
 ✔ Compatible (100%)

I hope this is helpful for people. Happy metadata writing! 😀

on October 10, 2023 08:34 AM

October 06, 2023

The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack 2023.2 Bobcat on Ubuntu 22.04 LTS (Jammy Jellyfish). For more details on the release, please see the OpenStack 2023.2 Bobcat release notes.

The Ubuntu Cloud Archive for OpenStack 2023.2 Bobcat can be enabled on Ubuntu 22.04 by running the following command:

sudo add-apt-repository cloud-archive:bobcat

The Ubuntu Cloud Archive for 2023.2 Bobcat includes updates for:

aodh, barbican, ceilometer, ceph (18.2.0), cinder, designate, designate-dashboard, dpdk (22.11.3), glance, gnocchi, heat, heat-dashboard, horizon, ironic, ironic-ui, keystone, magnum, magnum-ui, manila, manila-ui, masakari, mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, networking-baremetal, networking-bgpvpn, networking-l2gw, networking-mlnx, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-taas, neutron-vpnaas, nova, octavia, octavia-dashboard, openstack-trove, openvswitch (3.2.0), ovn (23.09.0), ovn-octavia-provider, placement, sahara, sahara-dashboard, senlin, swift, trove-dashboard, vitrage, watcher, watcher-dashboard, zaqar, and zaqar-ui.

For a full list of packages and versions, please refer to the Ubuntu Cloud Archive Tracker.

Reporting bugs

If you have any issues, please report bugs using the ‘ubuntu-bug’ tool to ensure that they get logged in the right place in Launchpad:

sudo ubuntu-bug nova-conductor

Thank you to everyone who contributed to OpenStack 2023.2 Bobcat!

on October 06, 2023 12:47 PM

October 05, 2023

What’s-a matter you? Hey!
Gotta no respect?
What-a you t’ink you do, why you look-a so sad?
It’s-a not so bad, it’s-a nice-a place
Shaddap Your Face, Joe Dolce

(If you just want to know about how to make your own Matter device in software and don’t want to read a whole story to get to that point, then first of all that’s, I mean, that’s fine, it’s not a problem, you go ahead, I’m not hurt at all, definitely not, and secondly skip down to Part 4 or check out the Github repo. But, I mean, really? You’re that busy? You might wanna find a way to chill out a bit. Have a cup of tea. Enjoy life. It’s better, I promise.)

Part the first: the kingdom of the blind

I’ve got new window blinds in my flat. They’re pretty cool; ivory and purple, we’re all very pleased. They’re from Hillarys, who don’t get a link because although the actual blinds are great, their customer service is… less so, so I’m loath to recommend them. But one of the neat things about them is that they’re electrically operated. There’s a remote control; press the up or down button and they go up and down. Some people would be like a fascinated small child with this technology and spend half of the first day just pressing the buttons and giggling, but not me, no no.

A white room interior with a window covered by a day-night blind in a deep purple. The blind is alternating horizontal strips of deep purple fabric with a light texture, and almost-transparent white muslin. The room interior looks like a show home, because it probably is; the image is taken from Hillarys' website rather than being of my flat

One of the questions I was asked when speccing them out was: do you want them to be controllable by Alexa? And I thought: well, yes, I would like that. Being able to command my blinds to open or close is one more step on the quest to be a James Bond villain. All I need after that is a monorail and a big picture on the wall which revolves to show my world domination map and I’m golden.

But… I do not like or trust the IoT industry. I’m not entirely against the whole concept — I mean, I have an Amazon Echo, after all. It gets used for cooking timers, my shopping list, and music1, but it doesn’t get used for IoT stuff because I don’t have any. This is, basically, because the whole IoT industry is full of grifters and thieves and I do not trust them with my data or with still being around in two years. The whole idea that I speak to a thing in my flat, that thing routes my command up to some server run by an IoT vendor, and then the IoT vendor’s server speaks to another thing in my flat to make it do work… this is a stupid plan. It’s the vendor forcibly inserting themselves as a middle man for literally no reason other than to exploit me later. Because they’ll get bored or bought in a few years and they’ll shut down the servers, with a sorry-not-sorry email about how it’s “no longer viable to support” a thing I paid for and therefore they’re turning it into a brick, but I can buy a replacement for the low low cost of… anyway, I’m not doing it. No. I don’t want some random vendor’s server to punch a hole into my internal network, I don’t want my stuff to stop working if they turn their servers off or they go down, and the whole concept is hugely irritating.

A tweet from j_opdenakker reading "the S in IoT stands for Security"

(You might be thinking: why do you have an Echo at all, then? That’s got all the same problems! And you’re not wrong. But I trust Amazon to still be around for a while. Trusting them with my data is another thing, of course, but I’m already on board with that… although I entirely sympathise with people who choose to not do so either from distrust or from objections to their crappy employment or sales practices.)

Anyway, I looked up these blinds and the Alexa integration they do comes via a company called Somfy. Their APIs seem semi-reasonably documented and I’ve heard nothing specifically bad about them as a company… but I still don’t like the idea. If I were Matthew Garrett then I would probably find joy in reverse-engineering whatever the protocol is and making it work for me, but I’m not as clever as he is2. And they’ll get bored or bought: I’m not sure I trust them to keep their servers running for years and years. Maybe I’d be OK with a thing that required an internet connection to someone else’s server and only let me fiddle with that to the extent that I am given permission for3, if I’d only expect to keep that thing for a short time, but these are window blinds. How often do you change your window blinds? I expect these will still be here in twenty years! Do I expect these servers to also be there that long? Hell no. So, I’m not doing that.

Part the second: keep government local

I do actually have one “IoT” device in my flat, though. It’s a remotely-controllable wall socket. It’s from a company called LocalBytes, and it’s basically a “smart plug”: it’s a cylinder that plugs into a wall socket, and has another socket on the other side of it, rather like one of those mini gangplug cube things.

A LocalBytes smart plug, as described, plugged into a wall socket

It contains a little microcontroller and wifi chip, and it runs software called Tasmota. And it’s entirely locally controlled; you get a Tasmota app (or talk the documented protocol yourself from code) which can send the plug a command to turn on and off (and also a bunch more complex stuff such as turning on at a specific time), and it involves no internet servers at all. I can’t get screwed by you turning off your servers (or failing to secure them properly) if there aren’t any4. Now, I would not recommend these Tasmota devices to a normal person; the app is unreliable and pretends it can’t find the device some of the time, and the configuration API is obscure and weird. But I am, as has been said by kings and queens, not a normal person. I’m a computer bloke. So I am OK with this, and I’d be OK with something similar to control my blinds; something that runs locally and accepts commands from Alexa and then talks to the blinds to open and close them.

Now, the bit that actually talks to the blinds, I haven’t started working on yet. As far as I can tell from reading, the remote control works on a standard “smart home” frequency of 433MHz, and there are loads of little dongles and boards that plug into USB or Raspberry Pi GPIO pins which can talk that. I’ll get to that eventually; once I have a working thing, making it talk 433MHz is step 2. Step 1 is to make something that I control and which Alexa can talk to, but which doesn’t require servers on the internet to make it work. This rules out writing a custom Alexa skill; I can do that, and have, but you can’t write a skill which makes the actual Echo in my flat do network connections. The connection comes from Amazon’s servers, which means I’d have to put my little device on the internet, which I don’t want to do. The concern of the servers going away doesn’t apply here — if Amazon’s servers go away, my Echo stops working anyway and all of this is moot — but I do not want to punch a hole into my internal network from outside, and I shouldn’t have to. This is one thing in my house talking to another thing in my house.

This problem used to be unsolvable. And then, just like in the beginning of all things, someone invented Matter.

Part the third: does it Matter?

It is, obviously, stupid that every device invents its own protocol to talk to it, and none of it’s documented, and none of it’s standardised, and everything gets reinvented from scratch by programmers who clearly have their minds on lunchtime. So, equally obviously, the way to solve this is to have all the biggest tech companies in the world collaboratively agree on a standard way to do things.

I can hear you laughing from here.

This has, equally equally obviously, resulted in a disastrous nine-way collision of corporate speak, absurd specifications, paywalls for things, requirements to sign up as a “partner” before you’re told anything, and documentation by press release. But to my surprise it has actually resulted in something! The Matter specifications have basically everybody on board — Amazon, Apple, Google, most of the big players in the device world — and have finally started to actually appear in real shipping products rather than only in conference talks. The Amazon Echo supports Matter back to the 3rd gen Dot (there’s a list of Matter-supporting Echo devices here). If you’ve got a Matter-supporting thingy, then it’ll have the Matter logo on it:

The Matter logo: three arrows with curved heads all pointing into a single central point; it has threefold symmetry like the radiation symbol and similar things. It also has the word "matter" in a curvy slightly childish font

and then you can say “Alexa, discover devices” and your Echo will search and then say “I have found a Matter device!”. You then use the Alexa app on your phone to pair it, either by scanning a QR code or typing in a code, both of which should be on the device itself.

Now, Matter is a big corporate spec and wants to deal with all sorts of complicated and detailed edge cases. In particular, there is a standard problem with an IoT device in your house, which is that you can’t talk to it until it’s on the wifi, but you can’t put it on the wifi without talking to it. This normally involves the device pretending to be a wifi access point, and you connecting to it with a mobile app you have to install5, but Matter attempts to improve this; a Matter device can potentially exchange data over wifi, over Bluetooth, over some extra network thing called Thread that I don’t understand, over ethernet, the works. A lot of the setup and spec for Matter involves dealing with all this.

I, personally, for myself, for this device, do not care about this. If I were making a device that would be sold to real people, I’d implement all this stuff. But since it’s just me, I’m OK with requiring that my window blind device is already on my wifi, and getting it that way is my problem, before I try detecting it with Alexa.

So, I need a way to make a Matter device; it only has to deal with wifi. The Matter specification6 describes how to talk Matter7, but there’s a lot of detail in there. Surely someone has already implemented this stuff?

Well, they have… but it’s not great. The @project-chip/connectedhomeip Github repository, named back in the days when Matter was still called “project CHIP” before the Branding People got their hands on it, is the reference implementation. And it’s a nightmare. It’s all in C, it’s huge (I checked out the repo and all its submodules and ran out of disk space; to get the repo and all sub-repos and then build it, you’d better have 30GB or so free), compiling it all is a massive headache, it contains code for all sorts of different single-board computers to make them Matter devices, and I couldn’t make head nor tail of it. I’m sure you need all this if you’re looking to build a device and ship millions of them to people in Best Buy and Currys, but I ain’t. Normally, people write stuff like this in Python, but in looking around all I could find was a Python library designed to work with Home Assistant, which is not what I wanted (I want to make a device, not a controller) and which required the CHIP SDK anyway, the big complicated nightmare thing mentioned above. I resigned myself to having to write a very noddy implementation of enough of Matter’s pairing and communications stuff in Python to get a device up and running, and bitched about having to do this on social media. And then Alan Bell, hero of the revolution, said: have you tried matter.js?8

I had not. So I gave it a look.

Part the fourth: yes it Matters!

Matter.js is an implementation of the Matter suite of protocols in JavaScript. It does not build on the huge complicated Project CHIP SDK; it’s its own thing. And it just works.

If you want to build a Matter device in JavaScript, take a look at the Matter.js Node examples and they work fine. You can do that right now.

I did something a little tiny bit different. The Matter.js stuff is not actually JS; it’s TypeScript. I don’t like TypeScript9 and so I wanted a plain JS thing. So I did a simple reimplementation of the simplest Matter.js example in plain JS, and made it even simpler; the original correctly lets you pass in a bunch of details on the command line to configure it, but I’m not bothered about that and wanted to keep it as simple as possible. So, the simplest possible “virtual” Matter device implemented in JavaScript. It’s a lightbulb; Alexa, and Matter, do support multiple types of devices (including window blind controllers!) but Matter.js only does lightbulbs and sockets at the moment. I’m probably going to build and contribute the window blind controller device type, if someone else doesn’t get there first. Anyway, let’s get this code running! Use is pretty easy for a developer:

$ git clone https://github.com/stuartlangridge/simple-js-matter-device.git
Cloning into 'simple-js-matter-device'...
remote: Enumerating objects: 11, done.
remote: Counting objects: 100% (11/11), done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 11 (delta 0), reused 7 (delta 0), pack-reused 0
Receiving objects: 100% (11/11), 25.16 KiB | 1.48 MiB/s, done.
$ cd simple-js-matter-device
$ npm install

added 90 packages, and audited 91 packages in 5s

5 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities
$ node sil.mjs 
2023-10-06 11:38:24.231 INFO  Device               node-matter
[ASCII-art QR code printed in the terminal; scan it with the Alexa app to pair]

2023-10-06 11:38:24.330 INFO  Device               QR Code URL: https://project-chip.github.io/connectedhomeip/qrcode.html?data=MT:Y.K90-Q000KA0648G00
2023-10-06 11:38:24.330 INFO  Device               Manual pairing code: 34970112332

We’ve git cloned the repository, done npm install to install the dependencies, and run it. That starts up our virtual device, and also prints the QR code and manual pairing code used for pairing. Job done; your device now exists.

Next, we teach Alexa about it. Exercise your most commanding voice and say “Alexa, discover new devices!” Your Echo will tell you it’s doing it, and then after a few seconds of searching, tell you that it’s found one Matter device and send you off to the devices section of the Alexa app on your phone to connect it up; I also got a notification on my phone telling me the same thing.

an iOS lock screen showing a notification reading "Tap to begin setup with Alexa: New Matter Device found"

Hit that notification and you end up on the Devices screen, where it’ll tell you to “Connect your Matter Light Bulb”:

the Alexa app's Devices screen, with "Connect your Matter Light Bulb" in a newly showing "Available Devices" section

(For some reason, my device seems to present as two separate devices; a light bulb and an unnamed “Matter device”. I don’t know why. It seems that you can pick either.)

Choose to connect to this Matter device, and you get a screen called “Control your Matter device with Alexa”, which is exactly what we want to do:

the Alexa app showing a screen headlined "Control your Matter device with Alexa", and Cancel and Next buttons

When asked if your device has a Matter logo, say yes (it would do if it were a real Matter device, of course):

the Alexa app showing a screen headlined "Does your device have a Matter logo?", and No and Yes buttons

Then you have to scan the QR code. Usefully, the device has printed out a QR code in the Terminal where you ran it, above; scan that. If that doesn’t work for some reason, it has also printed a URL you can follow to show the same QR code in the browser, and if that doesn’t work either, you can enter the numeric code it printed (the “manual pairing code”) instead:

the Alexa app showing a screen headlined "Locate a QR code shown for your device", and "Try numeric code instead?" and "Scan QR code" buttons

Alexa will then claim to be “Looking for your device” (and the stuff in the Terminal will print a bunch of logging things about “Pase server: Received pairing request from udp://192.168.1.103:5541” and whatnot):

the Alexa app showing a screen headlined "Looking for your device", and a waiting spinner

and then, all else being good, you’ll get a screen telling you your light is connected! It’s called “Kryogenix Light” in this screen, and “Stuart Light” elsewhere; you can see those strings in sil.mjs and can customise them to your heart’s content.

the Alexa app showing a screen headlined "Kryogenix Light found and connected", and a Next button

You should now have a “Lights” section in the Devices screen of your Alexa app. You can use this to turn the “light” on and off from the Alexa app, or you can say “Alexa, turn Stuart Light on” to do it by voice. When you do that, the Terminal should print !!!!!!!!!!!!!!!! GOT COMMAND true (or false), which means it’s calling the onOffListener we defined in sil.mjs. You can customise this to do whatever you’d like (there’s a hypothetical sketch of that below the screenshot)!

the Alexa app showing a screen headlined "Stuart Light", and a big circular on/off power button, currently set to On with a blue halo around it

And that’s it working. That’s a software device, which you can pair with Alexa and customise how you choose; it doesn’t require you to write an Alexa skill, and the device does not need to be accessible from the internet. That’s just what I want.

Now I suppose I have to make it do something useful…

  1. less so music than before, after the bloody shysters at Amazon decided to take away the free music playing from Amazon Prime and make people pay extra for it
  2. I make up for it by being, like, twice his size, though
  3. honestly, this is the big issue. I react very badly to stuff where I ought to be able to do a thing and I’m not permitted to do so because they don’t want me to or can’t be bothered to implement it, and won’t let me do it myself. This is one of the big things that makes me move away from iOS every few years
  4. insert picture here of that bloke tapping his temple knowingly
  5. you should not need a mobile app to connect to a device pretending to be an access point. You should be able to do this from the web browser on your phone. But you can’t, as I have laboriously discussed before in these pages and am still annoyed about
  6. as usual with corporate technology things, the Matter specification is difficult to find; one of these things where you need to sign up, or be a “partner”, or pay money, or something. But it is obtainable. The Alexa Matter docs link to a page on the Matter website which lets you fill in an email address and then emails you a link to a PDF of the spec in all its 895-page glory
  7. super reassuringly, Matter seems to use all standard protocols! Device discovery is mDNS! They even have example commands for avahi-publish-service to show you how to do it, right there in the spec!
  8. There is an unfortunate naming collision here. There already was a Matter.js: it’s a 2d physics engine for the web, which I’ve used before. But I can’t see what else the people implementing the Matter spec in JavaScript could have called their thing. Only so many words in the world, I guess
  9. feel free to email your long diatribe about why TypeScript is the future to IDontCare@Whatever.Bored
on October 05, 2023 08:44 AM

September 30, 2023

Ode to Linux 🐧

Russell John

In the realm of ones and zeros, where bytes take flight,
Lies a world of freedom, in the soft, glowing light.
A kernel of power, an OS so divine,
Linux, you’re a treasure, a gem that’ll shine.
Born in the heart of a hacker’s desire,
Open source spirit, like a blazing fire.
Linus Torvalds, your […]

The post Ode to Linux 🐧 appeared first on Cyber Kingdom of Russell John.

on September 30, 2023 08:39 PM

September 28, 2023

Ubuntu MATE 23.10 is more of what you like: stable MATE Desktop on top of current Ubuntu. This release rolls up a number of bug fixes and updates that continue to build on recent releases, where the focus has been on improving stability 🪨

Ubuntu MATE 23.10

Thank you! 🙇

I’d like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏 From reporting bugs, submitting translations, providing patches, contributing to our crowd-funding, developing new features, creating artwork, offering community support, actively testing and providing QA feedback to writing documentation or creating this fabulous website. Thank you! 💚

What changed since Ubuntu MATE 23.04?

Here are the highlights of what’s changed since the release of Ubuntu MATE 23.04:

MATE Desktop

MATE Desktop has been updated to 1.26.2 with a selection of bug fixes 🐛 and minor improvements 🩹 to associated components.

  • caja-rename 23.10.1-1 has been ported from Python to C.
  • libmatemixer 1.26.0-2+deb12u1 resolves heap corruption and application crashes when removing USB audio devices.
  • mate-desktop 1.26.2-1 improves portals support.
  • mate-notification-daemon 1.26.1-1 fixes several memory leaks.
  • mate-system-monitor 1.26.0-5 now picks up libexec files from /usr/libexec.
  • mate-session-manager 1.26.1-2 sets LIBEXECDIR to /usr/libexec/ for correct interaction with mate-system-monitor ☝️
  • mate-user-guide 1.26.2-1 is a new upstream release.
  • mate-utils 1.26.1-1 fixes several memory leaks.

Yet more AI-generated wallpaper

My friend Simon Butcher 🇬🇧 is Head of Research Platforms at Queen Mary University of London, managing the Apocrita HPC cluster service. Once again, Simon has created a stunning AI-generated 🤖🧠 wallpaper for Ubuntu MATE using bleeding-edge diffusion models 🖌 The sample below is 1920×1080, but the version included in Ubuntu MATE 23.10 is 3840×2160.

Here’s what Simon has to say about the process of creating this new wallpaper for Mantic Minotaur:

Since Minotaurs are imaginary creatures, interpretations tend to vary widely. I wanted to produce an image of a powerful creature in a graphic novel style, although not gruesome like many depictions. The latest open source Stable Diffusion XL base model was trained at a higher resolution and the difference in quality has been noticeable, particularly in better overall consistency and detail, while reducing anatomical irregularities in images. The image was produced locally using Linux and an NVIDIA A100 80GB GPU, starting from an initial text prompt and refined using img2img, inpainting and upscaling features.

Major Applications

Accompanying MATE Desktop 1.26.2 🧉 and Linux 6.5 🐧 are Firefox 118 🔥🦊, Celluloid 0.25 🎥, Evolution 3.50 📧 and LibreOffice 7.6.1 📚.

See the Ubuntu 23.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 23.10

This new release will be first available for PC/Mac users.

Download

Upgrading from Ubuntu MATE 23.04

You can upgrade to Ubuntu MATE 23.10 from Ubuntu MATE 23.04. Ensure that you have all updates installed for your current version of Ubuntu MATE before you upgrade.

  • Open “Software & Updates” from the Control Center.
  • Select the third tab, “Updates”.
  • Set the “Notify me of a new Ubuntu version” drop-down menu to “For any new version”.
  • Press Alt+F2 and type update-manager -c -d into the command box.
  • Update Manager should open and tell you: New distribution release ‘23.10’ is available.
    • If not, you can run /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click “Upgrade” and follow the on-screen instructions.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.
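If you prefer a terminal, the generic Ubuntu route should also work here: with the release-upgrade prompt set to any new version (as in the steps above), running sudo do-release-upgrade will offer the 23.10 upgrade. The graphical steps remain the documented path for Ubuntu MATE, though.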

Known Issues

Here are the known issues.

[Known issues table: Component · Problem · Workarounds · Upstream Links — entries not reproduced in this transcription]

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on September 28, 2023 11:29 AM