July 26, 2024

A trusted database is fundamental to the smooth and secure operation of telecommunications services, from network management and customer service to compliance and fraud prevention.

MongoDB® is one of the most widely used databases (DB-Engines, 2024) among enterprises, including those in the telecommunications industry. It provides a sturdy, adaptable and trustworthy foundation, safeguarding sensitive customer data while facilitating swift responses to rapidly evolving situations.

With that in mind, let’s take a look at the key use cases for MongoDB in the telco sector and the advantages that this solution brings to the table. 

<noscript> <img alt="" height="764" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_1366,h_764/https://ubuntu.com/wp-content/uploads/8184/Enterprise-mongodb-for-telco.png" width="1366" /> </noscript>

Subscribers and billing

The telco sector faces several technology challenges, including storing and processing large amounts of data collected from multiple sources. Operators collect data on how clients utilise their services and engage with third parties, as well as data related to the billing and charging applied to the services they receive. The difficulty comes from sorting through this vast amount of data to find insightful information: deducing customer behaviour and usage trends, and then generating reliable predictions on service demand, data volume, and resource consumption. The size and speed at which data must be processed in telecommunications are too challenging for traditional relational databases to handle, and when database operations are not agile and reliable, the result is system slowdown and low service quality. This is what makes MongoDB a strong fit across multiple telco use cases.

MongoDB’s scalability and high availability features ensure that billing systems remain operational with minimal downtime, which is critical for accurate and timely billing. Additionally, MongoDB’s querying capabilities enable operators to swiftly retrieve and analyse customer data, including billing records, leading to more informed decision-making and improved customer support.
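To make this concrete, here is a minimal, hypothetical sketch of such a query: the database, collection and field names (billing, charges, subscriberId, amount, period) are invented for illustration, but the aggregation pipeline shown is standard MongoDB.

$ mongosh "mongodb://localhost:27017/billing" --quiet --eval '
  // Total the June 2024 charges per subscriber, largest bills first
  db.charges.aggregate([
    { $match: { period: "2024-06" } },
    { $group: { _id: "$subscriberId", total: { $sum: "$amount" } } },
    { $sort: { total: -1 } }
  ])'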

Internet of Things (IoT)

The use of MongoDB as a database for storing, retrieving, and processing IoT data from communication networks is increasingly common. This powerful database allows for real-time processing of IoT data, including alerts and sensor events. The data stored in MongoDB can be used for predictive analytics, specifically for monitoring the health and performance of network equipment like base stations, routers, and switches. By leveraging MongoDB, large volumes of sensor data can be stored and processed, enabling predictive maintenance algorithms to identify potential equipment failures before they occur. Continuous collection and analysis of sensor data in MongoDB helps predict maintenance needs, ultimately reducing downtime and maintenance costs.

Network and inventory management

A suitable database is integral to telco network management because it provides a structured and efficient way to store, manage, and analyse the vast amounts of data generated. MongoDB can store network performance data, including equipment status, network logs, and real-time telemetry data. This allows for monitoring and analysis of network performance, troubleshooting, and proactive maintenance, as it can provide a near real-time data feed.

Telco companies often manage extensive inventories of network equipment and service resources. This is why use cases such as service provisioning and inventory management can be streamlined with a database technology like MongoDB. It can help track the allocation and availability of these resources through its storage, retrieval and scaling capabilities.

Take the next step

Explore how MongoDB can support your projects in the financial, telecommunications and automotive industries in our recent guide, “MongoDB® for enterprise data management”. Download it now.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh7-us.googleusercontent.com/docsz/AD_4nXcaUdhtODWvubjz-sDxzryGwO2ebmiQpEZtV__Ms_OLfdMDrz0Fs5hA4USwsPJKOjU9YYaa-gyE0iIpJ0VQolEv6TphKk1hTd0-qn_KS0fmii8ijyL_FAZz7wxXpT4tK3vOILFCFE45xpZjb3EFfdGrDnry?key=PlQNqUjkIPSDLnuhjuPjpw" width="720" /> </noscript>

Canonical for your MongoDB journey

Charmed MongoDB is an enhanced, open source, fully compatible, drop-in replacement for MongoDB Community Edition with advanced enterprise features. Canonical can support you at every stage of your MongoDB journey, from Day 0 to Day 2.

Canonical’s open source solutions are building blocks that provide a unified approach to meeting any current or future telco use case, spanning database, AI, cloud and edge technologies.

Further reading

Trademark Notice

“MongoDB” is a trademark or registered trademark of MongoDB Inc. Other trademarks are property of their respective owners. Charmed MongoDB is not sponsored, endorsed, or affiliated with MongoDB, Inc.

on July 26, 2024 12:07 PM

We are excited to announce that, on the 21st of August 2024, product managers Andreea Munteanu (AI) and Adrian Matei (Managed Services) will represent Canonical in a keynote session at Kubecon China, at the Kerry Hotel in Hong Kong. Canonical has been a regular presence at Kubecon events over the years, and we are excited to be joining Kubecon China for the first time.

The session, Tackling operational time-to-market decelerators in AI/ML projects, will dive deep into the requirements and factors that enable operational excellence in AI ventures, from infrastructure provisioning to monitoring and incident recovery. We will focus on what an open source AI stack should look like from a dual perspective: 

  • On the component side, we will explore tools like Kubeflow and MLFlow, examining their integrability in the infrastructure stack and their usability in various AI projects. 
  • On the operations side, we will provide a holistic analysis of AI infrastructure operations, from the lowest to the highest levels of the stack. The session will define operational excellence in machine learning, then present the benefits and challenges of achieving it. Lastly, we will lay out various pathways to operational excellence suitable for enterprises of multiple sizes. 

This topic is particularly important in the context of the recent operational challenges that the industry is facing. With the fast-paced innovation that surrounds the market, and the plethora of new tools and methods that emerge every day, sustaining an operationally efficient stack is key to long-term resilience and success. Furthermore, operational excellence is essential to achieving legal and security compliance, as well as to involvement in governmental projects. 
For these reasons and more, we invite you to attend the keynote. Adrian and Andreea will be joined by several Canonical APAC representatives, who are excited to meet and discuss any requirements or questions you may have. Registrations for Kubecon China are open and accessible here. Our team looks forward to meeting you there.

on July 26, 2024 10:30 AM

July 25, 2024

E309 Ananás Na Chapa

Podcast Ubuntu Portugal

Due to unforeseen circumstances and pineapple-grade heat – or rather, heat fit for frying pineapples – there will be no new podcast episode this week. However, we bring you a lovely Christmas message and nostalgic memories of the 1980s.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo, and the open source code is licensed under the MIT Licence. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used in it are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing the content for other kinds of use; contact us for validation and authorisation.

on July 25, 2024 12:00 AM

July 22, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 849 for the week of July 14 – 20, 2024. The full version of this issue is available here.

In this issue we cover:

  • Upcoming APT cryptography changes for 24.04.1
  • Ubuntu 23.10 (Mantic Minotaur) reached End of Life on July 11, 2024
  • Ubuntu Stats
  • Hot in Support
  • LoCo Events
  • Oracular Oriole 24.10 Wallpaper Competition
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on July 22, 2024 11:02 PM

Introduction

When managing Unix-like operating systems, understanding permission settings and security practices is crucial for maintaining system integrity and protecting data. FreeBSD and Linux, two popular Unix-like systems, offer distinct approaches to permission settings and security. This article delves into these differences, providing a comprehensive comparison to help system administrators and users navigate these systems effectively.

1. Overview of FreeBSD and Linux

FreeBSD is a Unix-like operating system derived from the Berkeley Software Distribution (BSD), renowned for its stability, performance, and advanced networking features. It is widely used in servers, network appliances, and embedded systems.

Linux, on the other hand, is a free and open-source operating system kernel created by Linus Torvalds. It is the foundation of numerous distributions (distros) like Ubuntu, Fedora, and CentOS. Linux is known for its flexibility, broad hardware support, and extensive community-driven development.

2. File System Hierarchy

Both FreeBSD and Linux follow the Unix file system hierarchy but with slight variations. Understanding these differences is key to grasping permission settings on each system.

  • FreeBSD: Follows its own traditional BSD layout, documented in hier(7), which resembles the FHS but has its nuances. The /usr directory contains user programs and data, while /var holds variable data like logs and databases. FreeBSD also utilizes /usr/local for locally installed software.
  • Linux: Generally adheres to the FHS. Important directories include /bin for essential binaries, /etc for configuration files, /home for user directories, and /var for variable files.

3. Permissions and Ownership

Both systems use a similar model for file permissions but have some differences in implementation and additional features.

3.1 Basic File Permissions

  • FreeBSD:
  • Owner: The user who owns the file.
  • Group: A group of users with shared permissions.
  • Others: All other users.
  • Permissions are represented as read (r), write (w), and execute (x) for each category. Commands to manage permissions:
  • ls -l: Lists files with permissions.
  • chmod: Changes file permissions.
  • chown: Changes file ownership.
  • chgrp: Changes group ownership.
  • Linux:
  • Similar to FreeBSD, Linux file permissions are also divided into owner, group, and others.
  • Commands are the same: ls -l, chmod, chown, chgrp.
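As a quick illustration (the file, user and group names are hypothetical), a typical session looks the same on either system:

$ ls -l report.txt
-rw-r--r--  1 alice  staff  1024 Jul 22 10:00 report.txt
$ chmod 640 report.txt         # owner: read/write; group: read; others: none
$ sudo chown bob report.txt    # transfer ownership to user bob
$ sudo chgrp wheel report.txt  # assign the wheel group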

3.2 Special Permissions

  • FreeBSD:
  • Setuid: Allows users to execute a file with the file owner’s permissions.
  • Setgid: When applied to a directory, new files inherit the directory’s group.
  • Sticky Bit: On a shared directory, ensures only a file’s owner (or root) can delete or rename that file.
  • Linux:
  • Setuid: Allows a user to execute a file with the permissions of the file owner.
  • Setgid: When set on a directory, files created within inherit the directory’s group.
  • Sticky Bit: Similar to FreeBSD, it restricts file deletion.
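A short sketch of setting each special bit with chmod, which works identically on both systems (the paths are hypothetical):

$ chmod u+s /usr/local/bin/myprog   # setuid: executes with the owner's privileges
$ chmod g+s /srv/shared             # setgid on a directory: new files inherit its group
$ chmod +t /srv/shared              # sticky bit: only a file's owner may delete it
$ chmod 4755 /usr/local/bin/myprog  # octal form: 4=setuid, 2=setgid, 1=sticky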

4. Extended Attributes and ACLs

4.1 FreeBSD:

FreeBSD supports Extended File Attributes (EAs) and Access Control Lists (ACLs) to provide more granular permission control.

  • Extended Attributes: Used to store metadata beyond standard attributes. Managed on FreeBSD with setextattr, getextattr, lsextattr, and rmextattr.
  • Access Control Lists (ACLs): Allow setting permissions for multiple users and groups. Managed with setfacl and getfacl.

4.2 Linux:

Linux also supports Extended Attributes and ACLs.

  • Extended Attributes: Managed with the setfattr and getfattr utilities (built on the setxattr and getxattr system calls).
  • Access Control Lists (ACLs): Managed with setfacl and getfacl.
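For example (the user and file names are hypothetical), granting one additional user read/write access without touching the group permissions; the same commands work with FreeBSD’s POSIX.1e ACLs and on Linux:

$ setfacl -m u:alice:rw project.txt   # add an ACL entry for user alice
$ getfacl project.txt                 # inspect the resulting ACL
# file: project.txt
# owner: bob
# group: staff
user::rw-
user:alice:rw-
group::r--
mask::rw-
other::r--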

5. Security Models and Practices

5.1 FreeBSD Security Model:

FreeBSD includes several features for enhanced security:

  • Jails: Provide a form of operating system-level virtualization. Each jail has its own filesystem, network configuration, and process space, which helps in isolating applications and services.
  • TrustedBSD Extensions: Enhance FreeBSD’s security by adding Mandatory Access Control (MAC) frameworks, which include fine-grained policies for file and process management.
  • Capsicum: A lightweight, capability-based security framework that allows developers to restrict the capabilities of running processes, minimizing the impact of potential vulnerabilities.
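As a minimal sketch of the jail concept (the path, hostname and address below are assumptions), a jail can be declared in /etc/jail.conf and started with the service command:

myjail {
    path = "/usr/jails/myjail";             # the jail's root filesystem
    host.hostname = "myjail.example.org";
    ip4.addr = "192.168.1.50";              # address assigned to the jail
    exec.start = "/bin/sh /etc/rc";         # normal rc startup inside the jail
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;                            # give the jail its own devfs
}

$ sudo sysrc jail_enable=YES
$ sudo service jail start myjail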

5.2 Linux Security Model:

Linux employs a range of security modules and practices:

  • SELinux (Security-Enhanced Linux): A set of kernel-level security enhancements that provide mandatory access controls. It defines policies that restrict how processes can interact with files and other processes.
  • AppArmor: A security module that restricts programs’ capabilities with per-program profiles. Unlike SELinux, it uses path-based policies.
  • Namespaces and cgroups: Used for containerization, allowing process isolation and resource control. These are the basis for technologies like Docker and Kubernetes.
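A tiny demonstration of namespace isolation with unshare (no container runtime required): the shell below receives its own PID namespace, so it only sees its own processes.

$ sudo unshare --fork --pid --mount-proc /bin/sh
# ps ax
  PID TTY      STAT   TIME COMMAND
    1 pts/0    S      0:00 /bin/sh
    2 pts/0    R+     0:00 ps ax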

6. System Configuration and Management

6.1 FreeBSD Configuration:

FreeBSD uses configuration files located in /etc and other directories for system management. The rc.conf file is central for system startup and service configuration. The sysctl command is used for kernel parameter adjustments.
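For instance (the service names are examples), settings can be persisted to /etc/rc.conf with sysrc, and kernel parameters read or tuned with sysctl:

$ sudo sysrc sshd_enable=YES ntpd_enable=YES   # written to /etc/rc.conf
$ sysctl kern.hostname                         # read a kernel parameter
kern.hostname: freebsd-box
$ sudo sysctl net.inet.tcp.blackhole=2         # set one at runtime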

6.2 Linux Configuration:

Linux configurations are distributed across various directories like /etc for system-wide settings and /proc for kernel parameters. Systemd is the most common init system, managing services and their dependencies. The sysctl command is also used in Linux for kernel parameter adjustments.
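The equivalent sketch on a systemd-based distribution (the unit name is an example; on Debian/Ubuntu the SSH unit is called ssh):

$ sudo systemctl enable --now sshd        # start a service and enable it at boot
$ sudo sysctl -w net.ipv4.ip_forward=1    # set a kernel parameter at runtime
$ echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-forward.conf  # persist it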

7. User Management

7.1 FreeBSD:

FreeBSD manages users and groups through /etc/passwd, /etc/group, and /etc/master.passwd. User and group management commands include adduser, rmuser, and pw (for example, pw useradd and pw groupadd).

7.2 Linux:

Linux also uses /etc/passwd and /etc/group for user management. User and group management commands include useradd, usermod, groupadd, and passwd.
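Side by side, creating a user and a group on each system (the names are hypothetical):

# FreeBSD: pw manages users and groups non-interactively
$ sudo pw useradd -n alice -m -s /bin/sh -G wheel
$ sudo pw groupadd developers

# Linux
$ sudo useradd -m -s /bin/bash -G sudo alice
$ sudo groupadd developers
$ sudo passwd alice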

8. Network Security

8.1 FreeBSD:

FreeBSD offers robust network security features, including:

  • IPFW: A firewall and packet filtering system integrated into the kernel.
  • PF (Packet Filter): A powerful and flexible packet filter that provides firewall functionality and network address translation (NAT).
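A minimal /etc/pf.conf sketch (the interface name and ports are assumptions): deny inbound by default, allow SSH and HTTP, and allow all stateful outbound traffic.

ext_if = "em0"                   # external interface (assumption)
set skip on lo0                  # do not filter loopback
block in all                     # default deny inbound
pass out on $ext_if keep state   # allow and track outbound connections
pass in on $ext_if proto tcp to port { 22 80 } keep state

$ sudo sysrc pf_enable=YES
$ sudo service pf start
$ sudo pfctl -f /etc/pf.conf     # load (or reload) the ruleset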

8.2 Linux:

Linux provides several options for network security:

  • iptables: The traditional firewall utility for configuring packet filtering rules.
  • nftables: The successor to iptables, offering a more streamlined and flexible approach to packet filtering and NAT.
  • firewalld: A front-end for iptables and nftables, providing dynamic firewall management.
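An equivalent minimal nftables ruleset (the ports are assumptions), loadable with nft -f:

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept   # keep existing connections alive
        iif "lo" accept                       # allow loopback
        tcp dport { 22, 80 } accept           # SSH and HTTP
    }
}

$ sudo nft -f /etc/nftables.conf
$ sudo nft list ruleset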

9. Backup and Recovery

9.1 FreeBSD:

FreeBSD supports several backup and recovery tools:

  • dump/restore: Traditional utilities for file system backups.
  • rsync: For incremental backups and synchronization.
  • zfs snapshots: ZFS filesystem features allow creating snapshots for backup and recovery.
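For example (the pool and dataset names are hypothetical), a ZFS snapshot is instantaneous and can be rolled back or replicated off-box:

$ sudo zfs snapshot tank/home@2024-07-22    # point-in-time snapshot
$ zfs list -t snapshot                      # list existing snapshots
$ sudo zfs rollback tank/home@2024-07-22    # revert the dataset to the snapshot
$ sudo zfs send tank/home@2024-07-22 | ssh backuphost zfs recv pool/home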

9.2 Linux:

Linux offers a range of backup and recovery tools:

  • tar: A traditional tool for archiving files.
  • rsync: For incremental backups and synchronization.
  • LVM snapshots: Logical Volume Manager features provide snapshot capabilities.
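And a matching sketch on Linux (the volume group, sizes and paths are assumptions):

$ sudo tar -czf /backup/etc-$(date +%F).tar.gz /etc       # simple archive of /etc
$ rsync -a --delete /home/ backuphost:/srv/backups/home/  # incremental sync
$ sudo lvcreate -s -L 2G -n root_snap /dev/vg0/root       # snapshot a logical volume
$ sudo mount -o ro /dev/vg0/root_snap /mnt/snap           # mount read-only for backup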

10. Conclusion

Both FreeBSD and Linux offer robust permission settings and security features, each with its strengths and specific implementations. FreeBSD provides a comprehensive suite of security features, including jails and Capsicum, while Linux offers a variety of security modules like SELinux and AppArmor. Understanding these differences is crucial for system administrators to effectively manage and secure their systems. By leveraging the unique features of each operating system, administrators can enhance their systems’ security and maintain a robust and reliable computing environment.

The post Understanding Permission Setting and Security on FreeBSD vs. Linux appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

on July 22, 2024 02:09 PM

July 18, 2024

E308 Trocas & Baldrocas

Podcast Ubuntu Portugal

This week Miguel played around with Kubuntu, failed automations in Home Assistant, and cloned a disk with dd, with unexpected consequences. Diogo is reading an exciting new book (Privacidade 404), which he recommends to everyone; we recanted mistakes made in previous episodes; we painstakingly dissected Mozilla’s advertising tracking in Firefox; we laid into Apple and Google; and we even resurrected the much-missed Cândida Branca Flor, thanks to a glaring security flaw in OpenSSH.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo, and the open source code is licensed under the MIT Licence. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used in it are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing the content for other kinds of use; contact us for validation and authorisation.

on July 18, 2024 12:00 AM

July 17, 2024

This is a follow-up to the End of Life warning sent earlier to confirm that as of July 11, 2024, Ubuntu 23.10 is no longer supported. No more package updates will be accepted to 23.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

Additionally, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 23.10.

The supported upgrade path from Ubuntu 23.10 is to Ubuntu 24.04 LTS.
Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/NobleUpgrades

Ubuntu 24.04 LTS continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Since its launch in October 2004, Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Tue Jul 16 17:56:46 UTC 2024 by Graham Inggs on behalf of the Ubuntu Release Team

on July 17, 2024 01:43 AM

July 14, 2024

uCareSystem has had the ability to detect if a system reboot is needed after applying maintenance tasks for some time now. With the new release, it will also show you the list of packages that requested the reboot. Additionally, the new release has squashed some annoying bugs. Restart ? Why though ? uCareSystem has had […]
on July 14, 2024 05:03 PM

July 12, 2024

Announcing Incus 6.3

Stéphane Graber

This release includes the long awaited OCI/Docker image support!
With this, users who previously were either running Docker alongside Incus or Docker inside of an Incus container just to run some pretty simple software that’s only distributed as OCI images can now just do it directly in Incus.

In addition to the OCI container support, this release also comes with:

  • Baseline CPU definition within clusters
  • Filesystem support for io.bus and io.cache
  • Improvements to incus top
  • CPU flags in server resources
  • Unified image support in incus-simplestreams
  • Completion of libovsdb transition

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

on July 12, 2024 05:05 PM

One of the cool new features in Incus 6.3 is the ability to run OCI images (such as those for Docker) directly in Incus.

You can certainly install the Docker packages in an Incus instance but that would put you in a situation of running a container in a container. Why not let Docker be a first-class citizen in the Incus ecosystem?

Note that this feature is new to Incus, which means that if you encounter issues, please discuss and report them.

Launching the docker.io nginx OCI container image in Incus.

Table of Contents

Background

In Incus you typically run system containers, which are containers that have been set up to resemble a virtual machine (VM). That is, you launch a system container in Incus, and this system container keeps running until you stop it, just like with VMs.

In contrast, with Docker you are running application containers. You launch the Docker container with some configuration to perform a task, the task is performed, and the container stops. The task might also be something long-lived, like a Web server. In that case, the application container will have a longer lifetime. With application containers you are thinking primarily about tasks. You stop the task and the container is gone.

Prerequisites

You need to install Incus 6.3. If you are using Debian or Ubuntu, you would select the stable repository of Incus.

$ incus version
Client version: 6.3
Server version: 6.3
$

Adding the Docker repository to Incus

The container images from Docker follow the Open Container Initiative (OCI) image format. There is also a special way to access those images through the Docker Hub Container Image Repository, which is distinct from the other ways supported by Incus.

We will be adding (once only) a remote for the Docker repository. A remote is a configuration for accessing images from a particular repository. Let’s see what we already have. We run incus remote list, which invokes the list command of the remote-management functionality (incus remote). There are two remotes: images, which is the standard repository of container and virtual machine images for Incus, and local, which is the remote of the local installation of Incus. Every installation of Incus has such a default remote.

$ incus remote list
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                URL                 |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                            | incus         | file access | NO     | YES    | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
$ 

The Docker repository has the URL https://docker.io and is accessed through a different protocol, called oci.

Therefore, to add the Docker repository, we need to run incus remote add with the appropriate parameters. The URL is https://docker.io and the --protocol is oci.

$ incus remote add docker https://docker.io --protocol=oci
$

Let’s list again the available Incus remotes. The docker remote has been added successfully.

$ incus remote list
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                URL                 |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| docker          | https://docker.io                  | oci           | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                            | incus         | file access | NO     | YES    | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
$ 

If we ever want to remove a remote called myremote, we would incus remote remove myremote.

Launching a Docker image in Incus

When you launch (install and run) a container in Incus, you use incus launch with the appropriate parameters. In this first example, we launch the image hello-world, which is one of the Docker official images. When run, it prints some text and then stops. We use the --console parameter in order to see the text output, and the --ephemeral parameter, which automatically deletes the container as soon as it stops. Ephemeral (εφήμερο) is a Greek word meaning something that lasts only a brief time. Neither of these two additional parameters is essential, but both are helpful in this specific case.

$ incus launch docker:hello-world --console --ephemeral
Launching the instance
Instance name is: best-feature                       

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

$ 

Note that we did not specify a name for the container; Incus randomly selected the name best-feature. Since this is an ephemeral container and the specific image hello-world is short-lived, it is gone in a flash. Indeed, if you then run incus list, the Docker container is not found because it has been auto-deleted.

Let’s try another Docker image, the official nginx Docker image. Launching it gives us a container serving the default nginx Web server page. We need to run incus list and find the IP address that was given to the container. Then, we can view the default page of the Web server in our Web browser.

$ incus launch docker:nginx --console --ephemeral
Launching the instance
Instance name is: best-feature                               
To detach from the console, press: <ctrl>+a q
2024/07/12 16:31:59 [notice] 21#21: using the "epoll" event method
2024/07/12 16:31:59 [notice] 21#21: nginx/1.27.0
2024/07/12 16:31:59 [notice] 21#21: built by gcc 12.2.0 (Debian 12.2.0-14) 
2024/07/12 16:31:59 [notice] 21#21: OS: Linux 6.5.0-41-generic
2024/07/12 16:31:59 [notice] 21#21: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/07/12 16:31:59 [notice] 21#21: start worker processes
2024/07/12 16:31:59 [notice] 21#21: start worker process 45
2024/07/12 16:31:59 [notice] 21#21: start worker process 46
2024/07/12 16:31:59 [notice] 21#21: start worker process 47
2024/07/12 16:31:59 [notice] 21#21: start worker process 48
2024/07/12 16:31:59 [notice] 21#21: start worker process 49
2024/07/12 16:31:59 [notice] 21#21: start worker process 50
2024/07/12 16:31:59 [notice] 21#21: start worker process 51
2024/07/12 16:31:59 [notice] 21#21: start worker process 52
2024/07/12 16:31:59 [notice] 21#21: start worker process 53
2024/07/12 16:31:59 [notice] 21#21: start worker process 54
2024/07/12 16:31:59 [notice] 21#21: start worker process 55
2024/07/12 16:31:59 [notice] 21#21: start worker process 56
10.10.10.1 - - [12/Jul/2024:16:33:29 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36" "-"
...
^C
...
2024/07/12 16:39:10 [notice] 21#21: signal 29 (SIGIO) received
2024/07/12 16:39:10 [notice] 21#21: signal 17 (SIGCHLD) received from 55
2024/07/12 16:39:10 [notice] 21#21: worker process 55 exited with code 0
2024/07/12 16:39:10 [notice] 21#21: exit
Error: stat /proc/-1: no such file or directory
$ 

Discussion

How long is the lifecycle time of a minimal Docker container?

How long does it take to launch the docker:hello-world container? I prepend time to the command and check the stats at the end of the execution. It takes about four seconds to launch and run a simple Docker container, on my system and my Internet connection (with the image already cached).

$ time incus launch docker:hello-world --console --ephemeral
...
real	0m3,956s
user	0m0,016s
sys	0m0,016s
$ 

How long does it take to repeatedly run a minimal Docker container?

We are removing the --ephemeral option. We launch the container and give it a relevant name, mydocker. This container remains after execution. We can see that after launching it, the container stays in a STOPPED state. We then start the container with the command time incus start mydocker --console, and we can see that it takes a bit more than half a second to complete the execution.

$ incus launch docker:hello-world mydocker --console
Launching mydocker

Hello from Docker!
...
$ incus list mydocker
+----------+---------+------+------+-----------------+-----------+
|   NAME   |  STATE  | IPV4 | IPV6 |      TYPE       | SNAPSHOTS |
+----------+---------+------+------+-----------------+-----------+
| mydocker | STOPPED |      |      | CONTAINER (APP) | 0         |
+----------+---------+------+------+-----------------+-----------+
$ time incus start mydocker --console

Hello from Docker!
...

Hello from Docker!
...
real	0m0,638s
user	0m0,003s
sys	0m0,013s
$ 

Are the container images cached?

Yes, they are cached. When you launch a container for the first time, you can see the image components being downloaded. Also, if you run incus image list, you can see the cached Docker.io images in the output.

Troubleshooting

Error: Can’t list images from OCI registry

You tried to list the images of the Docker Hub repository. Currently this is not supported. Technically it could be supported, though the repository has more than ten thousand images. Listing without parameters may not make much sense, as the output would require downloading the full list of images from the repository. Searching for images does make sense, but if you can search, you should be able to list as well. I am not sure what the maintainer’s view on this is.

$ incus image list docker:
Error: Can't list images from OCI registry
$ 

As a workaround, you can simply locate the image name from the Website at https://hub.docker.com/search?q=&image_filter=official

Error: stat /proc/-1: no such file or directory

I ran many instances of Docker containers, and in a few cases I got the above error. I do not know what causes it, and I am adding it here in case someone manages to replicate it. It feels like some kind of race condition.

Failed getting remote image info: Image not found

You have configured Incus properly and you definitely have Incus version 6.3 (both the client and the server). But still, Incus cannot find any Docker image, not even hello-world.

This can happen if you are using a packaging of Incus that does not include the skopeo and umoci packages. If you are using the Zabbly distribution of Incus, these programs are included in the Incus packaging. Therefore, if you are using alternative packaging for Incus, you can manually install the versions of those packages as provided by your Linux distribution.

on July 12, 2024 04:50 PM

July 11, 2024

As of July 11, 2024, all flavors of Ubuntu 23.10, including Ubuntu Studio 23.10, codenamed “Mantic Minotaur”, have reached end-of-life (EOL). There will be no more updates of any kind, including security updates, for this release of Ubuntu.

If you have not already done so, please upgrade to Ubuntu Studio 24.04 LTS via the instructions provided here. If you do not do so as soon as possible, you will lose the ability to upgrade without additional advanced configuration.

No single release of any operating system can be supported indefinitely, and Ubuntu Studio is no exception to this rule.

Regular Ubuntu releases, meaning those that are between the Long-Term Support releases, are supported for 9 months; users are expected to upgrade after every release, with a 3-month buffer following each new release.

Long-Term Support releases are identified by an even-numbered year of release and a release month of April (04). Hence, the most recent Long-Term Support release is 24.04 (YY.MM = 2024.April), and the next Long-Term Support release will be 26.04 (2026.April). LTS releases of official Ubuntu flavors (unlike Ubuntu Desktop and Server, which are supported for five years) are supported for three years, meaning LTS users are expected to upgrade after every LTS release, with a one-year buffer.

on July 11, 2024 03:27 PM

July 07, 2024

One year of freelancing

Stéphane Graber

Introduction

It was exactly one year ago today that I left my day job as Engineering Manager of LXD at Canonical and went freelance. It’s been quite a busy year but things turned out better than I had hoped and I’m excited about year two!

Zabbly

Zabbly is the company I created for my freelance work. Over the year, a number of my personal projects were transferred over to being part of Zabbly, including the operation of my ASN (as399760.net), my datacenter co-location infrastructure and more.

Through Zabbly I offer a mix of by-the-hour consultation with varying prices depending on the urgency of the work (basic consultation, support, emergency support) as well as fixed-cost services, mostly related to Incus (infrastructure review, migration from LXD, remote or on-site trainings, …).

Other than Incus, Zabbly also provides up to date mainline kernel packages for Debian and Ubuntu and associated up to date ZFS packages. This is something that came out as needed for a number of projects I work on, from being able to test Incus on recent Linux kernels to avoiding Ubuntu kernel bugs on my own and NorthSec’s servers.

Zabbly is also the legal entity for donations related to my open source work, currently supporting:

And lastly, Zabbly also runs a Youtube channel covering the various projects I’m involved with.
A lot of it is currently about Incus, but there is also the occasional content on NorthSec or other side projects. The channel grew to a bit over 800 subscribers in the past 10 months or so.

Now, how well is all of that doing? Well enough that I could stop relying on my savings just a few months in and turn a profit by the end of 2023. Zabbly currently has around a dozen active customers from 7 countries and across 3 continents, with sizes ranging from individuals to large governmental agencies.

2024 has also been very good so far and while I’m not back to the level of income I had while at Canonical, I also don’t have to go through 4-5 hours of meetings a day and get to actually contribute to open source again, so I’ll gladly take the (likely temporary) pay cut!

Incus

A lot of my time in the past year has been dedicated to Incus.

This wasn’t exactly what I had planned when leaving Canonical.
I was expecting LXD to keep on going as a proper Open Source project as part of the Linux Containers community. But Canonical had other plans and so things changed a fair bit over the few months following my departure.

For those not aware, the rough timeline of what happened is:

So rather than contributing to LXD while working on some other new projects, a lot of my time has instead gone into setting up the Incus project for success.

And I think I’ve been pretty successful at that as we’re seeing a monthly user base growth (based on image server interactions) of around 25%. Incus is now natively available in most Linux distributions (Alpine, Arch Linux, Debian, Gentoo, Nix, Ubuntu and Void) with more coming soon (Fedora and EPEL).

Incus has 6 maintainers, most of whom were the original LXD maintainers.
We’ve seen over 100 individual contributors since Incus was forked from LXD, including around 20 students from the University of Texas at Austin who contributed to Incus as part of their virtualization class.

I’ve been acting as the release manager for Incus, also running all the infrastructure behind the project, mentoring new contributors and reviewing a number of changes while also contributing a number of new features myself, some sponsored by my customers, some just based on my personal interests.

A big milestone for Incus was its 6.0 LTS release as that made it suitable for production users.
Today we’re seeing around 40% of our users running the LTS release while the rest run the monthly releases.

On top of Incus itself, I created the Incus Deploy project, which is a collection of Ansible playbooks and Terraform modules that make it easy to deploy Incus clusters, and contributed to both the Ansible Incus connection plugin and our Incus Terraform/OpenTofu provider.

The other Linux Containers projects

As mentioned in my recent post about the 6.0.1 LTS releases, the Linux Containers project tries to do coordinated LTS releases on our core projects. This currently includes LXC, LXCFS and Incus.

I didn’t have to do too much work myself on LXC and LXCFS, thanks to Aleksandr Mikhalitsyn from the Canonical LXD team who’s been dealing with most of the review and issues in both LXC and LXCFS alongside other long time maintainers, Serge Hallyn and Christian Brauner.

NorthSec

NorthSec is a yearly cybersecurity conference, CTF and training provider, usually happening in late May in Montreal, Canada. It’s been operating since 2013 and is now one of the largest on-site CTF events in the world along with having a pretty sizable conference too.

I’m the current VP of Infrastructure for the event and have been involved with it from the beginning, designing and running its infrastructure, first on a bunch of old donated hardware and then slowly modernizing that to the environment we have now with proper production hardware both at our datacenter and on-site during the event.

This year, other than transitioning everything from LXD to Incus, the main focus has been on upgrading the OS on our 6 physical servers and dozens of infrastructure containers and VMs from Ubuntu 20.04 LTS to Ubuntu 24.04 LTS.

At the same time, we also significantly reduced the complexity of our infrastructure by operating a single unified Incus cluster, switching to OpenID Connect and OpenFGA for access control, and automating even more of our yearly infrastructure with Ansible and Terraform.

Automation is really key with NorthSec, as it’s a non-profit organization with a lot of staffing changes every year: around 100 year-long contributors, plus an additional 50 or so on-site volunteers!

I went over the NorthSec infrastructure in a couple of YouTube videos:

Conferences

I’ve cut down and focused my conference attendance a fair bit over this past year.
Part of it for budgetary reasons, part of it because of having so many things going on that fitting another couple of weeks of cross-country travel was difficult.

I decided to keep attending two main events: the Linux Plumbers Conference, where I co-organize the Containers and Checkpoint-Restore Micro-Conference, and FOSDEM, where I co-organize both the Containers and the Kernel devrooms.

With one event usually in September/October and the other in February, this provides two good opportunities to catch up with other developers and users, get to chat a bunch and make plans for the year.

I’m looking forward to catching up with folks at the upcoming Linux Plumbers Conference in Vienna, Austria!

What’s next

I’ve got quite a lot going on, so the remaining half of 2024 and first half of 2025 are going to be quite busy and exciting!

On the Incus front, we’ve got some exciting new features coming in, like the native OCI container support, more storage options, more virtual networking features, improved deployment tooling, full coverage of Incus features in Terraform/OpenTofu and even a small immutable OS image!

NorthSec is currently wrapping up a few last items related to its 2024 edition and then it will be time to set up the development infrastructure and get started on organizing 2025!

For conferences, as mentioned above, I’ll be in Vienna, Austria in September for Linux Plumbers and expect to be in Brussels again for FOSDEM in February.

There’s also more that I’m not quite ready to talk about, but expect some great Incus related news to come out in the next few months!

on July 07, 2024 12:00 PM

July 04, 2024

 

Critical OpenSSH Vulnerability (CVE-2024-6387): Please Update Your Linux

A critical security flaw (CVE-2024-6387) has been identified in OpenSSH, a program widely used for secure remote connections. This vulnerability could allow attackers to completely compromise affected systems (remote code execution).

Who is Affected?

Only specific versions of OpenSSH (8.5p1 to 9.7p1) running on glibc-based Linux systems are vulnerable. Newer versions are not affected.

What to Do?

  1. Update OpenSSH: Check your version by running ssh -V in your terminal. If you're using a vulnerable version (8.5p1 to 9.7p1), update immediately.

  2. Temporary Workaround (Use with Caution): Disabling the login grace timeout (setting LoginGraceTime=0 in sshd_config) can mitigate the risk, but be aware it increases susceptibility to denial-of-service attacks.

  3. Recommended Security Enhancement: Install fail2ban to prevent brute-force attacks. This tool automatically bans IPs with too many failed login attempts.
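To illustrate steps 1 and 2 (sample output and paths will vary by distribution; the drop-in configuration directory and the ssh unit name below are as found on recent Ubuntu releases):

$ ssh -V
OpenSSH_9.6p1 Ubuntu-3ubuntu13, OpenSSL 3.0.13 30 Jan 2024
$ sudo sshd -T | grep -i logingracetime
logingracetime 120
# Workaround only if you cannot update yet - it raises denial-of-service risk:
$ echo "LoginGraceTime 0" | sudo tee /etc/ssh/sshd_config.d/99-cve-2024-6387.conf
$ sudo systemctl restart ssh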

Optional: IP Whitelisting for Increased Security

Once you have fail2ban installed, consider allowing only specific IP addresses to access your server via SSH. This can be achieved using:

  • ufw for Ubuntu

  • firewalld for AlmaLinux or Rocky Linux
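For instance (203.0.113.10 is a documentation placeholder address), allowing SSH from a single address only:

# Ubuntu (ufw) - the allow rule is evaluated before the deny rule
$ sudo ufw allow from 203.0.113.10 to any port 22 proto tcp
$ sudo ufw deny 22/tcp

# AlmaLinux / Rocky Linux (firewalld)
$ sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.10" port port="22" protocol="tcp" accept'
$ sudo firewall-cmd --reload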

Additional Resources

About Fail2ban

Fail2ban monitors log files like /var/log/auth.log and bans IPs with excessive failed login attempts. It updates firewall rules to block connections from these IPs for a set duration. Fail2ban is pre-configured to work with common log files and can be easily customized for other logs and errors.

Installation Instructions:

  • Ubuntu: sudo apt install fail2ban

  • AlmaLinux/Rocky Linux: sudo dnf install fail2ban
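Once installed, a minimal /etc/fail2ban/jail.local sketch (the thresholds are illustrative, not recommendations) enables the SSH jail:

[sshd]
enabled  = true
maxretry = 5      # ban after 5 failed attempts...
findtime = 10m    # ...within a 10-minute window
bantime  = 1h     # ban duration

$ sudo systemctl enable --now fail2ban
$ sudo fail2ban-client status sshd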


About DevSec Hardening Framework

The DevSec Hardening Framework is a set of tools and resources that helps automate the process of securing your server infrastructure. It addresses the challenges of manually hardening servers, which can be complex, error-prone, and time-consuming, especially when managing a large number of servers. The framework integrates with popular infrastructure automation tools like Ansible, Chef, and Puppet. It provides pre-configured modules that automatically apply secure settings to your operating systems and services such as OpenSSH, Apache and MySQL. This eliminates the need for manual configuration and reduces the risk of errors.


Prepared by LinuxMalaysia with the help of Google Gemini


5 July 2024

 

In Google Doc Format 

 

https://docs.google.com/document/d/e/2PACX-1vTSU27PLnDXWKjRJfIcjwh9B0jlSN-tnaO4_eZ_0V5C2oYOPLLblnj3jQOzCKqCwbnqGmpTIE10ZiQo/pub 



on July 04, 2024 09:42 PM

July 02, 2024

My Debian contributions this month were all sponsored by Freexian.

  • I switched man-db and putty to Rules-Requires-Root: no, thanks to a suggestion from Niels Thykier.
  • I moved some files in pcmciautils as part of the /usr move.
  • I upgraded libfido2 to 1.15.0.
  • I made an upstream release of multipart 0.2.5.
  • I reviewed some security-update patches to putty.
  • I packaged yubihsm-connector, yubihsm-shell, and python-yubihsm.
  • openssh:
    • I did a bit more planning for the GSS-API package split, though decided not to land it quite yet to avoid blocking other changes on NEW queue review.
    • I removed the user_readenv option from PAM configuration (#1018260), and prepared a release note.
  • Python team:
    • I packaged zope.deferredimport, needed for a new upstream version of python-persistent.
    • I fixed some incompatibilities with pytest 8: ipykernel and ipywidgets.
    • I fixed a couple of RC or soon-to-be-RC bugs in khard (#1065887 and #1069838), since I use it for my address book and wanted to get it back into testing.
    • I fixed an RC bug in python-repoze.sphinx.autointerface (#1057599).
    • I sponsored uploads of python-channels-redis (Dale Richards) and twisted (Florent ‘Skia’ Jacquet).
    • I upgraded babelfish, django-favicon-plus-reloaded, dnsdiag, flake8-builtins, flufl.lock, ipywidgets, jsonpickle, langtable, nbconvert, requests, responses, partd, pytest-mock, python-aiohttp (fixing CVE-2024-23829, CVE-2024-23334, CVE-2024-30251, and CVE-2024-27306), python-amply, python-argcomplete, python-btrees, python-cups, python-django-health-check, python-fluent-logger, python-persistent, python-plumbum, python-rpaths, python-rt, python-sniffio, python-tenacity, python-tokenize-rt, python-typing-extensions, pyupgrade, sphinx-copybutton, sphinxcontrib-autoprogram, uncertainties, zodbpickle, zope.configuration, zope.proxy, and zope.security to new upstream versions.

You can support my work directly via Liberapay.

on July 02, 2024 12:02 PM

June 21, 2024

As the tech world comes together to celebrate FreeBSD Day 2024, we are thrilled to bring you an exclusive interview with none other than Beastie, the iconic mascot of BSD! In a rare and exciting appearance, Beastie joins Kim McMahon to share insights about their journey, their role in the BSD community, and some fun personal preferences. Here’s a sneak peek into the life of the beloved mascot that has become synonymous with BSD.

From Icon to Legend: How Beastie Became the BSD Mascot

Beastie, with their distinct and endearing devilish charm, has been the face of BSD for decades. But how did they land this coveted role? During the interview, Beastie reveals that their journey began back in the early days of BSD. The character was originally drawn by John Lasseter of Pixar fame, and quickly became a symbol of the BSD community’s resilience and innovation. Beastie’s playful yet formidable appearance captured the spirit of BSD, making them an instant hit among developers and users alike.

A Day in the Life of Beastie

What does a typical day look like for the BSD mascot? Beastie shares that their role goes beyond just being a symbol. They actively participate in community events, engage with developers, and even help in promoting BSD at various conferences around the globe. Beastie’s presence is a source of inspiration and motivation for the BSD community, reminding everyone of the project’s rich heritage and vibrant future.

Beastie’s Favorite Tools and Editors

No interview with a tech mascot would be complete without delving into their favorite tools. Beastie is an advocate of keeping things simple and efficient. When asked about their preferred text editor, Beastie enthusiastically endorsed Vim, praising its versatility and powerful features. They also shared their admiration for the classic Unix philosophy, which aligns perfectly with the minimalist yet powerful nature of Vim.

Engaging with the BSD Community

Beastie’s role is not just about representation; it’s about active engagement. They spoke about the importance of community in the BSD ecosystem and how it has been pivotal in driving the project forward. From organizing hackathons to participating in mailing lists, Beastie is deeply involved in fostering a collaborative and inclusive environment. They highlighted the incredible contributions of the BSD community, acknowledging that it’s the collective effort that makes BSD a robust and reliable operating system.

Looking Ahead: The Future of BSD

As we look to the future, Beastie remains optimistic about the path ahead for BSD. They emphasized the ongoing developments and the exciting projects in the pipeline that promise to enhance the BSD experience. Beastie encouraged new users and seasoned developers alike to explore BSD, contribute to its growth, and be a part of its dynamic community.

Join the Celebration

To mark FreeBSD Day 2024, the community is hosting a series of events, including workshops, Q&A sessions, and more. Beastie’s interview with Kim McMahon is just one of the highlights. Be sure to tune in and catch this rare glimpse into the life of BSD’s beloved mascot.

Final Thoughts

Beastie’s interview is a testament to the enduring legacy and vibrant community of BSD. As we celebrate FreeBSD Day 2024, let’s take a moment to appreciate the contributions of everyone involved and look forward to an exciting future for BSD.

Don’t miss out on this exclusive interview—check it out on YouTube and join the celebration of FreeBSD Day 2024!

Watch the interview here

The post Celebrating FreeBSD Day 2024: An Exclusive Interview with Beastie appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

on June 21, 2024 06:03 PM

June 18, 2024

 

Download And Use latest Version Of Nginx Stable

To ensure you receive the latest security updates and bug fixes for Nginx, configure your system's repository specifically for it. Detailed instructions on how to achieve this can be found on the Nginx website. Setting up the repository allows your system to automatically download and install future Nginx updates, keeping your web server running optimally and securely.

Visit these websites for information on how to configure your repository for Nginx.

https://nginx.org/en/linux_packages.html

https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/ 

Installing Nginx on different Linux distributions

Example from https://docs.bunkerweb.io/latest/integrations/#linux 

Ubuntu

sudo apt install -y curl gnupg2 ca-certificates lsb-release debian-archive-keyring && \
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
| sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null && \
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/debian `lsb_release -cs` nginx" \
| sudo tee /etc/apt/sources.list.d/nginx.list

# Latest Stable (pick either latest stable or by version)

sudo apt update && \
sudo apt install -y nginx

# By version (pick one only, latest stable or by version)

sudo apt update && \
sudo apt install -y nginx=1.24.0-1~$(lsb_release -cs)

AlmaLinux / Rocky Linux (Redhat)

Create the following file at /etc/yum.repos.d/nginx.repo

[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

# Latest Stable (pick either latest stable or by version)

sudo dnf install nginx

# By version (pick one only, latest stable or by version)

sudo dnf install nginx-1.24.0

Nginx Fork (for reference only – year 2024)

https://thenewstack.io/freenginx-a-fork-of-nginx/ 

https://github.com/freenginx/ 

Use this Web tool to configure nginx.

https://www.digitalocean.com/community/tools/nginx

https://github.com/digitalocean/nginxconfig.io 

Example

https://www.digitalocean.com/community/tools/nginx?domains.0.server.domain=songketmail.linuxmalaysia.lan&domains.0.server.redirectSubdomains=false&domains.0.https.hstsPreload=true&domains.0.php.phpServer=%2Fvar%2Frun%2Fphp%2Fphp8.2-fpm.sock&domains.0.logging.redirectAccessLog=true&domains.0.logging.redirectErrorLog=true&domains.0.restrict.putMethod=true&domains.0.restrict.patchMethod=true&domains.0.restrict.deleteMethod=true&domains.0.restrict.connectMethod=true&domains.0.restrict.optionsMethod=true&domains.0.restrict.traceMethod=true&global.https.portReuse=true&global.https.sslProfile=modern&global.https.ocspQuad9=true&global.https.ocspVerisign=true&global.security.limitReq=true&global.security.securityTxt=true&global.logging.errorLogEnabled=true&global.logging.logNotFound=true&global.tools.modularizedStructure=false&global.tools.symlinkVhost=false 

Harisfazillah Jamel - LinuxMalaysia - 20240619
on June 18, 2024 10:18 PM

June 16, 2024

The previous release of uCareSystem, version 24.05.0, introduced enhanced maintenance and cleanup capabilities for flatpak packages. The uCareSystem 24.06.0 has a special treat for desktop users 🙂 This new version includes: The people who supported the previous development cycle ( version 24.05 ) with their generous donations and are mentioned in the uCareSystem app: Where […]
on June 16, 2024 08:49 PM

June 05, 2024

I am still here, busy as ever; I just haven’t found the inspiration to blog. So soon after the loss of my son, I lost my only brother a couple of weeks ago. It has been a tough year for our family. Thank you, everyone, for your love and support during this difficult time. I will do my best in recapping my work; there has been quite a bit, as I am “keeping busy with work” so I don’t dwell too much on the sadness.

KDE Snaps:

Trying to debug the unable-to-save-files breakage in the latest Krita builds, without luck so far.

KisOpenGLCanvasRenderer::reportFailedShaderCompilation: Shader Compilation Failure: "Failed to add vertex shader source from file: matrix_transform.vert - Cause: "

I have implemented everything from https://snapcraft.io/docs/gpu-support , it has worked for years and now suddenly it just stopped. I have had to put it on hold for now, it is unpaid work and I simply don’t have time.

With the help of my GSOC student we are improving the Qt6 snap MR: https://invent.kde.org/neon/snap-packaging/kde-qt6-core-sdk/-/merge_requests/3 and many improvements on top of that. This exposed many issues with the kf6 snap and the linking to static libs. Those are being worked on now.

Updated qt to 6.7.1

Qt6 apps in the works: okular, ark, gwenview, kwrited, elisa

Kubuntu:

So many SRUs for the Noble release that I will probably miss a few.

https://bugs.launchpad.net/ubuntu/+source/ark/+bug/2068491 Ark cannot open 7-zip files. Sadly the patches were for qt6, waiting for a qt5 port upstream.

https://bugs.launchpad.net/ubuntu/noble/+source/merkuro/+bug/2065063 Crash due to missing qml. Fix is in git, no upload rights. Requested sponsor.

https://bugs.launchpad.net/ubuntu/+source/tellico/+bug/2065915 Several applications no longer work on architectures other than amd64 due to hard-coded paths. All fixed in git. Several uploaded to oracular; sponsorship has been requested for the rest. Noble updates rejected despite SRU, going to retry.

https://bugs.launchpad.net/ubuntu/+source/sddm/+bug/2066275 The dreaded black screen on second boot bug is fixed in git and oracular. Noble was rejected despite the SRU. Will retry.

https://bugs.launchpad.net/ubuntu/+source/kubuntu-meta/+bug/2066028 Broken systray submenus. Fixed in git and oracular. Noble rejected despite SRU. Will retry.

https://bugs.launchpad.net/ubuntu/+source/plasma-workspace/+bug/2067747 Long standing bug with plasma not loading with lightdm. Fixed in git and oracular. Noble rejected… will retry.

https://bugs.launchpad.net/ubuntu/+source/plasma-workspace/+bug/2067742 CVE-2024-36041. Fixed in git and oracular; noble rejected, will retry.

And many more…

I am applying for MOTU in hopes it will reduce all of my uploading issues. https://wiki.ubuntu.com/scarlettmoore/MOTUApplication

Debian:

kf6-knotifications and kapidox. Will jump into Plasma 6 next week!

Misc:

Went to LinuxFest Northwest with Valorie! We had a great time and it was a huge success, we had many people stop by our booth.

As usual, if you like my work and want to see Plasma 6 in Kubuntu it all depends on you!

Kubuntu will be out of funds soon and needs donations! Thank you for your consideration.

https://kubuntu.org/donate/

Personal:

Support for my grandson: https://www.gofundme.com/f/in-loving-memory-of-william-billy-dean-scalf

on June 05, 2024 05:30 PM

June 02, 2024

My Debian contributions this month were all sponsored by Freexian.

The bulk of my Debian time this month went towards trying to haul more Python packages up to current versions, but I got a few other bits and pieces done as well.

  • I did a little work on improving debbugs’ autopkgtest status.
  • openssh:
    • I fixed an OpenSSL version mismatch error in openssh-ssh1.
    • I finally tracked down a baffling CI issue in openssh, unblocking several contributed merge requests that I’d been sitting on until I could get CI to pass for them. (Special thanks to Andreas Hasenack; GSS-API integration tests will make my life much easier.)
    • I removed the user_readenv=1 option from openssh’s PAM configuration, and did some work on release notes to document this change for affected users.
    • I started work on the first stage of my plan to split out GSS-API key exchange support to separate packages.
  • Python team:
    • I upgraded bitstruct, flufl.enum, flufl.testing, gunicorn, langtable, psycopg3, pygresql, pylint-flask, python-click-didyoumean, python-gssapi, python-httplib2, python-json-log-formatter, python-persistent, python-pgspecial, python-pyld, python-repoze.tm2, python-serializable, python-tenacity, python-typing-extensions, python-unidiff, responses, shortuuid (including an upstream packaging tweak), sqlparse, vulture, zc.lockfile, and zope.interface to new upstream versions.
    • I cherry-picked an upstream PR to fix a pytest 8 incompatibility in ipywidgets.
  • I decided that fixing my old troffcvt package to support groff 1.23.0 wasn’t worth the time investment, and filed a removal request instead.
  • I NMUed bidentd and linuxtv-dvb-apps to declare Architecture: linux-any (and in the latter case also to fix a build failure due to 64-bit time), and worked with the buildd team to remove several of the other remaining entries from Packages-arch-specific.

You can support my work directly via Liberapay.

on June 02, 2024 10:53 AM

June 01, 2024

I’ve a self-hosted Nextcloud installation which is, frankly, a pain: there’s a good chance updates will break everything.

Plesk is a fairly intuitive interface for self-hosting, but it is best described as non-standard Ubuntu – it handles PHP oddly in particular. Try running any of the php occ commands in its built-in ssh terminal and you’ll experience an exercise in frustration.

It’s so much easier to ssh from an Ubuntu terminal and run occ commands directly – giving a clear error message that you can actually do something with.

So it’s the Circles app; let’s disable it:

php occ app:disable circles

Then repair the installation:

php occ maintenance:repair

Finally turn off maintenance mode:

php occ maintenance:mode --off

I don’t even use that app but we’re back.
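As an aside, on a stock (non-Plesk) Ubuntu install, occ usually has to run as the web server user from the Nextcloud directory; a sketch assuming www-data and an install under /var/www/nextcloud:

cd /var/www/nextcloud
sudo -u www-data php occ app:disable circles
sudo -u www-data php occ maintenance:repair
sudo -u www-data php occ maintenance:mode --off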

on June 01, 2024 03:37 PM

May 24, 2024

In my previous blog, I explored The New APT 3.0 solver. Since then I have been at work in the test suite making tests pass and fixing some bugs.

You see, for all intents and purposes, the new solver is a very stupid, naive DPLL SAT solver (it just so happens we don’t actually have any pure literals in there). We can control it in a bunch of ways:

  1. We can mark packages as “install” or “reject”
  2. We can order actions/clauses. When backtracking, the action that came later will be the first we try to backtrack on
  3. We can order the choices of a dependency - we try them left to right.

This is about all that we really want to do; we can’t, when we reach a conflict, say “oh, but this conflict was introduced by that upgrade, and it seems more important, so let’s not backtrack on the upgrade request but on this dependency instead.”

This forces us to think about lowering the dependency problem into this form, such that not only do we get formally correct solutions, but also semantically correct ones. This is nice because we can apply a systematic way to approach the issue rather than introducing ad-hoc rules in the old solver which had a “which of these packages should I flip the opposite way to break the conflict” kind of thinking.

Now our test suite has a whole bunch of these semantics encoded in it, and I’m going to share some problems and ideas for how to solve them. I can’t wait to fix these and the error reporting and then turn it on in Ubuntu and later Debian (the defaults change is a post-trixie change, let’s be honest).

apt upgrade is hard

The apt upgrade command implements a safe version of dist-upgrade that essentially calculates the dist-upgrade and then undoes anything that would cause a package to be removed, but (unlike its apt-get counterpart) it allows the solver to install new packages.
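For reference, the difference between the three flavours is easy to see with simulated runs (-s changes nothing on the system):

apt-get upgrade -s     # upgrades only; never installs new or removes packages
apt upgrade -s         # may install new packages, never removes any
apt full-upgrade -s    # the dist-upgrade: may install and remove packages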

Now, consider the following package is installed:

X Depends: A (= 1) | B

An upgrade from A=1 to A=2 is available. What should happen?

The classic solver would choose to remove X in a dist-upgrade, and then upgrade A, so its answer is quite clear: keep back the upgrade of A.

The new solver however sees two possible solutions:

  1. Install B to satisfy X Depends A (= 1) | B.
  2. Keep back the upgrade of A

Which one does it pick? This depends on the order in which it sees the upgrade action for A and the dependency, as it will backjump chronologically. So

  1. If it gets to the dependency first, it marks A=1 for install to satisfy A (= 1). Then it gets to the upgrade request, which is just A Depends A (= 2) | A (= 1) and sees it is satisfied already and is content.

  2. If it gets to the upgrade request first, it marks A=2 for install to satisfy A (= 2). Then later it gets to X Depends: A (= 1) | B, sees that A (= 1) is not satisfiable, and picks B.

We have two ways to approach this issue:

  1. We always order upgrade requests last, so they will be kept back in case of conflicting dependencies
  2. We require that, for apt upgrade, a currently satisfied dependency must be satisfied by currently installed packages, hence eliminating B as a choice.

Recommends are hard too

See, if you have an X Recommends: A (= 1) and a new version of A, A (= 2), the solver currently will silently break the Recommends in some cases.

But let’s explore what the behavior of a X Recommends: A (= 1) in combination with an available upgrade of A (= 2) should be. We could say the rule should be:

  • An upgrade should keep back A instead of breaking the Recommends
  • A dist-upgrade should either keep back A or remove X (if it is obsolete)

This essentially leaves us with the same choices as for the previous problem, but with an interesting twist. We can change the ordering (and we already did), but we could also introduce a new rule, “promotions”:

A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently satisfied, must continue to be satisfied, that is, it effectively is promoted to a Depends.

This neatly solves the problem for us. We will never break Recommends that are satisfied.

Likewise, we already have a Recommends demotion rule:

A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently unsatisfied, will not be further evaluated (it is treated like a Suggests is in the default configuration).

Whether we should be allowed to break Suggests with our decisions or not (the old autoremover did not, for instance) is a different decision. Should we promote currently satisfied Suggests to Depends as well? Should we follow currently satisfied Suggests so the solver sees them and doesn’t autoremove them, but treat them as optional?

tightening of versioned dependencies

Another case of versioned dependencies with alternatives that has complex behavior is something like

X Depends: A (>= 2) | B
X Recommends: A (>= 2) | B

In both cases, installing X should upgrade an installed A < 2 rather than install B. But a naive SAT solver might not. If your request to keep A installed is encoded as A (= 1) | A (= 2), then it first picks A (= 1). When it sees the Depends/Recommends, it will switch to B.

We can solve this again as in the previous example by ordering the “keep A installed” requests after any dependencies. Notably, we will enqueue the common dependencies of all A versions first before selecting a version of A, so something may select a version for us.

version narrowing instead of version choosing

A different approach to dealing with the issue of version selection is to not select a version until the very last moment. So instead of selecting a version to satisfy A (>= 2), we translate

Depends: A (>= 2)

into two rules:

  1. The package selection rule:

     Depends: A
    

    This ensures that any version of A is installed (i.e. it adds a version choice clause, A (= 1) | A (= 2) in an example with two versions for A).

  2. The version narrowing rule:

     Conflicts: A (<< 2)
    

    This outright would reject a choice of A (= 1).

So now we have 3 kinds of clauses:

  1. package selection
  2. version narrowing
  3. version selection

If we process them in that order, we should surely be able to find the solution that best matches the semantics of our Debian dependency model, i.e. selecting earlier choices in a dependency before later choices in the face of version restrictions.

This still leaves one issue: What if our maintainer did not use Depends: A (>= 2) | B but e.g. Depends: A (= 3) | B | A (= 2)? He’d expect us to fall back to B if A (= 3) is not installable, and not to A (= 2). But we’d like to enqueue A and reject all choices other than 3 and 2. I think it’s fair to say: “Don’t do that, then” here.

Implementing strict pinning correctly

APT knows a single candidate version per package; this makes the solver relatively deterministic: it will only ever pick the candidate, or an installed version. This also happens to significantly reduce the search space, which is good – less backtracking. An up-to-date system will only ever have one version per package that can be installed, so we never actually have to choose versions.

But of course, APT allows you to specify a non-candidate version of a package to install, for example:

apt install foo/oracular-proposed

The way this works is that the core component of the previous solver, the pkgDepCache, maintains what essentially amounts to an overlay of the policy that you could see with apt-cache policy.

The solver currently, however, validates allowed version choices against the policy directly, and hence finds these versions are not allowed and craps out. This is an interesting problem because the solver should not be dependent on the pkgDepCache, as the pkgDepCache initialization (Building dependency tree...) accounts for about half of the runtime of APT (until the Y/n prompt) and I’d really like to get rid of it.

But currently the frontend does go via the pkgDepCache. It marks the packages in there, building up what you could call a transaction, and then we translate it to the new solver, and once it is done, it translates the result back into the pkgDepCache.

The “allowed version” rule is currently implemented by reducing the search space, i.e. for every dependency, we outright ignore any non-allowed versions. So if you have a version 3 of A that is ignored, a Depends: A would be translated into A (= 2) | A (= 1).

However this has two disadvantages. (1) It means if we show you why A could not be installed, you don’t even see A (= 3) in the list of choices and (2) you would need to keep the pkgDepCache around for the temporary overrides.

So instead of actually enforcing the allowed version rule by filtering, a more reasonable model is that we apply the allowed version rule by just marking every other version as not allowed when discovering the package in the from-depcache translation layer. This doesn’t really increase the search space either, but it solves both our problem of making overrides work and giving you a reasonable error message that lists all versions of A.

pulling up common dependencies to minimize backtracking cost

One of the common issues we have is that when we have a dependency group

`A | B | C | D`

we try them in order, and if one fails, we undo everything it did, and move on to the next one. However, this isn’t perhaps the best choice of operation.

I explained before that one thing we do is queue the common dependencies of a package (i.e. dependencies shared in all versions) when marking a package for install, but we don’t do this here: We have already lowered the representation of the dependency group into a list of versions, so we’d need to extract the package back out of it.

This can of course be done, but there may be a more interesting solution to the problem, in that we simply enqueue all the common dependencies. That is, we add n backtracking levels for n possible solutions:

  1. We enqueue the common dependencies of all possible solutions deps(A)&deps(B)&deps(C)&deps(D)
  2. We decide (adding a decision level) not to install D right now and enqueue deps(A)&deps(B)&deps(C)
  3. We decide (adding a decision level) not to install C right now and enqueue deps(A)&deps(B)
  4. We decide (adding a decision level) not to install B right now and enqueue A

Now if we need to backtrack from our choice of A we hopefully still have a lot of common dependencies queued that we do not need to redo. While we have more backtracking levels, each backtracking level would be significantly cheaper, especially if you have cheap backtracking (which admittedly we do not have, yet anyway).

The caveat though is: It may be pretty expensive to find the common dependencies. We need to iterate over all dependency groups of A and see if they are in B, C, and D, so we have a complexity of roughly

#A * (#B+#C+#D)

Each dependency group we need to check (i.e. is X|Y in B?) meanwhile has linear cost: we need to compare the memory content of two pointer arrays containing the list of possible versions that solve the dependency group. This means that X|Y and Y|X are different dependencies of course, but that is to be expected – they are. But any dependency of the same order will have the same memory layout.

So really the cost is roughly N^4. This isn’t nice.

You can apply various heuristics here on how to improve that, or you can even apply binary logic:

  1. Enqueue common dependencies of A|B|C|D
  2. Move into the left half, enqueue of A|B
  3. Again divide and conquer and select A.

This has a significant advantage in long lists of choices, and also in the common case, where the first solution should be the right one.

Or again, if you enqueue the package and a version restriction instead, you already get the common dependencies enqueued for the chosen package at least.

on May 24, 2024 08:57 AM

May 20, 2024

OR...

Aaron Rainbolt

Contrary to what you may be thinking, this is not a tale of an inexperienced coder pretending to know what they’re doing. I have something even better for you.

It all begins in the dead of night, at my workplace. In front of me is a typical programmer’s desk - two computers, three monitors (one of which isn’t even plugged in), a mess of storage drives, SD cards, 2FA keys, and an arbitrary RPi 4, along with a host of items that most certainly don’t belong on my desk, and a tangle of cables that would give even a rat a migraine. My dev laptop is sitting idle on the desk, while I stare intently at the screen of a system running a battery of software tests. In front of me are the logs of a failed script run.

Generally when this particular script fails, it gives me some indication as to what went wrong. There are thorough error catching measures (or so I thought) throughout the code, so that if anything goes wrong, I know what went wrong and where. This time though, I’m greeted by something like this:

$ systemctl status test-sh.service
test-sh.service - does testing things
...
May 20 23:00:00 desktop-pc systemd[1]: Starting test-sh.service - does testing things
May 20 23:00:00 desktop-pc systemd[1]: test-sh.service: Failed with result ‘exit-code’.
May 20 23:00:00 desktop-pc systemd[1]: Failed to start test-sh.service.

I stare at the screen in bewilderment for a few seconds. No debugging info, no backtraces, no logs, not even an error message. It’s as if the script simply decided it needed some coffee before it would be willing to keep working this late at night. Having heard the tales of what happens when you give a computer coffee, I elected to try a different approach.

$ vim /usr/bin/test-sh
1 #!/bin/bash
2 #
3 # Copyright 2024 ...
4 set -u;
5 set -e;

Before I go into what exactly is wrong with this picture, I need to explain a bit about how Bash handles the ultimate question of life, “what is truth?”

(RED ALERT: I do not know if I’m correct about the reasoning behind the design decisions I talk about in the rest of this article. Don’t use me as a reference for why things work like this, and please correct me if I’ve botched something. Also, a lot of what I describe here is simplified, so don’t be surprised if you notice or discover that things are a bit more complex in reality than I make them sound like here.)

Bash, as many of you probably know, is primarily a “glue” language - it glues applications to each other, it glues the user to the applications, and it glues one’s sanity to the ceiling, far out of the user’s reach. As such, it features a bewildering combination of some of the most intuitive and some of the least intuitive behaviors one can dream up, and the handling of truth and falsehood is one of these bewildering things.

Every command you run in Bash reports back whether or not what it did “worked”. (“Worked” is subjective and depends on the command, but for the most part if a command says “It worked”, you can trust that it did what you told it to, at least mostly.) This is done by means of an “exit code”, which is nothing more than a number between 0 and 255. If a program exits and hands the shell an exit code of 0, it usually means “it worked”, whereas a non-zero exit code usually means “something went wrong”. (This makes sense if you know a bit about how programs written in C work - if your program is written to just “do things” and then exit, it will default to exiting with code zero.)

Because zero = good and non-zero = not good, it makes sense to treat zero as meaning “true” and non-zero as meaning “false”. That’s exactly what Bash does - if you do something like “if command; then commandIfTrue; else commandIfFalse; fi”, Bash will run “commandIfTrue” if “command” exits with 0, and will run “commandIfFalse” if “command” exits with 1 or higher.

Now since Bash is a glue language, it has to be able to handle it if a command runs and fails. This can be done with some amount of difficulty by testing (almost) every command the script runs, but that can be quite tedious. There’s a (generally) easier way however, which is to tell the script to immediately exit if any command exits with a non-zero exit code. This is done by using the command “set -e” at or near the top of the script. Once “set -e” is active, any command that fails will cause the whole script to stop.
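A minimal illustration (a toy script, not from the post):

#!/bin/bash
set -e                    # stop the script as soon as any command exits non-zero
echo "this prints"
false                     # exits with status 1, so set -e aborts here
echo "this is never reached"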

So back to my script. I’m using “set -e” so that if anything goes wrong, the script stops. What could go wrong other than a failed command? To answer that question, we have to take a look at how some things work in C.

C is a very different language than Bash. Whereas Bash is designed to take a bunch of pieces and glue them together, C is designed to make the pieces themselves. You can think of Bash as being a glue gun and C as being a 3d printer. As such, C does not concern itself nearly as much with things like return codes and exiting when a command fails. It focuses on taking data and doing stuff with it.

Since C is more data- and algorithm-oriented, true and false work significantly differently here. C sees 0 as meaning “none, empty, all bits set to 0, etc.” and thus treats it as meaning “false”. Any non-zero number has a value, and can be treated as “on” or “true”. An astute reader will notice this is exactly the opposite of how Bash works, where 0 is true and non-zero is false. (In my opinion this is a rather lamentable design decision, but sadly these behaviors have been standardized for longer than I’ve been alive, so there’s not much point in trying to change them. But I digress.)

C also of course has features for doing math, called “operators”. One of the most common operators is the assignment operator, “=”. The assignment operator’s job is to take whatever you put on the right side of it, and store it in whatever you put on the left side. If you say “a = 0”, the value “0” will be stored in the variable “a” (assuming things work right). But the assignment operator has a trick up its sleeve - not only does it assign the value to the variable, it also returns the value. Basically what that means is that the statement “a = 0” spits out an extra value that you can do things with. This allows you to do things like “a = b = 0”, which will assign 0 to “b”, return zero, and then assign that returned zero to "a”. (The assignment of the second zero to “a” also returns a zero, but that simply gets ignored by the program since there’s nothing to do with it.)

You may be able to see where I’m going with this. Assigning a value to a variable also returns that value… and 0 means “false”… so “a = 0” succeeds, but also returns what is effectively “false”. That means if you do something like “if (a = 0) { ... } else { explodeComputer(); }”, the computer will explode. “a = 0” returns “false”, thus the “if” condition does not run and the “else” condition does. (Coincidentally, this is also a good example of the “world’s last programming bug” - the comparison operation in C is “==”, which is awfully easy to mistype as the assignment operator, “=”. Using an assignment operator in an “if” statement like this will almost always result in the code within the “if” being executed, as the value being stored in the variable will usually be non-zero and thus will be seen as “true” by the “if” statement. This also corrupts the variable you thought you were comparing something to. Some fear that a programmer with access to nuclear weapons will one day write something like “if (startWar = 1) { destroyWorld(); }” and thus the world will be destroyed by a missing equals sign.)

“So what,” you say. “Bash and C are different languages.” That’s true, and in theory this would mean that everything here is fine. Unfortunately theory and practice are the same in theory but much different in practice, and this is one of those instances where things go haywire because of weird differences like this. There’s one final piece of the puzzle to look at first though - how to do math in Bash.

Despite being a glue language, Bash has some simple math capabilities, most of which are borrowed from C. Yes, including the behavior of the assignment operator and the values for true and false. When you want to do math in Bash, you write “(( do math here... ))”, and everything inside the double parentheses is evaluated. Any assignment done within this mode is executed as expected. If I want to assign the number 5 to a variable, I can do “(( var = 5 ))” and it shall be so.

But wait, what happens with the return value of the assignment operator?

Well, take a guess. What do you think Bash is going to do with it?

Let’s look at it logically. In C (and in Bash’s math mode), 0 is false and non-zero is true. In Bash, 0 is true and non-zero is false. Clearly if whatever happens within math mode fails and returns false (0), Bash should not misinterpret this as true! Things like “(( 5 == 6 ))” shouldn’t be treated as being true, right? So what do we do with this conundrum? Easy solution - convert the return value to an exit code so that its semantics are retained across the C/Bash barrier. If the return value of the math mode statement is false (0), it should be converted to Bash’s concept of false (non-zero), therefore the return value of 0 is converted to an exit code of 1. On the other hand, if the return value of the math mode statement is true (non-zero), it should be converted to Bash’s concept of true (0), therefore the return value of anything other than 0 is converted to an exit code of 0. (You probably see the writing on the wall at this point. Spoiler, my code was weighed in the balances and found wanting.)

So now we can put all this nice, logical, sensible behavior together and make a glorious mess with it. Guess what happens if you run “(( var = 0 ))” in a script where “set -e” is enabled.

  • “0” is assigned to “var”.

  • The statement returns 0.

  • Bash dutifully converts that to a 1 (false/failure).

  • Bash now sees the command as having failed.

  • “set -e” says the script should immediately stop if anything fails.

  • The script crashes.

You can try this for yourself - pop open a terminal and run “set -e; (( var = 0 ));” and watch in awe as your terminal instantly closes (or otherwise shows an indication that Bash has exited).

So back to the code. In my script, I have a function that helps with generating random numbers within any specified bounds. Basically it just grabs the value of “$RANDOM” (which is a special variable in Bash that always returns an integer between 0 and 32767) and does some manipulations on it so that it becomes a random number between a “lower bound” and an “upper bound” parameter. In the guts of that function’s code I have many “math mode” statements for getting those numbers into shape. Those statements include variable assignments, and those variable assignments were throwing exit codes into the script. I had written this before enabling “set -e”, so everything was fine before, but now “set -e” was enabled and Bash was going to enforce it as ruthlessly as possible.

While I will never know what line of code triggered the failure, it’s a fairly safe bet that the culprit was:

88 (( _val = ( _val % ( _adj_upper_bound + 1 ) ) ));

This basically takes whatever is in “_val”, divides it by “_adj_upper_bound + 1”, and then assigns the remainder of that operation to “_val”. This makes sure that “_val” is lower than “_adj_upper_bound + 1”. (This is typically known as “taking the modulus”, and the “%” operator here is the “modulo operator”. For the math people reading this, don’t worry, I did the requisite gymnastics to ensure this code didn’t have modulo bias.) If “_val” happens to be a multiple of “_adj_upper_bound + 1”, the code on the right side of the assignment operator will evaluate to 0, which will become an exit code of 1, thus exploding my script because of what appeared to be a failed command.

Sigh.

So there’s the problem. What’s the solution? Turns out it’s pretty simple. Among Bash’s feature set, there is the profoundly handy “logical or operator”, “||”. This operator lets us say “if this OR that is true, return true.” In other words, “Run whatever’s on the left hand of the ||. If it exits 0, move on. If it exits non-zero, run whatever’s on the right hand of the ||. If it exits 0, move on and ignore the earlier failure. Only return non-zero if both commands fail.” There’s also a handy command in Bash called “true” that does nothing except for give an exit code of 0. That means that if you ever have a line of code in Bash that is liable to exit non-zero but it’s no big deal if it does, you can just slap an “|| true” on the end and it will magically make everything work by pretending that nothing went wrong. (If only this worked in real life!) I proceeded to go through and apply this bandaid to every standalone math mode call in my script, and it now seems to be behaving itself correctly again. For now anyway.
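Here are the trap and the bandaid side by side (a toy demo, not the author’s actual script):

#!/bin/bash
set -e
(( a = 5 ))               # assignment returns 5, exit status 0: fine
echo "still alive"
(( b = 0 )) || true       # returns 0, exit status 1, rescued by || true
echo "still alive"
(( c = 0 ))               # same failure with no guard: set -e kills the script here
echo "never printed"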

tl;dr: Faking success is sometimes a perfectly valid way to solve a computing problem. Just don’t live the way you code and you’ll be alright.

on May 20, 2024 08:06 AM

May 14, 2024

The new APT 3.0 solver

Julian Andres Klode

APT 2.9.3 introduces the first iteration of the new solver, codenamed solver3 and now available with the --solver 3.0 option. The new solver works fundamentally differently from the old one.
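If you want to experiment, the solver can be selected per invocation; a simulated run keeps it harmless (assuming APT 2.9.3 or newer, with foo as a stand-in package name):

apt --solver 3.0 install foo -s    # -s simulates; nothing is changed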

How does it work?

Solver3 is a fully backtracking dependency solving algorithm that defers choices to as late as possible. It starts with an empty set of packages, then adds the manually installed packages, and then installs packages automatically as necessary to satisfy the dependencies.

Deferring the choices is implemented multiple ways:

First, all install requests recursively mark dependencies with a single solution for install, and any packages that are being rejected due to conflicts or user requests will cause their reverse dependencies to be transitively marked as rejected, provided their OR group cannot be solved by a different package.

Second, any dependency with more than one choice is pushed to a priority queue that is ordered by the number of possible solutions, such that we resolve a|b before a|b|c.

Not just by the number of solutions, though. One important point to note is that optional dependencies, that is, Recommends, always sort after mandatory dependencies. Also note that Recommended packages do not “nest” in backtracking - dependencies of a Recommended package are themselves not optional, so they will have to be resolved before the next Recommended package is seen in the queue.

Another important step in deferring choices is extracting the common dependencies of a package across its versions and then installing them before we even decide which of its versions we want to install - one of the dependencies might cycle back to a specific version after all.

Decisions about packages are recorded at a certain decision level; if we reach a conflict, we backtrack to the previous decision level, mark the decision we made (install X) in the inverse (DO NOT INSTALL X), reset all the state of all decisions made at the higher level, and restore any dependencies that are no longer resolved to the work queue.

Comparison to SAT solver design

If you have studied SAT solver design, you’ll find that essentially this is a DPLL solver without pure literal elimination. A pure literal elimination phase would not work for a package manager: first, negative pure literals (packages that everything conflicts with) do not exist, and positive pure literals (packages nothing conflicts with) we do not want to mark for install - we want to install as little as possible (well, subject to policy).

As part of the solving phase, we also construct an implication graph, albeit a partial one: The first package installing another package is marked as the reason (A -> B), the same thing for conflicts (not A -> not B).

Once we have added the ability to have multiple parents in the implication graph, it stands to reason that we can also implement the much more advanced method of conflict-driven clause learning; where we do not jump back to the previous decision level but exactly to the decision level that caused the conflict. This would massively speed up backtracking.

What changes can you expect in behavior?

The most striking difference to the classic APT solver is that solver3 always keeps manually installed packages around; it never offers to remove them. We will relax that in a future iteration so that it can replace packages with new ones, that is, if your package is no longer available in the repository (obsolete), but there is one that Conflicts+Replaces+Provides it, solver3 will be allowed to install that and remove the other.

Implementing that policy is rather trivial: We just need to queue obsolete | replacement as a dependency to solve, rather than mark the obsolete package for install.

Another critical difference is the change in the autoremove behavior: The new solver currently only knows the strongest dependency chain to each package, and hence it will not keep around any packages that are only reachable via weaker chains. A common example is when gcc-<version> packages accumulate on your system over the years. They all have Provides: c-compiler and the libtool Depends: gcc | c-compiler is enough to keep them around.

New features

The new option --no-strict-pinning instructs the solver to consider all versions of a package and not just the candidate version. For example, you could use apt install foo=2.0 --no-strict-pinning to install version 2.0 of foo and upgrade - or downgrade - packages as needed to satisfy foo=2.0 dependencies. This mostly comes in handy in use cases involving Debian experimental or the Ubuntu proposed pockets, where you want to install a package from there, but try to satisfy from the normal release as much as possible.

The implication graph building allows us to implement an apt why command that, while not as nicely detailed as aptitude’s, at least tells you the exact reason why a package is installed. It will only show the strongest dependency chain at first, of course, since that is what we record.

What is left to do?

At the moment, error information is not stored across backtracking in any way, but we generally will want to show you the first conflict we reach, as it is the most natural one; or all conflicts. Currently you get the last conflict, which may not be particularly useful.

Likewise, errors currently are just rendered as implication graphs of the form [not] A -> [not] B -> ..., and we need to put in some work to present those nicely.

The test suite is not passing yet; I haven’t really started working on it. A challenge is that most packages in the test suite are manually installed as they are mocked, and the solver now doesn’t remove those.

We plan to implement the replacement logic such that foo can be replaced by foo2 Conflicts/Replaces/Provides foo without needing to be automatically installed.

Improving the backtracking to be non-chronological conflict-driven clause learning would vastly enhance our backtracking performance. Not that it seems to be an issue right now in my limited testing (mostly noble 64-bit-time_t upgrades). A lot of the complexity you would normally have is not there, because the manually installed packages and the resulting unit propagation (single-solution Depends/Reverse-Depends for Conflicts) already ground us fairly far in what changes we can actually make.

Once all the stuff has landed, we need to start rolling it out and gather feedback. On Ubuntu I’d like automated feedback on regressions (running solver3 in parallel, checking if result is worse and then submitting an error to the error tracker), on Debian this could just be a role email address to send solver dumps to.

At the same time, we can also incrementally start rolling this out. Like phased updates in Ubuntu, we can also roll out the new solver as the default to 10%, 20%, 50% of users before going to the full 100%. This will allow us to capture regressions early and fix them.

on May 14, 2024 11:26 AM

May 12, 2024

The Kubuntu Team are thrilled to announce significant updates to KubuQA, our streamlined ISO testing tool that has now expanded its capabilities beyond Kubuntu to support Ubuntu and all its other flavors. With these enhancements, KubuQA becomes a versatile resource that ensures a smoother, more intuitive testing process for upcoming releases, including the 24.04 Noble Numbat and the 24.10 Oracular Oriole.

What is KubuQA?

KubuQA is a specialized tool developed by the Kubuntu Team to simplify the process of ISO testing. Utilizing the power of Kdialog for user-friendly graphical interfaces and VirtualBox for creating and managing virtual environments, KubuQA allows testers to efficiently evaluate ISO images. Its design focuses on accessibility, making it easy for testers of all skill levels to participate in the development process by providing clear, guided steps for testing ISOs.

New Features and Extensions

The latest update to KubuQA marks a significant expansion in its utility:

  • Broader Coverage: Initially tailored for Kubuntu, KubuQA now supports testing ISO images for Ubuntu and all other Ubuntu flavors. This broadened coverage ensures that any Ubuntu-based community can benefit from the robust testing framework that KubuQA offers.
  • Support for Latest Releases: KubuQA has been updated to include support for the newest Ubuntu release cycles, including the 24.04 Noble Numbat and the upcoming 24.10 Oracular Oriole. This ensures that communities can start testing early and often, leading to more stable and polished releases.
  • Enhanced User Experience: With improvements to the Kdialog interactions, testers will find the interface more intuitive and responsive, which enhances the overall testing experience.

Call to Action for Ubuntu Flavor Leads

The Kubuntu Team is keen to collaborate closely with leaders and testers from all Ubuntu flavors to adopt and adapt KubuQA for their testing needs. We believe that by sharing this tool, we can foster a stronger, more cohesive testing community across the Ubuntu ecosystem.

We encourage flavor leads to try out KubuQA, integrate it into their testing processes, and share feedback with us. This collaboration will not only improve the tool but also ensure that all Ubuntu flavors can achieve higher quality and stability in their releases.

Getting Involved

For those interested in getting involved with ISO testing using KubuQA:

  • Download the Tool: You can find KubuQA on the Kubuntu Team Github.
  • Join the Community: Engage with the Kubuntu community for support and to connect with other testers. Your contributions and feedback are invaluable to the continuous improvement of KubuQA.

Conclusion

The enhancements to KubuQA signify our commitment to improving the quality and reliability of Ubuntu and its derivatives. By extending its coverage and simplifying the testing process, we aim to empower more contributors to participate in the development cycle. Whether you’re a seasoned tester or new to the community, your efforts are crucial to the success of Ubuntu.

We look forward to seeing how different communities will utilise KubuQA to enhance their testing practices. And by the way, have you thought about becoming a member of the Kubuntu Community? Join us today to make a difference in the world of open-source software!

on May 12, 2024 09:28 PM

May 08, 2024

I recently discovered that there's an old software edition of the Oxford English Dictionary (the second edition) on archive.org for download. Not sure how legal this is, mind, but I thought it would be useful to get it running on my Ubuntu machine. So here's how I did that.

Firstly, download the file; that will give you a file called Oxford English Dictionary (Second Edition).iso, which is a CD image. We want to unpack that, and usefully there is 7zip in the Ubuntu archives which knows how to unpack ISO files.1 So, unpack the ISO with 7z x "Oxford English Dictionary (Second Edition).iso". That will give you two more files: OED2.DAT and SETUP.EXE. The .DAT file is, I think, all the dictionary entries in some sort of binary format (and is 600MB, so be sure you have the space for it). You can then run wine SETUP.EXE, which will install the software using wine, and that's all good.2 Choose a folder to install it in (I chose the same folder that SETUP.EXE is in, at which point it will create an OED subfolder in there and unpack a bunch of files into it, including OED.EXE).

That's the easy part. However, it won't quite work yet. You can see this by running wine OED/OED.EXE. It should start up OK, and then complain that there's no CDROM.

a Windows dialog box reading 'CD-ROM not found'

This is because it expects there to be a CDROM drive with the OED2.DAT file on it. We can set one up, though; we tell Wine to pretend that there's a CD drive connected, and what's on it. Run winecfg, and in the Drives tab, press Add… to add a new drive. I chose D: (which is a common Windows drive letter for a CD drive), and OK. Select your newly added D: drive and set the Path to be the folder where OED2.DAT is (which is wherever you unpacked the ISO file). Then say Show Advanced and change the drive Type to CD-ROM to tell Wine that you want this new drive to appear to be a CD. Say OK.

Now, when you wine OED/OED.EXE again, it should start up fine! Hooray, we're done! Except…

the OED Windows app, except that all the text is little squares rather than actual text, which looks like a font rendering error

…that's not good. The app runs, but it looks like it's having font issues. (In particular, you can select and copy the text, even though it looks like a bunch of little squares, and if you paste that text into somewhere else it's real text! So this is some sort of font display problem.)

Fortunately, the OED app does actually come with the fonts it needs. Unfortunately, it seems to unpack them to somewhere (C:\WINDOWS\SYSTEM)3 that Wine doesn't appear to actually look at. What we need to do is to install those font files so Linux knows about them. You could click them all to install them, but there's a quicker way; copy them, from where the installer puts them, into our own font folder.

To do this...

  • first make a new folder to put them in: mkdir ~/.local/share/fonts/oed.
  • Then find out where the installer put the font files, as a real path on our Linux filesystem: winepath -u "C:/WINDOWS/SYSTEM". Let's say that that ends up being /home/you/.wine/dosdevices/c:/windows/system
  • Copy the TTF files from that folder (remembering to change the first path to the one that winepath output just now): cp /home/you/.wine/dosdevices/c:/windows/system/*.TTF ~/.local/share/fonts/oed
  • And tell the font system that we've added a bunch of new fonts: fc-cache
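The same fix as one consolidated run (paths assumed from a default Wine prefix, as above):

mkdir -p ~/.local/share/fonts/oed
SYSDIR="$(winepath -u 'C:/WINDOWS/SYSTEM')"   # resolve the Wine folder to a Linux path
cp "$SYSDIR"/*.TTF ~/.local/share/fonts/oed
fc-cache                                      # refresh the font cache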

And now it all ought to work! Run wine OED/OED.EXE one last time…

the OED Windows app in all its glory

  1. and using 7zip is much easier than mounting the ISO file as a loopback thing
  2. There's a Microsoft Word macro that it offers to install; I didn't want that, and I have no idea whether it works
  3. which we can find out from OED/INSTALL.LOG
on May 08, 2024 10:18 PM

May 03, 2024

Many years ago (2012!) I was invited to be part of "The Pastry Box Project", which described itself thus:

Each year, The Pastry Box Project gathers 30 people who are each influential in their field and asks them to share thoughts regarding what they do. Those thoughts are then published every day throughout the year at a rate of one per day, starting January 1st and ending December 31st.

It was interesting. Sadly, it's dropped off the web (as has its curator, Alex Duloz, as far as I can tell), but thankfully the Wayback Machine comes to the rescue once again.1 I was quietly proud of some of the things I wrote there (and I was recently asked for a reference to a thing I said which the questioner couldn't find, which is what made me realise that the site's not around any more), so I thought I'd republish the stuff I wrote there, here, for ease of finding. This was all written in 2012, and the world has moved on in a few ways since then, a dozen years ago at time of writing, but... I think I'd still stand by most of this stuff. The posts are still at archive.org and you can get to and read other people's posts from there too, some of which are really good and worth your time. But here are mine, so I don't lose them again.

Tuesday, 18 December 2012

My daughter’s got a smartphone, because, well, everyone has. It has GPS on it, because, well, every one does. What this means is that she will never understand the concept of being lost.

Think about that for a second. She won’t ever even know what it means to be lost.

Every argument I have in the pub now goes for about ten minutes before someone says, right, we’ve spent long enough arguing now, someone look up the correct answer on Wikipedia. My daughter won’t ever understand the concept of not having a bit of information available, of being confused about a matter of fact.

A while back, it was decreed that telephone directories are not subject to copyright, that a list of phone numbers is “information alone without a minimum of original creativity” and therefore held no right of ownership.

What instant access to information has provided us is a world where all the simple matters of fact are now yours; free for the asking. Putting data on the internet is not a skill; it is drudgery, a mechanical task for robots. Ask yourself: why do you buy technical books? It’s not for the information inside: there is no tech book anywhere which actually reveals something which isn’t on the web already. It’s about the voice; about the way it’s written; about how interesting it is. And that is a skill. Matters of fact are not interesting — they’re useful, right enough, but not interesting. Making those facts available to everyone frees up authors, creators, makers to do authorial creative things. You don’t have to spend all your time collating stuff any more: now you can be Leonardo da Vinci all the time. Be beautiful. Appreciate the people who do things well, rather than just those who manage to do things at all. Prefer those people who make you laugh, or make you think, or make you throw your laptop out of a window with annoyance: who give you a strong reaction to their writing, or their speaking, or their work. Because information wanting to be free is what creates a world of creators. Next time someone wants to build a wall around their little garden, ask yourself: is what you’re paying for, with your time or your money or your personal information, something creative and wonderful? Or are they just mechanically collating information? I hope to spend 2013 enjoying the work of people who do something more than that.

Wednesday, 31 October 2012

Not everyone who works with technology loves technology. No, really, it’s true! Most of the people out there building stuff with web tech don’t attend conferences, don’t talk about WebGL in the pub, don’t write a blog with CSS3 “experiments” in it, don’t like what they do. It’s a job: come in at 9, go home at 5, don’t think about HTML outside those hours. Apparently 90% of the stuff in the universe is “dark matter”: undetectable, doesn’t interact with other matter, can’t be seen even with a really big telescope. Our “dark matter developers”, who aren’t part of the community, who barely even know that the community exists… how are we to help them? You can write all the A List Apart articles you like but dark matter developers don’t read it. And so everyone’s intranet is horrid and Internet-Explorer-specific and so the IE team have to maintain backwards compatibility with that and that hurts the web. What can we do to reach this huge group of people? Everyone’s written a book about web technologies, and books help, but books are dying. We want to get the word out about all the amazing things that are now possible to everyone: do we know how? Do we even have to care? The theory is that this stuff will “trickle down”, but that doesn’t work for economics: I’m not sure it works for @-moz-keyframes either.

Monday, 8 October 2012

The web moves really fast. How many times have you googled for a tutorial on or an example of something and found that the results, written six months or a year or two years ago, no longer work? The syntax has changed, or there’s a better way now, or it never worked right to begin with. You’ll hear people bemoaning this: trying to stop the web moving so quickly in order that knowledge about it doesn’t go out of date. But that ship’s sailed. This is the world we’ve built: it moves fast, and we have to just hat up and deal with it. So, how? How can we make sure that old and wrong advice doesn’t get found? It’s a difficult question, and I don’t think anyone’s seriously trying to answer it. We should try and think of a way.

Tuesday, 18 September 2012

Software isn’t always a solution to problems. If you’re a developer, everything generally looks like a nail: a nail which is solved by making a new bit of code. I’ve got half-finished mobile apps done for tracking my running with GPS, for telling me when to switch between running and walking, and… I’m still fat, because I’m writing software instead of going running. One of the big ideas behind computers was to automate repetitive and boring tasks, certainly, which means that it should work like this: identify a thing that needs doing, do it for a while, think “hm, a computer could do this more easily”, write a bit of software to do it. However, there’s too much premature optimisation going on, so it actually looks like this: identify a thing that needs doing, think “hm, I’m sure a computer would be able to do this more easily”, write a bit of software to do it. See the difference? If the software never gets finished, then in the first approach the thing still gets done. Don’t always reach for the keyboard: sometimes it’s better to reach for Post-It notes, or your running shoes.

Saturday, 18 August 2012

Changing the world is within your grasp.

This is not necessarily a good thing.

If you go around and talk to normal people, it becomes clear that, weirdly, they don’t ever imagine how to get ten million dollars. They don’t think about new ways to redesign a saucepan or the buttons in their car. They don’t contemplate why sending a parcel is slow and how it could be a slicker process. They don’t think about ways to change the world.

I find it hard to talk to someone who doesn’t think like that.

To an engineer, the world is a toy box full of sub-optimized and feature-poor toys, as Scott Adams once put it. To a designer, the world is full of bad design. And to both, it is not only possible but at a high level obvious how to (a) fix it (b) for everyone (c) and make a few million out of doing so.

At first, this seems a blessing: you can see how the world could be better! And make it happen!

Then it’s a curse. Those normal people I mentioned? Short of winning the lottery or Great Uncle Brewster dying, there’s no possibility of becoming a multi-millionaire, and so they’re not thinking about it. Doors that have a handle on them but say “Push” are not a source of distress. Wrong kerning in signs is not like sandpaper on their nerves.

The curse of being able to change the world is… the frustration that you have so far failed to do so.

Perhaps there is a Zen thing here. Some people have managed it. Maybe you have. So the world is better, and that’s a good thing all by itself, right?

Friday, 27 July 2012

The best systems are built by people who can accept that no-one will ever know how hard it was to do, and who therefore don’t seek validation by explaining to everyone how hard it was to do.

Tuesday, 12 June 2012

The most poisonous idea in the world is when you’re told that something which achieved success through lots of hard work actually got there just because it was excellent.

Friday, 18 May 2012

Ever notice how the things you slave over and work crushingly hard on get less attention, sometimes, than the amusing things you threw together in a couple of evenings?

I can't decide whether this is a good thing or not.

Thursday, 5 April 2012

It's OK to not want to build websites for everybody and every browser. Making something which is super-dynamic in Chrome 18 and also works excellently in w3m is jolly hard work, and a lot of the time you might well be justified in thinking it's not worth it. If your site stats, or your belief, or your prediction of the market's direction, or your favourite pundit tell you that the best use of your time is to only support browsers with querySelector, or only support browsers with JavaScript, or only support WebKit, or only support iOS Safari, then that's a reasonable decision to make; don't let anyone else tell you what your relationship with your users and customers and clients is, because you know better than them.

Just don't confuse what you're doing with supporting "the web". State your assumptions up front. Own your decisions, and be prepared to back them up, for your project. If you're building something which doesn't work in IE6, that requires JavaScript, that requires mobile WebKit, that requires Opera Mobile, then you are letting some people down. That's OK; you've decided to do that. But your view's no more valid than theirs, for a project you didn't build. Make your decisions, and state what the axioms you worked from were, and then everyone else can judge whether what you care about is what they care about. Just don't push your view as being what everyone else should do, and we'll all be fine.

Sunday, 18 March 2012

Publish and be damned, said the Duke of Wellington; these days, in between starting wars in France and being sick of everyone repeating the jokes about his name from Blackadder, he’d probably say that we should publish or be damned. If you’re anything like me, you’ve got folders full of little experiments that you never got around to finishing or that didn’t pan out. Put ’em up somewhere. These things are useful.

Twitter, autobiographies, collections of letters from authors, all these have shown us that the minutiae can be as fascinating as carefully curated and sieved and measured writings, and who knows what you’ll inspire the next person to do from the germ of one of your ideas?

Monday, 27 February 2012

There's a lot to think about when you're building something on the web. Is it accessible? How do I handle translations of the text? Is the design OK on a 320px-wide screen? On a 2320px-wide screen? Does it work in IE8? In Android 4.0? In Opera Mini? Have I minimized the number of HTTP requests my page requires? Is my JavaScript minified? Are my images responsive? Is Google Analytics hooked up properly? AdSense? Am I handling Unicode text properly? Avoiding CSRF? XSS? Have I encoded my videos correctly? Crushed my pngs? Made a print stylesheet?

We've come a long way since:

<HEADER>
<TITLE>The World Wide Web project</TITLE>
<NEXTID N="55">
</HEADER>
<BODY>
<H1>World Wide Web</H1>The WorldWideWeb (W3) is a wide-area<A
NAME=0 HREF="WhatIs.html">
hypermedia</A> information retrieval
initiative aiming to give universal
access to a large universe of documents.

Look at http://html5boilerplate.com/—a base level page which helps you to cover some (nowhere near all) of the above list of things to care about (and the rest of the things you need to care about too, which are the other 90% of the list). A year in development, 900 sets of changes and evolutions from the initial version, seven separate files. That's not over-engineering; that's what you need to know to build things these days.

The important point is: one of the skills in our game is knowing what you don't need to do right now but still leaving the door open for you to do it later. If you become the next Facebook then you will have to care about all these things; initially you may not. You don't have to build them all on day one: that is over-engineering. But you, designer, developer, translator, evangelist, web person, do have to understand what they all mean. And you do have to be able to layer them on later without having to tear everything up and start again. Feel guilty that you're not addressing all this stuff in the first release if necessary, but you should feel a lot guiltier if you didn't think of some of it.

Wednesday, 18 January 2012

Don't be creative. Be a creator. No one ever looks back and wishes that they'd given the world less stuff.

  1. Also, the writing is all archived at Github!
on May 03, 2024 06:08 PM
The “Secure Rollback Prevention” entry in the UEFI BIOS configuration

The bottom line is that there is a new configuration called “AMD Secure Processor Rollback protection” on recent AMD systems, in addition to “Secure Rollback Prevention” (BIOS rollback protection). If it’s enabled by the vendor, you cannot downgrade the UEFI BIOS once you have installed a revision with security vulnerability fixes.

https://fwupd.github.io/libfwupdplugin/hsi.html#org.fwupd.hsi.Amd.RollbackProtection

This feature prevents an attacker from loading an older firmware onto the part after a security vulnerability has been fixed.
[…]
End users are not able to directly modify rollback protection, this is controlled by the manufacturer.

Previously I had installed revision 1.49 (R23UJ73W), but it’s gone from Lenovo’s official page with the notice below. I’ve been annoyed by a symptom that is likely caused by the firmware, so I wanted to try multiple revisions for bisecting. I also thought I should downgrade to the latest remaining official revision, 1.40 (R23UJ70W), since the withdrawal clearly indicates that there is something wrong with 1.49.

This BIOS version R23UJ73W is reported Lenovo cloud not working issue, hence it has been withdrawn from support site.

First, I turned off Secure Rollback Prevention and tried downgrading with fwupdmgr, as follows. However, the update failed with “Secure Flash Authentication Failed” when the machine rebooted.

$ fwupdmgr downgrade
0.	Cancel
1.	b0fb0282929536060857f3bd5f80b319233340fd (Battery)
2.	6fd62cb954242863ea4a184c560eebd729c76101 (Embedded Controller)
3.	0d5d05911800242bb1f35287012cdcbd9b381148 (Prometheus)
4.	3743975ad7f64f8d6575a9ae49fb3a8856fe186f (SKHynix HFS256GDE9X081N)
5.	d77c38c163257a2c2b0c0b921b185f481d9c1e0c (System Firmware)
6.	6df01b2df47b1b08190f1acac54486deb0b4c645 (TPM)
7.	362301da643102b9f38477387e2193e57abaa590 (UEFI dbx)
Choose device [0-7]: 5
0.	Cancel
1.	0.1.46
2.	0.1.41
3.	0.1.38
4.	0.1.36
5.	0.1.23
Choose release [0-5]: 

Next, I tried their ISO image r23uj70wd.iso, but no luck with another error.

Error

The system program file is not correct for this system.

Windows also failed to apply it, so I became convinced it was impossible. I didn’t have a clear idea why at that point, though, until I bumped into a handy command in fwupdmgr.

$ fwupdmgr security
Host Security ID: HSI:1! (v1.9.16)

HSI-1
✔ BIOS firmware updates:         Enabled
✔ Fused platform:                Locked
✔ Supported CPU:                 Valid
✔ TPM empty PCRs:                Valid
✔ TPM v2.0:                      Found
✔ UEFI bootservice variables:    Locked
✔ UEFI platform key:             Valid
✔ UEFI secure boot:              Enabled

HSI-2
✔ SPI write protection:          Enabled
✔ IOMMU:                         Enabled
✔ Platform debugging:            Locked
✔ TPM PCR0 reconstruction:       Valid
✘ BIOS rollback protection:      Disabled

HSI-3
✔ SPI replay protection:         Enabled
✔ CET Platform:                  Supported
✔ Pre-boot DMA protection:       Enabled
✔ Suspend-to-idle:               Enabled
✔ Suspend-to-ram:                Disabled

HSI-4
✔ Processor rollback protection: Enabled
✔ Encrypted RAM:                 Encrypted
✔ SMAP:                          Enabled

Runtime Suffix -!
✔ fwupd plugins:                 Untainted
✔ Linux kernel lockdown:         Enabled
✔ Linux kernel:                  Untainted
✘ CET OS Support:                Not supported
✘ Linux swap:                    Unencrypted

This system has HSI runtime issues.
 » https://fwupd.github.io/hsi.html#hsi-runtime-suffix

Host Security Events
  2024-05-01 15:06:29:  ✘ BIOS rollback protection changed: Enabled → Disabled

As you can see, BIOS rollback protection in the HSI-2 section is “Disabled”, as intended, but Processor rollback protection in HSI-4 is “Enabled”. I found a commit suggesting that there were systems shipping with this setting disabled, and that it could be enabled by turning on OS Optimized Defaults.

https://github.com/fwupd/fwupd/commit/52d6c3cb78ab8ebfd432949995e5d4437569aaa6

Update documentation to indicate that loading “OS Optimized Defaults” may enable security processor rollback protection on Lenovo systems.

I hoped that Processor rollback protection might be disabled by turning off OS Optimized Defaults instead.

Tried OS Optimized Defaults turned off, but no luck
$ fwupdmgr security
Host Security ID: HSI:1! (v1.9.16)

...

✘ BIOS rollback protection:      Disabled

...

HSI-4
✔ Processor rollback protection: Enabled

...

Host Security Events
  2024-05-02 03:24:45:  ✘ Kernel lockdown disabled
  2024-05-02 03:24:45:  ✘ Secure Boot disabled
  2024-05-02 03:24:45:  ✘ Pre-boot DMA protection is disabled
  2024-05-02 03:24:45:  ✘ Encrypted RAM changed: Encrypted → Not supported

Some configurations were overridden, but Processor rollback protection stayed the same. This confirmed that it really is impossible to downgrade firmware once it contains the vulnerability fixes. I learned the hard way that there is a clear difference between “a vendor doesn’t support downgrading” and “it can’t be downgraded”, as spelled out in the release notes.

https://download.lenovo.com/pccbbs/mobiles/r23uj73wd.txt

CHANGES IN THIS RELEASE

Version 1.49 (UEFI BIOS) 1.32 (ECP)

[Important updates]

  • Notice that BIOS can’t be downgraded to older BIOS version after upgrade to r23uj73w(1.49).

[New functions or enhancements]

  • Enhancement to address security vulnerability, CVE-2023-5058,LEN-123535,LEN-128083,LEN-115697,LEN-123534,LEN-118373,LEN-119523,LEN-123536.
  • Change to permit fan rotation after fan error happen.

I’ll have to wait for new and better firmware.

on May 03, 2024 03:20 AM

May 01, 2024

Thanks to a colleague who introduced me to Nim during last week’s SUSE Labs conference, I became a man with a dream, and after fiddling with compiler flags and obviously not reading documentation, I finally made it.

This is something that shouldn’t exist; from the list of ideas that should never have happened.

But it does. It’s a Perl interpreter embedded in Rust. Get over it.

Once cloned, you can run the following commands to see it in action:

  • cargo run --verbose -- hello.pm showtime
  • cargo run --verbose -- hello.pm get_quick_headers

How it works

There is a lot of autogenerated code, mainly for two things:

  • bindings.rs and wrapper.h; I made a lot of assumptions, and perlxsi.c may or may not be necessary in the future (see main::xs_init_rust), depending on how bad or terrible my C knowledge is by the time you’re reading this.
  • The xs_init_rust function is the one that does the magic, as far as my understanding goes, by hooking up boot_DynaLoader to DynaLoader in Perl via FFI.

With those two bits in place, thanks to the magic of the bindgen crate, and after some initialization, I decided to use Perl_call_argv. Do note that the Perl_ prefix in this case comes from bindgen; I might change the convention later to ruperl or something, to avoid confusion with perl_parse or perl_alloc, which (if I understand correctly) are exposed directly by the FFI interface.

What I ended up doing (for now, or at least for this PoC) is passing the same list of arguments directly to Perl_call_argv, which will in turn take the third argument and pass it verbatim to call_argv:

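        // invoke the named Perl sub through the bindgen-generated binding,
        // passing our argument list straight through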
        Perl_call_argv(myperl, perl_sub, flags_ptr, perl_parse_args.as_mut_ptr());

Right now hello.pm defines two subroutines: one to open a file, write something, and print the time to stdout, and a second one that will query my blog and show the headers. This is only example code, but enough to demonstrate that the DynaLoader works, and that the embedding also works :)

It’s alive!

I got most of this working by following the perlembed guide.

Why?

Why not?

I want to see if I can also embed Python in the same binary, so I can call native Perl from native Python, and see how I can fiddle all that into os-autoinst.

Where to find the code?

On github: https://github.com/foursixnine/ruperl or under https://crates.io/crates/ruperl

on May 01, 2024 12:00 AM

April 30, 2024

Ubuntu 24.04 LTS OneDrive

Dougie Richardson

I’ve not had much time to play around with the latest release but this is cool – OneDrive Nautilus integration.

Go to Settings > Online Accounts > Microsoft 365, leave everything blank, and hit “Sign in…”. A web page opens to authenticate, and then you can mount OneDrive in Nautilus.

on April 30, 2024 08:22 PM

April 27, 2024

The Joy of Code

Alan Pope

A few weeks ago, in episode 25 of Linux Matters Podcast I brought up the subject of ‘Coding Joy’. This blog post is an expanded follow-up to that segment. Go and listen to that episode - or not - it’s all covered here.

The Joy of Linux Torture

Not a Developer

I’ve said this many times - I’ve never considered myself a ‘Developer’. It’s not so much imposter syndrome as plain facts. I didn’t attend university to study software engineering, and have never held a job with ‘Engineer’ or ‘Developer’ in the title.

(I do have Engineering Manager and Developer Advocate roles in my past, but in popey’s weird set of rules, those don’t count.)

I have written code over the years. Starting with BASIC on the Sinclair ZX81 and Sinclair Spectrum, I wrote stuff for fun and no financial gain. I also coded in Z80 & 6502 assembler, taught myself Pascal on my Epson 8086 PC in 1990, then QuickBasic and years later, BlitzBasic, Lua (via LÖVE) and more.

In the workplace, I wrote some alarmingly complex utilities in Windows batch scripts and later Bash shell scripts on Linux. In a past career, I would write ABAP in SAP - which turned into an internal product mildly amusingly called “Alan’s Tool”.

These were pretty much all coding for fun, though. Nobody specced up a project and assigned me as a developer on it. I just picked up the tools and started making something, whether that was a sprite routine in Z80 assembler, an educational CPU simulator in Pascal, or a spreadsheet uploader for SAP BiW.

In 2003, three years before Twitter launched in 2006, I made a service called ‘Clunky.net’. It was a bunch of PHP and Perl smashed together and published online with little regard for longevity or security. Users could sign up and send ’tweet’ style messages from their phone via SMS, which would be presented in a reverse-chronological timeline. It didn’t last, but I had fun making it while it did.

They were all fun side-quests.

None of this makes me a developer.

Volatile Memories

It’s rapidly approaching fifty years since I first wrote any code on my first computer. Back then, you’d typically write code and then either save it on tape (if you were patient) or disk (if you were loaded). Maybe you’d write it down - either before or after you typed it in - or perhaps you’d turn the computer off and lose it all.

When I studied for a BTEC National Diploma in Computer Studies at college, one of our classes was on the IBM PC with two floppy disc drives. The lecturer kept hold of all the floppies because we couldn’t be trusted not to lose, damage or forget them. Sometimes the lecturer was held up at the start of class, so we’d be sat twiddling our thumbs for a bit.

In those days, when you booted the PC with no floppy inserted, it would go directly into BASICA, like the 8-bit microcomputers before it. I would frequently start writing something, anything, to pass the time.

With no floppy disks on hand, the code - beautiful as it was - would be lost. The lecturer often reset the room when they entered, hitting a big red ‘Stop’ button, which instantly powered down all the computers, losing whatever ‘work’ you’d done.

I was probably a little irritated in the moment, just as I would be when the RAM pack wobbled on my ZX81, losing everything. You move on, though, and make something else, or get on with your college work, and soon forget about it.

Or you bitterly remember it and write a blog post four decades later. Each to their own.

Sharing is Caring

This part was the main focus of the conversation when we talked about this on the show.

In the modern age, over the last ten to fifteen years or so, I’ve not done so much of the kind of coding I wrote about above. I certainly have done some stuff for work, mostly around packaging other people’s software as snaps or writing noddy little shell scripts. But I lost a lot of the ‘joy’ of coding recently.

Why?

I think a big part is the expectation that I’d make the code available to others. The public scrutiny others give your code may have been a factor. The pressure I felt that I should put my code out and continue to maintain it rather than throw it over the wall wouldn’t have helped.

I think I was so obsessed with doing the ‘right’ thing that coding ‘correctly’ or following standards and making it all maintainable became a cognitive roadblock.

I would start writing something and then begin wondering, ‘How would someone package this up?’ and ‘Am I using modern coding standards, toolkits, and frameworks?’ This held me back from the joy of coding in the first place. I was obsessing too much over other people’s opinions of my code and whether someone else could build and run it.

I never used to care about this stuff for personal projects, and it was a lot more joyful an experience - for me.

I used to have an idea, pick up a text editor and start coding. I missed that.

Realisation

In January this year, Terence Eden wrote about his escapades making a FourSquare-like service using ActivityPub and OpenStreetMap. When he first mentioned this on Mastodon, I grabbed a copy of the code he shared and had a brief look at it.

The code was surprisingly simple, scrappy, kinda working, and written in PHP. I was immediately thrown back twenty years to my terrible ‘Clunky’ code and how much fun it was to throw together.

In February, I bumped into Terence at State of Open Con in London and took the opportunity to quiz him about his creation. We discussed his choice of technology (PHP), and the simple ’thrown together in a day’ nature of the project.

At that point, I had a bit of a light-bulb moment, realising that I could get back to joyful coding. I don’t have to share everything; not every project needs to be an Open-Source Opus.

I can open a text editor, type some code, and enjoy it, and that’s enough.

Joy Rediscovered

I had an idea for a web application and wanted to prototype something without too much technological research or overhead. So I created a folder on my home server, ran php -S 0.0.0.0:9000 in a terminal there, made a skeleton index.php and pointed a browser at the address. Boom! Application created!

I created some horribly insecure and probably unmaintainable PHP that will almost certainly never see the light of day.

I had fun doing it though. Which is really the whole point.

More side-quests, fewer grand plans.

on April 27, 2024 08:00 AM

April 26, 2024

Over coffee this morning, I stumbled upon simone, a fledgling Open-Source tool for repurposing YouTube videos as blog posts. The Python tool creates a text summary of the video and extracts some contextual frames to illustrate the text.

A neat idea! In my experience, software engineers are often tasked with making demonstration videos, but other engineers commonly prefer consuming the written word over watching a video. I took simone for a spin, to see how well it works. Scroll down and tell me what you think!

I was sat in front of my work laptop, which is a Mac, so roughly speaking, this is what I did:

  • Install host pre-requisites
$ brew install ffmpeg tesseract virtualenv
git clone https://github.com/rajtilakjee/simone
  • Get a free API key from OpenRouter
  • Put the API key in .env
GEMMA_API_KEY=sk-or-v1-0000000000000000000000000000000000000000000000000000000000000000
  • Install python requisites
$ cd simone
$ virtualenv .venv
$ source .venv/bin/activate
(.venv) $ pip install -r requirements.txt
  • Run it!
(.venv) $ python src/main.py
Enter YouTube URL: https://www.youtube.com/watch?v=VDIAHEoECfM
/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/whisper/transcribe.py:115: UserWarning: FP16 is not supported on CPU; using FP32 instead
 warnings.warn("FP16 is not supported on CPU; using FP32 instead")
Traceback (most recent call last):
 File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 255, in run_tesseract
 proc = subprocess.Popen(cmd_args, **subprocess_args())
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 File "/opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/subprocess.py", line 1026, in __init__
 self._execute_child(args, executable, preexec_fn, close_fds,
 File "/opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/subprocess.py", line 1955, in _execute_child
 raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'C:/Program Files/Tesseract-OCR/tesseract.exe'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
 File "/Users/alan/Work/rajtilakjee/simone/src/main.py", line 47, in <module>
 blogpost(url)
 File "/Users/alan/Work/rajtilakjee/simone/src/main.py", line 39, in blogpost
 score = scores.score_frames()
 ^^^^^^^^^^^^^^^^^^^^^
 File "/Users/alan/Work/rajtilakjee/simone/src/utils/scorer.py", line 20, in score_frames
 extracted_text = pytesseract.image_to_string(
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 423, in image_to_string
 return {
 ^
 File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 426, in <lambda>
 Output.STRING: lambda: run_and_get_output(*args),
 ^^^^^^^^^^^^^^^^^^^^^^^^^
 File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 288, in run_and_get_output
 run_tesseract(**kwargs)
 File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 260, in run_tesseract
 raise TesseractNotFoundError()
pytesseract.pytesseract.TesseractNotFoundError: C:/Program Files/Tesseract-OCR/tesseract.exe is not installed or it's not in your PATH. See README file for more information.
  • Oof!
  • File a bug (like a good Open Source citizen)
  • Locally patch the file and try again (a sketch of the patch follows this list)
(.venv) $ python src/main.py
Enter YouTube URL: https://www.youtube.com/watch?v=VDIAHEoECfM
/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/whisper/transcribe.py:115: UserWarning: FP16 is not supported on CPU; using FP32 instead
 warnings.warn("FP16 is not supported on CPU; using FP32 instead")
  • Look for results
(.venv) $ ls -l generated_blogpost.txt *.jpg
-rw-r--r-- 1 alan staff 2163 26 Apr 09:26 generated_blogpost.txt
-rw-r--r--@ 1 alan staff 132984 26 Apr 09:27 top_frame_4_score_106.jpg
-rw-r--r-- 1 alan staff 184705 26 Apr 09:27 top_frame_5_score_105.jpg
-rw-r--r-- 1 alan staff 126148 26 Apr 09:27 top_frame_9_score_101.jpg
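
The local patch itself is tiny. pytesseract lets you override the binary path it uses via its tesseract_cmd attribute, so a workaround along these lines gets things going on macOS (my hypothetical reconstruction, not necessarily the fix the project adopted):

# Workaround sketch: point pytesseract at the tesseract binary found on
# PATH (installed via Homebrew above) instead of the hard-coded Windows
# path from the traceback. The eventual upstream fix may look different.
import shutil
import pytesseract

tesseract_path = shutil.which("tesseract")  # e.g. /opt/homebrew/bin/tesseract
if tesseract_path is None:
    raise SystemExit("tesseract not found; try: brew install tesseract")
pytesseract.pytesseract.tesseract_cmd = tesseract_path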

In my test I pointed simone at a short demo video from the YouTube channel of my employer, Anchore. The results are below, with no editing; I even included the typos. The images at the bottom of this post are frames from the video that simone selected.


Ancors Static Stick Checker Tool Demo: Evaluating and Resolving Security Findings

Introduction

Static stick checker tool helps developers identify security vulnerabilities in Docker images by running open-source security checks and generating remediation recommendations. This blog post summarizes a live demo of the tool’s capabilities.

How it works

The tool works by:

  • Downloading and analyzing the Docker image.
  • Detecting the base operating system distribution and selecting the appropriate stick profile.
  • Running open-source security checks on the image.
  • Generating a report of identified vulnerabilities and remediation actions.

Demo Walkthrough

The demo showcases the following steps:

  • Image preparation: Uploading a Docker image to a registry.
  • Tool execution: Running the static stick checker tool against the image.
  • Results viewing: Analyzing the generated stick results and identifying vulnerabilities.
  • Remediation: Implementing suggested remediation actions by modifying the Dockerfile.
  • Re-checking: Running the tool again to verify that the fixes have been effective.

Key findings

  • The static stick checker tool identified vulnerabilities in the Docker image in areas such as:
    • Verifying file hash integrity.
    • Configuring cryptography policy.
    • Verifying file permissions.
  • Remediation scripts were provided to address each vulnerability.
  • By implementing the recommended changes, the security posture of the Docker image was improved.

Benefits of using the static stick checker tool

  • Identify security vulnerabilities early in the development process.
  • Automate the remediation process.
  • Shift security checks leftward in the development pipeline.
  • Reduce the burden on security teams by addressing vulnerabilities before deployment.

Conclusion

The Ancors static stick checker tool provides a valuable tool for developers to improve the security of their Docker images. By proactively addressing vulnerabilities during the development process, organizations can ensure their applications are secure and reduce the risk of security incidents


Here’s the images it pulled out:

First image taken from the video

Second image taken from the video

Third image taken from the video

Not bad! It could be better - getting the company name wrong, for one!

I can imagine using this to create a YouTube description, or as a skeleton from which a blog post could be created. I certainly wouldn’t just pipe the output of this straight into blog posts! But so many videos need better descriptions, and this could help!

on April 26, 2024 09:00 AM

April 25, 2024

The Kubuntu Team is happy to announce that Kubuntu 24.04 has been released, featuring the beautiful KDE Plasma 5.27: simple by default, powerful when needed.

Codenamed “Noble Numbat”, Kubuntu 24.04 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

Under the hood, there have been updates to many core packages, including a new 6.8-based kernel, KDE Frameworks 5.115, KDE Plasma 5.27 and KDE Gear 23.08.

Kubuntu 24.04 with Plasma 5.27.11

Kubuntu has seen many updates for other applications, both in our default install, and installable from the Ubuntu archive.

Haruna, Krita, KDevelop, Yakuake, and many, many more applications are updated.

Applications for core day-to-day usage are included and updated, such as Firefox, and LibreOffice.

For a list of other application updates and known bugs, be sure to read our release notes.

Download Kubuntu 24.04, or learn how to upgrade from 23.10 or 22.04 LTS.

Note: For upgrades from 23.10, there may be a delay of a few hours to days between the official release announcement and the Ubuntu Release Team enabling upgrades.

on April 25, 2024 04:16 PM

The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 24.04 LTS, code-named “Noble Numbat”. This marks Ubuntu Studio’s 34th release. This release is a Long-Term Support release and as such, it is supported for 3 years (36 months, until April 2027).

Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a more complete list of changes and known issues. Listed here are some of the major highlights.

You can download Ubuntu Studio 24.04 LTS from our download page.

Special Notes

The Ubuntu Studio 24.04 LTS disk image (ISO) exceeds 4 GB, so it cannot be stored on some file systems, such as FAT32, and may not be readable when burned to a standard DVD. For this reason, we recommend downloading it to a compatible file system. When creating boot media, we recommend writing the ISO image to a bootable USB stick or burning it to a Dual-Layer DVD.

Minimum installation media requirements: Dual-Layer DVD or 8GB USB drive.

Images can be obtained from this link: https://cdimage.ubuntu.com/ubuntustudio/releases/24.04/beta/

Full updated information, including upgrade instructions, is available in the Release Notes.

Please note that upgrading from 22.04 before the release of 24.04.1, due August 2024, is unsupported.

Upgrades from 23.10 should be enabled within a month after release, so we appreciate your patience.

New This Release

All-New System Installer

In cooperation with the Ubuntu Desktop Team, we have an all-new desktop installer. This installer uses the underlying code of the Ubuntu Server installer (“Subiquity”), which has been in use for years, with a frontend coded in Flutter. This took a large amount of work this cycle, and we were able to help a lot of other official Ubuntu flavors transition to this new installer.

Be on the lookout for a special easter egg when the graphical environment for the installer first starts. For those of you who have been long-time users of Ubuntu Studio since our early days (even before Xfce!), you will notice exactly what it is.

PipeWire 1.0.4

Now for the big one: PipeWire is now mature, and this release contains PipeWire 1.0. With PipeWire 1.0 comes the stability and compatibility you would expect from multimedia audio. In fact, at this point, we recommend PipeWire for Professional, Prosumer, and Everyday audio needs alike. At Ubuntu Summit 2023 in Riga, Latvia, our project leader Erich Eickmeyer used PipeWire to demonstrate live audio mixing with much success, and has since done some audio mastering work using it. JACK developers even consider it to be “JACK 3”.

PipeWire’s JACK compatibility is configured for use out-of-the-box and is zero-latency internally. System latency is configurable via Ubuntu Studio Audio Configuration.

However, if you would rather use straight JACK 2 instead, that’s also possible. Ubuntu Studio Audio Configuration can disable and enable PipeWire’s JACK compatibility on-the-fly. From there, you can simply use JACK via QJackCtl.

With this, we consider audio production with Ubuntu Studio so mature that it can now rival operating systems such as macOS and Windows in ease of use, since it’s ready to go out-of-the-box.

Deprecation of PulseAudio/JACK setup/Studio Controls

Due to the maturity of PipeWire, we now consider the traditional PulseAudio/JACK setup, where JACK would be started/stopped by Studio Controls and bridged to PulseAudio, deprecated. This configuration is still installable via Ubuntu Studio Audio Configuration, but we do not recommend it. Studio Controls may return someday as a PipeWire fine-tuning solution, but for now it is unsupported by the developer. For that reason, we recommend users not use this configuration. If you do, it is at your own risk and no support will be given. In fact, it’s likely to be dropped for 24.10.

Ardour 8.4

While this does not represent the latest release of Ardour, Ardour 8.4 is a great release. If you would like the latest release, we highly recommend making a one-time purchase or subscribing to Ardour directly from the developers to help support this wonderful application. For that same reason, this is an application we will not directly backport. More on that later.

Ubuntu Studio Audio Configuration

Ubuntu Studio Audio Configuration has undergone a UI overhaul and gained the ability to start and stop a Dummy Audio Device, which can also be configured to start or stop upon login. When the dummy device is assigned as the default, channels that would normally be taken up by your system audio are freed, with system audio routed to a null device.

Meta Package for Music Education

In cooperation with Edubuntu, we have created a metapackage for music education. This package is installable from Ubuntu Studio Installer and includes the following packages:

  • FMIT: Free Musical Instrument Tuner, a tool for tuning musical Instruments (also included by default)
  • GNOME Metronome: Exactly what it sounds like (pun unintended): a metronome.
  • Minuet: Ear training for intervals, chords, scales, and more.
  • MuseScore: Create, playback, and print sheet music for free (this one is no stranger to the Ubuntu Studio community)
  • Piano Booster: MIDI player/game that displays musical notes and teaches you how to play piano, optionally using a MIDI keyboard.
  • Solfege: Ear training program for harmonic and melodic intervals, chords, scales, and rhythms.

New Artwork

Thanks to the work of Eylul and the submissions to the Ubuntu Studio Noble Numbat Wallpaper Contest, we have a number of wallpapers to choose from and a new default wallpaper.

Deprecation of Ubuntu Studio Backports Is In Effect

As stated in the Ubuntu 23.10 Release Announcement, the Ubuntu Studio Backports PPA is now deprecated in favor of the official Ubuntu Backports repository. However, the Backports repository only works for LTS releases and for good reason. There are a few requirements for backporting:

  • It must be an application which already exists in the Ubuntu repositories
  • It must be an application which would not otherwise qualify for a simple bugfix, which would then qualify it to be a Stable Release Update. This means it must have new features.
  • It must not rely on new libraries or new versions of libraries.
  • It must exist within a later supported release or the development release of Ubuntu.

If you have a suggestion for an application to backport that meets those requirements, feel free to join and email the Ubuntu Studio Users mailing list with your suggestion, with the tag “[BPO]” at the beginning of the subject line. Backports to 22.04 LTS are now closed, and backports to 24.04 LTS are now open. Additionally, suggestions must pertain to Ubuntu Studio, and should preferably be applications included with Ubuntu Studio. Suggestions can be rejected at the Project Leader’s discretion.

One package that is exempt from backporting is Ardour. To help support Ardour’s funding, you may obtain later versions directly from them. To do so, please make a one-time purchase or subscribe to Ardour via their website. If you wish to get later versions of Ardour from us, you will have to wait until the next regular release of Ubuntu Studio, due in October 2024.

We’re back on Matrix

You’ll notice that the menu links to our support chat and on our website will now take you to a Matrix chat. This is due to the Ubuntu community carving its own space within the Matrix federation.

However, this is not only a support chat. It is also a creativity discussion chat: you can bounce ideas around with each other, and you’re welcome to do so as long as the discussion stays within those confines. If a moderator or admin warns you that you’re getting off-topic (outside the intention of the chat room), please heed the warning.

This is a persistent connection, meaning that if you close the window (or chat) you won’t lose your place; you may only need to sign back in to resume the chat.

Frequently Asked Questions

Q: Does Ubuntu Studio contain snaps?
A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.

Thunderbird also became a snap during this cycle for the maintainers to get security patches delivered faster.

Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be packaged as a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source-package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.

Q: Will you make an ISO with {my favorite desktop environment}?
A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.

Q: What if I don’t want all these packages installed on my machine?
A: Simply use the Ubuntu Studio Installer to remove the features of Ubuntu Studio you don’t want or need!

Looking Toward the Future

Plasma 6

Ubuntu Studio, in cooperation with Kubuntu, will be switching to Plasma 6 during the 24.10 development cycle. Likewise, Lubuntu will be switching to LXQt 2.0 and Qt 6, so the three flavors will be cooperating to do the move.

New Look

Ubuntu Studio has been using the same theming, “Materia” (except for the 22.04 LTS release, which was a re-colored Breeze theme), since 19.04. However, Materia has gone dead upstream. To stay consistent, we found a fork called “Orchis” that seems to match closely, and we will be switching to that. More on that soon.

Minimal Installation

The new system installer has the capability to do minimal installations. This was something we did not have time to implement this cycle but intend to do for 24.10. This will let users install a minimal desktop to get going and then install what they need via Ubuntu Studio Installer. This will make a faster installation process but will not make the installation .iso image smaller. However, we have an idea for that as well.

Minimal Installation .iso Image

We are going to research what it will take to create a minimal installer .iso image that functions much like the regular .iso image, minus the ability to install everything, letting the user customize the installation via Ubuntu Studio Installer instead. This should lead to a much smaller initial download. Unlike creating a version with a different desktop environment, the Ubuntu Technical Board is on record as saying this would not require going through the new-flavor creation process. Our friends at Xubuntu recently did something similar.

Get Involved!

A wonderful way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!

Our project leader, Erich Eickmeyer, is now working on Ubuntu Studio at least part-time, and is hoping that the users of Ubuntu Studio can give enough to generate a monthly part-time income. Your donations are appreciated! If other distributions can do it, surely we can! See the sidebar for ways to give!

Special Thanks

Huge special thanks for this release go to:

  • Eylul Dogruel: Artwork, Graphics Design
  • Ross Gammon: Upstream Debian Developer, Testing, Email Support
  • Sebastien Ramacher: Upstream Debian Developer
  • Dennis Braun: Upstream Debian Developer
  • Rik Mills: Kubuntu Council Member, help with Plasma desktop
  • Scarlett Moore: Kubuntu Project Lead, help with Plasma desktop
  • Zixing Liu: Simplified Chinese translations in the installer
  • Simon Quigley: Lubuntu Release Manager, help with Qt items, Core Developer stuff, keeping Erich sane and focused
  • Steve Langasek: Help with livecd-rootfs changes to make the new installer work properly.
  • Dan Bungert: Subiquity, seed fixes
  • Dennis Loose: Ubuntu Desktop Provision (installer)
  • Lukas Klingsbo: Ubuntu Desktop Provision (installer)
  • Len Ovens: Testing, insight
  • Wim Taymans: Creator of PipeWire
  • Mauro Gaspari: Tutorials, Promotion, and Documentation, Testing, keeping Erich sane
  • Krytarik Raido: IRC Moderator, Mailing List Moderator
  • Erich Eickmeyer: Project Leader, Packaging, Development, Direction, Treasurer

A Note from the Project Leader

When I started out working on Ubuntu Studio six years ago, I had a vision of making it not only the easiest Linux-based operating system for content creation, but the easiest content creation operating system… full-stop.

With the release of Ubuntu Studio 24.04 LTS, I believe we have achieved that goal. No longer do we have to worry about whether an application is JACK or PulseAudio or… whatever. It all just works! Audio applications can be patched to each other!

If an audio device doesn’t depend on complex drivers (i.e. if the device is class-compliant), it will just work. If a user wishes to lower the latency or change the sample rate, we have a utility that does that (Ubuntu Studio Audio Configuration). If a user wants finer control and wishes to use pure JACK via QJackCtl, they can do that too!

I honestly don’t know how I would replicate this on Windows, and replicating on macOS would be much harder without downloading all sorts of applications. With Ubuntu Studio 24.04 LTS, it’s ready to go and you don’t have to worry about it.

Where we are now is a dream come true for me, and something I’ve been hoping to see Ubuntu Studio become. And now, we’re finally here, and I feel like it can only get better.

-Erich Eickmeyer

on April 25, 2024 03:16 PM

Ubuntu MATE 24.04 is more of what you like, stable MATE Desktop on top of current Ubuntu. This release rolls up some fixes and more closely aligns with Ubuntu. Read on to learn more 👓️

Ubuntu MATE 24.04 LTS Ubuntu MATE 24.04 LTS

Thank you! 🙇

I’d like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏 I’d like to acknowledge the close collaboration with all the Ubuntu flavour teams and the Ubuntu Foundations and Desktop Teams. The assistance and support provided by Erich Eickmeyer (Ubuntu Studio), Simon Quigley (Lubuntu) and David Muhammed (Ubuntu Budgie) have been invaluable. Thank you! 💚

What changed since Ubuntu MATE 23.10?

Here are the highlights of what’s changed since the release of Ubuntu MATE 23.10:

  • Ships stable MATE Desktop 1.26.2 with a selection of bug fixes 🐛 and minor improvements 🩹 to associated components.
  • Integrated the new ✨ Ubuntu Desktop Bootstrap installer 📀
  • Added GNOME Firmware, which replaces Firmware Updater.
  • Added App Center, which replaces Software Boutique.
  • Retired Ubuntu MATE Welcome; although it is still available for Ubuntu MATE 23.10 and earlier.

Major Applications

Accompanying MATE Desktop 1.26.2 🧉 and Linux 6.8 🐧 are Firefox 125 🔥🦊, Celluloid 0.26 🎥, Evolution 3.52 📧, LibreOffice 24.2.2 📚

See the Ubuntu 24.04 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 24.04

This new release will be first available for PC/Mac users.

Download

Upgrading to Ubuntu MATE 24.04

The upgrade process to Ubuntu MATE 24.04 LTS, from either Ubuntu MATE 22.04 LTS or 23.10, is the same as for Ubuntu.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.

on April 25, 2024 02:57 PM

We are pleased to announce the release of the next version of our distro, 24.04 Long Term Support. The LTS version is supported for 3 years, while the regular releases are supported for 9 months. The new release rolls up various fixes and optimizations that the Ubuntu Budgie team has released since the 22.04 release in April 2022. We also inherit hundreds of stability…

Source

on April 25, 2024 01:37 PM
Thanks to the hard work from our contributors, Lubuntu 24.04 LTS has been released. With the codename Noble Numbat, Lubuntu 24.04 is the 26th release of Lubuntu, and the 12th release of Lubuntu with LXQt as the default desktop environment. Download and Support Lifespan: With Lubuntu 24.04 being a long-term support release, it will follow […]
on April 25, 2024 01:31 PM

The Xubuntu team is happy to announce the immediate release of Xubuntu 24.04.

Xubuntu 24.04, codenamed Noble Numbat, is a long-term support (LTS) release and will be supported for 3 years, until 2027.

Xubuntu 24.04, featuring the latest updates from Xfce 4.18 and GNOME 46.

Xubuntu 24.04 features the latest updates from Xfce 4.18, GNOME 46, and MATE 1.26. For new users and those coming from Xubuntu 22.04, you’ll appreciate the performance, stability, and improved hardware support found in Xubuntu 24.04. Xfce 4.18 is stable, fast, and full of user-friendly features. Enjoy frictionless bluetooth headphone connections and out-of-the-box touchpad support. Updates to our icon theme and wallpapers make Xubuntu feel fresh and stylish.

The final release images for Xubuntu Desktop and Xubuntu Minimal are available as torrents and direct downloads from xubuntu.org/download/.

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

We’d like to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues

Highlights

  • Xfce 4.18 is included and well-polished since its initial release in December 2022
  • Xubuntu Minimal is included as an officially supported subproject
  • GNOME Software has been replaced by Snap Store and GDebi
  • Snap Desktop Integration is now included for improved snap package support
  • Firmware Updater has been added to support firmware updates from the Linux Vendor Firmware Service (LVFS)
  • Thunderbird is now distributed as a Snap package
  • Ubiquity has been replaced by the Flutter-based Ubuntu Installer to provide fast and user-friendly installation
  • Pipewire (and wireplumber) are now included in Xubuntu
  • Improved hardware support for bluetooth headphones and touchpads
  • Color emoji is now included and supported in Firefox, Thunderbird, and newer Gtk-based apps
  • Significantly improved screensaver integration and stability

Known Issues

  • The shutdown prompt may not be displayed at the end of the installation. Instead you might just see a Xubuntu logo, a black screen with an underscore in the upper left hand corner, or just a black screen. Press Enter and the system will reboot into the installed environment. (LP: #1944519)
  • Xorg crashes and the user is logged out after logging in or switching users on some virtual machines, including GNOME Boxes. (LP: #1861609)
  • You may experience choppy audio or poor system performance while playing audio, but only in some virtual machines (observed in VMware and VirtualBox)
  • OEM installation options are not currently supported or available, but will be included for Xubuntu 24.04.1

For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

The main Ubuntu Release Notes cover many of the other packages we carry and more generic issues.

Support

For support with the release, navigate to Help & Support for a complete list of methods to get help.

on April 25, 2024 12:00 PM

With the work that has been done in the debian-installer/netcfg merge proposal !9, it is possible to install a standard Debian system, using the normal Debian-Installer (d-i) mini.iso images, that will come pre-installed with Netplan and with all network configuration structured in /etc/netplan/.

In this write-up, I’d like to run you through a list of commands for experiencing the Netplan-enabled installation process first-hand. For now, we’ll be using a custom ISO image while waiting for the above-mentioned merge proposal to land. Furthermore, as the Debian archive is going through major transitions, builds of the “unstable” branch of d-i don’t currently work. So I implemented a small backport, producing updated netcfg and netcfg-static packages for Bookworm, which can be used as localudebs/ during the d-i build.

Let’s start with preparing a working directory and installing the software dependencies for our virtualized Debian system:

$ mkdir d-i_bookworm && cd d-i_bookworm
$ apt install ovmf qemu-utils qemu-system-x86

Now let’s download the custom mini.iso, linux kernel image and initrd.gz containing the Netplan enablement changes, as mentioned above.

$ wget https://people.ubuntu.com/~slyon/d-i/bookworm/mini.iso
$ wget https://people.ubuntu.com/~slyon/d-i/bookworm/linux
$ wget https://people.ubuntu.com/~slyon/d-i/bookworm/initrd.gz

Next, we’ll prepare a VM by copying the EFI firmware files, preparing a persistent EFIVARS file (to boot from FS0:\EFI\debian\grubx64.efi), and creating a virtual disk for our machine:

$ cp /usr/share/OVMF/OVMF_CODE_4M.fd .
$ cp /usr/share/OVMF/OVMF_VARS_4M.fd .
$ qemu-img create -f qcow2 ./data.qcow2 5G

Finally, let’s launch the installer using a custom preseed.cfg file, that will automatically install Netplan for us in the target system. A minimal preseed file could look like this:

# Install minimal Netplan generator binary
d-i preseed/late_command string in-target apt-get -y install netplan-generator

For this demo, we’re installing the full netplan.io package (incl. the Python CLI), as the netplan-generator package was not yet split out as an independent binary in the Bookworm cycle. You can choose from a set of different preseed-file variants to test the different configurations.
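
For example, the variant used in this demo, which pulls in the full package including the CLI, might boil down to a late_command like this (a hypothetical reconstruction of the hosted variant files, shown only for illustration):

# Install the full Netplan package, including the Python CLI
d-i preseed/late_command string in-target apt-get -y install netplan.io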

We’re using the custom linux kernel and initrd.gz here to be able to pass the preseed URL as a parameter to the kernel’s cmdline directly. Launching this VM should bring up the normal debian-installer in its netboot/gtk form:

$ export U=https://people.ubuntu.com/~slyon/d-i/bookworm/netplan-preseed+networkd.cfg
$ qemu-system-x86_64 \
	-M q35 -enable-kvm -cpu host -smp 4 -m 2G \
	-drive if=pflash,format=raw,unit=0,file=OVMF_CODE_4M.fd,readonly=on \
	-drive if=pflash,format=raw,unit=1,file=OVMF_VARS_4M.fd,readonly=off \
	-device qemu-xhci -device usb-kbd -device usb-mouse \
	-vga none -device virtio-gpu-pci \
	-net nic,model=virtio -net user \
	-kernel ./linux -initrd ./initrd.gz -append "url=$U" \
	-hda ./data.qcow2 -cdrom ./mini.iso;

Now you can click through the normal Debian-Installer process, using mostly default settings. Optionally, you could play around with the networking settings, to see how those get translated to /etc/netplan/ in the target system.

After you have confirmed your partitioning changes, the base system gets installed. I suggest not selecting any additional components, like desktop environments, to speed up the process.

During the final step of the installation (finish-install.d/55netcfg-copy-config) d-i will detect that Netplan was installed in the target system (due to the preseed file provided) and opt to write its network configuration to /etc/netplan/ instead of /etc/network/interfaces or /etc/NetworkManager/system-connections/.

Done! After the installation finished, you can reboot into your virgin Debian Bookworm system.

To do that, quit the current Qemu process by pressing Ctrl+C, and make sure to copy over the EFIVARS.fd file that was written by GRUB during the installation, so Qemu can find the new system. Then reboot into the new system, not using the mini.iso image any more:

$ cp ./OVMF_VARS_4M.fd ./EFIVARS.fd
$ qemu-system-x86_64 \
        -M q35 -enable-kvm -cpu host -smp 4 -m 2G \
        -drive if=pflash,format=raw,unit=0,file=OVMF_CODE_4M.fd,readonly=on \
        -drive if=pflash,format=raw,unit=1,file=EFIVARS.fd,readonly=off \
        -device qemu-xhci -device usb-kbd -device usb-mouse \
        -vga none -device virtio-gpu-pci \
        -net nic,model=virtio -net user \
        -drive file=./data.qcow2,if=none,format=qcow2,id=disk0 \
        -device virtio-blk-pci,drive=disk0,bootindex=1 \
        -serial mon:stdio

Finally, you can play around with your Netplan-enabled Debian system! As you will find, /etc/network/interfaces exists but is empty; it could still be used (optionally/additionally). Netplan was configured in /etc/netplan/ according to the settings given during the d-i installation process.

In our case, we also installed the Netplan CLI, so we can play around with some of its features, like netplan status.
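
For a machine installed with DHCP networking, the configuration that d-i writes might look something like the sketch below (the file name and interface name are assumptions for illustration; yours will depend on the choices made during installation):

# /etc/netplan/01-netcfg.yaml (hypothetical example)
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s2:
      dhcp4: true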

Thank you for following along the Netplan enabled Debian installation process and happy hacking! If you want to learn more, join the discussion at Salsa:installer-team/netcfg and find us at GitHub:netplan.

on April 25, 2024 10:19 AM

April 24, 2024

Ubuntu MATE 23.10 is more of what you like, stable MATE Desktop on top of current Ubuntu. This release rolls up a number of bug fixes and updates that continue to build on recent releases, where the focus has been on improving stability 🪨

Ubuntu MATE 23.10 Ubuntu MATE 23.10

Thank you! 🙇

I’d like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏 From reporting bugs, submitting translations, providing patches, contributing to our crowd-funding, developing new features, creating artwork, offering community support, actively testing and providing QA feedback to writing documentation or creating this fabulous website. Thank you! 💚

What changed since the Ubuntu MATE 23.04?

Here are the highlights of what’s changed since the release of Ubuntu MATE 23.04

MATE Desktop

MATE Desktop has been updated to 1.26.2 with a selection of bug fixes 🐛 and minor improvements 🩹 to associated components.

  • caja-rename 23.10.1-1 has been ported from Python to C.
  • libmatemixer 1.26.0-2+deb12u1 resolves heap corruption and application crashes when removing USB audio devices.
  • mate-desktop 1.26.2-1 improves portals support.
  • mate-notification-daemon 1.26.1-1 fixes several memory leaks.
  • mate-system-monitor 1.26.0-5 now picks up libexec files from /usr/libexec
  • mate-session-manager 1.26.1-2 sets LIBEXECDIR to /usr/libexec/ for correct interaction with mate-system-monitor ☝️
  • mate-user-guide 1.26.2-1 is a new upstream release.
  • mate-utils 1.26.1-1 fixes several memory leaks.

Yet more AI Generated wallpaper

My friend Simon Butcher 🇬🇧 is Head of Research Platforms at Queen Mary University of London, managing the Apocrita HPC cluster service. Once again, Simon has created a stunning AI-generated 🤖🧠 wallpaper for Ubuntu MATE using bleeding-edge diffusion models 🖌 The sample below is 1920x1080, but the version included in Ubuntu MATE 23.10 is 3840x2160.

Here’s what Simon has to say about the process of creating this new wallpaper for Mantic Minotaur:

Since Minotaurs are imaginary creatures, interpretations tend to vary widely. I wanted to produce an image of a powerful creature in a graphic novel style, although not gruesome like many depictions. The latest open source Stable Diffusion XL base model was trained at a higher resolution and the difference in quality has been noticeable, particularly at better overall consistency and detail, while reducing anatomical irregularities in images. The image was produced locally using Linux and an NVIDIA A100 80GB GPU, starting from an initial text prompt and refined using img2img, inpainting and upscaling features.

Major Applications

Accompanying MATE Desktop 1.26.2 🧉 and Linux 6.5 🐧 are Firefox 118 🔥🦊, Celluloid 0.25 🎥, Evolution 3.50 📧, LibreOffice 7.6.1 📚

See the Ubuntu 23.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 23.10

This new release will be first available for PC/Mac users.

Download

Upgrading from Ubuntu MATE 23.04

You can upgrade to Ubuntu MATE 23.10 from Ubuntu MATE 23.04. Ensure that you have all updates installed for your current version of Ubuntu MATE before you upgrade.

  • Open the “Software & Updates” from the Control Center.
  • Select the 3rd Tab called “Updates”.
  • Set the “Notify me of a new Ubuntu version” drop down menu to “For any new version”.
  • Press Alt+F2 and type in update-manager -c -d into the command box.
  • Update Manager should open up and tell you: New distribution release ‘23.10’ is available.
    • If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click “Upgrade” and follow the on-screen instructions.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on April 24, 2024 09:55 PM

April 15, 2024

Ubuntu Budgie 24.04 LTS (Noble Numbat) is a Long Term Support release with 3 years of support by your distro maintainers, from April 2024 to May 2027. These release notes showcase the key takeaways for 22.04 upgraders to 24.04. In these release notes the areas covered are: Quarter & half tiling is pretty much self-explanatory. Dragging a window to the…

Source

on April 15, 2024 09:02 PM