September 10, 2024

Canonical is thrilled to be joining forces with Dell Technologies at the upcoming Dell Technologies Forum – Bangalore, taking place on 12 September. This premier event brings together industry leaders and technology enthusiasts to explore the latest advancements and solutions shaping the digital landscape.

Register for the Dell Technologies Forum – Bangalore

A spotlight on powerful partnerships

At the forum, we’ll be hosting a comprehensive session showcasing the combined power of Canonical and Dell Technologies. This session will delve into a range of solutions that empower organizations to unlock innovation and achieve their business goals.

Where: Poinsettia room
When: 11:45 AM

What you can expect from Canonical

  • Empowering Freedom with Open Source Alternatives: We’ll explore a compelling array of open-source solutions that provide powerful alternatives to proprietary software, offering flexibility, security, and cost-effectiveness for businesses.
  • Unlocking Scalability with PowerFlex + MicroCloud: Discover how the dynamic duo of Dell PowerFlex storage and Canonical’s MicroCloud platform empowers you to build and manage containerized applications with incredible agility and scalability.
  • Securing Your Open Source Journey: Keeping your open-source environment safe and secure is paramount. We’ll discuss the robust security solutions available from Canonical and Dell Technologies, ensuring your open-source deployments are protected against potential threats.
  • Accelerating Innovation with AI Solutions: Explore how Canonical and Dell Technologies join forces to provide powerful AI solutions that fuel innovation and unlock new possibilities for your business.

A collaboration built for success

Our partnership with Dell Technologies goes beyond just products. We share a deep commitment to empowering organizations with the tools and solutions they need to thrive in the ever-evolving digital world.

Join us at the Forum

Don’t miss this opportunity to learn more about how Canonical and Dell Technologies can help you achieve your IT goals. Visit our booth at the Dell Technologies Forum – Bangalore, and engage with our experts to discuss your specific needs. 

Location
Sheraton Grand Bengaluru Whitefield Hotel & Convention Center
Prestige Shantiniketan, Hoodi, Thigalarapalya
Whitefield, Bengaluru – 560048

Dates
Thursday, 12 September

Hours
7:30 AM – 4:20 PM

We look forward to connecting with you and exploring the path to innovation together!

Register for the Dell Technologies Forum – Bangalore

Are you interested in setting up a meeting with our team?
Reach out to our Alliances team at partners@canonical.com

on September 10, 2024 07:08 PM

Amateur radio, or “ham” radio, is a hobby that combines electronics, communication technology, and experimentation. It’s a perfect blend for those who enjoy tinkering with both hardware and software. While Windows and macOS are popular choices for many hams, Linux distributions, especially Ubuntu, offer a robust, flexible, and cost-effective alternative for building a ham shack. In this blog post, we’ll explore why Ubuntu is a great choice for ham radio operators and provide a step-by-step guide on setting up a ham shack operating system using Ubuntu.

Why Choose Ubuntu for Your Ham Shack?

Ubuntu, a Debian-based Linux distribution, is known for its user-friendly interface, vast repository of software, and strong community support. Here are a few reasons why Ubuntu is a great choice for amateur radio enthusiasts:

  1. Open Source and Free: Ubuntu is free to download, install, and use. Being open-source means you have full control over the operating system, including the ability to tweak it to suit your specific needs.
  2. Stability and Security: Ubuntu is known for its stability and security. Linux systems are far less commonly targeted by viruses and malware than other desktop operating systems, which matters when you depend on a reliable ham shack.
  3. Vast Software Repository: Ubuntu has a huge software repository, including a wide variety of applications specifically designed for amateur radio. This makes it easy to find and install the tools you need.
  4. Community Support: Ubuntu has a large, active community. If you run into problems or need help setting up a particular piece of software, you’re likely to find solutions in forums, user groups, or dedicated ham radio communities.
  5. Customization: Ubuntu allows for extensive customization. You can strip down the OS to its bare essentials to maximize performance or build a fully-featured desktop environment with all the tools and utilities you need.

Getting Started: Installing Ubuntu

Step 1: Download Ubuntu

Visit the official Ubuntu website to download the latest version of Ubuntu. You can choose between the Long-Term Support (LTS) version, which is stable and receives updates for five years, or the regular release, which includes newer features but is only supported for nine months.

Step 2: Create a Bootable USB Drive

Once you have downloaded the Ubuntu ISO file, create a bootable USB drive. You can use tools like Rufus (Windows) or Etcher (Linux/macOS) to make a bootable USB stick.

Step 3: Install Ubuntu

Boot your computer from the USB drive and follow the on-screen instructions to install Ubuntu. You can choose to install Ubuntu alongside your existing operating system or as a standalone OS.

Essential Ham Radio Software for Ubuntu

Now that you have Ubuntu installed, it’s time to set up your ham shack environment. Here are some essential ham radio software packages you should consider:

1. FLDigi

FLDigi (Fast Light Digital Modem Application) is a popular digital mode software suite for Linux, Windows, and macOS. It supports a wide range of digital modes like PSK31, RTTY, MFSK, and more. FLDigi integrates well with other software, making it an essential part of any ham shack setup.

  • Installation: You can install FLDigi directly from the Ubuntu repository using the following command:
  sudo apt-get install fldigi

2. WSJT-X

WSJT-X is a software suite designed by Joe Taylor, K1JT, for weak-signal digital communication. It supports FT8, JT65, JT9, and other popular digital modes. The software is user-friendly and widely used in the ham radio community.

  • Installation: Download the latest WSJT-X package from the official website and follow the installation instructions provided (a typical install from the downloaded .deb is sketched below).
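If you grab the Debian/Ubuntu .deb from the WSJT-X website, the installation usually boils down to the following (the filename is illustrative; use whatever version you downloaded):

# install the downloaded package, then pull in any missing dependencies
sudo dpkg -i wsjtx_*_amd64.deb
sudo apt-get -f install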

3. CQRLOG

CQRLOG is an advanced logging program for Linux that integrates seamlessly with ham radio applications. It supports real-time logging, QSO records, and features like LoTW and eQSL synchronization.

  • Installation: Install CQRLOG from the Ubuntu repository:
  sudo apt-get install cqrlog

4. GPredict

GPredict is a satellite tracking application that helps you monitor satellite passes in real time. It’s a must-have for any ham operator interested in satellite communication.

  • Installation: Install GPredict using the following command:
  sudo apt-get install gpredict

5. Hamlib

Hamlib provides a standardized API for controlling radios and other shack equipment. Many ham radio applications rely on Hamlib to interface with various radios. It’s an essential library for integrating different hardware with your Ubuntu system.

  • Installation: Install Hamlib via the terminal:
  sudo apt-get install libhamlib-utils

Setting Up Rig Control and CAT Interfaces

One of the key aspects of setting up a ham shack on Ubuntu is ensuring seamless communication between your computer and radio equipment. This usually involves setting up rig control and Computer-Aided Transceiver (CAT) interfaces. The Hamlib library mentioned earlier is crucial for this.

  • Rig Control Setup: Use rigctl (part of Hamlib) to set up rig control. You may need to specify the serial port or USB port where your rig is connected:
  rigctl -m <radio_model_number> -r /dev/ttyUSB0 -s <baud_rate>
  • Testing the Interface: Once set up, test the interface to ensure commands from the computer are correctly interpreted by the radio (see the example below).
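A quick way to check the link is to ask the radio for its current frequency from the command line. This is only a sketch: model 1 is Hamlib’s built-in dummy rig, useful for a dry run with no hardware attached, and the device path and baud rate will depend on your own setup:

# list supported radio models and find your model number
rigctl -l

# dry run against the dummy rig (no hardware needed): 'f' reads the frequency
rigctl -m 1 f

# the same query against a real radio on a USB serial adapter
rigctl -m <radio_model_number> -r /dev/ttyUSB0 -s <baud_rate> f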

Customizing Ubuntu for Ham Radio Use

To optimize Ubuntu for your ham shack, consider the following:

  1. Disable Unnecessary Services: Disable services that aren’t needed to reduce system load.
  2. Optimize Audio Settings: Properly configure ALSA and PulseAudio settings to ensure clear and reliable audio communication.
  3. Set Up a Backup System: Use tools like rsync or Timeshift to set up regular backups of your log files and settings (see the sketch after this list).
  4. Use Virtual Desktops: Take advantage of Ubuntu’s multiple desktops feature to separate your ham radio operations from general computing tasks.
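For item 3, a couple of rsync one-liners can go a long way. The paths below are only examples and depend on which applications you actually use; adjust them to your own shack layout:

# copy logging and digital-mode data to a backup directory
rsync -a --delete ~/.config/cqrlog/ ~/hamshack-backup/cqrlog/
rsync -a --delete ~/.local/share/WSJT-X/ ~/hamshack-backup/wsjt-x/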

Conclusion

Using Ubuntu as your ham shack operating system offers flexibility, stability, and a wide range of powerful software tools. Whether you’re a digital mode enthusiast, a satellite tracker, or someone who loves experimenting with different radio setups, Ubuntu provides an open, customizable platform that can meet your needs. With a little bit of setup and configuration, you’ll have a robust, reliable ham shack operating system tailored just for you.

Dive in, experiment, and enjoy the freedom that comes with using an open-source operating system like Ubuntu in your amateur radio adventures!

The post Using Ubuntu as Your Ham Shack Operating System: A Comprehensive Guide for Amateur Radio Enthusiasts appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

on September 10, 2024 05:16 PM

OpenUK Awards 2024

Jonathan Riddell

https://openuk.uk/openuk-september-2024-newsletter-1/

https://www.linkedin.com/feed/update/urn:li:activity:7238138962253344769/

Our 5th annual Awards are open and our 2024 judges are waiting for your nominations! Hannah Foxwell, Jonathan Riddell, and Nicole Tandy will be selecting winners across 12 categories.

Nominations are now open until midnight UK, 8 September 2024. Our 5th Awards again celebrate the UK’s leadership and global collaboration in open technology!

Nominate now! https://openuk.uk/awards/openuk-awards-2024/

Up to 3 shortlisted nominees will be selected in each category by early October, and each nominee will be given one place at the Oscars of Open Source: the black-tie Awards Ceremony and Gala Dinner for our 5th Awards, held at the House of Lords on 28 November thanks to the sponsorship of Lord Wei.

on September 10, 2024 02:28 PM

In an era where digital privacy is increasingly at risk, securing your DNS (Domain Name System) queries is crucial. Traditional DNS requests are sent in plaintext, making them vulnerable to eavesdropping and tampering. Fortunately, DNS over HTTPS (DoH) and DNS over TLS (DoT) offer encrypted channels for DNS queries, significantly enhancing your privacy and security.

In this guide, we’ll explore how to set up your own personal DNS resolver using open-source software that supports both DoH and DoT. We will cover the installation and configuration of Unbound, Caddy, Stubby, and other relevant tools to ensure your DNS traffic remains private and secure.

Understanding DoH and DoT

  • DNS over HTTPS (DoH): Encrypts DNS queries using the HTTPS protocol, making it difficult to distinguish DNS traffic from regular web traffic. This helps bypass censorship and improve privacy.
  • DNS over TLS (DoT): Encrypts DNS queries using Transport Layer Security (TLS), securing the communication channel between your device and the DNS resolver.

Both protocols prevent eavesdropping and manipulation of DNS data by external parties, such as ISPs or malicious actors.

Why Run Your Own DNS Resolver?

Running your own DNS resolver has several advantages:

  1. Enhanced Privacy: Prevent third-party DNS services from logging or selling your DNS queries.
  2. Increased Security: Protect against DNS hijacking and other DNS-related threats.
  3. Customization: Apply custom DNS filtering rules, block ads and trackers, or direct specific domains to chosen IPs.
  4. Improved Performance: Reduce latency by caching DNS responses and optimizing resolver placement.

Open-Source Software for DoH and DoT

We’ll focus on the following open-source tools to set up a personal DNS resolver with support for DoH and DoT:

  1. Unbound: A high-performance DNS resolver that supports DNS over TLS (DoT).
  2. Caddy: A modern web server with native support for DNS over HTTPS (DoH).
  3. Stubby: A DNS privacy daemon designed for DNS over TLS (DoT).
  4. Knot Resolver: A versatile DNS resolver supporting both DoH and DoT.
  5. CoreDNS: A DNS server with modular support for DoH and DoT via plugins.
  6. DNSDist: A DNS load balancer that can proxy DNS queries over HTTPS and TLS.

1. Unbound: High-Performance DNS Resolver with DoT Support

Unbound is a powerful DNS resolver that supports DNS over TLS (DoT). Here’s how to install and configure it:

Install Unbound

For Debian-based systems:

sudo apt update
sudo apt install unbound

For Red Hat-based systems:

sudo yum install unbound

Configure Unbound

Edit the Unbound configuration file at /etc/unbound/unbound.conf:

server:
    interface: 0.0.0.0@853
    interface: ::0@853
    tls-service-key: "/etc/unbound/unbound_server.key"
    tls-service-pem: "/etc/unbound/unbound_server.pem"
    # CA bundle used to validate the upstream DoT servers below
    tls-cert-bundle: "/etc/ssl/certs/ca-certificates.crt"
    access-control: 127.0.0.0/8 allow
    access-control: ::1 allow
    root-hints: "/etc/unbound/root.hints"
    cache-max-ttl: 86400
    cache-min-ttl: 3600

forward-zone:
    name: "."
    forward-tls-upstream: yes
    # the name after '#' is used to verify the upstream certificate
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 8.8.8.8@853#dns.google

Generate TLS certificates:

openssl req -x509 -newkey rsa:4096 -keyout /etc/unbound/unbound_server.key -out /etc/unbound/unbound_server.pem -days 365 -nodes -subj "/CN=yourdomain.com"

Start Unbound:

sudo systemctl enable unbound
sudo systemctl start unbound
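Once Unbound is up, it is worth checking the DoT listener from the same machine before pointing anything else at it. A quick sanity check, assuming you have kdig from the knot-dnsutils package installed:

sudo apt install knot-dnsutils
kdig @127.0.0.1 +tls example.com A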

2. Caddy: Modern Web Server with Native DoH Support

Caddy’s automatic HTTPS and flexible reverse proxying make it a convenient front end for DNS over HTTPS (DoH): it terminates the HTTPS side and hands queries to a DoH-capable resolver behind it.

Install Caddy

For Debian-based systems:

sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo tee /etc/apt/trusted.gpg.d/caddy-stable.asc
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy

Configure Caddy

Create or edit the Caddyfile at /etc/caddy/Caddyfile:

yourdomain.com {
    tls /etc/caddy/caddy_server.pem /etc/caddy/caddy_server.key

    # Hand DoH requests to a local DoH-capable backend
    # (for example dnsdist or a DoH proxy listening on 127.0.0.1:8053)
    reverse_proxy /dns-query 127.0.0.1:8053
}

Generate TLS certificates:

openssl req -x509 -newkey rsa:4096 -keyout /etc/caddy/caddy_server.key -out /etc/caddy/caddy_server.pem -days 365 -nodes -subj "/CN=yourdomain.com"

Start Caddy:

sudo systemctl enable caddy
sudo systemctl start caddy
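With Caddy running (and a DoH-capable backend answering behind it), curl can exercise the endpoint directly, since it supports --doh-url. This assumes yourdomain.com resolves to this machine; add -k and --doh-insecure while testing with a self-signed certificate:

curl -sI --doh-url https://yourdomain.com/dns-query https://example.com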

3. Stubby: DNS Privacy Daemon for DoT

Stubby is a lightweight daemon for DNS over TLS.

Install Stubby

For Debian-based systems:

sudo apt update
sudo apt install stubby

Configure Stubby

Edit the configuration file at /etc/stubby/stubby.yml:

resolution_type: GETDNS_RESOLUTION_STUB
dns_transport_list:
  - GETDNS_TRANSPORT_TLS

# where Stubby listens for plain DNS queries from local clients
listen_addresses:
  - 127.0.0.1
  - 0::1

tls_authentication: GETDNS_AUTHENTICATION_REQUIRED
tls_query_padding_blocksize: 128
edns_client_subnet_private: 1

round_robin_upstreams: 1

upstream_recursive_servers:
  - address_data: 1.1.1.1
    tls_port: 853
    tls_auth_name: "cloudflare-dns.com"
  - address_data: 8.8.8.8
    tls_port: 853
    tls_auth_name: "dns.google"

Start Stubby:

sudo systemctl enable stubby
sudo systemctl start stubby
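With Stubby listening on the loopback addresses configured above, a plain dig against it should return answers that were fetched over TLS upstream:

dig @127.0.0.1 example.com A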

4. Knot Resolver: Versatile DNS Resolver with DoH and DoT

Knot Resolver supports both DoH and DoT.

Install Knot Resolver

For Debian-based systems:

sudo apt update
sudo apt install knot-resolver

Configure Knot Resolver

Edit the configuration file at /etc/knot-resolver/kresd.conf:

-- Listen for plain DNS, DNS over TLS (DoT) and DNS over HTTPS (DoH)
net.listen('127.0.0.1', 53, { kind = 'dns' })
net.listen('0.0.0.0', 853, { kind = 'tls' })
net.listen('0.0.0.0', 443, { kind = 'doh2' })

-- Certificate and key used by the TLS and HTTPS listeners
net.tls("/etc/knot-resolver/server.pem", "/etc/knot-resolver/server.key")

-- Forward all queries upstream over TLS
policy.add(policy.all(policy.TLS_FORWARD({
    { '1.1.1.1', hostname = 'cloudflare-dns.com' },
    { '8.8.8.8', hostname = 'dns.google' },
})))

Start Knot Resolver (the package ships a templated systemd unit, so you enable an instance):

sudo systemctl enable --now kresd@1

5. CoreDNS: Modular DNS Server with DoH and DoT Plugins

CoreDNS supports DoH and DoT through plugins.

Install CoreDNS

For Debian-based systems:

# grab the current release from https://github.com/coredns/coredns/releases, e.g.:
curl -LO https://github.com/coredns/coredns/releases/download/v1.11.1/coredns_1.11.1_linux_amd64.tgz
tar xzf coredns_1.11.1_linux_amd64.tgz
sudo mv coredns /usr/local/bin/

Configure CoreDNS

Create or edit the CoreDNS configuration file (e.g., /etc/coredns/Corefile):

.:53 {
    forward . 1.1.1.1 8.8.8.8
    log
}

# DNS over TLS on port 853
tls://.:853 {
    tls /etc/coredns/server.pem /etc/coredns/server.key
    forward . 1.1.1.1 8.8.8.8
}

# DNS over HTTPS on port 443
https://.:443 {
    tls /etc/coredns/server.pem /etc/coredns/server.key
    forward . 1.1.1.1 8.8.8.8
}

Start CoreDNS (it needs root, or the appropriate capabilities, to bind the privileged ports):

sudo coredns -conf /etc/coredns/Corefile

6. DNSDist: DNS Load Balancer with DoH and DoT Proxy

DNSDist can proxy DNS queries over HTTPS and TLS.

Install DNSDist

For Debian-based systems:

sudo apt update
sudo apt install dnsdist

Configure DNSDist

Edit the configuration file at /etc/dnsdist/dnsdist.conf:

-- Listen for DNS over TLS (DoT); reuse or generate a certificate as in the earlier sections
addTLSLocal("127.0.0.1:853", "/etc/dnsdist/server.pem", "/etc/dnsdist/server.key")

-- Listen for DNS over HTTPS (DoH)
addDOHLocal("127.0.0.1:443", "/etc/dnsdist/server.pem", "/etc/dnsdist/server.key", "/dns-query")

-- Forward to upstream resolvers over TLS
newServer({address = "1.1.1.1:853", tls = "openssl", subjectName = "cloudflare-dns.com"})
newServer({address = "8.8.8.8:853", tls = "openssl", subjectName = "dns.google"})

Start DNSDist:

sudo systemctl enable dnsdist
sudo systemctl start dnsdist

Combining Tools for Comprehensive DNS Privacy

Integrating multiple tools can provide a robust DNS privacy solution. For instance:

  • Stubby + Unbound: Use Stubby to forward queries over TLS to Unbound, which performs DNS resolution and caching.
  • Caddy + Unbound: Set up Unbound for DoT and Caddy for DoH to provide secure DNS resolution over both protocols.
  • Knot Resolver: As an all-in-one solution for both DoH and DoT.

Conclusion

Securing your DNS traffic is essential to maintaining privacy and protecting against potential threats. With open-source tools like Unbound, Caddy, Stubby, Knot Resolver, CoreDNS, and DNSDist, you can set up a personal DNS resolver that supports both DNS over HTTPS and DNS over TLS. These tools offer flexibility, privacy, and control over your DNS queries, ensuring a more secure and private browsing experience.

Explore and configure these solutions to meet your specific needs and enjoy a safer online experience.

The post Setting Up Personal DNS over HTTPS (DoH) and DNS over TLS (DoT) Using Open-Source Software appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

on September 10, 2024 01:55 PM

Date: 9-10 October 2024

Booth: B8

After Data & AI Masters, we cross the North Sea to attend one of the leading AI events in Europe. Between the 9th and 10th of October, our team will be in Amsterdam at World Summit AI for the second year in a row. In 2023, we had a blast learning about the ethics of AI, security risks and the biggest challenges enterprises face. Join Canonical at our booth and at our talk, where you’ll be able to get in-person recommendations for innovating at speed with open source AI.

Canonical is the publisher of Ubuntu, the leading Linux distribution that has been around for 20 years. Our promise to provide secure open source software does not fall short when it comes to AI infrastructure. We have been pioneering in this space, active in the Kubeflow community since its early days, for example. We publish one of the official distributions of this MLOps platform, providing not only security maintenance but also enterprise support, further tooling integrations and managed services. Nowadays, Canonical’s MLOps portfolio includes a suite of tools that help you run the entire machine learning lifecycle at all scales, from AI workstations to the cloud to edge devices. Let’s meet to talk more about it!

Engaging with industry leaders, open source users, and organisations looking to scale their ML projects is a priority for us. We’re excited to connect with attendees at World Summit AI to meet, share a cup of coffee, and give you our insights in this vibrant ecosystem.

Innovate at speed with open source AI

Ever since we launched our portfolio of cloud-native applications, our goal has been to enable organisations to run their AI projects using one integrated open source stack from one vendor – which works on any CNCF-conformant Kubernetes distribution and in any environment, whether on-prem or on any major public cloud.

For World Summit AI, we have prepared a series of demos to show you how open source tooling can help you with your data & AI projects. You should visit our booth if you:

  • Have questions about AI, MLOps, Data and the role of open source
  • Need help with defining your MLOps architecture
  • Are looking for secure open source software for your Data & AI initiatives
  • Would like to learn more about Canonical and our solutions

Infrastructure for GenAI: how do we scale it?

In 2023, the Linux Foundation published a report which found that almost half of the surveyed organisations prefer open source tooling for their GenAI projects. The rapid adoption of emerging technologies challenges enterprises to address some pressing issues such as security, transparency and costs. While initial experimentation seems an easy step for anyone because of the large number of solutions available on the market, scaling GenAI projects has proved to be a difficult task that requires organisations to upgrade their AI infrastructure, address data protection and enable large teams to collaborate.

Join my talk, “GenAI with open source: simplify and scale”, at World Summit AI on 9th October at 15:30 in the Accelerating AI Adoption track. During the presentation, I will guide you through how to scale your GenAI projects using open source tooling such as Kubeflow and OpenSearch. We will explore the key considerations, common pitfalls, and challenges organisations face when starting a new initiative. Finally, we will analyse ready-made ML models and scenarios to determine when they are more suitable than building your own solution.

At the end of the talk, you will be better equipped to run your GenAI projects in production, using secure open source tooling. 

Join us at Booth B8 

If you are attending World Summit AI 2024 in Amsterdam, Netherlands between 9-10 October, make sure to visit booth B8. Our team of open source experts will be available throughout the day to answer all your questions about AI/ML and beyond.

You can already book a meeting with one of our team members using the link below.

on September 10, 2024 08:35 AM

Announcing Incus 6.5

Stéphane Graber

This release contains a very good mix of bug fixes and performance improvements as well as exciting new features across the board!

The highlights for this release are:

  • Instance auto-restart
  • Column selection in all list commands
  • QMP command hooks and scriptlet
  • Live disk resize for VMs
  • PCI devices hotplug for VMs
  • OVN load-balancer health checks
  • OVN Interconnect ECMP support
  • OVN NICs promiscuous mode
  • OVN NICs disabling of IP allocation
  • Configurable LVM PV metadata size
  • Configurable OVS socket path

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

on September 10, 2024 07:00 AM

September 09, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 856 for the week of September 1 – 7, 2024. The full version of this issue is available here.

In this issue we cover:

  • Upgrades to Ubuntu 24.04 LTS Suspended / Re-enabled
  • Ubuntu Stats
  • Hot in Support
  • Ubuntu Meeting Activity Reports
  • LXD: Weekly news – 361
  • Starcraft Clinic – 2024-Aug-30
  • UbuCon Asia
  • LoCo Events
  • Jammy Jellyfish (22.04.5 LTS) Point-Release Status Tracking
  • Ubuntu Representation at EthAccra 2024
  • A desktop touched by Midas: Oracular Oriole
  • Looking for more internship project ideas for Outreachy (December-March cohort)
  • Ubuntu Summit 2024: A logo takes flight
  • Canonical News
  • In the Press
  • In the Blogosphere
  • In Other News
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • soumyadghosh
  • sukso96100
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

on September 09, 2024 11:50 PM

September 06, 2024

This is mostly an informational PSA for anyone struggling to get Windows 3.11 working in modern versions of QEMU. Yeah, I know, not exactly a massively viral target audience.

Anyway, short answer, use QEMU 5.2.0 from December 2020 to run Windows 3.11 from November 1993.

Windows 3.11, at 1280x1024, running Internet Explorer 5, looking at a GitHub issue

An innocent beginning

I made a harmless jokey reply to a toot from Thom at OSNews, lamenting the lack of a native Mastodon client for Windows 3.11.

When I saw Thom’s toot, I couldn’t resist, and booted a Windows 3.11 VM that I’d installed six weeks ago, manually from floppy disk images of MSDOS and Windows.

I already had Lotus Organiser installed to post a little bit of nostalgia-farming on threads - it’s what they do over there.

I thought it might be fun to post a jokey diary entry. I hurriedly made my silly post five minutes after Thom’s toot, expecting not to think about this again.

Incorrect, brain

I shut the VM down, then went to get coffee, chuckling to my smart, smug self about my successful nerdy rapid-response. While the kettle boiled, I started pondering - “Wait, if I really did want to make a Mastodon client for Windows 3.11, how would I do it?”

I pondered and dismissed numerous shortcuts, including, but not limited to:

  • Fake it with screenshots doctored in MS Paint
  • Run an existing DOS Mastodon Client in a Window
  • Use the Windows Telnet client to connect insecurely to my laptop running the Linux command-line Mastodon client, Toot
  • Set up a proxy through which I could get to a Mastodon web page

I pondered a different way, in which I’d build a very simple proof of concept native Windows client, and leverage the Mastodon API. I’m not proficient in (m)any programming languages, but felt something like Turbo Pascal was time-appropriate and roughly within my capabilities.

Diversion

My mind settled on Borland Delphi, which I’d never used, but looked similar enough for a silly project to Borland Turbo Pascal 7.0 for DOS, which I had. So I set about installing Borland Delphi 1.0 from fifteen (virtual) floppy disks, onto my Windows 3.11 “Workstation” VM.

Windows 3.11, with a Borland Delphi window open

Thank you, whoever added the change floppy0 option to the QEMU Monitor. That saved a lot of time, reducing the process to repeating this loop fourteen times:

"Please insert disk 2"
CTRL+ALT+2
(qemu) change floppy0 Disk02.img
CTRL+ALT+1
[ENTER]

During my research for this blog, I found a delightful, nearly decade-old video of David Intersimone (“David I”) running Borland Delphi 1 on Windows 3.11. David makes it all look so easy. Watch this to get a moving-pictures-with-sound idea of what I was looking at in my VM.

Once Delphi was installed, I started pondering the network design. But that thought wasn’t resident in my head for long, because it was immediately replaced with the reason why I didn’t use that Windows 3.11 VM much beyond the original base install.

The networking stack doesn’t work. Or at least, it didn’t.

That could be a problem.

Retro spelunking

I originally installed the VM by following this guide, which is notable as having additional flourishes like mouse, sound, and SVGA support, as well as TCP/IP networking. Unfortunately I couldn’t initially get the network stack working as Windows 3.11 would hang on a black screen after the familiar OS splash image.

Looking back to my silly joke, those 16-bit Windows-based Mastodon dreams quickly turned to dust when I realised I wouldn’t get far without an IP address in the VM.

Hopes raised

After some digging in the depths of retro forums, I stumbled on a four year-old repo maintained by Jaap Joris Vens.

Here’s a fully configured Windows 3.11 machine with a working internet connection and a load of software, games, and of course Microsoft BOB 🤓

Jaap Joris published this ready-to-go Windows 3.11 hard disk image for QEMU, chock full of games, utilities, and drivers. I thought that perhaps their image was configured differently, and thus worked.

However, after downloading it, I got the same “black screen after splash” as with my image. Other retro enthusiasts had the same issue, and reported the details on this issue, about a year ago.

does not work, black screen.

It works for me and many others. Have you followed the instructions? At which point do you see the black screen?

The key to finding the solution was a comment from Jaap Joris pointing out that the disk image “hasn’t changed since it was first committed 3 years ago”, implying it must have worked back then, but doesn’t now.

Joy of Open Source

I figured that if the original uploader had at least some success when the image was created and uploaded, it is indeed likely QEMU or some other component it uses may have (been) broken in the meantime.

So I went rummaging in the source archives, looking for the most recent release of QEMU, immediately prior to the upload. QEMU 5.2.0 looked like a good candidate, dated 8th December 2020, a solid month before 18th January 2021 when the hda.img file was uploaded.

If you build it, they will run

It didn’t take long to compile QEMU 5.2.0 on my ThinkPad Z13 running Ubuntu 24.04.1. It went something like this. I presumed that getting the build dependencies for whatever is the current QEMU version, in the Ubuntu repo today, will get me most of the requirements.

$ sudo apt-get build-dep qemu
$ mkdir qemu
$ cd qemu
$ wget https://download.qemu.org/qemu-5.2.0.tar.xz
$ tar xvf qemu-5.2.0.tar.xz
$ cd qemu-5.2.0
$ ./configure
$ make -j$(nproc)

That was pretty much it. The build ran for a while, and out popped binaries and the other stuff you need to emulate an old OS. I copied the bits required directly to where I already had put Jaap Joris’ hda.img and start script.

$ cd build
$ cp qemu-system-i386 efi-rtl8139.rom efi-e1000.rom efi-ne2k_pci.rom kvmvapic.bin vgabios-cirrus.bin vgabios-stdvga.bin vgabios-vmware.bin bios-256k.bin ~/VMs/windows-3.1/

I then tweaked the start script to launch the local home-compiled qemu-system-i386 binary, rather than the one in the path, supplied by the distro:

$ cat start
#!/bin/bash
./qemu-system-i386 -nic user,ipv6=off,model=ne2k_pci -drive format=raw,file=hda.img -vga cirrus -device sb16 -display gtk,zoom-to-fit=on

This worked a treat. You can probably make out in the screenshot below, that I’m using Internet Explorer 5 to visit the GitHub issue which kinda renders when proxied via FrogFind by Action Retro.

Windows 3.11, at 1280x1024, running Internet Explorer 5, looking at a GitHub issue

Share…

I briefly toyed with the idea of building a deb of this version of QEMU for a few modern Ubuntu releases and throwing that in a Launchpad PPA, then realised I’d need to make sure the name doesn’t collide with the packaged QEMU in Ubuntu.

I honestly couldn’t be bothered to go through the pain of effectively renaming (forking) QEMU to something like OLDQEMU so as not to damage existing installs. I’m sure someone could do it if they tried, but I suspect it’s quite a search and replace, or move the binaries somewhere under /opt. Too much effort for my brain.
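For anyone who does fancy keeping a home-built QEMU around without treading on the distro packages, a --prefix build is probably the least painful route. A rough sketch (untested, and the paths are just an example):

$ ./configure --prefix=/opt/oldqemu --target-list=i386-softmmu
$ make -j$(nproc)
$ sudo make install
$ /opt/oldqemu/bin/qemu-system-i386 --version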

I then started building a snap of qemu as oldqemu - which wouldn’t require any “real” forking or renaming. The snap could be called oldqemu but still contain qemu-system-i386 which wouldn’t clash with any existing binaries of the same name as they’d be self-contained inside the compressed snap, and would be launched as oldqemu.qemu-system-i386.

That would make for one package to maintain rather than one per release of Ubuntu. (Which is, as I am sure everyone is aware, one of the primary advantages of making snaps instead of debs in the first place.)

Anyway, I got stuck with another technical challenge in the time I allowed myself to make the oldqemu snap. I might re-visit it, especially as I could leverage the Launchpad Build farm to make multiple architecture builds for me to share.

…or not

In the meantime, the instructions are above, and also (roughly) in the comment I left on the issue, which has kindly been re-opened.

Now, about that Windows 3.11 Mastodon client…

on September 06, 2024 01:40 PM

September 05, 2024

uCareSystem has had the ability to detect packages that were uninstalled and then remove their config files. Now it uses a better way that detects more. Also with this release, there are fixes and enhancements that make it even more useful. First of all, it’s the Olympics… you saw the app icon that was change […]
on September 05, 2024 09:09 PM

E314 Rute Correia II

Podcast Ubuntu Portugal

The conversation continues with Rute Correia, unconditional Pusheen fan, who tells us that consoles now cost an arm and a leg; how the Steam Deck is a cuddly contraption, thanks to the customisation possibilities of its software and open interface - or even knitted cases that make everything that much snazzier. We talked about the XBOX, the Vita, how the SEGA Dreamcast was ahead of its time, and we even hurled invective at manufacturers to bring back the colourful, translucent look of the 90s instead of making everything black.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of this for 15 dollars, or different tiers depending on whether you pay 1 or 8. We think it is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the terms of the MIT Licence. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on September 05, 2024 12:00 AM

September 02, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 855 for the week of August 25 – 31, 2024. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu 22.04.5 final point-release delayed until September 12
  • Ubuntu 24.04.1 LTS released
  • Ubuntu Stats
  • Hot in Support
  • Ubuntu Meeting Activity Reports
  • Rocks Public Journal; 2024-08-27
  • Convocatória para apresentação de propostas (Call for proposals)
  • UbuCon Asia 2025 – Call for Bids!
  • LoCo Council approved and formalized LoCo Handover process
  • LoCo Events
  • Introducing Kernel 6.11 for the 24.10 Oracular Oriole Release
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

on September 02, 2024 11:06 PM

Beer, cake and ISO testing amidst rugby and jazz band chaos

On Saturday, the Debian South Africa team got together in Cape Town to celebrate Debian’s 31st birthday and to perform ISO testing for the Debian 11.11 and 12.7 point releases.

We ran out of time to organise a fancy printed cake like we had last year, but our improvisation worked out just fine!

We thought that we had allotted plenty of time for all of our activities for the day, and that there would be enough time for everything, including training, but the day zipped by really fast. We hired a venue at a brewery, which is usually really nice because they have an isolated area with lots of space and a big TV – nice for presentations, demos, etc. But on this day, there was a big rugby match between South Africa and New Zealand, and as it got closer to the game, the place just got louder and louder (especially as a band started practicing and doing sound tests for their performance for that evening) and it turned out our space was also double-booked later in the afternoon, so we had to relocate.

Even amidst all the chaos, we ended up having a very productive day and we even managed to have some fun!

Four people from our local team performed ISO testing for the very first time, and in total we covered 44 test cases locally. Most of the other testers were the usual crowd in the UK; we also did a brief video call with them, but it was dinner time for them, so we had to keep it short. Next time we’ll probably have some party line open that any tester can also join.

Logo

We went through some more iterations of our local team logo that Tammy has been working on. They’re turning out very nice and have been in progress for more than a year; I guess, like most things Debian, it will be ready when it’s ready!

Debian 11.11 and Debian 12.7 released, and looking ahead towards Debian 13

Both point releases tested just fine and were released later in the evening. I’m very glad that we managed to be useful and reduce total testing time, and that we managed to cover all the test cases in the end.

A bunch of things we really wanted to fix by the time Debian 12 launched are now finally fixed in 12.7. There are still a few minor annoyances, but over all, Debian 13 (trixie) is looking even better than Debian 12 was around this time in the release cycle.

Freeze dates for trixie have not yet been announced; I hope that the release team announces those sooner rather than later. KDE Plasma 6 also hasn’t yet made its way into unstable; I’ve seen quite a number of people ask about this online, so hopefully that works out.

And by the way, the desktop artwork submissions for trixie end in two weeks! More information about that is available on the Debian wiki if you’re interested in making a contribution. There are already 4 great proposals.

Debian Local Groups

Organising local events for Debian is probably easier than you think, and Debian does make funding available for events. So, if you want to grow Debian in your area, feel free to join us in #debian-localgroups on the OFTC IRC network, also plumbed on Matrix at #debian-localgroups:matrix.debian.social – where we’ll try to answer any questions you might have and guide you through the process!

Oh and btw… South Africa won the Rugby!

on September 02, 2024 01:01 PM

September 01, 2024

All but about four hours of my Debian contributions this month were sponsored by Freexian. (I ended up going a bit over my 20% billing limit this month.)

You can also support my work directly via Liberapay.

man-db and friends

I released libpipeline 1.5.8 and man-db 2.13.0.

Since autopkgtests are great for making sure we spot regressions caused by changes in dependencies, I added one to man-db that runs the upstream tests against the installed package. This required some preparatory work upstream, but otherwise was surprisingly easy to do.

OpenSSH

I fixed the various 9.8 regressions I mentioned last month: socket activation, libssh2, and Twisted. There were a few other regressions reported too: TCP wrappers support, openssh-server-udeb, and xinetd were all broken by changes related to the listener/per-session binary split, and I fixed all of those.

Once all that had made it through to testing, I finally uploaded the first stage of my plan to split out GSS-API support: there are now openssh-client-gssapi and openssh-server-gssapi packages in unstable, and if you use either GSS-API authentication or key exchange then you should install the corresponding package in order for upgrades to trixie+1 to work correctly. I’ll write a release note once this has reached testing.

Multiple identical results from getaddrinfo

I expect this is really a bug in a chroot creation script somewhere, but I haven’t been able to track down what’s causing it yet. My sbuild chroots, and apparently Lucas Nussbaum’s as well, have an /etc/hosts that looks like this:

$ cat /var/lib/schroot/chroots/sid-amd64/etc/hosts
127.0.0.1       localhost
127.0.1.1       [...]
127.0.0.1       localhost ip6-localhost ip6-loopback

The last line clearly ought to be ::1 rather than 127.0.0.1; but things mostly work anyway, since most code doesn’t really care which protocol it uses to talk to localhost. However, a few things try to set up test listeners by calling getaddrinfo("localhost", ...) and binding a socket for each result. This goes wrong if there are duplicates in the resulting list, and the test output is typically very confusing: it looks just like what you’d see if a test isn’t tearing down its resources correctly, which is a much more common thing for a test suite to get wrong, so it took me a while to spot the problem.
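A quick way to check a chroot for this is getent, which goes through the same getaddrinfo path; with an /etc/hosts like the one above, 127.0.0.1 shows up more than once in the output (the command assumes a schroot layout like the one above):

$ sudo chroot /var/lib/schroot/chroots/sid-amd64 getent ahosts localhost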

I ran into this in both python-asyncssh (#1052788, upstream PR) and Ruby (ruby3.1/#1069399, ruby3.2/#1064685, ruby3.3/#1077462, upstream PR). The latter took a while since Ruby isn’t one of my languages, but hey, I’ve tackled much harder side quests. I NMUed ruby3.1 for this since it was showing up as a blocker for openssl testing migration, but haven’t done the other active versions (yet, anyway).

OpenSSL vs. cryptography

I tend to care about openssl migrating to testing promptly, since openssh uploads have a habit of getting stuck on it otherwise.

Debian’s OpenSSL packaging recently split out some legacy code (cryptography that’s no longer considered a good idea to use, but that’s sometimes needed for compatibility) to an openssl-legacy-provider package, and added a Recommends on it. Most users install Recommends, but package build processes don’t; and the Python cryptography package requires this code unless you set the CRYPTOGRAPHY_OPENSSL_NO_LEGACY=1 environment variable, which caused a bunch of packages that build-depend on it to fail to build.

After playing whack-a-mole setting that environment variable in a few packages’ build process, I decided I didn’t want to be caught in the middle here and filed an upstream issue to see if I could get Debian’s OpenSSL team and cryptography’s upstream talking to each other directly. There was some moderately spirited discussion and the issue remains open, but for the time being the OpenSSL team has effectively reverted the change so it’s no longer a pressing problem.

GCC 14 regressions

Continuing from last month, I fixed build failures in pccts (NMU) and trn4.

Python team

I upgraded alembic, automat, gunicorn, incremental, referencing, pympler (fixing compatibility with Python >= 3.10), python-aiohttp, python-asyncssh (fixing CVE-2023-46445, CVE-2023-46446, and CVE-2023-48795), python-avro, python-multidict (fixing a build failure with GCC 14), python-tokenize-rt, python-zipp, pyupgrade, twisted (fixing CVE-2024-41671 and CVE-2024-41810), zope.exceptions, zope.interface, zope.proxy, zope.security, zope.testrunner. In the process, I added myself to Uploaders for zope.interface; I’m reasonably comfortable with the Zope Toolkit and I seem to be gradually picking up much of its maintenance in Debian.

A few of these required their own bits of yak-shaving:

I improved some Multi-Arch: foreign tagging (python-importlib-metadata, python-typing-extensions, python-zipp).

I fixed build failures in pipenv, python-stdlib-list, psycopg3, and sen, and fixed autopkgtest failures in autoimport (upstream PR), python-semantic-release and rstcheck.

Upstream for zope.file (not in Debian) filed an issue about a test failure with Python 3.12, which I tracked down to a Python 3.12 compatibility PR in zope.security.

I made python-nacl build reproducibly (upstream PR).

I moved aliased files from / to /usr in timekpr-next (#1073722).

Installer team

I applied a patch from Ubuntu to make os-prober support building with the noudeb profile (#983325).

on September 01, 2024 01:29 PM

Plesk high swap usage

Dougie Richardson

I’ve been seeing warnings about high swap consumption in Plesk on Ubuntu 20.04.6 LTS.

I had a look in top and noticed clamd using 1.0G of swap. After a little digging around, it looks like it might be related to a change in ClamAV 0.103.0, where non-blocking signature database reloads were introduced.

Major changes

  • clamd can now reload the signature database without blocking scanning. This multi-threaded database reload improvement was made possible thanks to a community effort.
    • Non-blocking database reloads are now the default behavior. Some systems that are more constrained on RAM may need to disable non-blocking reloads, as it will temporarily consume double the amount of memory. We added a new clamd config option ConcurrentDatabaseReload, which may be set to no.

I disabled the option and the difference is dramatic.
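For reference, the change amounts to something like this, assuming the stock Ubuntu clamav-daemon paths (the option name comes straight from the release notes above):

# set ConcurrentDatabaseReload to no in /etc/clamav/clamd.conf
echo "ConcurrentDatabaseReload no" | sudo tee -a /etc/clamav/clamd.conf
sudo systemctl restart clamav-daemon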

I’ll keep an eye on it I guess.

on September 01, 2024 09:11 AM

August 31, 2024

Thanks to all the hard work from our contributors, Lubuntu 24.04.1 LTS has been released. With the codename Noble Numbat, Lubuntu 24.04 is the 26th release of Lubuntu, the 12th release of Lubuntu with LXQt as the default desktop environment. Support lifespan: Lubuntu 24.04 LTS will be supported for 3 years until April 2027. Our […]
on August 31, 2024 12:17 PM

August 30, 2024

tl;dr

I bodged together a Python script using Spotipy (not a typo) to feed me #NewMusicDaily in a Spotify playlist.

No AI/ML, all automated, “fresh” tunes every day. Tunes that I enjoy get preserved in a Keepers playlist; those I don’t like to get relegated to the Sleepers playlist.

Any tracks older than eleven days are deleted from the main playlist, so I automatically get a constant flow of new stuff.

My personal Zane Lowe in a box

Nutshell

  1. The script automatically populates this Virtual Zane Lowe playlist with semi-randomly selected songs that were released within the last week or so, no older (or newer).
  2. I listen (exclusively?) to that list for a month, signaling songs I like by hitting a button on Spotify.
  3. Every day, the script checks for ’expired’ songs whose release date has passed by more than 11 days.
  4. The script moves songs I don’t like to the Sleepers playlist for archival (and later analysis), and to stop me hearing them.
  5. It moves songs I do like to the Keepers playlist, so I don’t lose them (and later analysis).
  6. Goto 1.

I can run the script at any time to “top up” the playlist or just let it run regularly to drip-feed me new music, a few tracks at a time.

Clearly, once I have stashed some favourites away in the Keepers pile, I can further investigate those artists, listen to their other tracks, and potentially discover more new music.

Below I explain at some length how and why.

NoCastAuGast

I spent an entire month without listening to a single podcast episode in August. I even unsubscribed from everything and deleted all the cached episodes.

Aside: Fun fact: The Apple Podcasts app really doesn’t like being empty and just keeps offering podcasts it knows I once listened to despite unsubscribing. Maybe I’ll get back into listening to these shows again, but music is on my mind for now.

While this is far from a staggering feat of human endeavour in the face of adversity, it was a challenge for me, given that I listened to podcasts all the time. This has been detailed in various issues of my personal email newsletter, which goes out on Fridays and is archived to read online or via RSS.

In August, instead, I re-listened to some audio books I previously enjoyed and re-listened to a lot of music already present on my existing Spotify playlists. This became a problem because I got bored with the playlists. Spotify has an algorithm that can feed me their idea of what I might want, but I decided to eschew their bot and make my own.

Note: I pay for Spotify Premium, then leveraged their API and built my “application” against that platform. I appreciate some people have Strong Opinions™️ about Spotify. I have no plans to stop using Spotify anytime soon. Feel free to use whatever music service you prefer, or self-host your 64-bit, 192 kHz Hi-Res Audio from HDTracks through an Elipson P1 Pre-Amp & DAC and Cary Audio Valve MonoBlok Power Amp in your listening room. I don’t care.

I’ll be here, listening on my Apple AirPods, or blowing the cones out of my car stereo. Anyway…

I spent the month listening to great (IMHO) music, predominantly released in the (distant) past on playlists I chronically mis-manage. On the other hand, my son is an expert playlist curator, a skill he didn’t inherit from me. I suspect he “gets the aux” while driving with friends, partly due to his Spotify playlist mastery.

As I’m not a playlist charmer, I inevitably got bored of the same old music during August, so I decided it was time for a change. During the month of September, my goal is to listen to as much new (to me) music as I can and eschew the crusty playlists of 1990s Brit-pop and late-70s disco.

How does one discover new music though?

Novel solutions

I wrote a Python script.

Hear me out. Back in the day, there was an excellent desktop music player for Linux called Banshee. One of the great features Banshee users loved was “Smart Playlists.” This gave users a lot of control over how a playlist was generated. There was no AI, no cloud, just simple signals from the way you listen to music that could feed into the playlist.

Watch a youthful Jorge Castro from 13 years ago do a quick demo.

Jorge Demonstrating the awesome power of Smart Playlists in Banshee (RIP in Peace)

Aside: Banshee was great, as were many other Mono applications like Tomboy and F-Spot. It’s a shame a bunch of blinkered, paranoid, noisy, and wrong Linux weirdos chased the developers away, effectively killing off those excellent applications. Good job, Linux community.

Hey ho. Moving on. Where was I…

Spotify clearly has some built-in, cloud-based “smarts” to create playlists, recommendations, and queues of songs that its engineers and algorithm think I might like. There’s a fly in the ointment, though, and her name is Alexa.

No, Alexa, NO!

We have a “Smart” speaker in the kitchen; the primary music consumers are not me. So “my” listening history is now somewhat tainted by all the Chase Atlantic & Central Cee my son listens to, and the Michael (fucking) Bublé my wife enjoys. She enjoys it so much that Bublé has featured on my end-of-year “Spotify Unwrapped” multiple times.

I’m sure he’s a delightful chap, but his stuff differs from my taste.

I had some ideas to work around all this nonsense. My goals here are two-fold.

  1. I want to find and enjoy some new music in my life, untainted by other house members.
  2. Feed the Spotify algorithm with new (to me) artists, genres and songs, so it can learn what else I may enjoy listening to.

Obviously, I also need to do something to muzzle the Amazon glossy screen of shopping recommendations and stupid questions.

The bonus side-quest is learning a bit more Python, which I completed. I spent a few hours one evening on this project. It was a fun and educational bit of hacking during time I might otherwise use for podcast listening. The result is four hundred or so lines of Python, including comments. My code, like my blog, tends to be a little verbose because I’m not an expert Python developer.

I’m pretty positive primarily professional programmers potentially produce petite Python.

Not me!

Noodling

My script uses the Spotify API via Spotipy to manage an initially empty, new, “dynamic” playlist. In a nutshell, here’s what the python script does with the empty playlist over time:

  • Use the Spotify search API to find tracks and albums released within the last eleven days to add to the playlist. I also imposed some simple criteria and filters.
    • Tracks must be accessible to me on a paid Spotify account in Great Britain.
    • The maximum number of tracks on the playlist is currently ninety-four, so there’s some variety, but not so much as to be unwieldy. Enough for me to skip some tracks I don’t like, but still have new things to listen to.
    • The maximum tracks per artist or album permitted on the playlist is three, again, for variety. Initially this was one, but I felt it’s hard to fully judge the appeal of an artist or album based off one song (not you: Black Lace), but I don’t want entire albums on the list. Three is a good middle-ground.
    • The maximum number of tracks to add per run is configurable and was initially set at twenty, but I’ll likely reduce that and run the script more frequently for drip-fed freshness.
  • If I use the “favourite” or “like” button on any track in the list before it gets reaped by the script after eleven days, the song gets added to a more permanent keepers playlist. This is so I can quickly build a collection of newer (to me) songs discovered via my script and curated by me with a single button-press.
  • Delete all tracks released more than eleven days ago if I haven’t favourited them. I chose eleven days to keep it modern (in theory) and fresh (foreshadowing). Technically, the script does this step first to make room for additional new songs.

None of this is set in stone, but it is configurable with variables at the start of the script. I’ll likely be fiddling with these through September until I get it “right,” whatever that means for me. Here’s a handy cut-out-and-keep block diagram in case that helps, but I suspect it won’t.

 +---------------------------------------------+
 |               Spotify (Cloud)               |
 |                                             |
 |             +-----------------+             |
 |             |  Main Playlist  |             |
 |             +-----------------+             |
 |             Like |        | Dislike         |
 |                  v        v                 |
 |  +-----------------+   +------------------+ |
 |  | Keeper Playlist |   | Sleeper Playlist | |
 |  +-----------------+   +------------------+ |
 +----------------------+----------------------+
                        ^
                        |
                        v
 +---------------------------------------------+
 |                Python Script                |
 |     (calls the Spotify API and manages      |
 |      the three playlists on a schedule)     |
 +---------------------------------------------+

Next track

The expectation is to run this script automatically every day, multiple times a day, or as often as I like, and end up with a frequently changing list of songs to listen to in one handy playlist. If I don’t like a song, I’ll skip it; when I do like a song, I’ll likely play it more than once, and maybe click the “Like” icon.

My theory is that the list becomes a mix of tracks from between thirty and ninety artists who have released albums over the previous rolling week. After the first test search on Tuesday, the playlist contained 22 tracks, which isn’t enough. I scaled the maximum up over the next few days; it’s now at ninety-four. If I exhaust all the music and get bored of repeats, I can always up the limit to get a few new songs.

In fact, on the very first run of the script, the test playlist completely filled with songs from one artist who had just released a new album. That triggered the implementation of the three-songs-per-artist/album rule to make that less likely.

I appreciate that listening to tracks out of sequence, rather than as a full album, is different from what the artist intended. But thankfully, I don’t listen to a lot of Adele, and the script no longer adds whole albums full of songs to the list. So, no longer a “me” problem.

No AI

I said at the top I’m not using any “AI/ML” in my script, and while that’s true, I don’t control what goes on inside the Spotify datacentre. The script is entirely subject to the whims of the Spotify API as to which tracks get returned to my requests. There are some constraints to the search API query complexity, and limits on what the API returns.

The Spotify API documentation has been excellent so far, as have the Spotipy docs.

Popular songs and artists often organically feature prominently in the API responses. Plus (I presume) artists and labels have financial incentives or an active marketing campaign with Spotify, further skewing search results. The API also has an optional (and amusing) “hipster” tag to show the bottom 10% of results (ranked by popularity). I did that once, didn’t much like it, and won’t do it again.
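
In Spotipy terms that experiment was roughly the following one-liner, reusing the sp client from the sketch above; note that the tag is only valid when searching albums.

# Hedged example of the "hipster" filter: the bottom 10% of albums by popularity.
results = sp.search(q="tag:hipster", type="album", market="GB", limit=10)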

It’s also subject to the music industry publishing music regularly, and licensing it to be streamed via Spotify where I live.

Not quite

With the script as-is, initially, I did not get fresh new tunes every single day as expected, so I had a further fettle to increase my exposure to new songs beyond what’s popular, trending, or tagged “new”. I changed the script to scan the last year of my listening habits to find genres of music I (and the rest of the family) have listened to a lot.

I trimmed this list down (to remove the genre taint) and then fed these genres to the script. It then randomly picks a selection of those genres and queries the API for new releases in those categories.
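
Roughly, that genre tweak looks like this in Spotipy (it also needs the user-top-read scope; the hand-trimmed exclusions and the limits below are made up for illustration):

# Sketch of the genre-based refill, assuming the same sp client as above.
import random

top = sp.current_user_top_artists(limit=50, time_range="long_term")
genres = sorted({g for artist in top["items"] for g in artist["genres"]})

# Trim the list by hand to remove the "genre taint" from other listeners.
chosen = [g for g in genres if g not in {"easy listening", "uk drill"}]

for genre in random.sample(chosen, k=min(3, len(chosen))):
    results = sp.search(q=f'genre:"{genre}"', type="track",
                        market="GB", limit=10)
    for track in results["tracks"]["items"]:
        # The eleven-day freshness check happens elsewhere in the script.
        print(genre, "->", track["artists"][0]["name"], "-", track["name"])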

With these tweaks, I certainly think this script and the resulting playlist are worth listening to. It’s fresher and more dynamic than the 14-year-old playlist I currently listen to. Overall, the script works: I now see songs and artists I’ve not listened to—or even heard of—before. Mission (somewhat) accomplished.

Indeed, with the genres feature enabled, I could add a considerable amount of new music to the list, but I am trying to keep it a manageable size, under a hundred tracks. Thankfully, I don’t need to worry about the script pulling “Death Metal,” “Rainy Day,” and “Disney” categories out of thin air because I can control which ones get chosen. Thus, I can coerce the selection while allowing plenty of randomness and newness.

I have limited the number of genre-specific songs so I don’t get overloaded with one music category over others.

Not new

There are a couple of wrinkles. One song that popped into the playlist this week is “Never Going Back Again” by Fleetwood Mac, recorded live at The Forum, Inglewood, in 1982. That’s older than the majority of what I listened to in all of August! It looks like Warner Records Inc. released that live album on 21st August 2024, well within my eleven-day boundary, so it’s technically within “The Rules” while also not being fresh, new music.

There’s also the compilation complication. Unfresh songs from the past re-released on “TOP HITS 2024” or “DANCE 2024 100 Hot Tracks” also appeared in my search criteria. For example, “Talk Talk” by Charli XCX, from her “Brat” album, released in June, is on the “DANCE 2024 100 Hot Tracks” compilation, released on 23rd August 2024, again, well within my eleven-day boundary.

I’m in two minds about these time-travelling playlist interlopers. I have never knowingly listened to Charli XCX’s “Brat” album by choice, nor have I heard live versions of Fleetwood Mac’s music. I enjoy their work, but it goes against the “new music” goal. But it is new to me, which is the whole point of this exercise.

The further problem with compilations is that they contain music by a variety of artists, so they don’t hit the “max-per-artist” limit but will hit the “max-per-album” rule. However, if the script finds multiple newly released compilations in one run, I might end up with a clutch of random songs spread over numerous “Various Artists” albums, maxing out the playlist with literal “filler.”

I initially allowed compilations, but I’m irrationally bothered that one day, the script will add “The Birdie Song” by Black Lace as part of “DEUTSCHE TOP DISCO 3000 POP GEBURTSTAG PARTY TANZ SONGS ZWANZIG VIERUNDZWANZIG”.

Nein.

I added a filter to omit any “album type: compilation,” which knocks that bopping-bird-based botherer squarely on the bonce.

No more retro Europop compilation complications in my playlist. Alles klar.
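
The filter itself is tiny; something along these lines, where the field name comes from the Spotify album object and the function name is mine:

def is_wanted(album: dict) -> bool:
    # Skip "Various Artists" style compilations entirely.
    return album.get("album_type") != "compilation"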

Not yet

Something else I had yet to consider is that some albums have release dates in the future. Like a fresh-faced newborn baby with an IDE and API documentation, I assumed that albums published would generally have release dates of today or older. There may be a typo in the release_date field, or maybe stuff gets uploaded and made public ahead of time in preparation for a big marketing push on release_date.

I clearly do not understand the music industry or publishing process, which is fine.
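
For what it’s worth, the album object carries a release_date_precision field alongside release_date, so guarding against time-travelling releases can look something like this sketch (not the script’s actual code):

from datetime import date

def parse_release_date(album: dict) -> date:
    # release_date_precision is one of "year", "month" or "day".
    raw = album["release_date"]
    precision = album.get("release_date_precision", "day")
    if precision == "year":
        return date(int(raw), 1, 1)
    if precision == "month":
        year, month = raw.split("-")
        return date(int(year), int(month), 1)
    return date.fromisoformat(raw)

def released_in_the_future(album: dict) -> bool:
    return parse_release_date(album) > date.today()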

Nuke it from orbit

I’ve been testing the script this week while I prototyped it, leading up to the “Grand Launch” in September 2024 (next month/week). At the end of August I will wipe the slate (playlist) clean, and start again on 1st September with whatever rules and optimisations I’ve concocted this week. It will almost certainly re-add some of the same tracks after the 31st August “Grand Purge”, but that’s expected and working as designed. The rest will be pseudo-random genre-specific tracks.

I hope.
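
The purge itself should be a Spotipy one-liner, playlist ID placeholder and all:

# Wipe the dynamic playlist clean by replacing its contents with nothing.
sp.playlist_replace_items("your-dynamic-playlist-id", [])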

Newsletter

I will let this thing go mad each day with the playlist and regroup at the end of September to evaluate how this scheme is going. Expect a follow-up blog post detailing whether this was a fun and interesting excursion or pure folly. Along the way, I did learn a bit more Python, the Spotify API, and some other interesting stuff about music databases and JSON.

So it’s all good stuff, whether I enjoy the music or not.

You can get further, more timely updates in my weekly email newsletter, or view it in the newsletter archive, and via RSS, a little later.

Ken said he got “joy out of reading your newsletter”. YMMV. E&OE. HTH. HAND.

Nomenclature

Every good project needs a name. I initially called it my “Personal Dynamic Playlist of Sixty tracks over Eleven days,” or PDP-11/60 for short, because I’m a colossal nerd. Since bumping the max-tracks limit for the playlist, it could be re-branded PDP-11/94. However, this is a relatively niche and restrictive playlist naming system, so I sought other ideas.

My good friend Martin coined the term “Virtual Zane Lowe” (Zane is a DJ from New Zealand who is apparently renowned for sharing new music). That’s good enough for me. Below are links to all three playlists if you’d like to listen, laugh, live, love, or just look at them.

The “Keepers” and “Sleepers” lists will likely be relatively empty for a few days until the script migrates my preferred and disliked tracks over for safe-keeping & archival, respectively.

November approaches

Come back at the end of the month to see if my script still works, whether the selections are good, whether I’m still listening to this playlist and, most importantly, whether I enjoy doing so!

If it works, I’ll probably continue using it through October and into November as I commute to and from the office. If that happens, I’ll need to update the playlist artwork. Thankfully, there’s an API for that, too!
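
That artwork API is exposed in Spotipy too; a rough sketch, assuming the ugc-image-upload scope and a suitably small JPEG:

import base64

# Spotify expects a base64-encoded JPEG and imposes a size cap, so keep it small.
with open("playlist-cover.jpg", "rb") as image:
    encoded = base64.b64encode(image.read()).decode("ascii")

sp.playlist_upload_cover_image("your-dynamic-playlist-id", encoded)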

I may consider tidying up the script and sharing it online somewhere. It feels a bit niche and requires a paid Spotify account to even function, so I’m not sure what value others would get from it other than a hearty chuckle at my terribad Python “skills.”

One potentially interesting option would be to map the songs in Spotify to another service, such as Apple Music, or even to videos on YouTube. The YouTube API should enable me to manage video playlists that mirror the ones I manage directly on Spotify. That could be a fun further extension to this project.

Another option I considered was converting it to a web app, a service I (and other select individuals) can configure and manage in a browser. I’ll look into that at the end of the month. If the current iteration of the script turns out to be a complete bust, then this idea likely won’t go far, either.

Thanks for reading. AirPods in. Click “Shuffle”.

on August 30, 2024 12:00 PM

August 29, 2024

Upgrades from 22.04 LTS also enabled!

The Ubuntu Studio team is pleased to announce the first service release of Ubuntu Studio 24.04 LTS, 24.04.1. This also marks the opening of upgrades from Ubuntu Studio 22.04 LTS to 24.04 LTS.

If you are running Ubuntu Studio 22.04, you should be receiving an upgrade notification in a matter of days upon login.

Notable Bugs Fixed Specific to Ubuntu Studio:

  • Fixed an issue where PipeWire could not send long SysEx messages when bridging to some MIDI controllers.
  • DisplayCal would not launch because it required a version of Python older than the Python 3.12 that ships with Ubuntu 24.04. This has been fixed.
  • The new installer doesn’t configure users to be part of the audio group by default. However, upon first login, the user that just logged in is automatically configured, but this requires the system to be completely restarted to take effect. The fix to make this seamless is in progress.

Other bugfixes are in progress and/or fixed and can be found in the Ubuntu release notes or the Kubuntu release notes for the desktop environment.

How to get Ubuntu Studio 24.04.1 LTS

Ubuntu Studio 24.04.1 LTS can be found on our download page.

Upgrading to Ubuntu Studio 24.04.1 LTS

If you are running Ubuntu Studio 24.04 LTS, you already have it.

If you are running Ubuntu Studio 22.04 LTS, wait for a notification in your system tray. Otherwise, see the instructions in the release notes.

Contributing and Donating

Right now we mostly need financial contributions and donations. As stated before, our project leader’s family is in a time of need, with his wife having lost her job unexpectedly. We would like to keep the project running and be able to give above and beyond to help them.

Therefore, if you find Ubuntu Studio useful and can find it in your heart to give what you think it’s worth and then some, please do give.

Ways to donate can be found in the sidebar as well as at ubuntustudio.org/contribute.

on August 29, 2024 09:17 PM

Around a decade ago, I was happy to learn about bcache – a Linux block cache system that implements tiered storage (like a pool of hard disks with SSDs for cache) on Linux. At that stage, ZFS on Linux was nowhere close to where it is today, so any progress on gaining more ZFS features in general Linux systems was very welcome. These days we care a bit less about tiered storage, since any cost benefit in using anything else than nvme tends to quickly evaporate compared to time you eventually lose on it.

In 2015, it was announced that bcache would grow into its own filesystem. This was particularly exciting and it caused quite a buzz in the Linux community, because it brought along with it more features that compare with ZFS (and also btrfs), including built-in compression, built-in encryption, check-summing and RAID implementations.

Unlike ZFS, it didn’t have a dkms module, so if you wanted to test bcachefs back then, you’d have to pull the entire upstream bcachefs kernel source tree and compile it. Not ideal, but for a promise of a new, shiny, full-featured filesystem, it was worth it.

In 2019, it seemed that the time had come for bcachefs to be merged into Linux, so I thought it was about time we had the userspace tools (bcachefs-tools) packaged in Debian. Even if the Debian kernel wouldn’t have it yet by the time the bullseye (Debian 11) release happened, it might still have been useful for a future backported kernel or for users who roll their own.

By total coincidence, the first git snapshot that I got into Debian (version 0.1+git20190829.aa2a42b) was committed exactly 5 years ago today.

It was quite easy to package it, since it was written in C and shipped with a makefile that just worked, and it made it past NEW into unstable on 19 January 2020, just as I was about to head off to FOSDEM as the pandemic started, but that’s of course a whole other story.

Fast-forwarding towards the end of 2023, version 1.2 shipped with some utilities written in Rust. This caused a little delay, since I wasn’t at all familiar with Rust packaging yet, so I shipped an update that didn’t yet include those utilities, and saw this as an opportunity to learn more about how the Rust ecosystem worked and about Rust in Debian.

So, back in April the Rust dependencies for bcachefs-tools in Debian didn’t at all match the build requirements. I got some help from the Rust team, who said that the common practice is to relax the dependencies of Rust software so that it builds in Debian. So errno, which needed the exact version 0.2, was relaxed so that it could build with version 0.4 in Debian, udev 0.7 was relaxed for 0.8 in Debian, memoffset from 0.8.5 to 0.6.5, paste from 1.0.11 to 1.08 and bindgen from 0.69.9 to 0.66.

I found this a bit disturbing, but it seems that some Rust people have lots of confidence that if something builds, it will run fine. And at least it did build, and the resulting binaries did work, although I’m personally still not very comfortable or confident about this approach (perhaps that might change as I learn more about Rust).

With that in mind, at this point you may wonder how any distribution could sanely package this. The problem is that they can’t. Fedora and other distributions with stable releases take a similar approach to what we’ve done in Debian, while distributions with much more relaxed policies (like Arch) include all the dependencies as they are vendored upstream.

As it stands now, bcachefs-tools is impossible to maintain in Debian stable. While my primary concerns when packaging are for Debian unstable and the next stable release, I also keep in mind people who have to support these packages long after I stopped caring about them (like Freexian, who does LTS support for Debian, or Canonical, who has long-term Ubuntu support, and probably other organisations that I’ve never even heard of yet). And of course, if bcachefs-tools doesn’t have any usable stable releases, it doesn’t have any LTS releases either, so anyone who needs to support bcachefs-tools long-term has to carry the support burden on their own, and if they bundle its dependencies, then those as well.

I’ll admit that I don’t have any solution for fixing this. I suppose if I were upstream I might look into the possibility of at least supporting a larger range of recent dependencies (usually easy enough if you don’t hop onto the newest features right away) so that distributions with stable releases only need to concern themselves with providing some minimum recent versions, but even if that could work, the upstream author is 100% against any solution other than vendoring all its dependencies with the utility and insisting that it must only be built using these bundled dependencies. I’ve made 6 uploads for this package so far this year, but still I constantly get complaints that it’s out of date and that it’s ancient. If a piece of software is considered so old that it’s useless by the time it’s been published for two or three months, then there’s no way it can survive even a usual stable release cycle, never mind any kind of long-term support.

With this in mind (not even considering some hostile emails that I recently received from the upstream developer or his public rants on lkml and reddit), I decided to remove bcachefs-tools from Debian completely. Although after discussing this with another DD, I was convinced to orphan it instead, which I have now done. I made an upload to experimental so that it’s still available if someone wants to work on it (without having to go through NEW again), it’s been removed from unstable so that it doesn’t migrate to testing, and the ancient (especially by bcachefs-tools standards) versions that are in stable and oldstable will be removed too, since they are very likely to cause damage with any recent kernel versions that support bcachefs.

And so, my adventure with bcachefs-tools comes to an end. I’d advise that if you consider using bcachefs for any kind of production use in the near future, you first consider how supportable it is long-term, and whether there’s really anyone at all that is succeeding in providing stable support for it.

on August 29, 2024 01:04 PM

E313 Rute Correia I

Podcast Ubuntu Portugal

This week we had a visit from Rute Correia: inveterate gamer, grinder extraordinaire and one of the Tias que Malham em Jogos. We talked a bit about everything, but above all about video games, computers, cute consoles in many colours and Free Software (of course). Is it really inevitable that a person who grows up surrounded by computers and screens ends up short-sighted and a nerd? Hmmm. What is the difference between a Steamdeck and a Stream Deck? Is it true that they call her the Zita Seabra of Consoles, for having abandoned the PC? The conversation went so well and we had so much fun that we had to split this episode into two parts - this is the first.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open-source code is licensed under the terms of the MIT Licence. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on August 29, 2024 12:00 AM

August 22, 2024

Incus is a manager for virtual machines and system containers.

A system container (hereafter, container) is an instance of an operating system that also runs on a computer, along with the main operating system. Instead of the hardware virtualization of a virtual machine, a system container uses security primitives of the Linux kernel for separation from the main operating system. You can think of system containers as software virtual machines.

In this post we are going to see how to conveniently let an Incus container have access to the Android USB debugging (ADB) facility of our Android mobile phone. Normally, you would run adb commands on the host, but in this case we are launching a container and giving it access to ADB. We set it up so that only the container can access the phone with ADB.

The reason why I am writing this post is that there are several pitfalls along the way. Let’s figure out how it all works.

Setting up the Android phone for USB Debugging

To enable USB debugging on your Android phone, there’s a hidden list of steps that involves going into your phone settings, tapping seven times on the appropriate place of the About page, and then your phone shows the message You are now a developer. You will need to search for the exact steps for your device, as the place where you need to tap seven times may be different among manufacturers.

Still on the phone, you then need to visit another location in the phone Settings, one that is called Developer options. In there, you need to scroll a bit until you see the option USB debugging. You will need to enable that. When you click to enable, you will be presented with a warning dialog box about the ramifications of having enabled the USB debugging. Read that carefully, and enable USB debugging. Note that after this exercise, and if you do not need to use ADB for a long period, you should disable the Developer options. The risk with the enabled USB debugging is that if you connect your phone with a (data) USB cable to some malicious computer or even a malicious USB charger, they may take over your device in a very bad way.

In the first screenshot it shows the warning when you try to enable USB debugging. The second screenshot shows that the USB debugging has been enabled successfully.

There’s an option in the second screenshot to Revoke USB debugging authorizations. I recommend to do that, especially if you have already connected your phone to your computer. By doing so, we can make sure that the host is not able to connect successfully to the device, and only the container can do so. Note that when you try to connect to the device with adb, you get a dialog box on the phone on whether to authorize this new connection.

When you connect your Android phone to your Linux host, the device should appear in the output of lsusb (list USB devices) as follows. I think the USB Vendor and Product IDs should be the same, 0x18d1 and 0x4ee7 respectively. And it should say at the end “(charging + debug)”. If you get something else, then you fell for the Android notification that says Use USB for [File Transfer / Android Auto]. That’s not good for us, since we need USB debugging. The proper setting is Use USB for [No data transfer]. Yeah, a bit counter-intuitive.

$ lsusb
...
Bus 005 Device 005: ID 18d1:4ee7 Google Inc. Nexus/Pixel Device (charging + debug)
...
$ 

Setting up the host

In order to let the container have access to the phone, we need to make sure that there is no adb server running on the host. We can make sure that this is the case if we run the following. If the ADB server is running on the host, then the container does not have access to the device. Interestingly, if you set up the container properly and ADB is running on the host, then as soon as you adb kill-server on the host, the container immediately has access to the device.

sudo adb kill-server

Creating the container

We are creating the container; we will call it adb, and it will have access to USB debugging on the Android phone. The way we work with Incus is that we create a container for the task of accessing the phone, and then keep that container around for whenever we need to access the phone. In the incus config device command, we add to the adb container a device called adb (you can use any name here), which is of type usb, and has the vendor and product IDs shown below. Finally, we restart the container.

$ incus launch images:debian/12/cloud adb
Launching adb
$ incus config device add adb adb usb vendorid=18d1 productid=4ee7
Device adb added to adb
$ incus exec adb -- apt install -y adb
...
$ incus restart adb
$ 

Using adb in the Incus container

Let’s run adb devices in the container. You will most likely get unauthorized. When you run this command, your phone will prompt you whether you want to authorize this access. You should select to authorize the access.

$ incus exec adb -- adb devices
* daemon not running; starting now at tcp:5037
* daemon started successfully
List of devices attached
42282370	unauthorized
$ 

Now, run the command again.

$ incus exec adb -- adb devices
List of devices attached
42282370	device
$ 

And that’s it.

Stress testing adb in the Incus container

You would like to make this setup as robust as possible. One step is to remove the adb binary from the host.

Another test is to restart the container, and then check whether it still has access to the device.

$ incus restart adb
$ incus exec adb -- adb devices
* daemon not running; starting now at tcp:5037
* daemon started successfully
List of devices attached
42282370	device

$ incus restart adb
$ incus exec adb -- adb devices
* daemon not running; starting now at tcp:5037
* daemon started successfully
List of devices attached
42282370	device
$ 

Obviously, when you want to perform more work with adb, you can just get a shell into the container.

$ incus exec adb -- sudo --login --user debian
debian@adb:~$ adb devices
List of devices attached
42285120	device

debian@adb:~$ adb shell
komodo:/ $ exit
debian@adb:~$ logout
$ 

Conclusion

If you set up your system so that only a designated container has access to your Android phone with USB debugging, then you get a somewhat better setup in terms of security.

on August 22, 2024 03:35 PM

Incus is a manager for virtual machines and system containers.

A system container (hereafter, container) is an instance of an operating system that also runs on a computer, along with the main operating system. Instead of the hardware virtualization of a virtual machine, a system container uses security primitives of the Linux kernel for separation from the main operating system. You can think of system containers as software virtual machines.

In this post we are going to see how to conveniently share a folder between the host and an Incus container. The common use-case is that you want to share files directly between the host and a container, and you want the file ownership to be handled well. Note that in a container the UID and GID do not correspond to the same values as on the host.

Therefore, we are looking at how to share storage between the host and one or more Incus containers. The other case, which we looked into earlier, is sharing storage that has been allocated on the Incus storage pool between containers.

Quick answer

incus config device add mycontainer mysharedfolder disk source=/home/myusername/SHARED path=/SHARED shift=true

Background

On a Linux system the User ID (UID) and Group ID (GID) values are generally in the range of 0 (for root) to 65534 (for nobody). You can verify this by having a look at your /etc/passwd file on your system. In this file, each line is a different account; either a system account or a user account. Such a sample line is the following. There are several fields, separated by a colon character (:). The first field is the user name (here, root). The third field is the numeric user ID (UID) with the value of 0. The fourth field is the numeric group ID (GID) with the value of 0 as well. This is the root account.

root:x:0:0:root:/root:/bin/bash

Let’s do another one. The default user account in Debian and Debian-derived Linux distributions. The username in this Linux installation is myusername, the UID is 1000 and the GID is 1000 as well. This value of 1000 is quite common between Linux distributions.

myusername:x:1000:1000:User,,,:/home/user:/bin/bash

And now this one is the last one we will do. This is a special account with username nobody, and UID/GID at 65534. The purpose of this account is to be used for resources that somehow do not have a valid ID, or for processes and services that are expected to have the least privileges. In an Incus container you will see nobody and nogroup if you shared a folder and the translation of the IDs between the host and the container did not work well or did not happen at all.

nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin

Somewhat sharing folders between the host and Incus containers

To share a directory from the host to your Incus container(s), you add a disk device using incus config device. Here, mycontainer is the name of the container. And mysharedfolder is the name of the share which is only visible in Incus. You specify the source (an existing folder on the host) and the path (a folder in the container). However, there’s one big issue here. There’s no translation between the UID and GID of the files and directories in that folder.

incus config device add mycontainer mysharedfolder disk source=/home/myusername/SHARED path=/SHARED

Here’s the view on the host, then in the container, and then again on the host. The last command shows an empty folder. I think it’s possible to view the contents of SHARED in the container from the viewpoint of the host. It would require a smart use of the nsenter command to enter the proper namespace, which I am not sure yet how to do for ZFS storage pools. If it worked, it would show that the UID of myusername in the container, from the viewpoint of the host, is something like 1001000 (base ID 100000 plus 1000 for the first non-root account). That is, if the container has files with UID/GID outside of its range of 100000 – 165535, those files are not accessible from within the container.

$ ls -l SHARED/
total 0
-rw-rw-r-- 1 myusername myusername 0 Ιουλ 30 19:26 one
-rw-rw-r-- 1 myusername myusername 0 Ιουλ 30 19:26 two
-rw-rw-r-- 1 myusername myusername 0 Ιουλ 30 19:26 three
$ incus exec mycontainer -- ls -l /SHARED/
total 0
-rw-rw-r-- 1 nobody nogroup 0 Jul 30 16:26 one
-rw-rw-r-- 1 nobody nogroup 0 Jul 30 16:26 three
-rw-rw-r-- 1 nobody nogroup 0 Jul 30 16:26 two
$ sudo ls -l /var/lib/incus/containers/mycontainer/rootfs/SHARED
total 0
$ 

This was a good exercise to show the common mistake in sharing a folder from the host to the container. You can now remove the disk device and do it again properly.

$ incus config device remove mycontainer mysharedfolder
Device mysharedfolder removed from mycontainer
$ 

How to properly share a folder between a host and a container in Incus

The shared folder requires some sort of automated UID/GID shifting so that from the point of view of the container, it has valid (within range) values for the UID/GID. This is achieved with the parameter shift=true when we create the Incus disk device.

Let’s see a full example. We create a container and then create a folder on the host, which we call SHARED. Then, in that folder we create three empty files using the touch command. We are being fancy here. Then, we create the Incus disk device to share the folder into the container, and enable shift=true.

$ incus launch images:ubuntu/24.04/cloud mycontainer
Launching mycontainer
$ mkdir SHARED
$ touch SHARED/one SHARED/two SHARED/three
$ incus config device add mycontainer mysharedfolder disk source=/home/myusername/SHARED path=/SHARED shift=true
Device mysharedfolder added to mycontainer
$ 

where

  • incus config device, the Incus command to configure devices.
  • add, we add a device.
  • mycontainer, the name of the container.
  • mysharedfolder, the name of the shared folder. This is only visible from the host, and it’s just any name. We need a name so that we can specify the device when we want to perform further management.
  • disk, this is a disk device. Currently, Incus supports 12 types of devices.
  • source=/home/myusername/SHARED, the absolute path to the source folder. We type source= and then source folder, no spaces in between. The source folder (which is on the host) must already exist.
  • path=/SHARED, the absolute path to the folder in the container.
  • shift=true, the setting that automagically performs the necessary UID/GID translation.

In some cases, for example when the Linux kernel on the Incus host is old, the shift=true setting may not work. Or some filesystems may not support it. I leave it to you to report back, in the comments below, any cases where this did not work.

Let’s verify that everything works OK. First we see the files on the host. In my case, both the user myusername and the group myusername have UID and GID 1000. In the container (which was images:ubuntu/24.04/cloud) they also have the same UID/GID of 1000, but for this Ubuntu runtime the default username with UID 1000 is ubuntu, and the default group with GID 1000 is lxd. These are just names and are not important. If you are still unsure, then use ls with the added parameter --numeric-uid-gid to show the numeric IDs. In both cases below, the UID and GID are 1000.

$ ls -l SHARED/
total 0
-rw-rw-r-- 1 myusername myusername 0 Ιουλ 30 19:26 one
-rw-rw-r-- 1 myusername myusername 0 Ιουλ 30 19:26 two
-rw-rw-r-- 1 myusername myusername 0 Ιουλ 30 19:26 three
$ incus exec mycontainer -- ls -l /SHARED/
total 0
-rw-rw-r-- 1 ubuntu lxd 0 Jul 30 16:26 one
-rw-rw-r-- 1 ubuntu lxd 0 Jul 30 16:26 two
-rw-rw-r-- 1 ubuntu lxd 0 Jul 30 16:26 three
$ 

If we had created an images:debian/12/cloud container, here is how the files would look. The username with UID 1000 is debian, and the group with GID 1000 is netdev.

$ incus exec mycontainer -- ls -l /SHARED/
total 0
-rw-rw-r-- 1 debian netdev 0 Jul 30 16:26 one
-rw-rw-r-- 1 debian netdev 0 Jul 30 16:26 three
-rw-rw-r-- 1 debian netdev 0 Jul 30 16:26 two
$ 

You can create files and subfolders in the shared folder, either on the host or in the container. You can also share it between more containers.

Incus config device commands

Let’s have a look at the incus config device commands.

$ incus config device 
Usage:
  incus config device [flags]
  incus config device [command]

Available Commands:
  add         Add instance devices
  get         Get values for device configuration keys
  list        List instance devices
  override    Copy profile inherited devices and override configuration keys
  remove      Remove instance devices
  set         Set device configuration keys
  show        Show full device configuration
  unset       Unset device configuration keys

Global Flags:
      --debug          Show all debug messages
      --force-local    Force using the local unix socket
  -h, --help           Print help
      --project        Override the source project
  -q, --quiet          Don't show progress information
      --sub-commands   Use with help or --help to view sub-commands
  -v, --verbose        Show all information messages
      --version        Print version number

Use "incus config device [command] --help" for more information about a command.
$ 

We are going to use some of those commands. First, we list the disk devices. There’s currently only one disk device, mysharedfolder. We then show the disk device. We get the list of parameters, which we can get individually and even set them to different values. We then get the value of the path parameter. Finally, we remove the disk device. We have not shown how to override, set and unset.

$ incus config device list mycontainer
mysharedfolder
$ incus config device show mycontainer
mysharedfolder:
  path: /SHARED
  shift: "true"
  source: /home/myusername/SHARED
  type: disk
$ incus config device get mycontainer mysharedfolder path
/SHARED
$ incus config device remove mycontainer mysharedfolder 
Device mysharedfolder removed from mycontainer
$ 

Conclusion

We have seen how to create disk devices on Incus containers in order to share a folder from the host to one or more containers. Using shift=true we can take care of the translation of the UIDs and GIDs. I am interested in corner cases where these do not work. It would help me fix this post, and eventually move it to the official documentation of Incus.

on August 22, 2024 01:08 PM

August 21, 2024

Here's the tl;dr: if you make web apps in or for the UK, the CMA, the UK tech regulator, want to hear from you about their proposals before August 29th 2024, which is only a week away. Read their list of remedies to anticompetitive behaviour between web browsers and platforms, and email your thoughts to browsersandcloud@cma.gov.uk before the deadline. They really do want to hear from you, confidentially if you want, and your voice is useful here; you don't need to have some formally written legal opinion. They want to hear from actual web devs and companies. Email them.

We want to hear from you -- Competition and Markets Authority

Now let's look at what the CMA have written in a little more detail. (This is the "tl" bit, although hopefully you will choose to "r".) They have been conducting a "Market Investigation Reference", which is regulator code for "talk to loads of people to see if there's a problem and then decide what to do about that", and the one we care about is about web browsers. I have, as part of Open Web Advocacy, been part of those consultations a number of times, and they've always been very willing to listen, and they do seem to identify a bunch of problems with browser diversity that I personally also think are problems. You know what we're talking about here: all browsers on iOS are required to be Safari's WebKit and not their own engine; Google have a tight grip on a whole bunch of stuff; browser diversity is a good thing and there's not enough of it in the world and this looks to be from deliberate attempts to act like monopolies by some players. These are the sorts of issues that the CMA are concerned about (and they have published quite a few working papers explaining their issues in detail which you can read). What we're looking at today is their proposed list of fixes for these problems, which they call "remedies". At OWA we have also done this, of course, and you should read the OWA blog post about the CMA's remedies paper. But the first important point is, to be clear, that a whole bunch of these remedies being proposed by the CMA are good. This is not a complaint that it's all bad or that it's toothless, not at all. They're going to stop the #AppleBrowserBan and require that other browsers are allowed to use their own engines on iOS as browser apps and in in-app browsing, they're going to require both Apple and Google to grant other browsers access to the same APIs that their own browsers can get at, they've got suggestions in place for how users can choose which browser they use to get past the problem of the "default hotseat" where you get one browser when you buy a phone and never think to change it, they're suggesting that Google open access to WebAPK minting to everyone. All of these help demonopolise the UK market. This is all good.

Stuart Langridge, Bruce Lawson, and Alex Moore of OWA in the CMA offices in London

But there are some places where their remedies don't really go far enough, and this is the stuff where it would be good for you, my web-engaged reader, to send them an email with your thoughts one way or the other. Part of making the web as capable as a platform-specific app is that web sites can be presented like a platform-specific app. This is (part of) what a PWA is, which most of you reading this will already know about. But releasing your app as a PWA, while it has a bunch of good characteristics for you (no reviews needed, instant updates, full control, cross-platform development, no tithing of money required to the people who sold the phone it's on) also has some downsides. In particular, it's difficult to get people to "install" a PWA, especially on iOS where you have to tell your users to go digging around in the share menu. And this is a fairly big area where the CMA could have proposed remedies, and have so far not chosen to. The first problem here is that iOS Safari doesn't support any sort of install prompt: as mentioned, there's the "add to home screen" entry hidden in the share menu. There's an API for this, everyone else implements it, iOS Safari doesn't. Maybe the API's got problems and needs fixing? That seems fine; engage with the web standards process to get it fixed! But there's been no sign of doing that in any useful way.

The second and related issue is that although the CMA's remedies state that browsers can use their own engine rather than having to be mere wrappers around the platform's web view, they do not say that when a browser installs a web app, that that web app will also use that browser's engine. That is: if there were a port of, say, Microsoft Edge to iOS, then Edge would be able to use its own engine, which is Microsoft's port of Blink. That Edge browser can implement install prompts how it likes, because it's using its own engine. But there's no guarantee in the CMA remedies that the PWA that gets installed will then be opened up in Edge. Calling the "install this PWA as an app" API provided by the platform might add it as a PWA in the platform maker's browser -- iOS Safari in this example. This would be rubbish. It means that the installed app might not even work; how will it know your passwords or cookies, etc; this can't be what's intended. But the remedies do not explicitly state this requirement, and so it's quite possible that platform owners will therefore use this as another way to push back against PWAs to make them less of a competitor to their own app stores. I would like to be able to say that platform owners wouldn't do that, that they wouldn't deliberately make things worse in an effort at malicious compliance, but after the debacle earlier this year of Apple dropping PWA support entirely and then only backing off on that after public outcry, we can't assume that there will be a good-faith attempt to improve PWA support (either by implementation, or by engaging wholeheartedly with the public web standards process), and so the remedies need to spell this out in more detail. This should be easy enough if I'm right and the CMA's intent is that this should be done, and your voice adding to that is likely to encourage them.

A tweet from Ada Rose Cannon reading 'Seeing a Web App I worked on used by *Apple* to justify that the Web is a viable platform on iOS is bullshit. The Web can be an ideal place to build apps but Apple is consistently dragging their heals on implementing the Web APIs that would allow them to compete with native apps', quoting a tweet by Peter Gasston with text 'This image from Apple‘s opening presentation in the Epic Games court case is very misleading. “Web Apps and Native Apps can look the same, therefore no-one needs to publish on the App Store”.' and an Apple-created image of the FT web app and FT platform-specific app looking similar

The worry about malicious compliance hampering web apps being a proper competitor to platform-specific apps also extends to another thing missing in the remedies: that access to hardware and software platform APIs for other browsers isn't required to be "which APIs there are", but "which APIs the existing browser elects to use". That is: if you write a native platform-specific app, it can talk to various hardware-ish things; bluetooth, USB, NFC, whichever. Therefore, you ought to be able, if you're a browser, to also have those APIs, in enough detail that you can then offer (mediated, secure) access to those services to your users, the PWAs and websites that people run with you. But the remedies do not ensure that this is the case; they ensure that there is a "requirement for Apple to grant equivalent access to APIs used by WebKit and Safari to browsers using alternative browser engines." What this means is that if Safari doesn't use a thing, no other browser can use it either. So it's not possible to make the browser you build better than Safari at this; Apple get to set the ceiling of what the web can do on the platform, and the ceiling is defined as "whatever their browser can do". That's... not very competitive, is it? No. If you agree with that, then you should also write to the CMA about it. They would like to hear about actual examples where this sort of thing harms UK businesses, of course, and if that's you definitely write in, but it's also worth giving your opinion if you are a UK developer who works in this area, delivering things via the web to your users (or if you want to do that but are currently prevented).

OK. Discussion over: go and write to the CMA with your thoughts on their remedies. Even if you think what they've proposed is perfect, you should still send them a message saying that; one thing that they and all government agencies tend to bemoan is that they only hear from people with lots of skin in the game, and generally only from people who are opposed, not people who are supportive. That means that they get a skewed view of what the web developer community actually think, and this is a chance for us to unskew that a bit, together. You can request that the CMA keep your name, business, or submission confidential, so you don't have to worry about giving away secrets or your participation, and you need only comment on stuff which is relevant to you; you do not need a comprehensive position paper on the whole thing! The address to email is browsersandcloud@cma.gov.uk, the list of remedies is Working Paper 7, and the deadline is Thursday 29th August.

State of the Browser 2024

If you want to hear more about this, then I am speaking about OWA, how it happened, what we've done, and how you can be involved at State of the Browser 2024 on Saturday 14th September (just under a month from now!) in the Barbican in London. I'm told that there are less than 30 in-person tickets left, although there are online streaming tickets available too, so get in quick if you want to hear me and a whole bunch of other speakers!

(Late breaking update: Bruce has also written about this and you should read that too!)

on August 21, 2024 08:07 AM

August 19, 2024

I nerd-sniped myself.

Internet: I am disappointed that you have not provided me with one of those meme-maker websites for a Family Fortunes board. Now I will have to make one.

See, I put word to thought and that was my mistake. So I spent, like, half my Saturday doing this instead of what I should be doing.

A Family Fortunes board from the 1980s reading 'Things I should have done instead of this' with options 'SoTB talk', 'Ironing', 'Lagers', 'D&D campaign', and 'touching grass' in air-quotes (with a score of 0)

I got to use a couple of interesting techniques with it that I hadn't done before, now that nice new web APIs exist for them: sharing an image, for example, which involves creating a Blob (thank you Thomas Steiner), doing the equivalent of the venerable "yellow fade technique" with the web animations API, and yoinking all the information from a <form> conveniently by reading new FormData(form) in form.oninput. This is all cool stuff; the web's such a nice environment to program for.

A number of people have pointed out that this is not emulating the particular style of Family Fortunes board that they prefer, to which our survey says: feel free to make your own. This is the one I grew up with; those of you of more venerable years who prefer Max Bygraves or Bob Monkhouse to Les Dennis will simply have to struggle on under the burden. (Let us not speak of the appalling colourful modern video wall, which is just literally no fun at all and looks like a pub quiz machine.) Also, it turns out that Quite A Lot Of People have Opinions on what the exact shape of the X's should be, and no two of these opinions are the same. All good clean fun, yes.

Anyway, I can't think that this is ever likely to be all that important, but if you ever need a way to mock up a Family Fortunes1 screen from the 1980s with your own custom answers and scores, we asked 100 people and they all said that kryogenix.org/code/family-fortunes is right there waiting patiently for you. Enjoy.

  1. I think Americans call it Family Feud?
on August 19, 2024 03:22 PM

August 14, 2024

Netplan v1.1 released

Lukas Märdian

I’m happy to announce that Netplan version 1.1 is now available on GitHub and is soon to be deployed into a Debian and/or Ubuntu installation near you! Six months and 120 commits after the previous version (including one patch release v1.0.1), this release is brought to you by 17 free software contributors from around the globe. 🚀

Kudos to everybody involved! ❤

Highlights

  • Custom systemd-networkd-wait-online logic override to wait for link-local and routable interfaces (#456, #482)
  • Modification of the embedded-switch-mode setting without virtual-function (VF) definitions on SR-IOV devices (#454)
  • Parser flag to ignore individual, broken configurations, instead of not generating any backend configuration (#412)
  • Fixes for @ProtonVPN (#495) and @microsoft Azure Linux (#445), contributed by those companies

Full Changelog: 1.0…1.1

on August 14, 2024 01:41 PM

August 12, 2024

Announcing Incus 6.4

Stéphane Graber

This release builds upon the recently added OCI support from Incus 6.3, making it even easier to run application containers. It also adds a number of useful new features for clustered and larger environments, with more control over the virtual CPU used when live-migrating VMs and finer-grained resource constraints within projects.

The highlights for this release are:

  • Cluster group configuration
  • Per-cluster group CPU baseline
  • Attaching sub-directories of custom storage volumes
  • Per storage pool project limits
  • Isolated OVN networks (no uplink)
  • Per-instance LXCFS
  • Environment files at create/launch time

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

on August 12, 2024 05:39 PM

Another loss last week of a friend. I am staying strong and working through it. A big thank you to all of you who have donated to my car fund; I still have a long way to go. I am not above getting a cheap old car, but we live in sand dunes so it must be a cheap old car with 4×4 to get to my property. A vehicle is necessary as we are 50 miles away from staples such as food and water. We also have 2 funerals to attend. Please consider a donation if my work is useful to you. https://gofund.me/1e784e74 All of my work is currently unpaid work, as I am between contracts. Thank you for your consideration. Now onto the good stuff: last week’s work. It was another very busy week with Qt6 packaging in Debian/Kubuntu and KDE snaps. I also have many SRUs for the Kubuntu Noble .1 release that need verification.

Kubuntu:

Debian:

Starting the salvage process for kdsoap which is blocking a long line of packages, notably kio-extras.

  • qtmpv – in NEW
  • arianna – in NEW
  • xwaylandvideobridge – NEW
  • futuresql – NEW
  • kpat WIP – failing tests
  • kdegraphics-thumbnailers (WIP)
  • khelpcenter – experimental
  • kde-inotify-survey – experimental
  • ffmpegthumbs – experimental
  • kdialog – experimental
  • kwalletmanager – experimental
  • libkdegames – pushed some fixes – experimental
  • Tokodon – Done, but needs qtmpv to pass NEW
  • Gwenview – WIP needs – kio-extras (blocked)

KDE Snaps:

Please note: help test the --edge snaps so I can promote them to stable.

WIP Snaps or MR’s made

  • Kirigami-gallery ( building )
  • Kiriki (building)
  • Kiten (building)
  • kjournald (Building)
  • Kdevelop (WIP)
  • Kdenlive (building)
  • KHangman (WIP)
  • Kubrick (WIP)
  • Palapeli (Manual review in store dbus)
  • Kanagram (WIP)
  • Labplot (WIP)
  • Kjumpingcube (MR)
  • Klettres (MR)
  • Kajongg --edge (Broken, problem with pyqt)
  • Dragon --edge (Broken, dbus fails)
  • Ghostwriter --edge (Broken, need to work out Qt WebEngine’s obscure way of handling hunspell dictionaries.)
  • Kasts --edge (Broken, portal failure, testing some plugs)
  • Kbackup --edge (Needs auto-connect udisks2, added home plug)
  • Kdebugsettings --edge (Added missing personal-files plug, will need approval)
  • KDiamond --edge (sound issues)
  • Angelfish --edge https://snapcraft.io/angelfish (Crashes on first run, but runs fine after that... looking into it)
  • Qrca --edge (needs snap connect qrca:camera camera until auto-connect approved, will remain in --edge until official release)

Thanks for stopping by.

on August 12, 2024 04:33 PM

August 11, 2024

There are a lot of privileges most of us probably take for granted. Not everyone is gifted with the ability to do basic things like talk, walk, see, and hear. Those of us (like myself) who can do all of these things don’t really think about them much. Those of us who can’t, have to think about it a lot because our world is largely not designed for them. Modern-day things are designed for a fully-functional human being, and then have stuff tacked onto them to make them easier to use. Not easy, just “not quite totally impossible.”

Issues of accessibility plague much of modern-day society, but I want to focus on one pain-point in particular. Visually-impaired accessibility.

Now I’m not blind, so I am not qualified to say how exactly a blind person would use their computer. But I have briefly tried using a computer with my monitor turned off to test visually-impaired accessibility, so I know a bit about how it works. The basic idea seems to be that you launch a screen reader using a keyboard shortcut. That screen reader proceeds to try to describe various GUI elements to you at a rapid speed, from which you have to divine the right combination of Tab, Space, Enter, and arrow keys to get to the various parts of the application you want to use. Using these arcane sequences of keys, you can make the program do… something. Hopefully it’s what you wanted, but based on reports I’ve seen from blind users, oftentimes the computer reacts to your commands in highly unexpected ways.

The first thing here that jumps out to most people is probably the fact that using a computer blind is like trying to use magic. That’s a problem, but that’s not what I’m focusing on. I'm focusing on two words in particular.

Screen. Reader.

Wha…?

I want you to stop and take a moment to imagine the following scenario. You want to go to a concert, but can’t, so you sent your friend to the concert in your place and ask them to record it for you. They do so, and come back with a video of the whole thing. They’ve transcribed every word of each song, and made music sheets detailing every note and chord the concert played. There’s even some cool colors and visualizer stuff that conveys the feeling and tempo of each song. They then proceed to lay all this glorious work out in front of you, claiming it conveys everything about the concert perfectly. Confronted with this onslaught of visual data, what’s the first thing you’re going to ask?

“Didn’t you record the audio?”

Of course that’s the first thing you’re going to ask, because it’s a concert for crying out loud, 90% of the point of it is the audio. I can listen to a simple, relatively low-quality recording of a concert’s audio and be satisfied. I get to hear the emotion, the artistry, everything. I don’t need a single pixel of images to let me experience it in a satisfactory way. On the other hand, I don’t care how detailed your video analysis of the audio is - if it doesn’t include the audio, I’m going to be upset. Potentially very upset.

Now let’s go back to the topic at hand, visually-impaired accessibility. What does a screen reader do? It takes a user interface, one designed for seeing users, and tries to describe it as best it can to the user via audio. You then have to use keyboard shortcuts to navigate the UI, which the screen reader continues to describe bits of as you move around. For someone who’s looking at the app, this is all fine and dandy. For someone who can kinda see, maybe it’s sufficient. But for someone who’s blind or severely visually impaired, this is difficult to use if you’re lucky. Chances are you’re not going to be lucky and the app you’re working with might as well not exist.

Why is this so hard? Why have decades of computer development not led to breakthroughs in accessibility for blind people? Because we’re doing the whole thing wrong! We’re taking a user interface designed specifically and explicitly for seeing users, and trying to convey it over audio! It’s as ridiculous as trying to convey a concert over video. A user who’s listening to their computer shouldn’t need to know how an app is visually laid out in order to figure out whether they need to press up arrow, right arrow, or Tab to get to their desired UI element. They shouldn’t have to think in terms of buttons and check boxes. These are inherently visual user interface elements. Forcing a blind person to use these is tantamount to torture.

On top of all of this, half the time screen readers don’t even work! People who design software are usually able to see. You just don’t think about how to make software usable for blind people when you can see. It’s not something that easily crosses your mind. But try turning your screen off and navigating your system with a screen reader, and suddenly you’ll understand what’s lacking about the accessibility features. I tried doing this once, and I went and turned the screen back on after about five minutes of futile keyboard bashing. I can’t imagine the frustration I would have experienced if I had literally no other option than to work with a screen reader. Add on top of that the possibility that the user of your app has never even seen a GUI element in their lives before because they can’t see at all, and now you have essentially a language barrier in the way too.

So what’s the solution to this? Better screen reader compatibility might be helpful, but I don’t think that’s ultimately the correct solution here. I think we need to collectively recognize that blind people shouldn’t have to work with graphical user interfaces, and design something totally new.

One of the advantages of Linux is that it’s just a bunch of components that work together to provide a coherent and usable set of features for working on your computer. You aren’t locked into using a UI that you don’t like - just use or create some other UI. All current desktop environments are based around a screen that the user can see, but there are no rules that say it has to be that way. Imagine if, instead, your computer just talked to you, telling you what app you were using, what keys to press to accomplish certain actions, etc. In response, you talked back to it using the keyboard or voice recognition. There would be no buttons, check boxes, menus, or other graphical elements - instead you’d have actions, options, feature lists, and other conceptual elements that can be conveyed over audio. Switching between UI elements with the keyboard would be intuitive, predictable, and simple, since the app would be designed from step one to work that way. Such an audio-centric user interface would be easy for a blind or vision-impaired person to use. If well-designed, it could even be pleasant. A seeing person might have a learning curve to get past, but it would be usable enough for them too. Taking things a step further, support for Braille displays would be very handy, though as I have never used one I don’t know how hard that would be to implement.

A lot of work would be needed in order to get to the point of having a full desktop environment that worked this way. We’d need toolkits for creating apps with intuitive, uniform user interface controls. Many sounds would be needed to create a rich sound scheme for conveying events and application status to the user. Just like how graphical apps need a display server, we’d also need an audio user interface server that would tie all the apps together, letting users multitask without their apps talking over each other or otherwise interfering. We’d need plenty of apps that would actually be designed to work in an audio-only environment. A text editor, terminal, and web browser are the first things that spring to mind, but email, chat, and file management applications would also be very important. There might even be an actually good use for AI here, in the form of an image “viewer” that could describe an image to the user. And of course, we’d need an actually good text-to-speech engine (Piper seems particularly promising here).
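As a small taste of what the speech backend could look like today, Piper can already be driven from a terminal, roughly like this (the model name and flag spellings follow Piper’s upstream README at the time of writing, so treat them as an assumption and check the project docs before relying on them):

echo "Welcome to an audio-first desktop." | piper --model en_US-lessac-medium.onnx --output_file welcome.wav
aplay welcome.wav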

This is a pretty rough overview of how I imagine we could make the world better for visually impaired computer users. Much remains to be designed and thought about, but I think this would work well. Who knows, maybe Linux could end up being easier for blind users to use than Windows is!

Interested in helping make this happen? Head over to the Aurora Aural User Interface project on GitHub, and offer ideas!

on August 11, 2024 08:41 AM

August 10, 2024

For Additional Confusion

Benjamin Mako Hill

The Wikipedia article on antipopes can be pretty confusing! If you’d like to be even more confused, it can help with that!

on August 10, 2024 03:56 PM

August 08, 2024

The Xubuntu development update for August 2024 covers Xubuntu 24.10, "Oracular Oriole," featuring Xfce 4.19, and many more updates.

The post Xubuntu Development Update August 2024 appeared first on Sean Davis.

on August 08, 2024 12:27 PM

August 04, 2024

The Freedesktop.org Specifications directory contains a list of common specifications that have accumulated over the decades and define how common desktop environment functionality works. The specifications are designed to increase interoperability between desktops. Common specifications make the life of both desktop-environment developers and especially application developers (who will almost always want to maximize the number of Linux DEs their app can run on and behave as expected on, to increase their app’s target audience) a lot easier.

Unfortunately, building the HTML specifications and maintaining the directory of available specs has become a bit of a difficult chore, as the pipeline for building the site has become fairly old and unmaintained (parts of it still depended on Python 2). In order to make my life of maintaining this part of Freedesktop easier, I aimed to carefully modernize the website. I do have bigger plans to maybe eventually restructure the site to make it easier to navigate and not just a plain alphabetical list of specifications, and to integrate it with the Wiki, but in the interest of backwards compatibility and to get anything done in time (rather than taking on a mega-project that can’t be finished), I decided to just do the minimum modernization first to get a viable website, and do the rest later.

So, long story short: Most Freedesktop specs are written in DocBook XML. Some were plain HTML documents, some were DocBook SGML, a few were plaintext files. To make things easier to maintain, almost every specification is written in DocBook now. This also simplifies the review process, and we may be able to switch to something else like AsciiDoc later if we want to. Of course, one could have switched to something other than DocBook, but that would have been a much bigger chore with a lot more broken links, and I did not want this to become an even bigger project than it already was, so I kept its scope somewhat narrow.

DocBook is a markup language for documentation which has been around for a very long time, and therefore has older tooling around it. But fortunately our friends at openSUSE created DAPS (DocBook Authoring and Publishing Suite) as a modern way to render DocBook documents to HTML and other file formats. DAPS is now used to generate all Freedesktop specifications on our website. The website index and the specification revisions are also now defined in structured TOML files, to make them easier to read and to extend. A bunch of specifications that had been missing from the original website are also added to the index and rendered on the website now.
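If you want to try the same kind of rendering locally, a generic DAPS invocation looks roughly like this (DC-example is a placeholder for whatever doc-config file describes your DocBook document; this is a sketch, not the exact CI setup used for the website):

daps -d DC-example html    # render the DocBook document described by DC-example to HTML
daps -d DC-example pdf     # or to PDF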

Originally, I wanted to put the website live in a temporary location and solicit feedback, especially since some links have changed and not everything may have redirects. However, due to how GitLab Pages worked (and due to me not knowing GitLab CI well enough…) the changes went live before their MR was actually merged. Rather than reverting the change, I decided to keep it (as the old website did not build properly anymore) and to see if anything breaks. So far, no dead links or bad side effects have been observed, but:

If you notice any broken link to specifications.fd.o or anything else weird, please file a bug so that we can fix it!

Thank you, and I hope you enjoy reading the specifications with their better rendering and more coherent look! 😃

on August 04, 2024 06:54 PM

Thankfully no tragedies to report this week! I thank each and every one of you who has donated to my car fund. I still have a ways to go and could use some more help so that we can go to the funeral. https://gofund.me/033eb25d I am between contracts and work packages, so all of my work is currently done for free. Thanks for your consideration.

Another very busy week getting qt6 updates in Debian, Kubuntu, and KDE snaps.

Kubuntu:

  • Merkuro and Neochat SRUs have made progress.
  • See Debian for the qt6 Plasma / applications work.

Debian:

  • qtmpv – in NEW
  • arianna – in NEW
  • kamera – experimental
  • libkdegames – experimental
  • kdenetwork-filesharing – experimental
  • xwaylandvideobridge – NEW
  • futuresql – NEW
  • kpat WIP
  • Tokodon – Done, but needs qtmpv to pass NEW
  • Gwenview – WIP needs kamera, kio-extras
  • kio-extras – Blocked on kdsoap, whose maintainer is not responding to bug reports or emails. We will likely fork it in Kubuntu as our freeze quickly approaches.

KDE Snaps:

Updated Qt to 6.7.2, which required a rebuild of all our snaps. I also found an issue with mismatched ffmpeg libraries; we have to bundle them for now until the versioning issues are resolved.

Made new theme snaps for KDE Breeze: gtk-theme-breeze and icon-theme-breeze. If you use the Plasma Breeze theme, please install these and run:

for PLUG in $(snap connections | grep gtk-common-themes:icon-themes | awk '{print $2}'); do sudo snap connect ${PLUG} icon-theme-breeze:icon-themes; done
for PLUG in $(snap connections | grep gtk-common-themes:gtk-3-themes | awk '{print $2}'); do sudo snap connect ${PLUG} gtk-theme-breeze:gtk-3-themes; done
for PLUG in $(snap connections | grep gtk-common-themes:gtk-2-themes | awk '{print $2}'); do sudo snap connect ${PLUG} gtk-theme-breeze:gtk-2-themes; done

This should resolve most theming issues. We are still waiting for kdeglobals to be merged in snapd to fix colorscheme issues; that is set for the next release. I am still working on qt6 themes and working out how to implement them in snaps, as they are more complex than GTK themes, with shared libraries and different file structures.

Please note: please help test the --edge snaps so I can promote them to stable.

WIP Snaps or MR’s made

  • Juk (WIP)
  • Kajongg (WIP problem with pyqt)
  • Kalgebra (in store review)
  • Kdevelop (WIP)
  • Kdenlive (MR)
  • KHangman (WIP)
  • Ruqola (WIP)
  • Picmi (building)
  • Kubrick (WIP)
  • lskat (building)
  • Palapeli (MR)
  • Kanagram (WIP)
  • Labplot (WIP)
  • Ktuberling (building)
  • Ksudoku (building)
  • Ksquares (MR)
on August 04, 2024 12:35 PM

August 03, 2024

Gogh

Dougie Richardson

Check out these awesome terminal themes at http://gogh-co.github.io/Gogh/

on August 03, 2024 11:57 AM

August 02, 2024

My Debian contributions this month were all sponsored by Freexian.

You can also support my work directly via Liberapay.

OpenSSH

At the start of the month, I uploaded a quick fix (via Salvatore Bonaccorso) for a regression from CVE-2006-5051, found by Qualys; this was because I expected it to take me a bit longer to merge OpenSSH 9.8, which had the full fix.

This turned out to be a good guess: it took me until the last day of the month to get the merge done. OpenSSH 9.8 included some substantial changes to split the server into a listener binary and a per-session binary, which required some corresponding changes in the GSS-API key exchange patch. At this point I was very grateful for the GSS-API integration test contributed by Andreas Hasenack a little while ago, because otherwise I might very easily not have noticed my mistake: this patch adds some entries to the key exchange algorithm proposal, and on the server side I’d accidentally moved that to after the point where the proposal is sent to the client, which of course meant it didn’t work at all. Even with a failing test, it took me quite a while to spot the problem, involving a lot of staring at strace output and comparing debug logs between versions.

There are still some regressions to sort out, including a problem with socket activation, and problems in libssh2 and Twisted due to DSA now being disabled at compile-time.

Speaking of DSA, I wrote a release note for this change, which is now merged.

GCC 14 regressions

I fixed a number of build failures with GCC 14, mostly in my older packages: grub (legacy), imaptool, kali, knews, and vigor.

autopkgtest

I contributed a change to allow maintaining Incus container and VM images in parallel. I use both of these regularly (containers are faster, but some tests need full machine isolation), and the build tools previously didn’t handle that very well.

I now have a script that just does this regularly to keep my images up to date (although for now I’m running this with PATH pointing to autopkgtest from git, since my change hasn’t been released yet):

RELEASE=sid autopkgtest-build-incus images:debian/trixie
RELEASE=sid autopkgtest-build-incus --vm images:debian/trixie

Python team

I fixed dnsdiag’s uninstallability in unstable, and contributed the fix upstream.

I reverted python-tenacity to an earlier version due to regressions in a number of OpenStack packages, including octavia and ironic. (This seems to be due to #486 upstream.)

I fixed a build failure in python3-simpletal due to Python 3.12 removing the old imp module.

I added non-superficial autopkgtests to a number of packages, including httmock, py-macaroon-bakery, python-libnacl, six, and storm.

I switched a number of packages to build using PEP 517 rather than calling setup.py directly, including alembic, constantly, hyperlink, isort, khard, python-cpuinfo, and python3-onelogin-saml2. (Much of this was by working through the missing-prerequisite-for-pyproject-backend Lintian tag, but there’s still lots to do.)
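For anyone curious what such a switch typically involves, the common pattern is to build-depend on pybuild’s pyproject plugin and let dh/pybuild drive the PEP 517 backend; a rough sketch (package details vary, so treat this as illustrative rather than a recipe for any particular package):

# debian/control (excerpt)
Build-Depends: debhelper-compat (= 13),
               dh-sequence-python3,
               pybuild-plugin-pyproject,
               python3-all,

# debian/rules
#!/usr/bin/make -f
%:
	dh $@ --buildsystem=pybuild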

I upgraded frozenlist, ipykernel, isort, langtable, python-exceptiongroup, python-launchpadlib, python-typeguard, pyupgrade, sqlparse, storm, and uncertainties to new upstream versions. In the process, I added myself to Uploaders for isort, since the previous primary uploader has retired.

Other odds and ends

I applied a suggestion by Chris Hofstaedtler to create /etc/subuid and /etc/subgid in base-passwd, since the login package is no longer essential.

I fixed a wireless-tools regression due to iproute2 dropping its (/usr)/sbin/ip compatibility symlink.

I applied a suggestion by Petter Reinholdtsen to add AppStream metainfo to pcmciautils.

on August 02, 2024 12:27 PM

July 30, 2024

With the work that has been done in the debian-installer/netcfg merge-proposal !9 it is possible to install a standard Debian system, using the normal Debian-Installer (d-i) mini.iso images, that will come pre-installed with Netplan and all network configuration structured in /etc/netplan/.

In this write-up, I’d like to run you through a list of commands for experiencing the Netplan enabled installation process first-hand. Let’s start with preparing a working directory and installing the software dependencies for our virtualized Debian system:

$ mkdir d-i_tmp && cd d-i_tmp
$ apt install ovmf qemu-utils qemu-system-x86

Now let’s download the official (daily) mini.iso, linux kernel image and initrd.gz containing the Netplan enablement changes:

$ wget https://d-i.debian.org/daily-images/amd64/daily/netboot/gtk/mini.iso
$ wget https://d-i.debian.org/daily-images/amd64/daily/netboot/gtk/debian-installer/amd64/initrd.gz
$ wget https://d-i.debian.org/daily-images/amd64/daily/netboot/gtk/debian-installer/amd64/linux

Next we’ll prepare a VM by copying the EFI firmware files, preparing a persistent EFIVARS file (so it can boot from FS0:\EFI\debian\grubx64.efi), and creating a virtual disk for our machine:

$ cp /usr/share/OVMF/OVMF_CODE_4M.fd .
$ cp /usr/share/OVMF/OVMF_VARS_4M.fd .
$ qemu-img create -f qcow2 ./data.qcow2 20G

Finally, let’s launch the debian-installer using a preseed.cfg file, that will automatically install Netplan (netplan-generator) for us in the target system. A minimal preseed file could look like this:

# Install minimal Netplan generator binary
d-i preseed/late_command string in-target apt-get -y install netplan-generator

For this demo, we’re installing the full netplan.io package (incl. the interactive Python CLI), as well as the netplan-generator package and systemd-resolved, to show the full Netplan experience. You can choose the preseed file from a set of different variants to test the different configurations.
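For the full variant used in this demo, the late_command simply pulls in the additional packages described above; roughly (a sketch – the exact preseed files linked from this post may differ in detail):

# Install the full Netplan experience (CLI, generator and systemd-resolved)
d-i preseed/late_command string in-target apt-get -y install netplan.io netplan-generator systemd-resolved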

We’re using the linux kernel and initrd.gz here to be able to pass the preseed URL as a parameter to the kernel’s cmdline directly. Launching this VM should bring up the official debian-installer in its netboot/gtk form:

$ export U=https://people.ubuntu.com/~slyon/d-i/netplan-preseed+full.cfg
$ qemu-system-x86_64 \
	-M q35 -enable-kvm -cpu host -smp 4 -m 2G \
	-drive if=pflash,format=raw,unit=0,file=OVMF_CODE_4M.fd,readonly=on \
	-drive if=pflash,format=raw,unit=1,file=OVMF_VARS_4M.fd,readonly=off \
	-device qemu-xhci -device usb-kbd -device usb-mouse \
	-vga none -device virtio-gpu-pci \
	-net nic,model=virtio -net user \
	-kernel ./linux -initrd ./initrd.gz -append "url=$U" \
	-hda ./data.qcow2 -cdrom ./mini.iso;

Now you can click through the normal Debian-Installer process, using mostly default settings. Optionally, you could play around with the networking settings, to see how those get translated to /etc/netplan/ in the target system.

After you confirmed your partitioning changes, the base system gets installed. I suggest not to select any additional components, like desktop environments, to speed up the process.

During the final step of the installation (finish-install.d/55netcfg-copy-config) d-i will detect that Netplan was installed in the target system (due to the preseed file provided) and opt to write its network configuration to /etc/netplan/ instead of /etc/network/interfaces or /etc/NetworkManager/system-connections/.

Done! After the installation finished, you can reboot into your virgin Debian Sid/Trixie system.

To do that, quit the current Qemu process by pressing Ctrl+C, and make sure to copy over the EFIVARS.fd file that was modified by GRUB during the installation, so Qemu can find the new system. Then reboot into the new system, not using the mini.iso image any more:

$ cp ./OVMF_VARS_4M.fd ./EFIVARS.fd
$ qemu-system-x86_64 \
        -M q35 -enable-kvm -cpu host -smp 4 -m 2G \
        -drive if=pflash,format=raw,unit=0,file=OVMF_CODE_4M.fd,readonly=on \
        -drive if=pflash,format=raw,unit=1,file=EFIVARS.fd,readonly=off \
        -device qemu-xhci -device usb-kbd -device usb-mouse \
        -vga none -device virtio-gpu-pci \
        -net nic,model=virtio -net user \
        -drive file=./data.qcow2,if=none,format=qcow2,id=disk0 \
        -device virtio-blk-pci,drive=disk0,bootindex=1 \
        -serial mon:stdio

Finally, you can play around with your Netplan enabled Debian system! As you will find, /etc/network/interfaces exists but is empty; it could still be used (optionally/additionally). Netplan was configured in /etc/netplan/ according to the settings given during the d-i installation process.
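On a DHCP-based installation, the generated configuration under /etc/netplan/ will contain something along these lines (file name and interface name will differ on your system; this is only an illustrative sketch):

network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: true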

In our case, we also installed the Netplan CLI, so we can play around with some of its features, like netplan status:

Thank you for following along the Netplan enabled Debian installation process and happy hacking! If you want to learn more, find us at GitHub:netplan.

on July 30, 2024 04:24 AM

July 29, 2024

Updates for July 2024

Ubuntu Studio

The Road to 24.10

We have quite a few exciting changes going on for Ubuntu Studio 24.10, including one that some might find controversial. However, this is not without a lot of thought and foresight, and even research, testing, and coordination.

With that, let’s just dive right into the controversial change.

Switching to Ubuntu’s Generic Kernel

This is the one that’s going to come as a shock. However, with the release of 24.04 LTS, the generic kernel is now fully capable of preemptible, low-latency workloads. Because of this, the lowlatency kernel in Ubuntu will eventually be deprecated.

Rather than take a reactive approach to this, we at Ubuntu Studio decided to be proactive and switch to the generic kernel starting with 24.10. To facilitate this, we will be enabling not only threadirqs like we had done before, but also preempt=full by default.

If you read the first link above, you’ll notice that nohz_full=all was also recommended. However, we found that it caused performance degradation in video-heavy workloads, so we decided to leave it off by default and instead give users a GUI option in Ubuntu Studio Audio Configuration to enable and disable these three kernel parameters as they need.

This has been tested on 24.04 LTS with results equivalent to or better than the lowlatency kernel. The Ubuntu Kernel Team has also mentioned even more improvements coming to the kernel in 24.10, including the potential ability to change these settings and more on the fly, without a reboot.
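For the curious, the same parameters can be set manually on any system running the generic kernel by editing the kernel command line; a rough sketch (Ubuntu Studio’s Audio Configuration tool will handle this for you, so treat the following as illustrative only):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash threadirqs preempt=full"

# regenerate the GRUB configuration and reboot for the change to take effect
sudo update-grub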

There have also been numerous improvements for gaming with these settings, for those of you that like to game. You can explore more of that on the Ubuntu Discourse.

Plasma 6

We are in cooperation with the Kubuntu team doing what we can to help with the transition to KDE Plasma Desktop 6. The work is going along slowly but surely, and we hope to have more information on this in the future. For right now, most testing on new stuff is being done on Ubuntu Studio 24.04 LTS for this reason since desktop environment breakages can be catastrophic for application testing. Hence, any screenshots will be on Plasma 5.

New Theming for Ubuntu Studio

We’ve been using the Materia theme for the past five years, since 19.04, with a brief break for 22.04 LTS. Unfortunately, that is coming to an end as the Materia theme is no longer maintained. Its successor has been found in Orchis, which was forked from Materia. Here’s a general screenshot our Project Leader, Erich Eickmeyer, made from his personal desktop using Ubuntu Studio 24.04 LTS and the Orchis theme:

Message from Erich: “Yes, that’s Microsoft Edge and yes, my system needs a reboot. Don’t @ me. XD”

Contributions Needed, and Help a Family in Need!

Ubuntu Studio is a community-run project, and donations are always welcome. If you find Ubuntu Studio useful and want to support its ongoing development, please contribute!

Erich’s wife, Edubuntu Project Leader Amy Eickmeyer, lost her full-time job two weeks ago and the family is in desperate need of help in this time of hardship. If you could find it in your heart to donate extra to Ubuntu Studio, those funds will help the Eickmeyer family at this time.

Contribution options are on the sidebar to the right or at ubuntustudio.org/contribute.

on July 29, 2024 08:30 PM

July 14, 2024

uCareSystem has had the ability to detect if a system reboot is needed after applying maintenance tasks for some time now. With the new release, it will also show you the list of packages that requested the reboot. Additionally, the new release has squashed some annoying bugs. Restart? Why though? uCareSystem has had […]
on July 14, 2024 05:03 PM

July 04, 2024

 

Critical OpenSSH Vulnerability (CVE-2024-6387): Please Update Your Linux

A critical security flaw (CVE-2024-6387) has been identified in OpenSSH, a program widely used for secure remote connections. This vulnerability could allow attackers to completely compromise affected systems (remote code execution).

Who is Affected?

Only specific versions of OpenSSH (8.5p1 to 9.7p1) running on glibc-based Linux systems are vulnerable. Newer versions are not affected.

What to Do?

  1. Update OpenSSH: Check your version by running ssh -V in your terminal. If you're using a vulnerable version (8.5p1 to 9.7p1), update immediately.

  2. Temporary Workaround (Use with Caution): Disabling the login grace timeout (setting LoginGraceTime=0 in sshd_config) can mitigate the risk, but be aware it increases susceptibility to denial-of-service attacks (see the sketch after this list).

  3. Recommended Security Enhancement: Install fail2ban to prevent brute-force attacks. This tool automatically bans IPs with too many failed login attempts.
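Regarding the temporary workaround in item 2, it amounts to a one-line change plus a service restart; a rough sketch (remember the denial-of-service trade-off, and revert it once you have upgraded):

# /etc/ssh/sshd_config (temporary mitigation only)
LoginGraceTime 0

# apply the change
sudo systemctl restart ssh     # Ubuntu
sudo systemctl restart sshd    # AlmaLinux / Rocky Linux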

Optional: IP Whitelisting for Increased Security

Once you have fail2ban installed, consider allowing only specific IP addresses to access your server via SSH. This can be achieved using the tools below; a brief sketch follows the list:

  • ufw for Ubuntu

  • firewalld for AlmaLinux or Rocky Linux
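Here is a minimal sketch of what that could look like (203.0.113.10 is a placeholder; substitute your own trusted address):

# Ubuntu (ufw)
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp

# AlmaLinux / Rocky Linux (firewalld)
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.10" service name="ssh" accept'
sudo firewall-cmd --reload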

Additional Resources

About Fail2ban

Fail2ban monitors log files like /var/log/auth.log and bans IPs with excessive failed login attempts. It updates firewall rules to block connections from these IPs for a set duration. Fail2ban is pre-configured to work with common log files and can be easily customized for other logs and errors.

Installation Instructions:

  • Ubuntu: sudo apt install fail2ban

  • AlmaLinux/Rocky Linux: sudo dnf install fail2ban
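Once installed, fail2ban’s SSH jail can be enabled and tuned through a local override file; a minimal sketch (the values are examples, adjust them to your needs):

# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h

# reload fail2ban and check the jail
sudo systemctl restart fail2ban
sudo fail2ban-client status sshd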


About DevSec Hardening Framework

The DevSec Hardening Framework is a set of tools and resources that helps automate the process of securing your server infrastructure. It addresses the challenges of manually hardening servers, which can be complex, error-prone, and time-consuming, especially when managing a large number of servers. The framework integrates with popular infrastructure automation tools like Ansible, Chef, and Puppet. It provides pre-configured modules that automatically apply secure settings to your operating systems and services such as OpenSSH, Apache and MySQL. This eliminates the need for manual configuration and reduces the risk of errors.


Prepared by LinuxMalaysia with the help of Google Gemini


5 July 2024

 

In Google Doc Format 

 

https://docs.google.com/document/d/e/2PACX-1vTSU27PLnDXWKjRJfIcjwh9B0jlSN-tnaO4_eZ_0V5C2oYOPLLblnj3jQOzCKqCwbnqGmpTIE10ZiQo/pub 



on July 04, 2024 09:42 PM

June 18, 2024

 

Download And Use latest Version Of Nginx Stable

To ensure you receive the latest security updates and bug fixes for Nginx, configure your system's repository specifically for it. Detailed instructions on how to achieve this can be found on the Nginx website. Setting up the repository allows your system to automatically download and install future Nginx updates, keeping your web server running optimally and securely.

Visit these websites for information on how to configure your repository for Nginx.

https://nginx.org/en/linux_packages.html

https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/ 

Installing Nginx on different Linux distributions

Example from https://docs.bunkerweb.io/latest/integrations/#linux 

Ubuntu

sudo apt install -y curl gnupg2 ca-certificates lsb-release debian-archive-keyring && \
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
| sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null && \
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/debian `lsb_release -cs` nginx" \
| sudo tee /etc/apt/sources.list.d/nginx.list

# Latest Stable (pick either latest stable or by version)

sudo apt update && \
sudo apt install -y nginx

# By version (pick one only, latest stable or by version)

sudo apt update && \
sudo apt install -y nginx=1.24.0-1~$(lsb_release -cs)

AlmaLinux / Rocky Linux (Redhat)

Create the following file at /etc/yum.repos.d/nginx.repo

[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

# Latest Stable (pick either latest stable or by version)

sudo dnf install nginx

# By version (pick one only, latest stable or by version)

sudo dnf install nginx-1.24.0
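After installation, it is worth confirming that the package really came from the nginx.org repository and that the service is running; for example:

# Ubuntu
nginx -v
apt policy nginx                   # the package origin should be the nginx.org repository

# AlmaLinux / Rocky Linux
nginx -v
dnf info nginx                     # the "From repo" field should show nginx-stable

sudo systemctl enable --now nginx  # start nginx and enable it at boot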

Nginx Fork (this is for reference only - year 2024)

https://thenewstack.io/freenginx-a-fork-of-nginx/ 

https://github.com/freenginx/ 

Use this Web tool to configure nginx.

https://www.digitalocean.com/community/tools/nginx

https://github.com/digitalocean/nginxconfig.io 

Example

https://www.digitalocean.com/community/tools/nginx?domains.0.server.domain=songketmail.linuxmalaysia.lan&domains.0.server.redirectSubdomains=false&domains.0.https.hstsPreload=true&domains.0.php.phpServer=%2Fvar%2Frun%2Fphp%2Fphp8.2-fpm.sock&domains.0.logging.redirectAccessLog=true&domains.0.logging.redirectErrorLog=true&domains.0.restrict.putMethod=true&domains.0.restrict.patchMethod=true&domains.0.restrict.deleteMethod=true&domains.0.restrict.connectMethod=true&domains.0.restrict.optionsMethod=true&domains.0.restrict.traceMethod=true&global.https.portReuse=true&global.https.sslProfile=modern&global.https.ocspQuad9=true&global.https.ocspVerisign=true&global.security.limitReq=true&global.security.securityTxt=true&global.logging.errorLogEnabled=true&global.logging.logNotFound=true&global.tools.modularizedStructure=false&global.tools.symlinkVhost=false 

Harisfazillah Jamel - LinuxMalaysia - 20240619
on June 18, 2024 10:18 PM

May 24, 2024

In my previous blog, I explored The New APT 3.0 solver. Since then I have been at work in the test suite making tests pass and fixing some bugs.

You see, for all intents and purposes, the new solver is a very stupid, naive DPLL SAT solver (it just so happens we don’t actually have any pure literals in there). We can control it in a bunch of ways:

  1. We can mark packages as “install” or “reject”
  2. We can order actions/clauses. When backtracking the action that came later will be the first we try to backtrack on
  3. We can order the choices of a dependency - we try them left to right.

This is about all that we really want to do. We can’t, when we reach a conflict, go and say “oh, but this conflict was introduced by that upgrade, and it seems more important, so let’s not backtrack on the upgrade request but on this dependency instead.”

This forces us to think about lowering the dependency problem into this form, such that not only do we get formally correct solutions, but also semantically correct ones. This is nice because we can apply a systematic way to approach the issue rather than introducing ad-hoc rules in the old solver which had a “which of these packages should I flip the opposite way to break the conflict” kind of thinking.

Now our test suite has a whole bunch of these semantics encoded in it, and I’m going to share some problems and ideas for how to solve them. I can’t wait to fix these and the error reporting and then turn it on in Ubuntu and later Debian (the defaults change is a post-trixie change, let’s be honest).

apt upgrade is hard

The apt upgrade command implements a safe version of dist-upgrade that essentially calculates the dist-upgrade, and then undoes anything that would cause a package to be removed, but it (unlike its apt-get counterpart) allows the solver to install new packages.
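For reference, the three upgrade commands differ exactly in how much they are allowed to change (behaviour as documented for current APT):

apt-get upgrade     # classic safe upgrade: never installs new packages, never removes any
apt upgrade         # may install new packages to satisfy dependencies, but never removes any
apt full-upgrade    # (apt-get dist-upgrade) may install and remove packages as needed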

Now, consider the following package is installed:

X Depends: A (= 1) | B

An upgrade from A=1 to A=2 is available. What should happen?

The classic solver would choose to remove X in a dist-upgrade, and then upgrade A, so its answer is quite clear: keep back the upgrade of A.

The new solver however sees two possible solutions:

  1. Install B to satisfy X Depends A (= 1) | B.
  2. Keep back the upgrade of A

Which one does it pick? This depends on the order in which it sees the upgrade action for A and the dependency, as it will backjump chronologically. So

  1. If it gets to the dependency first, it marks A=1 for install to satisfy A (= 1). Then it gets to the upgrade request, which is just A Depends A (= 2) | A (= 1) and sees it is satisfied already and is content.

  2. If it gets to the upgrade request first, it marks A=2 for install to satisfy A (= 2). Then later it gets to X Depends: A (= 1) | B, sees that A (= 1) is not satisfiable, and picks B.

We have two ways to approach this issue:

  1. We always order upgrade requests last, so they will be kept back in case of conflicting dependencies
  2. We require that, for apt upgrade a currently satisfied dependency must be satisfied by currently installed packages, hence eliminating B as a choice.

Recommends are hard too

See, if you have an X Recommends: A (= 1) and a new version of A, A (= 2), the solver currently will silently break the Recommends in some cases.

But let’s explore what the behavior of a X Recommends: A (= 1) in combination with an available upgrade of A (= 2) should be. We could say the rule should be:

  • An upgrade should keep back A instead of breaking the Recommends
  • A dist-upgrade should either keep back A or remove X (if it is obsolete)

This essentially leaves us the same choices as for the previous problem, but with an interesting twist. We can change the ordering (and we already did), but we could also introduce a new rule, “promotions”:

A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently satisfied, must continue to be satisfied, that is, it effectively is promoted to a Depends.

This neatly solves the problem for us. We will never break Recommends that are satisfied.

Likewise, we already have a Recommends demotion rule:

A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently unsatisfied, will not be further evaluated (it is treated like a Suggests is in the default configuration).

Whether we should be allowed to break Suggests with our decisions or not (the old autoremover did not, for instance) is a different decision. Should we promote currently satisfied Suggests to Depends as well? Should we follow currently satisfied Suggests so the solver sees them and doesn’t autoremove them, but treat them as optional?

tightening of versioned dependencies

Another case of versioned dependencies with alternatives that has complex behavior is something like

X Depends: A (>= 2) | B
X Recommends: A (>= 2) | B

In both cases, installing X should upgrade an A < 2 in favour of installing B. But a naive SAT solver might not. If your request to keep A installed is encoded as A (= 1) | A (= 2), then it first picks A (= 1). When it sees the Depends/Recommends it will switch to B.

We can solve this again as in the previous example by ordering the “keep A installed” requests after any dependencies. Notably, we will enqueue the common dependencies of all A versions first before selecting a version of A, so something may select a version for us.

version narrowing instead of version choosing

A different approach to dealing with the issue of version selection is to not select a version until the very last moment. So instead of selecting a version to satisfy A (>= 2) we instead translate

Depends: A (>= 2)

into two rules:

  1. The package selection rule:

     Depends: A
    

    This ensures that any version of A is installed (i.e. it adds a version choice clause, A (= 1) | A (= 2), in an example with two versions for A).

  2. The version narrowing rule:

     Conflicts: A (<< 2)
    

    This outright would reject a choice of A (= 1).

So now we have 3 kinds of clauses:

  1. package selection
  2. version narrowing
  3. version selection

If we process them in that order, we should surely be able to find the solution that best matches the semantics of our Debian dependency model, i.e. selecting earlier choices in a dependency before later choices in the face of version restrictions.

This still leaves one issue: What if our maintainer did not use Depends: A (>= 2) | B but e.g. Depends: A (= 3) | B | A (= 2)? He’d expect us to fall back to B if A (= 3) is not installable, and not to A (= 2). But we’d like to enqueue A and reject all choices other than 3 and 2. I think it’s fair to say: “Don’t do that, then” here.

Implementing strict pinning correctly

APT knows a single candidate version per package; this makes the solver relatively deterministic: it will only ever pick the candidate, or an installed version. This also happens to significantly reduce the search space, which is good - less backtracking. An up-to-date system will only ever have one version per package that can be installed, so we never actually have to choose versions.

But of course, APT allows you to specify a non-candidate version of a package to install, for example:

apt install foo/oracular-proposed

The way this works is that the core component of the previous solver, which is the pkgDepCache maintains what essentially amounts to an overlay of the policy that you could see with apt-cache policy.

The solver currently however validates allowed version choices against the policy directly, and hence finds these versions are not allowed and craps out. This is an interesting problem because the solver should not be dependent on the pkgDepCache as the pkgDepCache initialization (Building dependency tree...) accounts for about half of the runtime of APT (until the Y/n prompt) and I’d really like to get rid of it.

But currently the frontend does go via the pkgDepCache. It marks the packages in there, building up what you could call a transaction, and then we translate it to the new solver, and once it is done, it translates the result back into the pkgDepCache.

The current implementation of “allowed version” is implemented by reducing the search space, i.e. for every dependency, we outright ignore any non-allowed versions. So if you have a version 3 of A that is ignored, a Depends: A would be translated into A (= 2) | A (= 1).

However this has two disadvantages. (1) It means if we show you why A could not be installed, you don’t even see A (= 3) in the list of choices and (2) you would need to keep the pkgDepCache around for the temporary overrides.

So instead of actually enforcing the allowed version rule by filtering, a more reasonable model is that we apply the allowed version rule by just marking every other version as not allowed when discovering the package in the from-depcache translation layer. This doesn’t really increase the search space either, but it solves both of our problems: making overrides work, and giving you a reasonable error message that lists all versions of A.

pulling up common dependencies to minimize backtracking cost

One of the common issues we have is that when we have a dependency group

`A | B | C | D`

we try them in order, and if one fails, we undo everything it did, and move on to the next one. However, this isn’t perhaps the best choice of operation.

I explained before that one thing we do is queue the common dependencies of a package (i.e. dependencies shared in all versions) when marking a package for install, but we don’t do this here: We have already lowered the representation of the dependency group into a list of versions, so we’d need to extract the package back out of it.

This can of course be done, but there may be a more interesting solution to the problem, in that we simply enqueue all the common dependencies. That is, we add n backtracking levels for n possible solutions:

  1. We enqueue the common dependencies of all possible solutions deps(A)&deps(B)&deps(C)&deps(D)
  2. We decide (adding a decision level) not to install D right now and enqueue deps(A)&deps(B)&deps(C)
  3. We decide (adding a decision level) not to install C right now and enqueue deps(A)&deps(B)
  4. We decide (adding a decision level) not to install B right now and enqueue A

Now if we need to backtrack from our choice of A we hopefully still have a lot of common dependencies queued that we do not need to redo. While we have more backtracking levels, each backtracking level would be significantly cheaper, especially if you have cheap backtracking (which admittedly we do not have, yet anyway).

The caveat though is: It may be pretty expensive to find the common dependencies. We need to iterate over all dependency groups of A and see if they are in B, C, and D, so we have a complexity of roughly

#A * (#B+#C+#D)

Each dependency group we need to check (i.e. is X|Y in B?) meanwhile has linear cost: we need to compare the memory content of two pointer arrays containing the list of possible versions that solve the dependency group. This means that X|Y and Y|X are different dependencies of course, but that is to be expected – they are. But any dependency of the same order will have the same memory layout.

So really the cost is roughly N^4. This isn’t nice.

You can apply various heuristics here on how to improve that, or you can even apply binary logic:

  1. Enqueue common dependencies of A|B|C|D
  2. Move into the left half, enqueue of A|B
  3. Again divide and conquer and select A.

This has a significant advantage in long lists of choices, and also in the common case, where the first solution should be the right one.

Or again, if you enqueue the package and a version restriction instead, you already get the common dependencies enqueued for the chosen package at least.

on May 24, 2024 08:57 AM

May 20, 2024

OR...

Aaron Rainbolt

Contrary to what you may be thinking, this is not a tale of an inexperienced coder pretending to know what they’re doing. I have something even better for you.

It all begins in the dead of night, at my workplace. In front of me is a typical programmer’s desk - two computers, three monitors (one of which isn’t even plugged in), a mess of storage drives, SD cards, 2FA keys, and an arbitrary RPi 4, along with a host of items that most certainly don’t belong on my desk, and a tangle of cables that would give even a rat a migraine. My dev laptop is sitting idle on the desk, while I stare intently at the screen of a system running a battery of software tests. In front of me is the logs of a failed script run.

Generally when this particular script fails, it gives me some indication as to what went wrong. There are thorough error catching measures (or so I thought) throughout the code, so that if anything goes wrong, I know what went wrong and where. This time though, I’m greeted by something like this:

$ systemctl status test-sh.service
test-sh.service - does testing things
...
May 20 23:00:00 desktop-pc systemd[1]: Starting test-sh.service - does testing things
May 20 23:00:00 desktop-pc systemd[1]: test-sh.service: Failed with result ‘exit-code’.
May 20 23:00:00 desktop-pc systemd[1]: Failed to start test-sh.service.

I stare at the screen in bewilderment for a few seconds. No debugging info, no backtraces, no logs, not even an error message. It’s as if the script simply decided it needed some coffee before it would be willing to keep working this late at night. Having heard the tales of what happens when you give a computer coffee, I elected to try a different approach.

$ vim /usr/bin/test-sh
1 #!/bin/bash
2 #
3 # Copyright 2024 ...
4 set -u;
5 set -e;

Before I go into what exactly is wrong with this picture, I need to explain a bit about how Bash handles the ultimate question of life, “what is truth?”

(RED ALERT: I do not know if I’m correct about the reasoning behind the design decisions I talk about in the rest of this article. Don’t use me as a reference for why things work like this, and please correct me if I’ve botched something. Also, a lot of what I describe here is simplified, so don’t be surprised if you notice or discover that things are a bit more complex in reality than I make them sound like here.)

Bash, as many of you probably know, is primarily a “glue” language - it glues applications to each other, it glues the user to the applications, and it glues one’s sanity to the ceiling, far out of the user’s reach. As such, it features a bewildering combination of some of the most intuitive and some of the least intuitive behaviors one can dream up, and the handling of truth and falsehood is one of these bewildering things.

Every command you run in Bash reports back whether or not what it did “worked”. (“Worked” is subjective and depends on the command, but for the most part if a command says “It worked”, you can trust that it did what you told it to, at least mostly.) This is done by means of an “exit code”, which is nothing more than a number between 0 and 255. If a program exits and hands the shell an exit code of 0, it usually means “it worked”, whereas a non-zero exit code usually means “something went wrong”. (This makes sense if you know a bit about how programs written in C work - if your program is written to just “do things” and then exit, it will default to exiting with code zero.)

Because zero = good and non-zero = not good, it makes sense to treat zero as meaning “true” and non-zero as meaning “false”. That’s exactly what Bash does - if you do something like “if command; then commandIfTrue; else commandIfFalse; fi”, Bash will run “commandIfTrue” if “command” exits with 0, and will run “commandIfFalse” if “command” exits with 1 or higher.

Now since Bash is a glue language, it has to be able to handle it if a command runs and fails. This can be done with some amount of difficulty by testing (almost) every command the script runs, but that can be quite tedious. There’s a (generally) easier way however, which is to tell the script to immediately exit if any command exits with a non-zero exit code. This is done by using the command “set -e” at or near the top of the script. Once “set -e” is active, any command that fails will cause the whole script to stop.

So back to my script. I’m using “set -e” so that if anything goes wrong, the script stops. What could go wrong other than a failed command? To answer that question, we have to take a look at how some things work in C.

C is a very different language than Bash. Whereas Bash is designed to take a bunch of pieces and glue them together, C is designed to make the pieces themselves. You can think of Bash as being a glue gun and C as being a 3d printer. As such, C does not concern itself nearly as much with things like return codes and exiting when a command fails. It focuses on taking data and doing stuff with it.

Since C is more data- and algorithm-oriented, true and false work significantly differently here. C sees 0 as meaning “none, empty, all bits set to 0, etc.” and thus treats it as meaning “false”. Any number greater than 0 has a value, and can be treated as “on” or “true”. An astute reader will notice this is exactly the opposite of how Bash works, where 0 is true and non-zero is false. (In my opinion this is a rather lamentable design decision, but sadly these behaviors have been standardized for longer than I’ve been alive, so there’s not much point in trying to change them. But I digress.)

C also of course has features for doing math, called “operators”. One of the most common operators is the assignment operator, “=”. The assignment operator’s job is to take whatever you put on the right side of it, and store it in whatever you put on the left side. If you say “a = 0”, the value “0” will be stored in the variable “a” (assuming things work right). But the assignment operator has a trick up its sleeve - not only does it assign the value to the variable, it also returns the value. Basically what that means is that the statement “a = 0” spits out an extra value that you can do things with. This allows you to do things like “a = b = 0”, which will assign 0 to “b”, return zero, and then assign that returned zero to "a”. (The assignment of the second zero to “a” also returns a zero, but that simply gets ignored by the program since there’s nothing to do with it.)

You may be able to see where I’m going with this. Assigning a value to a variable also returns that value… and 0 means “false”… so “a = 0” succeeds, but also returns what is effectively “false”. That means if you do something like “if (a = 0) { ... } else { explodeComputer(); }”, the computer will explode. “a = 0” returns “false”, thus the “if” condition does not run and the “else” condition does. (Coincidentally, this is also a good example of the “world’s last programming bug” - the comparison operation in C is “==”, which is awfully easy to mistype as the assignment operator, “=”. Using an assignment operator in an “if” statement like this will almost always result in the code within the “if” being executed, as the value being stored in the variable will usually be non-zero and thus will be seen as “true” by the “if” statement. This also corrupts the variable you thought you were comparing something to. Some fear that a programmer with access to nuclear weapons will one day write something like “if (startWar = 1) { destroyWorld(); }” and thus the world will be destroyed by a missing equals sign.)

“So what,” you say. “Bash and C are different languages.” That’s true, and in theory this would mean that everything here is fine. Unfortunately theory and practice are the same in theory but much different in practice, and this is one of those instances where things go haywire because of weird differences like this. There’s one final piece of the puzzle to look at first though - how to do math in Bash.

Despite being a glue language, Bash has some simple math capabilities, most of which are borrowed from C. Yes, including the behavior of the assignment operator and the values for true and false. When you want to do math in Bash, you write “(( do math here... ))”, and everything inside the double parentheses is evaluated. Any assignment done within this mode is executed as expected. If I want to assign the number 5 to a variable, I can do “(( var = 5 ))” and it shall be so.

But wait, what happens with the return value of the assignment operator?

Well, take a guess. What do you think Bash is going to do with it?

Let’s look at it logically. In C (and in Bash’s math mode), 0 is false and non-zero is true. In Bash, 0 is true and non-zero is false. Clearly if whatever happen within math mode fails and returns false (0), Bash should not misinterpret this as true! Things like “(( 5 == 6 ))” shouldn’t be treated as being true, right? So what do we do with this conundrum? Easy solution - convert the return value to an exit code so that its semantics are retained across the C/Bash barrier. If the return value of the math mode statement is false (0), it should be converted to Bash’s concept of false (non-zero), therefore the return value of 0 is converted to an exit code of 1. On the other hand, if the return value of the math mode statement is true (non-zero), it should be converted to Bash’s concept of true (0), therefore the return value of anything other than 0 is converted to an exit code of 0. (You probably see the writing on the wall at this point. Spoiler, my code was weighed in the balances and found wanting.)
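You can watch this conversion happen in an interactive shell (without set -e, so nothing explodes yet):

(( 0 )); echo $?    # prints 1 - math mode evaluated to "false", so the exit code is non-zero
(( 5 )); echo $?    # prints 0 - math mode evaluated to "true", so the exit code is zero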

So now we can put all this nice, logical, sensible behavior together and make a glorious mess with it. Guess what happens if you run “(( var = 0 ))” in a script where “set -e” is enabled.

  • “0” is assigned to “var”.

  • The statement returns 0.

  • Bash dutifully converts that to a 1 (false/failure).

  • Bash now sees the command as having failed.

  • “set -e” says the script should immediately stop if anything fails.

  • The script crashes.

You can try this for yourself - pop open a terminal and run “set -e; (( var = 0 ));” and watch in awe as your terminal instantly closes (or otherwise shows an indication that Bash has exited).

So back to the code. In my script, I have a function that helps with generating random numbers within any specified bounds. Basically it just grabs the value of “$RANDOM” (which is a special variable in Bash that always returns an integer between 0 and 32767) and does some manipulations on it so that it becomes a random number between a “lower bound” and an “upper bound” parameter. In the guts of that function’s code I have many “math mode” statements for getting those numbers into shape. Those statements include variable assignments, and those variable assignments were throwing exit codes into the script. I had written this before enabling “set -e”, so everything was fine before, but now “set -e” was enabled and Bash was going to enforce it as ruthlessly as possible.

While I will never know what line of code triggered the failure, it’s a fairly safe bet that the culprit was:

88 (( _val = ( _val % ( _adj_upper_bound + 1 ) ) ));

This basically takes whatever is in “_val” , divides it by “_adj_upper_bound + 1”, and then assigns the remainder of that operation to “_val”. This makes sure that “_val” is lower than “_adj_upper_bound + 1”. (This is typically known as “getting the modulus”, and the “%” operator here is the “modulo operator”. For the math people reading this, don’t worry, I did the requisite gymnastics to ensure this code didn’t have modulo bias.) If “_val” happens to be equal to “_adj_upper_bound + 1”, the code on the right side of the assignment operator will evaluate to 0, which will become an exit code of 1, thus exploding my script because of what appeared to be a failed command.

Sigh.

So there’s the problem. What’s the solution? Turns out it’s pretty simple. Among Bash’s feature set, there is the profoundly handy “logical or operator”, “||”. This operator lets us say “if this OR that is true, return true.” In other words, “Run whatever’s on the left hand of the ||. If it exits 0, move on. If it exits non-zero, run whatever’s on the right hand of the ||. If it exits 0, move on and ignore the earlier failure. Only return non-zero if both commands fail.” There’s also a handy command in Bash called “true” that does nothing except for give an exit code of 0. That means that if you ever have a line of code in Bash that is liable to exit non-zero but it’s no big deal if it does, you can just slap an “|| true” on the end and it will magically make everything work by pretending that nothing went wrong. (If only this worked in real life!) I proceeded to go through and apply this bandaid to every standalone math mode call in my script, and it now seems to be behaving itself correctly again. For now anyway.
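Applied to the line quoted earlier, the fix looks like this (same variable names as in the script above):

88 (( _val = ( _val % ( _adj_upper_bound + 1 ) ) )) || true;

Now even when the right-hand side evaluates to 0, the compound command as a whole exits 0 and “set -e” has nothing to complain about.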

tl;dr: Faking success is sometimes a perfectly valid way to solve a computing problem. Just don’t live the way you code and you’ll be alright.

on May 20, 2024 08:06 AM