December 23, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 871 for the week of December 15 – 21, 2024. The full version of this issue is available here.

In this issue we cover:

  • Vote Extension: 2024 Ubuntu Technical Board
  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • LXD: Weekly news #376
  • Rocks Public Journal; 2024-12-20
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • UbuCon Latin America, Barranquilla 2024!
  • LoCo Events
  • Advanced Intel® Battlemage GPU features now available for Ubuntu 24.10
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Updates and Security for Ubuntu 20.04, 22.04, 24.04, and 24.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Din Mušić
  • Cristovao Cordeiro – cjdc
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on December 23, 2024 09:15 PM

Technology procurement directly influences business success. The equipment you procure will determine how your teams deliver projects and contribute to your success. So what does being “well-equipped” look like in the world of Linux laptops? 

In this blog, we’ll lay down best practices for procurement professionals who have been tasked with procuring Linux laptops. We’ll cover how to get the most out of your hardware, meet your compliance goals and set yourself up for long-term success.

Defining Linux laptops 

You’ve received your requirements, and your mission is to stay faithful to them. Whether you’re procuring Linux laptops for specialized use cases (like AI and graphics), or general desktop use, it’s important to define the term “Linux laptop”. 

Given that by design, Linux is hardware agnostic, you could describe nearly every laptop as a “Linux laptop.” All you have to do is install Linux. However, given the diversity of Linux distributions, and the different support models available from both software and hardware vendors, the task goes beyond just hardware. Procuring Linux laptops requires taking the whole picture into account – starting with balancing hardware and software.

Balancing software and hardware

Hardware and software are interdependent: you need to find the right combination to reach your security, stability and performance goals. You’ll likely find that the more specialized the use case, the more of a role hardware will play in your overall decision. That’s because specialized hardware is less abundant. Whilst Linux broadens your horizons, your choice of distribution and support model will likely need to accommodate stricter performance requirements than you would find with a general desktop. 

By choosing a Linux distribution that is proven to perform at a high-level across a range of different laptops, you can ensure that you retain a large degree of choice at the hardware level. That’s where certification comes in.

The value of certification 

Regardless of which Linux distribution you choose, you need to know that it works on the hardware in question and can support your specific needs. This is where certification programs come in. In a certification program, the publisher tests and optimizes its OS in a laboratory setting to ensure it runs smoothly on the hardware. This is especially important if your Linux laptops are intended for specialized use cases where there is no tolerance for malfunctions. 

Consistent experience

It’s important to check how thorough an organization’s certification program is, and that they’re transparent about how they decide to award (or not) certification. For example, Ubuntu is certified for over 1,000 laptops, from consumer and corporate to prosumer and workstation devices. Canonical documents Ubuntu laptop compatibility and the thorough testing that each device receives in coverage guides, with certification being withheld in the case that a device does not meet the required standard. This ensures a consistent experience for all users. 

Continuous performance

Certification is not just about creating a consistent experience across different devices, but ensuring they continue to perform as required, through updates and patching. Taking Canonical as an example, all Ubuntu certified laptops receive support, through patching and maintenance, until the specific Ubuntu release reaches end of life. In addition, through direct partnerships with Dell, Lenovo and HP, Canonical works proactively to meet device-specific needs. We work closely to fix any issues during the certification process, ensuring that each device performs as expected.

Demonstrating compliance

What makes a compliant Linux laptop? This is decided by the compliance requirements of your organization and the legislation that governs where you operate. Suffice it to say that it’s a non-starter if your laptop fails your compliance tests. 

Certification is an important part of compliance. By using an OS that is supported on your specific laptop model, you reduce the risk of unexpected behaviors or processes that may give rise to vulnerabilities or exploits. Taking Ubuntu as an example, certification includes the testing of in-built security features like secure boot, to ensure they function as intended. This enables you to demonstrate that your chosen hardware-software combination is supported and secure. 

In addition, Ubuntu long term support releases include security and patching for 5 years, with the option of extending this to 12 years with an Ubuntu Pro subscription and Legacy support add-on. This demonstrates the importance of selecting both the right distro and the right support model – it can make the difference between your laptop’s end of life and continued high performance. 

Beyond certification

Certification is an important part of the procurement picture, but it’s important to also consider what the OS brings to the table outside of certification. Beyond keeping your laptops up and running, you’ll need an OS that helps you achieve your goals at scale, across a fleet of laptops. This section will focus on manageability, and use Ubuntu to illustrate the key points you should consider.

Support for modern enterprise applications

Your Linux laptops have the ultimate goal of performing to the standards your end users expect. Beyond your OS running smoothly with your hardware, at the application level you should be on the lookout for a mature ecosystem of applications that can run natively on your OS. 

Linux offers the flexibility to onboard new apps via APIs; however, you should aim for this to be the exception rather than the rule. It’s simply not scalable for your administrators to spend time onboarding and maintaining the core applications for a fleet of laptops with diverse needs. Additionally, non-native applications may not deliver the performance your end users expect.

By selecting an OS like Ubuntu, you gain access to an extensive ecosystem of over 36,000 toolchains and applications that span from productivity to coding, graphics and AI. Backed by both a community of users who are passionate about contributing to Ubuntu, and Canonical’s long-term security maintenance and support, end users gain access to an ecosystem that runs natively and is stable. 

Compliance hardening tools 

Auditing, hardening and maintaining Linux systems in order to conform to standards like CIS or DISA-STIG is a time-consuming but essential process. Choosing a distro that incorporates tools for compliance and hardening will reduce both the time spent and the errors made in the process. A distro that commits to these tools is likely to be a reliable long-term choice.

Taking Ubuntu as an example, Canonical tests its long-term support release against standards such as FIPS-140, NIST, DISA-STIG and Common Criteria, and offers automated hardening tools for these standards and others, through an Ubuntu Pro subscription.

IT management and governance

Going beyond individual laptops, your laptop fleet as a whole needs to be manageable from a governance perspective. Manually managing large fleets of laptops is not only inefficient but also dangerous. A report by Verizon estimates that sysadmins are responsible for around 11% of data breaches, usually due to misconfigurations. Even with the most secure hardware in the world, without the right approach your Linux laptops will be vulnerable.

Your chosen OS must provide you with both visibility, in order to audit the current state of your devices, and manageability, allowing you to manage access and roll out updates at scale without large amounts of manual effort.

For example, Ubuntu supports identity management protocols such as Entra ID (for Microsoft) and AuthD, the open standard supported by the vast majority of enterprise and consumer identity providers. Ubuntu can also be integrated with your chosen device management platform, or you can use Canonical Landscape.

Minimal attack surface

The best distros will build on the hardware security built into your Linux laptops through regular firmware patching, and ensure that the software layer is secure by adopting a zero trust approach. Overall, your distro should actively work to reduce the attack surface of your Linux laptop, rather than increase it.

Taking Ubuntu as an example, you would encounter a set of pre-configurations designed to reduce the attack surface of your laptops to the bare minimum, by ensuring that any access to your Linux laptops is granted on a “need to know” basis, rather than by default. This includes automatic security patching, password hashing, no open ports (important for physical security) and restrictions on unprivileged users.  

Long-term support: where compliance and performance meet

Ultimately, your Linux laptop needs to last the distance, which means remaining supported and secure. If either of these two criteria stops being true, it usually means you’ve reached end of life. When does this occur? 

You should aim for a laptop that can realistically outlast your desired lifespan in order to give yourself some breathing room. 

This is where the value of long term support comes into play. You should investigate the support offered by both your hardware vendor and your software provider, in order to calculate an accurate estimate. Ubuntu LTS releases are maintained for 5 years as standard, with the option of expanded security maintenance taking the total to up to 12 years. This includes security patching and maintenance for over 36,000 packages, wherever Ubuntu LTS is running – including on any certified devices.

Find out more about where to find the best Ubuntu laptops by visiting our certification page. 

Further reading 

on December 23, 2024 05:35 PM

December 21, 2024

Thug Life

Benjamin Mako Hill

My current playlist is this diorama of Lulu the Piggy channeling Tupac Shakur in a toy vending machine in the basement of New World Mall in Flushing Chinatown.

on December 21, 2024 11:06 PM

December 20, 2024

Introduction

The Linux Containers project maintains Long Term Support (LTS) releases for its core projects. Those come with 5 years of upstream support: the first two years include bugfixes, minor improvements and security fixes, while the remaining three years receive only security fixes.

This is now the third round of bugfix releases for LXC, LXCFS and Incus 6.0 LTS.

LXC

LXC is the oldest Linux Containers project and the basis for almost every other one of our projects. This low-level container runtime and library was first released in August 2008, led to the creation of projects like Docker and today is still actively used directly or indirectly on millions of systems.

Announcement: https://discuss.linuxcontainers.org/t/lxc-6-0-3-lts-has-been-released/22402

Highlights of this point release:

  • Added support for PuzzleFS images in lxc-oci
  • SIGHUP is now propagated through lxc.init
  • Reworked testsuite including support for 64-bit Arm

LXCFS

LXCFS is a FUSE filesystem used to work around some shortcomings of the Linux kernel when it comes to reporting available system resources to processes running in containers. The project started in late 2014 and is still actively used by Incus today, as well as by some Docker and Kubernetes users.

Announcement: https://discuss.linuxcontainers.org/t/lxcfs-6-0-3-lts-has-been-released/22401

Highlights of this point release:

  • Better detection of swap accounting support
  • Reworked testsuite including support for 64-bit Arm

Incus

Incus is our most actively developed project. This virtualization platform is just over a year old but has already seen over 3500 commits by over 120 individual contributors. Its first LTS release made it usable in production environments and significantly boosted its user base.

Announcement: https://discuss.linuxcontainers.org/t/incus-6-0-3-lts-has-been-released/22403

Highlights of this point release:

  • OS info for virtual machines (incus info)
  • Console history for virtual machines (incus console --show-log)
  • Ability to create clustered LVM pools directly through Incus
  • QCOW2 and VMDK support in incus-migrate
  • Configurable macvlan mode (bridge, vepa, passthru or private)
  • Load-balancer health information (incus network load-balancer info)
  • External interfaces in OVN networks (support for bridge.external_interfaces)
  • Parallel cluster evacuation/restore (on systems with a large number of CPUs)
  • Introduction of incus webui as a quick way to access the web interface
  • Automatic cluster re-balancing
  • Partial instance/volume refresh (incus copy --refresh-exclude-older --refresh)
  • Configurable columns, formatting and refresh time in incus top
  • Support for DHCP ranges in OVN (ipv4.dhcp.ranges)
  • Support for changing the backing interface of a managed physical network
  • Extended QEMU scriptlet (additional functions)
  • New log file for QEMU QMP traffic (qemu.qmp.log)
  • New get_instances_count function available in placement scriptlet
  • Support for --format in incus admin sql
  • Storage live migration for virtual machines
  • New authorization scriptlet as an alternative to OpenFGA
  • API to retrieve console screenshots
  • Configurable initial owner for custom storage volumes (initial.uid, initial.gid, initial.mode)
  • Image alias reuse on import (incus image import --reuse --alias)
  • New incus-simplestreams prune command
  • Console access locking (incus console --force to override)

What’s next?

We’re expecting another LTS bugfix release for the 6.0 branches in the first quarter of 2025.
We’re also actively working on a new stable release (non-LTS) for LXCFS.
Incus will keep going with its usual monthly feature release cadence.

Thanks

This LTS release update was made possible thanks to funding provided by the Sovereign Tech Fund (now part of the Sovereign Tech Agency).

The Sovereign Tech Fund supports the development, improvement, and maintenance of open digital infrastructure. Its goal is to sustainably strengthen the open source ecosystem, focusing on security, resilience, technological diversity, and the people behind the code.

Find out more at: https://www.sovereign.tech

on December 20, 2024 05:39 PM

One of the most critical gaps in traditional Large Language Models (LLMs) is that they rely on static knowledge already contained within them. Basically, they might be very good at understanding and responding to prompts, but they often fall short in providing current or highly specific information. This is where Retrieval-augmented Generation (RAG) comes in;  RAG addresses these critical gaps in traditional LLMs by incorporating current and new information that serves as a reliable source of truth for these models. 

In our previous blog in this series on understanding and deploying RAG, we walked you through the basics of what this technique is and how it enhances generative AI models by utilizing external knowledge sources such as documents and extensive databases. These external knowledge bases enhance machine learning models for enterprise applications by providing verifiable, up-to-date information that reduces errors, simplifies implementation, and lowers the cost of continuous retraining.

In this second blog of our four-part series on RAG, we will focus on creating a robust enterprise AI infrastructure for RAG systems using open source tooling for your Gen AI project.  This blog will discuss AI infrastructure considerations such as hardware, cloud services, and generative AI software. Additionally, it will highlight a few open source tools designed to accelerate the development of generative AI.

RAG AI infrastructure considerations

AI infrastructure encompasses the integrated hardware and software systems created to support AI and machine learning workloads to carry out complex analysis, predictions, and automation. The main challenge when introducing AI in any project is operating the underlying infrastructure stack that supports the models and applications. While similar to regular cloud infrastructures, machine learning tools require a tailored approach to operations to remain reliable and scalable, and the expertise needed for this approach is both difficult to find and expensive to hire. Neglecting proper operations can lead to significant issues for your company, models, and processes, which can seriously damage your image and reputation.

Building a generative AI project, such as a RAG system, requires multiple components and services. Additionally, it’s important to consider the cloud environment for deployment, as well as the choice of operating system and hardware. These factors are crucial for ensuring that the system is scalable, efficient, secure, and cost-effective. The illustration below maps out a full-stack infrastructure delivery of RAG and Gen AI systems:


Let’s briefly examine each of these considerations and explore their pros and cons.

Hardware

The hardware on which your AI will be deployed is critical. Choosing the right compute options – whether CPUs or GPUs – depends on the specific demands and use cases of your AI workloads. Considerations such as throughput, latency, and the complexity of applications are important; for instance, if your AI requires massive parallel computation and scalable inference, GPUs may be necessary. Additionally, your chosen storage hardware is important, particularly regarding the read and write speeds needed for accessing data. Lastly, the network infrastructure should also be carefully considered, especially in terms of workload bandwidth and latency. For example, a low-latency, high-bandwidth network setup is essential for applications like chatbots or search engines.

Clouds

Cloud infrastructure provides computing power, storage, and scalability, and meets the demands of AI workloads. There are multiple types of cloud environments – including private, public, and bare-metal deployments – and each one has its pros and cons. For example, bare-metal infrastructure offers high performance for computing and complete control over security. However, managing and scaling a bare-metal setup can be challenging. In comparison, public cloud deployments are currently very popular due to their accessibility, but these infrastructures are owned and managed by public cloud providers. Finally, private cloud environments provide enhanced control over data security and privacy compared to public clouds. 

The good thing is that you can relatively easily blend these different elements of the cloud together into hybrid cloud environments that offer the pros of each one while covering the flaws that single-environment cloud setups may present.

Operating system 

The operating system (OS) plays a crucial role in managing AI workloads, serving as the foundational layer for overseeing hardware and software resources. There are several OS options suitable for running AI workloads, including Linux and enterprise systems like Windows.

Linux is the most widely used OS for AI applications due to its flexibility, stability, and extensive support for machine learning frameworks such as TensorFlow, PyTorch, and Hugging Face. Common distributions used for AI workloads include Ubuntu, Debian, Fedora, CentOS, and many more. Additionally, Linux environments provide excellent support for containerized setups like Docker containers and CNCF-compliant platforms like Kubernetes. 
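
On such a setup, a quick sanity check (purely illustrative, not from the original article) is to confirm that your framework of choice actually sees the accelerator; with PyTorch, for example:

import torch

# Report the framework version and whether a CUDA-capable GPU is visible.
print("PyTorch version:", torch.__version__)
print("CUDA GPU available:", torch.cuda.is_available())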

Gen AI services

Generative AI projects, such as RAG, may involve multiple components, including a knowledge base, large language models, retrieval systems, generators, inferences, and more. Each of these services will be defined and discussed in greater detail in the upcoming section titled “Advanced RAG and Gen AI Reference Solutions with Open Source.”

While the RAG services may offer different functionalities, it is essential to choose the components that best fit your specific use case. For example, in small-scale RAG deployments, you might need to set aside fine-tuning and early-stage model repositories as these are advanced Gen AI reference solutions. Additionally, it is crucial that all these components integrate smoothly and coherently to create a seamless workflow. This helps reduce latency and accommodates the required throughput for your project.

RAG reference solution

When a query is made in an AI chatbot, the RAG-based system first retrieves relevant information from a large dataset or knowledge base, and then uses this information to inform and guide the generation of the response. The RAG-based system consists of two key components. The first component is the Retriever, which is responsible for locating relevant pieces of information that can help answer a user query. It searches a database to select the most pertinent information. This information is then provided to the second component, the Generator. The Generator is a large language model that produces the final output.

Before using your RAG-based system, you must first create your knowledge base, which consists of external data that is not included in your LLM training data. This external data can originate from various sources, including documents, databases, and API calls. Most RAG systems utilize an AI technique called model embedding, which converts data into numerical representations and stores it in a vector database. By using an embedding model, you can create a knowledge model that is easily understandable and readily retrievable in the context of AI.  Once you have a knowledge base and a vector database set up, you can now perform your RAG process; here is a conceptual flow:


This conceptual flow follows 5 general steps:

  1. The user enters a prompt or query. 
  2. Your Retriever searches for relevant information from a knowledge base. The relevance can be determined using mathematical vector calculations and representations through a vector search and database functionality.
  3. The relevant information is retrieved to provide enhanced context to the generator. 
  4. The query and prompts are now enriched with this context and are ready to be augmented for use with a large language model using prompt engineering techniques. The augmented prompt enables the language model to respond accurately to your query. 
  5. Finally, the generated text response is delivered to you.
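
As a rough illustration (a minimal sketch, not part of the original article), the five steps above can be expressed in Python using the sentence-transformers library as the embedding model and a plain NumPy array standing in for a real vector database such as Milvus, OpenSearch or pg_vector; the generate() call at the end is a placeholder for whichever LLM inference service you deploy, not a real API:

import numpy as np
from sentence_transformers import SentenceTransformer

# Toy knowledge base; in production this would come from your ingest pipeline.
documents = [
    "Ubuntu LTS releases are maintained for five years as standard.",
    "RAG systems retrieve external knowledge before generating an answer.",
]

# The embedding model converts raw text into numerical vector representations.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents, normalize_embeddings=True)

# Step 1: the user enters a prompt or query.
query = "How long is an Ubuntu LTS release supported?"

# Step 2: the retriever embeds the query and scores every document (vector search).
query_vector = model.encode([query], normalize_embeddings=True)[0]
scores = doc_vectors @ query_vector  # cosine similarity on normalized vectors

# Step 3: the most relevant information is retrieved as context.
context = documents[int(np.argmax(scores))]

# Step 4: the query is augmented with the retrieved context (prompt engineering).
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Step 5: the generator (an LLM) produces the final text response.
# response = generate(prompt)  # placeholder for your chosen LLM inference service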

Advanced RAG and Gen AI reference solution with open source

RAG can be used in various applications, such as AI chatbots, semantic search, data summarization, and even code generation. The reference solution below outlines how RAG can be combined with advanced generative AI reference architectures to create optimized LLM projects that provide contextual solutions to various Gen AI use cases.


Figure: RAG-enhanced GenAI reference solution (source: https://opea.dev/)

The GenAI blueprint mentioned above was published by OPEA (Open Platform for Enterprise AI), a project of the Linux Foundation. The aim of this blueprint is to establish a framework of composable building blocks for state-of-the-art generative AI systems, including LLMs, data storage, and prompt engines. Additionally, it provides a blueprint for RAG and outlines end-to-end workflows. The recent 1.1 release of the OPEA project showcased multiple GenAI projects that demonstrate how RAG systems can be enhanced through open source tools.

Each service within the blocks has distinct tasks to perform, and there are various open source solutions available that can help accelerate these services based on enterprise needs. These are mapped out below:

  • Ingest/data processing – the data pipeline layer, responsible for data extraction, cleansing, and the removal of unnecessary data. Open source options: Kubeflow, OpenSearch.
  • Embedding model – a machine-learning model that converts raw data into vector representations. Open source options: Hugging Face sentence transformers, the sentence transformer used by OpenSearch.
  • Retrieval and ranking – retrieves data from the knowledge base and ranks the fetched information based on relevance scores. Open source options: FAISS (Facebook AI Similarity Search), such as the one used in OpenSearch; Haystack.
  • Vector database – stores vector embeddings so data can be easily searched by the retrieval and ranking services. Open source options: Milvus, PostgreSQL pg_vector, OpenSearch (k-NN index as a vector database).
  • Prompt processing – formats queries and retrieved text so they are structured for the LLM. Open source options: LangChain, OpenSearch ML (agent predict).
  • LLM – provides the final response using one or more GenAI models. Open source options: GPT, BART, and many more.
  • LLM inference – operationalizes machine learning in production by running data through a model so that it produces an output. Open source options: KServe, vLLM.
  • Guardrail – ensures ethical content in the Gen AI response by filtering inputs and outputs. Open source options: Fairness Indicators, OpenSearch guardrail validation model.
  • LLM fine-tuning – takes a pre-trained machine learning model and further trains it on a smaller, targeted data set. Open source options: Kubeflow, LoRA.
  • Model repository – stores and versions trained machine learning (ML) models, especially during fine-tuning; this registry can track a model’s lifecycle from deployment to retirement. Open source options: Kubeflow, MLflow.
  • Framework for building LLM applications – simplifies LLM workflows, prompts, and services so that building LLM applications is easier. Open source options: LangChain.

This mapping provides an overview of the key components involved in building a RAG system and advanced Gen AI reference solution, along with associated open source solutions for each service. Each service performs a specific task that can enhance your LLM setup, whether it relates to data management and preparation, embedding a model in your database, or improving the LLM itself.

The rate of innovation in this field, particularly within the open source community, has become exponential. It is crucial to stay updated with the latest developments, including new models and emerging RAG solutions.

Conclusion 

Building a robust generative AI infrastructure, such as those for RAG, can be complex and challenging. It requires careful consideration of the technology stack, data, scalability, ethics, and security. For the technology stack, the hardware, operating systems, cloud services, and generative AI services must be resilient and efficient based on the scale that enterprises require.

There are multiple open source software options available for building generative AI infrastructure and applications, which can be tailored to meet the complex demands of modern AI projects. By leveraging open source tools and frameworks, organizations can accelerate development, avoid vendor lock-in, reduce costs and meet enterprise needs.

Now that you’ve been introduced to Blog Series #1 – What is RAG? and this Blog Series #2 on how to prepare a robust RAG AI infrastructure, it’s time to get hands-on and try building your own RAG using open source tools in our next blog in this series, “Build a one-stop solution for end-to-end RAG workflow with open source tools”. Stay tuned for part 3, to be published soon!

Canonical for your RAG and AI Infra needs

Build the right RAG architecture and application with Canonical RAG and MLOps workshop

Canonical provides workshops and enterprise open source tools and services and can advise on securing the safety of your code, data, and models in production.

Canonical offers a 5-day workshop designed to help you start building your enterprise RAG systems. By the end of the workshop, you will have a thorough understanding of RAG and LLM theory, architecture, and best practices. Together, we will develop and deploy solutions tailored to your specific needs. Download the datasheet here.

Explore more and contact our team for your RAG needs.

Learn and use best-in-class Gen AI tooling  on any hardware and cloud

Canonical offers enterprise-ready AI infrastructure along with open source data and AI tools to help you kickstart your RAG projects. Canonical is the publisher of Ubuntu, a Linux operating system that runs on public cloud platforms, in data centres, on workstations, and on edge/IoT devices. Canonical has established partnerships with major public cloud providers such as Azure, Google Cloud, and AWS. Additionally, Canonical collaborates with silicon vendors, including Intel, AMD, NVIDIA, and RISC-V, ensuring its platform is silicon agnostic.

Secure your stack with confidence

Enhance the security of your GenAI projects while mastering best practices for managing your software stack. Discover ways to safeguard your code, data, and machine learning models in production with Confidential AI.

on December 20, 2024 08:18 AM

December 19, 2024

Announcing Incus 6.8

Stéphane Graber

The Incus team is pleased to announce the release of Incus 6.8!

This is the last release for 2024 but it still packs a punch with a bunch of VM related improvements, including the ability to move a running VM between storage pools, a new authorization backend, improvements to volume handling for application containers and more.

The highlights for this release are:

  • Storage live migration for VMs
  • Authorization scriptlet
  • Console screenshots for VMs
  • Initial owner and mode for custom storage volumes
  • Small updates to the OpenFGA model
  • Image alias reuse on import
  • New incus-simplestreams prune command
  • Console access locking

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

Some of the Incus maintainers will be present at FOSDEM 2025, helping run both the containers and kernel devrooms. For those arriving in town early, there will be a “Friends of Incus” gathering sponsored by FuturFusion on Thursday evening (January 30th), you can find the details of that here.

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

on December 19, 2024 07:50 AM

Being a bread torus

Benjamin Mako Hill

A concerned nutritional epidemiologist in Tokyo realizes that if you are what you eat, that means…

It’s a similar situation in Seoul, albeit with less oil and more confidence.

on December 19, 2024 02:49 AM

E329 Serial Com Fibra

Podcast Ubuntu Portugal

Diogo travelled back to the Middle Ages and discovered what it feels like to live in a different era, one without 1 Gbps speeds! An outrage! Miguel travelled back to the Copper Age and tells us what it is like to live with only 100 Mbps. Then the two of them travelled to the Age of Anodized Aluminium, opening a package containing a mysterious box of sparkling blue… what surprises might it hold?!

You know the drill: listen, subscribe and share!

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of it for 15 dollars, or different parts depending on whether you pay 1 or 8. We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying whatever you want. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the MIT Licence. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on December 19, 2024 12:00 AM

December 18, 2024

Last week I was bitten by an interesting C feature. The following terminate function was expected to exit if okay was zero (false); however, it exited when a non-zero value was passed to it. The reason is a missing semicolon after the return statement.
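
A minimal reconstruction of the kind of code in question (illustrative, not the original source):

#include <stdlib.h>

/* Intended behaviour: return when okay is non-zero, exit when it is zero. */
static void terminate(int okay)
{
        if (okay)
                return
        exit(EXIT_FAILURE);
}

With the semicolon missing, the body parses as if (okay) return exit(EXIT_FAILURE); so the function exits when okay is non-zero and silently falls through when okay is zero.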

 

The interesting part is that this compiles fine because the void function terminate is allowed to return a void value, in this case the void return value of exit().

 

on December 18, 2024 05:43 PM

December 16, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 870 for the week of December 8 – 14, 2024. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu Stats
  • Hot in Support
  • Rocks Public Journal; 2024-12-13
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • LoCo Events
  • Announcing the Multipass 1.15.0 release
  • Kernel 6.14 planned for Plucky Puffin
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Updates and Security for Ubuntu 20.04, 22.04, 24.04, and 24.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Din Mušić
  • Cristovao Cordeiro – cjdc
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on December 16, 2024 09:02 PM

December 14, 2024

OCI (open container initiative) images are the standard format based on
the original docker format. Each container image is represented as an
array of ‘layers’, each of which is a .tar.gz. To unpack the container
image, untar the first, then untar the second on top of the first, etc.

Several years ago, while we were working on a product which ships its
root filesystem (and of course containers) as OCI layers, Tycho Andersen
(https://tycho.pizza/) came up with the idea of ‘atomfs’ as a way to
avoid some of the deficiencies of tar
(https://www.cyphar.com/blog/post/20190121-ociv2-images-i-tar). In
‘atomfs’, the .tar.gz layers are replaced by squashfs (now optionally
erofs) filesystems with dm-verity root hashes specified. Mounting an
image now consists of mounting each squashfs, then merging them with
overlay. Since we have the dmverity root hash, we can ensure that the
filesystem has not been corrupted without having to checksum the files
before mounting, and there is no tar unpacking step.

This past week, Ram Chinchani presented atomfs at the OCI weekly
discussion, which you can see here
https://www.youtube.com/watch?v=CUyH319O9hM starting at about 28
minutes. He showed a full use cycle, starting with a Dockerfile,
building atomfs images using stacker, mounting them using atomfs, and
then executing a container with lxc. Ram mentioned his goal is to have
a containerd snapshotter for atomfs soon. I’m excited to hear that, as
it will make it far easier to integrate into e.g. kubernetes.

Exciting stuff!
on December 14, 2024 03:52 AM

December 12, 2024

E328 Reunião De Pies

Podcast Ubuntu Portugal

The discussions about self-hosting continue, with suggestions and opinions from listeners who have explored the topic; Miguel still cannot use a VPN on his phone; we covered the release roadmap for the next version of Ubuntu, Plucky Puffin, and the next Community meetup in Sintra; we welcomed new Snaps created by the Community and drooled a little over new toys from the Raspberry Pi range! And at the end, the patrons were treated to a puppet show featuring the Republic.

You know the drill: listen, subscribe and share!

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of it for 15 dollars, or different parts depending on whether you pay 1 or 8. We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying whatever you want. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the MIT Licence. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on December 12, 2024 12:00 AM

December 11, 2024

I’m pleased to introduce uCareSystem 24.12.11, the latest version of the all-in-one system maintenance tool for Ubuntu, Linux Mint, Debian and its derivatives. This release brings some major changes in UI, fixes and improvements under the hood. Continuing on the path of the earlier release, in this release after many many … many … did […]
on December 11, 2024 01:10 PM

December 06, 2024

The new Firebuild release contains plenty of small fixes and a few notable improvements.

Experimental macOS support

The most frequently asked question from people getting to know Firebuild was if it worked on their Mac and the answer sadly used to be that well, it did, but only in a Linux VM. This was far from what they were looking for. 🙁

Linux and macOS have common UNIX roots, but porting Firebuild to macOS involved bigger challenges, like ensuring that dyld(1), macOS’s dynamic loader, initializes the preloaded interceptor library early enough to catch all interesting calls, and avoiding anything that uses malloc() or thread-local variables, which are not yet set up at that point.

Preloading libraries on Linux is really easy, running LD_PRELOAD=my_lib.so ls just works if the library exports the symbols to be interposed, while macOS employs multiple lines of defense to prevent applications from using unknown libraries. Firebuild’s guide for making DYLD_INSERT_LIBRARIES honored on Macs can be helpful with other projects as well that rely on injecting libraries.

Since GitHub’s Arm64 macOS runners don’t allow intercepting binaries with arm64e ABI yet, Firebuild’s Apple Silicon tests are run at Bitrise, who are proud to be first to provide the latest Xcode stacks and were also quick to make the needed changes to their infrastructure to support Firebuild (thanks! ❤).

Firebuild on macOS can already accelerate simple projects and rebuild itself with Xcode. Since Xcode introduces a lot of nondeterminism to the build, Firebuild can’t shine in acceleration with Xcode yet, but can provide nice reports to show which part of the build is the most time consuming and how each sub-command is called.

If you would like to try Firebuild on macOS please compile it from the GitHub repository for now. Precompiled binaries will be distributed on the Mac App Store and via CI providers. Contact us to get notified when those channels become available.

Dealing with the ‘Epochalypse’

Glibc’s API provides many functions with time parameters and some of those functions are intercepted by Firebuild. Time parameters used to be passed as 32-bit values on 32-bit systems, preventing them to accurately represent timestamps after year 2038, which is known as the Y2038 problem or the Epochalypse.

To deal with the problem glibc 2.34 started providing new function symbol variants with 64-bit time parameters, e.g clock_gettime64() in addition to clock_gettime(). The new 64-bit variants are used when compiling consumers of the API with _TIME_BITS=64 defined.
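
As an illustration (a minimal sketch, not taken from Firebuild itself; the file name is just an example), the effect of the macro can be observed on a 32-bit system with glibc >= 2.34 by checking the size of time_t:

/* gcc check_time.c -o check32
   gcc -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 check_time.c -o check64 */
#include <stdio.h>
#include <time.h>

int main(void)
{
        /* Prints 4 without _TIME_BITS=64 on 32-bit systems, 8 with it. */
        printf("sizeof(time_t) = %zu\n", sizeof(time_t));
        return 0;
}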

Processes intercepted by Firebuild may have been compiled with or without _TIME_BITS=64, thus libfirebuild now provides both variants on affected systems running glibc >= 2.34 to work safely with binaries using 64-bit and 32-bit time representation.

Many Linux distributions have already stopped supporting 32-bit architectures, but Debian and Ubuntu still support armhf, for example, where the Y2038 problem still applies. Both Debian and Ubuntu performed a transition rebuilding every library (and their reverse dependencies) with -D_FILE_OFFSET_BITS=64 set where the libraries exported symbols that changed when switching to 64-bit time representation (thanks to Steve Langasek for driving this!). Thanks to the transition most programs are ready for 2038, but interposer libraries are trickier to fix, and if you maintain one it might be a good idea to check that it works well with both 32-bit and 64-bit time representations. Faketime, for example, is not fixed yet, see #1064555.

Select passed through environment variables with regular expressions

Firebuild filters out most of the environment variables set when starting a build to make the build more reproducible and achieve higher cache hit rate. Extra environment variables to pass through can be specified on the command line one by one, but with many similarly named variables this may become hard to maintain. With regular expressions this just became easier:

firebuild -o 'env_vars.pass_through += "MY_VARS_.*"' my_build_command

If you are not interested in acceleration just would like to explore what the build does by generating a report you can simply pass all variables:

firebuild -r -o 'env_vars.pass_through += ".*"' my_build_command

Other highlights from the 0.8.3 release

  • Fixed and nicer report in Chrome and other WebKit based browsers
  • Support GLibc 2.39 by intercepting pidfd_spawn() and pidfd_spawnp()
  • Even faster Rust build acceleration

For all the changes please check out the release page on GitHub! 🚀

(This post is also published on The Firebuild blog.)

on December 06, 2024 09:53 PM

December 04, 2024

I am still here. Sadly while I battle this insane infection from my broken arm I got back in July, the hackers got my blog. I am slowly building it back up. Further bad news is I have more surgeries, first one tomorrow. Furthering my current struggles I cannot start my job search due to hospitalization and recovery. Please consider a donation. https://gofund.me/6e99345d

On the open source work front, I am still working on stuff, mostly snaps ( Apps 24.08.3 released )

Thank you everyone that voted me into the Ubuntu Community Council!

I am trying to stay positive, but it seems I can’t catch a break. I will have my computer in the hospital and will work on what I can. Have a blessed day and see you soon.

Scarlett

on December 04, 2024 05:30 PM

December 03, 2024

The new bug templates feature in Launchpad aims to streamline the bug reporting process, making it more efficient for both users and project maintainers.

In the past, Launchpad provided only a basic description field for filing bug reports. This often led to incomplete or vague submissions, as users might not include essential details or steps to reproduce an issue. This could slow down the debugging process when fixing bugs. 

To improve this, we are introducing bug templates. These allow project maintainers to guide users when reporting bugs. By offering a structured template, users are prompted to provide all the necessary information, which helps to speed up the development process.

To start using bug templates in your project, simply follow these steps:

  • Access your project’s bug page view.
  • Select ‘Configure bugs’.
  • A field showing the bug template will prompt you to fill in your desired template.
  • Save the changes. The template will now be available to users when they report a new bug for your project.

For now, only a default bug template can be set per project. Looking ahead, the idea is to expand this by introducing multiple bug templates per project, as well as templates for other content types such as merge proposals or answers. This will allow project maintainers to define various templates for different purposes, making the open-source collaboration process even more efficient.

Additionally, we will introduce Markdown support, allowing maintainers to create structured and visually clear templates using features such as headings, lists, or code blocks.

on December 03, 2024 12:58 PM

December 01, 2024

Most of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Conferences

I attended MiniDebConf Toulouse 2024, and the MiniDebCamp before it. Most of my time was spent with the Freexian folks working on debusine; Stefano gave a talk about its current status with a live demo (frantically fixed up over the previous couple of days, as is traditional) and with me and others helping to answer questions at the end. I also caught up with some people I haven’t seen in ages, ate a variety of delicious cheeses, and generally had a good time. Many thanks to the organizers and sponsors!

After the conference, Freexian collaborators spent a day and a half doing some planning for next year, and then went for an afternoon visiting the Cité de l’espace.

Rust team

I upgraded these packages to new upstream versions, as part of upgrading pydantic and rpds-py:

  • rust-archery
  • rust-jiter (noticing an upstream test bug in the process)
  • rust-pyo3 (fixing CVE-2024-9979)
  • rust-pyo3-build-config
  • rust-pyo3-ffi
  • rust-pyo3-macros
  • rust-pyo3-macros-backend
  • rust-regex
  • rust-regex-automata
  • rust-serde
  • rust-serde-derive
  • rust-serde-json
  • rust-speedate
  • rust-triomphe

Python team

Last month, I mentioned that we still need to work out what to do about the multipart vs. python-multipart name conflict in Debian (#1085728). We eventually managed to come up with an agreed plan; Sandro has uploaded a renamed binary package to experimental, and I’ve begun work on converting reverse-dependencies (asgi-csrf, fastapi, python-curies, and starlette done so far). There’s a bit more still to do, but I expect we can finish it soon.

I fixed problems related to adding Python 3.13 support in:

I fixed some packaging problems that resulted in failures any time we add a new Python version to Debian:

I fixed other build/autopkgtest failures in:

I packaged python-quart-trio, needed for a new upstream version of python-urllib3, and contributed a small packaging tweak upstream.

I backported a twisted fix that caused problems in other packages, including breaking debusine‘s tests.

I disentangled some upstream version confusion in python-catalogue, and upgraded to the current upstream version.

I upgraded these packages to new upstream versions:

Other small fixes

I contributed Incus support to needrestart upstream.

In response to Helmut’s Cross building talk at MiniDebConf Toulouse, I fixed libfilter-perl to support cross-building (5b4c2e10, f9788c27).

I applied a patch to move aliased files from / to /usr in iprutils (#1087733).

I adjusted debconf to use the new /usr/lib/apt/apt-extracttemplates path (#1087523).

I upgraded putty to 0.82.

on December 01, 2024 03:00 PM

November 17, 2024

I’m pleased to introduce uCareSystem 24.11.17, the latest version of the all-in-one system maintenance tool. This release brings some minor fixes and improvements with visual changes that you will love. I’m excited to share the details of the latest update to uCareSystem! With this release, the focus is on refining the user experience and modernizing […]
on November 17, 2024 12:18 AM

November 12, 2024

Complex for Whom?

Paul Tagliamonte

In basically every engineering organization I’ve ever regarded as particularly high functioning, I’ve sat through one specific recurring conversation which is not – a conversation about “complexity”. Things are good or bad because they are or aren’t complex, architectures needs to be redone because it’s too complex – some refactor of whatever it is won’t work because it’s too complex. You may have even been a part of some of these conversations – or even been the one advocating for simple light-weight solutions. I’ve done it. Many times.

Rarely, if ever, do we talk about complexity within its rightful context – complexity for whom. Is a solution complex because it’s complex for the end user? Is it complex if it’s complex for an API consumer? Is it complex if it’s complex for the person maintaining the API service? Is it complex if it’s complex for someone outside the team maintaining it to understand? Complexity within a problem domain I’ve come to believe, is fairly zero-sum – there’s a fixed amount of complexity in the problem to be solved, and you can choose to either solve it, or leave it for those downstream of you to solve that problem on their own.

That being said, while I believe there is a lower bound in complexity to contend with for a problem, I do not believe there is an upper bound to the complexity of solutions possible. It is always possible, and in fact, very likely that teams create problems for themselves while trying to solve a problem. The rest of this post is talking to the lower bound. When getting feedback on an early draft of this blog post, I’ve been informed that Fred Brooks coined a term for what I call “lower bound complexity” – “Essential Complexity”, in the paper “No Silver Bullet—Essence and Accident in Software Engineering”, which is a better term and can be used interchangeably.

Complexity Culture

In a large enough organization, where the team is high functioning enough to have and maintain trust amongst peers, members of the team will specialize. People will begin to engage with subsets of the work to be done, and begin to have their efficacy measured against that part of the organization’s problems. Incentives shift, and over time it becomes increasingly likely that two engineers may have two very different priorities when working on the same system together. Someone accountable for uptime and tasked with responding to outages will begin to resist changes. Someone accountable for rapidly delivering features will resist gates between them and their users. Companies (either wittingly or unwittingly) will deal with this by tasking engineers with both production (feature development) and operational tasks (maintenance), so the difference in incentives isn’t usually as bad as it could be.

When we get a bunch of folks from far-flung corners of an organization in a room, fire up a slide deck and throw up some aspirational to-be architecture diagram in order to get a sign-off to solve some problem (be it someone needs a credible promotion packet, new feature needs to get delivered, or the system has begun to fail and needs fixing), the initial reaction will, more often than I’d like, start to devolve into a discussion of how this is going to introduce a bunch of complexity, going to be hard to maintain, why can’t you make it less complex?

Right around here is when I start to try and contextualize the conversation happening around me – understand what complexity it is that’s being discussed, and understand who is taking on that burden. Think about who should be owning that problem, and work through the tradeoffs involved. Is it best solved here, or left to consumers (be them other systems, developers, or users)? Should something become an API call’s optional param, taking on all the edge-cases and so on, or should users have to implement the logic using the data you return (leaving everyone else to take on all the edge-cases and maintenance)? Should you process the data, or require the user to preprocess it for you?

Frequently it’s right to make an active and explicit decision to simplify and leave problems to be solved downstream, since they may not actually need to be solved – or perhaps you expect consumers will want to own the specifics of how the problem is solved, in which case you leave lots of documentation and examples. Many other times, especially when it’s something downstream consumers are likely to hit, it’s best solved internal to the system, since the only thing that can come of leaving it unsolved are bugs, frustration and half-correct solutions. This is a grey-space of tradeoffs, not a clear decision tree. No one wants the software manifestation of a katamari ball or a junk drawer, nor does anyone want a half-baked service unable to handle the simplest use-case.

Head-in-sand as a Service

Popoffs about how complex something is, are, to a first approximation, best understood as meaning “complicated for the person making comments”. A lot of the #thoughtleadership believe that an AWS hosted EKS k8s cluster running images built by CI talking to an AWS hosted PostgreSQL RDS is not complex. They’re right. Mostly right. This is less complex – less complex for them. It’s not, however, without complexity and its own tradeoffs – it’s just complexity that they do not have to deal with. Now they don’t have to maintain machines that have pesky operating systems or hard drive failures. They don’t have to deal with updating the version of k8s, nor ensuring the backups work. No one has to push some artifact to prod manually. Deployments happen unattended. You click a button and get a cluster.

On the other hand, developers outside the ops function need to deal with troubleshooting CI, debugging access control rules encoded in turing complete YAML, permissions issues inside the cluster due to whatever the fuck a service mesh is, everyone needs to learn how to use some k8s tools they only actually use during a bad day, likely while doing some x.509 troubleshooting to connect to the cluster (an internal only endpoint; just port forward it) – not to mention all sorts of rules to route packets to their project (a single repo’s binary being run in 3 containers on a single vm host).

Beyond that, there’s the invisible complexity – complexity on the interior of a service you depend on. I think about the dozens of teams maintaining the EKS service (which is either run on EC2 instances, or alternately, EC2 instances in a trench coat, moustache and even more shell scripts), the RDS service (also EC2 and shell scripts, but this time accounting for redundancy, backups, availability zones), scores of hypervisors pulled off the shelf (xen, kvm) smashed together with the ones built in-house (firecracker, nitro, etc) running on hardware that has to be refreshed and maintained continuously. Every request processed by network ACL rules, AWS IAM rules, security group rules, using IP space announced to the internet wired through IXPs directly into ISPs. I don’t even want to begin to think about the complexity inherent in how those switches are designed. Shitloads of complexity to solve problems you may or may not have, or even know you had.

What’s more complex? An app running in an in-house 4u server racked in the office’s telco closet in the back running off the office Verizon line, or an app running four hypervisors deep in an AWS datacenter? Which is more complex to you? What about to your organization? In total? Which is more prone to failure? Which is more secure? Is the complexity good or bad? What type of Complexity can you manage effectively? Which threaten the system? Which threaten your users?

COMPLEXIVIBES

This extends beyond Engineering. Decisions regarding “what tools are we able to use” – be they existing contracts with cloud providers, CIO-mandated SaaS products, a list of the only permissible open source projects – will incur costs in terms of expressed “complexity”. Pinning open source projects to a fixed set makes SBOM production “less complex”. Using only one SaaS provider’s product suite (even if it’s terrible, because it has all the types of tools you need) makes accreditation “less complex”. If all you have is a contract with Pauly T’s lowest-price technically acceptable artisanal cloudary and haberdashery, the way you pay for your compute is “less complex” for the CIO shop, though you will find yourself building your own hosted database template, your own mechanism to spin up a k8s cluster, and all the operational and technical burden that comes with it. Or you won’t, and it becomes everyone else’s problem in the organization. Nothing you can do will solve for the fact that you must now deal with this problem somewhere, because it was less complicated for the business to put the workloads on the existing contract with a cut-rate vendor.

Suddenly, the decision to “reduce complexity” because of an existing contract vehicle has resulted in a huge amount of technical risk and maintenance burden being onboarded. Complexity you would otherwise externalize has now been taken on internally. With large enough organizations (specifically, in this case, I’m talking about you, bureaucracies), this is largely ignored or accepted as normal since the personnel cost is understood to be free to everyone involved. Doing it this way is more expensive, more work, less reliable and less maintainable, and yet, somehow, is, in a lot of ways, “less complex” to the organization. It’s particularly bad with bureaucracies, since screwing up a contract will get you into much more trouble than delivering a broken product, leaving basically no reason for anyone to care to fix this.

I can’t shake the feeling that for every story of technical mandates gone awry, somewhere just out of sight there’s a decisionmaker optimizing, as best they can, for what they believe to be the least amount of complexity – least hassle, fewest unique cases, most consistency. They freely offload complexity from their accreditation and risk acceptance functions through mandates. They will never have to deal with it. That does not change the fact that someone does.

TC;DR (TOO COMPLEX; DIDN’T REVIEW)

We wish to rid ourselves of systemic Complexity – after all, complexity is bad, simplicity is good. Removing upper-bound own-goal complexity (“accidental complexity” in Brooks’s terms) is important, but once you hit the lower-bound complexity, the tradeoffs become zero-sum. Removing complexity from one part of the system means that it must grow back somewhere else – maybe outside your organization or in a non-engineering function. Sometimes the opposite is the case, such as when a previously manual business process is automated. Maybe that’s a good idea. Maybe it’s not. All I know is that what doesn’t help the situation is conflating complexity with everything we don’t like – legacy code, maintenance burden or toil, cost, delivery velocity.

  • Complexity is not the same as proclivity to failure. The most reliable systems I’ve interacted with are unimaginably complex, with layers of internal protection to prevent complete failure. This has its own set of costs which other people have written about extensively.
  • Complexity is not cost. Sometimes the cost of taking all the complexity in-house is less, for whatever value of cost you choose to use.
  • Complexity is not absolute. Something simple from one perspective may be wildly complex from another. The impulse to burn down complex sections of code is helpful to have generally, but sometimes things are complicated for a reason, even if that reason exists outside your codebase or organization.
  • Complexity is not something you can remove without introducing complexity elsewhere. Just as not making a decision is a decision itself; choosing to require someone else to deal with a problem rather than dealing with it internally is a choice that needs to be considered in its full context.

Next time you’re sitting through a discussion and someone starts to talk about all the complexity about to be introduced, I want to pop up in the back of your head, politely asking: what does complex mean in this context? Is it lower-bound complexity? Is this complexity desirable? Does what they’re saying mean something along the lines of I don’t understand the problems being solved, or something along the lines of this problem should be solved elsewhere? Do they believe this will result in more work for them in a way that you don’t see? Should this not be solved at all, by changing the bounds of what we accept or redefining the understood limits of this system? Is the perceived complexity a result of a decision made elsewhere? Who’s taking this complexity on – or, more to the point, if complexity required by the problem goes unaddressed here, who is left to deal with it? Does it impact others? How specifically? What are you not seeing?

What can change?

What should change?

on November 12, 2024 08:21 PM

November 04, 2024

My Keys

Stuart Langridge

I have a problematic relationship with keys.

Well, that's not true. I have a problematic relationship with key rings. For some reason, my pockets are a violently hostile environment for things I put in them. I don't really understand why this is, but it's true. Keyrings bend out of shape; the concentric rings separate, and my actual keys fall off of them. People have expressed scepticism about this in the past, and they've been wrong and I've been right. The last time I complained about this, I thought I'd come up with a solution where I bought a keyring which was a tiny padlock. It lasted three days before a bolt sheared off. You can see the whole thing on posts made to twitter.

At that point most people would give up, or be sad, or just live with split rings continually letting them down. But most people don't have a dad who is a king of engineering.

My dad made me this.

It's a keyring. It's a solid block of brass with the middle cut out, so it looks like a very shallow "U", or like three sides of a long rectangle. There's a hole drilled in each of the short ends, and a long bolt is threaded through those holes. On one end of the bolt is a nut, tight against the outside of the "U", and the bolt protrudes out about an inch where there's another, locking, nut. All the keys are hung from the bolt. To add a new key, I undo the locking nut on the end, undo the tight nut, pull the bolt out, hang another key from it, and then do everything back up. It's brilliant. I've not had a single problem with it.

Those of you carefully studying the picture will notice that there is writing on the brass "U". (And will also notice that I've blacked out the details of the actual keys, because you can cut a key based on a picture, and I'm not stupid.) That's engraving, which mentions my website, so if I lose my keys (which I am really careful to not do1) then whoever finds them can get in touch with me to tell me that happened but my address is not on the keys, so a nefarious finder gets less benefit from it.

I love my keyring. It's the best. I do not know why more keyrings are not like this. It works just like a normal keyring (I have non-key things on mine, such as a USB stick and a tiny flashlight, but they would go on a regular split-ring keyring as well), but it doesn't just fail all the time like normal ones do. I surely can't be the only person who experiences this? Anyway, I don't mind, 'cos I have the solution. Cheers, dad. Maybe I should make this a product or something.

  1. the historical version of checking your pockets, as a man, was to feel for "spectacles, testicles, wallet, and watch" -- this was actually a ribald mnemonic for how to cross yourself as a Catholic, but this modern day man checks his pockets for keys, wallet, and phone in the same way to check they're not lost
on November 04, 2024 10:50 PM

November 01, 2024

Almost all of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Ansible

I noticed that Ansible had fallen out of Debian testing due to autopkgtest failures. This seemed like a problem worth fixing: in common with many other people, we use Ansible for configuration management at Freexian, and it probably wouldn’t make our sysadmins too happy if they upgraded to trixie after its release and found that Ansible was gone.

The problems here were really just slogging through test failures in both the ansible-core and ansible packages, but their test suites are large and take a while to run so this took some time. I was able to contribute a few small fixes to various upstreams in the process:

This should now get back into testing tomorrow.

OpenSSH

Martin-Éric Racine reported that ssh-audit didn’t list the ext-info-s feature as being available in Debian’s OpenSSH 9.2 packaging in bookworm, contrary to what OpenSSH upstream said on their specifications page at the time. I spent some time looking into this and realized that upstream was mistakenly saying that implementations of ext-info-c and ext-info-s were added at the same time, while in fact ext-info-s was added rather later. ssh-audit now has clearer output, and the OpenSSH maintainers have corrected their specifications page.

I looked into a report of an ssh failure in certain cases when using GSS-API key exchange (which is a Debian patch). Once again, having integration tests was a huge win here: the affected scenario is quite a fiddly one, but I was able to set it up in the test, and thereby make sure it doesn’t regress in future. It still took me a couple of hours to get all the details right, but in the past this sort of thing took me much longer with a much lower degree of confidence that the fix was correct.

On upstream’s advice, I cherry-picked some key exchange fixes needed for big-endian architectures.

Python team

I packaged python-evalidate, needed for a new upstream version of buildbot.

The Python 3.13 transition rolls on. I fixed problems related to it in htmlmin, humanfriendly, postgresfixture (contributed upstream), pylint, python-asyncssh (contributed upstream), python-oauthlib, python3-simpletal, quodlibet, zope.exceptions, and zope.interface.

A trickier Python 3.13 issue involved the cgi module. Years ago I ported zope.publisher to the multipart module because cgi.FieldStorage was broken in some situations, and as a result I got a recommendation into Python’s “dead batteries” PEP 594. Unfortunately there turns out to be a name conflict between multipart and python-multipart on PyPI; python-multipart upstream has been working to disentangle this, though we still need to work out what to do in Debian. All the same, I needed to fix python-wadllib and multipart seemed like the best fit; I contributed a port upstream and temporarily copied multipart into Debian’s python-wadllib source package to allow its tests to pass. I’ll come back and fix this properly once we sort out the multipart vs. python-multipart packaging.

tzdata moved some timezone definitions to tzdata-legacy, which has broken a number of packages. I added tzdata-legacy build-dependencies to alembic and python-icalendar to deal with this in those packages, though there are still some other instances of this left.

I tracked down an nltk regression that caused build failures in many other packages.

I fixed Rust crate versioning issues in pydantic-core, python-bcrypt, and python-maturin (mostly fixed by Peter Michael Green and Jelmer Vernooij, but it needed a little extra work).

I fixed other build failures in entrypoints, mayavi2, python-pyvmomi (mostly fixed by Alexandre Detiste, but it needed a little extra work), and python-testing.postgresql (ditto).

I fixed python3-simpletal to tolerate future versions of dh-python that will drop their dependency on python3-setuptools.

I fixed broken symlinks in python-treq.

I removed (build-)depends on python3-pkg-resources from alembic, autopep8, buildbot, celery, flufl.enum, flufl.lock, python-public, python-wadllib (contributed upstream), pyvisa, routes, vulture, and zodbpickle (contributed upstream).

I upgraded astroid, asyncpg (fixing a Python 3.13 failure and a build failure), buildbot (noticing an upstream test bug in the process), dnsdiag, frozenlist, netmiko (fixing a Python 3.13 failure), psycopg3, pydantic-settings, pylint, python-asyncssh, python-bleach, python-btrees, python-cytoolz, python-django-pgtrigger, python-django-test-migrations, python-gssapi, python-icalendar, python-json-log-formatter, python-pgbouncer, python-pkginfo, python-plumbum, python-stdlib-list, python-tokenize-rt, python-treq (fixing a Python 3.13 failure), python-typeguard, python-webargs (fixing a build failure), pyupgrade, pyvisa, pyvisa-py (fixing a Python 3.13 failure), toolz, twisted, vulture, waitress (fixing CVE-2024-49768 and CVE-2024-49769), wtf-peewee, wtforms, zodbpickle, zope.exceptions, zope.interface, zope.proxy, zope.security, and zope.testrunner to new upstream versions.

I tried to fix a regression in python-scruffy, but I need testing feedback.

I requested removal of python-testing.mysqld.

on November 01, 2024 12:19 PM

October 22, 2024

Two Plumbers

Stuart Langridge

In a land far away, there were two brothers, two plumbers. To preserve their anonymity, we'll call them... Mario and Luigi. Their mother, a kind and friendly woman, and their father, a man with (by the laws of averages and genetics) a truly gargantuan moustache, raised them both to be kind and friendly (and moustachioed) in their turn. There was enough work in the town to keep both the plumbers busy, and they each grew through apprentice to journeyman to experience and everyone liked them. They both cared about the job, about their clients, and they each did good work, always going the extra mile, doing more than was necessarily asked for, putting in an extra hour to tighten that pipe or fit a better S-bend or clean up the poor workmanship of lesser craftsmen and cowboys. They were happy. Even their rivalry for each job was good-humoured, a friendly source of amusement to them and to the town. Sometimes people would flip a coin to choose which to ring, having no way to choose between them, and Mario would laugh and suggest that he should have two-headed coins made, or Luigi would laugh and say that that ought to make it his turn next.

But there came a time of downturn, when the people of the town had to hold tighter to their purses, and fewer called out for plumbers. And Luigi, after much thought, decided to take a job with Bowser's, the big plumbing conglomerate from the city. He was worried: the big company were often slapdash or inexperienced in their work, and discourteous or evasive to their clients, and more interested in bottom lines than hot water lines. But they paid extremely well, and they had the latest tools, and there was security in having a contract and a title and a boss. Besides, Bowser's worked for so many more people that Luigi's own skills could only help that many more. Maybe he could even teach them something about quality, and craftsmanship, and care. He suggested to Mario that they both joined, and Mario thought hard about it, and eventually decided not to, though it was a close-run thing. Both the brothers shook hands on it, respecting one another's decision, although in the silence of their hearts each was a little disappointed in the other.

Luigi did well at Bowser's. He was right about the latest tools, and about the pay, and about the security. And he was partially right about teaching the big company something about quality. His work was often better than his colleagues, sometimes through expertise but most often because he tried harder: he loved the work, and wanted to do well, and was kind and friendly when he could be. But sometimes, try though he might, the time wasn't there, or the parts weren't in the van, and these things were not his fault; someone else at the big company had cut corners on their job and that forced Luigi to cut corners on his and make people sad and angry, or put in more time to fix it than he would have spent doing it all correctly himself in the first place. He pushed hard inside the company to fix these things, and he had some successes; a policy was written suggesting that employees work harder to improve customer happiness, and many customers across the land were made a little happier as a result. Luigi won an award. He trained some apprentices, and many of his little ways of making people happier and the job better were adopted into the company training scheme. One time he went home after another argument with his boss about the things that were not adopted, and that night he looked enviously out of their window at his brother's house across the street, thinking that it would be a fine thing to not have a boss who stopped you from doing things right.

Mario did well working for himself. The time of downturn ended and things began to pick up again, maybe not quite to where they had been but nearly there for all that, and the phone calls and messages came in once more. Everyone was pleased to see him, and although he maybe took a little longer than the men from the big company, his work was never slapdash, always taking the time to do it right. And he had less money, but he really didn't mind, or begrudge it; he had enough to get by, and he loved the work, and wanted to do well, and was kind and friendly. He did envy his brother's toolbox, though, all the latest gear while Mario himself made do with things a little older, a little rustier, but they were all good quality tools that he understood, and the work was as good and better. In November one year a very expensive plumber's inspection camera was stolen from his brother's van, and Mario thought that it would have been great to have such a thing and maybe he would have taken better care, and then he felt guilty about thinking that of his brother. He felt guiltier still when on Christmas morning he opened the box from Luigi to find an expensive inspection camera in it. But then his brother winked at him and put a finger to his lips, and all was well between them again. One time Mario was up to his waist in the drain outside a house, raindrops rattling on his hat and cursing the god who invented backflow, when he saw his brother drive past all unknowing in his modern van, windows wound up and singing along with the radio, and he looked enviously after the van's lights in the storm, thinking that it would be a fine thing to have just a notch more comfort and influence and two fewer wet knees.

on October 22, 2024 08:00 PM

October 20, 2024

I am using pretty much the exact same setup I did in 2020. Let's see who is more efficient in a live session!

But first let's take a look at the image sizes:

Image size (in G):

  • Ubuntu: 5.8
  • Xubuntu: 3.9
  • Xubuntu-minimal: 2.5
  • Kubuntu: 4.1
  • Lubuntu: 3.1
  • Ubuntu Mate: 4
  • Manjaro 24.1 (KDE): 3.9
  • Linux Mint 22 (Cinnamon): 2.8
  • Fedora 40 (Gnome): 2.2
  • Endless OS 6: 3.9

Charge Open Movie is what I viewed, if I could make it to YouTube.

I decided to be more selective and removed those that did very poorly at 1.5G, which was most of them.

  • Ubuntu - booted but desktop not stable, took 1.5 minutes to load Firefox
  • Xubuntu-minimal - does not include a web browser so can't test further. Snap is preinstalled even though no snap apps are - trying to install a web browser worked, but it couldn't start.
  • Manjaro KDE - desktop loads, but browser doesn't
  • Xubuntu - laggy when Firefox is opened, can't load sites
  • Ubuntu Mate - laggy when Firefox is opened, can't load sites
  • Kubuntu - laggy when Firefox is opened, can't load sites
  • Linux Mint 22 - desktop loads, browser isn't responsive

Memory usage compared (in G), measured at three points (desktop responsive / web browser loads simple site / YouTube worked fullscreen):

  • Lubuntu: 0.4 / 0.9 / 1.1
  • Endless OS 6.0: 1.0 / 1.0 / 1.3
  • Fedora 40: 0.7 / 1.1 / 1.4

Fedora's video is a bit laggy, but watchable. Endless OS with Chromium is the smoothest and most responsive watching YouTube.

For fun let's look at startup time with 2GB (with me hitting buttons as needed to open a folder)

Startup time (in seconds):

  • Lubuntu: 33
  • Endless OS 6.0: 93
  • Fedora 40: 45

Conclusion

  • Lubuntu lowered its memory usage for loading a desktop from 585M in 2020 to 450M! Kudos to the Lubuntu team!
  • Both the Fedora and Endless desktops worked in less memory than in 2020 too!
  • Lubuntu, Fedora and Endless all used Zram.
  • Chromium has definitely improved its memory usage; last time Endless got dinged for using it. Now it appears to work better than Firefox.

Notes:

  • qemu-system-x86_64 -enable-kvm -cdrom lubuntu-24.04.1-desktop-amd64.iso -m 1.5G -smp 4 -cpu host -vga virtio --full-screen
  • Screen size was set to 1080p/60Hz.
  • I tried to reproduce 585M on Lubuntu 20.04 build, but it failed on anything below 1G.
  • Getting out of full screen on YouTube apparently is an intensive task. Dropped testing that.
  • All Ubuntu was 24.04.1 LTS.
on October 20, 2024 12:54 AM

October 15, 2024


What is an “online” system?

Networking is a complex topic, and there is lots of confusion around the definition of an “online” system. Sometimes the boot process gets delayed up to two minutes, because the system still waits for one or more network interfaces to be ready. Systemd provides the network-online.target that other service units can rely on, if they are deemed to require network connectivity. But what does “online” actually mean in this context, is a link-local IP address enough, do we need a routable gateway and how about DNS name resolution?

The requirements for an “online” network interface depend very much on the services using an interface. For some services it might be good enough to reach their local network segment (e.g. to announce Zeroconf services), while others need to reach domain names (e.g. to mount a NFS share) or reach the global internet to run a web server. On the other hand, the implementation of network-online.target varies, depending on which networking daemon is in use, e.g. systemd-networkd-wait-online.service or NetworkManager-wait-online.service. For Ubuntu, we created a specification that describes what we as a distro expect an “online” system to be. Having a definition in place, we are able to tackle the network-online-ordering issues that got reported over the years and can work out solutions to avoid delayed boot times on Ubuntu systems.

In essence, we want systems to reach the following networking state to be considered online:

  1. Do not wait for “optional” interfaces to receive network configuration
  2. Have IPv6 and/or IPv4 “link-local” addresses on every network interface
  3. Have at least one interface with a globally routable connection
  4. Have functional domain name resolution on any routable interface
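
To put point 1 into practice, Netplan already lets you mark individual interfaces as optional, so the wait-online logic does not block boot on them. A minimal, illustrative sketch (interface names are invented) could look like this:

network:
  version: 2
  ethernets:
    eno1:
      dhcp4: true       # primary NIC, expected to become routable
    enp5s0:
      dhcp4: true
      optional: true    # secondary NIC; do not delay boot waiting for it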

A common implementation

NetworkManager and systemd-networkd are two very common networking daemons used on modern Linux systems. But they originate from different contexts and therefore show different behaviours in certain scenarios, such as wait-online. Luckily, on Ubuntu we already have Netplan as a unification layer on top of those networking daemons, that allows for common network configuration, and can also be used to tweak the wait-online logic.

With the recent release of Netplan v1.1 we introduced initial functionality to tweak the behaviour of the systemd-networkd-wait-online.service, as used on Ubuntu Server systems. When Netplan is used to drive the systemd-networkd backend, it will emit an override configuration file in /run/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf, listing the specific non-optional interfaces that should receive link-local IP configuration. In parallel to that, it defines a list of network interfaces that Netplan detected to be potential global connections, and waits for any of those interfaces to reach a globally routable state.

Such an override config file might look like this:

[Unit]
ConditionPathIsSymbolicLink=/run/systemd/generator/network-online.target.wants/systemd-networkd-wait-online.service

[Service]
ExecStart=
ExecStart=/lib/systemd/systemd-networkd-wait-online -i eth99.43:carrier -i lo:carrier -i eth99.42:carrier -i eth99.44:degraded -i bond0:degraded
ExecStart=/lib/systemd/systemd-networkd-wait-online --any -o routable -i eth99.43 -i eth99.45 -i bond0

In addition to the new features implemented in Netplan, we reached out to upstream systemd, proposing an enhancement to the systemd-networkd-wait-online service, integrating it with systemd-resolved to check for the availability of DNS name resolution. Once this is implemented upstream, we’re able to fully control the systemd-networkd backend on Ubuntu Server systems, to behave consistently and according to the definition of an “online” system that was lined out above.

Future work

The story doesn’t end there, because Ubuntu Desktop systems are using NetworkManager as their networking backend. This daemon provides its very own nm-online utility, utilized by the NetworkManager-wait-online systemd service. It implements a much higher-level approach, looking at the networking daemon in general instead of the individual network interfaces. By default, it considers a system to be online once every “autoconnect” profile has been activated (or failed to activate), meaning that either an IPv4 or an IPv6 address got assigned.

There are considerable enhancements to be implemented in this tool for it to be controllable in a fine-grained way, similar to systemd-networkd-wait-online, so that it can be instructed to wait for specific networking states on selected interfaces.
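
In the meantime, the existing behaviour can at least be probed and bounded. The following sketch shows how one might check NetworkManager’s notion of “online” and cap how long NetworkManager-wait-online.service may delay boot; flags and paths are as documented in nm-online(1) and may vary by release, so treat this as an assumption to verify on your system:

# Ask NetworkManager whether it currently considers the system online,
# waiting at most 10 seconds (exit code 0 means online):
nm-online -q -t 10

# Cap the boot-time wait via a systemd drop-in for the wait-online service:
sudo systemctl edit NetworkManager-wait-online.service
# [Service]
# ExecStart=
# ExecStart=/usr/bin/nm-online -s -q -t 30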

A note of caution

Making a service depend on network-online.target is considered an antipattern in most cases. This is because networking on Linux systems is very dynamic and the systemd target can only ever reflect the networking state at a single point in time. It cannot guarantee that this state will be maintained over the uptime of your system, and it has the potential to delay the boot process considerably. Cables can be unplugged, wireless connectivity can drop, or remote routers can go down at any time, affecting the connectivity state of your local system. Therefore, “instead of wondering what to do about network.target, please just fix your program to be friendly to dynamically changing network configuration.” [source].
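
For reference, the dependency being cautioned against is typically expressed like this in a service’s unit file (a generic sketch, not a recommendation):

[Unit]
# Delays this service until the system is considered "online" at boot,
# but says nothing about connectivity later in the service's lifetime.
Wants=network-online.target
After=network-online.target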

on October 15, 2024 07:33 AM

October 14, 2024

Happy 28th Birthday KDE!

Sorry my blog updates have been MIA. Let me tell you a story…

As some of you know, 3 months ago I was in a no fault car accident. Thankfully, the only injury was I ended up with a broken arm. ER sends me home in a sling and tells me it was a clean break and it will mend itself in no time. After a week of excruciating pain I went to my follow up doctor appointment, and with my x-rays in hand, the doc tells me it was far from a clean break and needs surgery. So after a week of my shattered bone scraping my nerves and causing pain I have never felt before, I finally go in for surgery! They put in a metal plate with screws to hold the bone in place so it can properly heal. The nerve pain was gone, so I thought I was on the mend. Some time goes by and the swelling still has not subsided, the doctors are not as concerned about this as I am, so I carry on until it becomes really inflamed and developed fever blisters. After no success in reaching the doctors office my husband borrows the neighbors car and rushes me to the ER. Good thing too, I had an infection. So after a 5 day stay in the hospital, they sent us home loaded with antibiotics and trained my husband in wound packing. We did everything right, kept the place immaculate, followed orders with the wound care, took my antibiotics, yet when they ran out there was still no sign of relief, or healing. Went to doctors and they gave me another month supply of antibiotics. Two days after my final dose my arm becomes inflamed again and with extra spectacular levels of pain to go with it. I call the doctor office… They said to come in on my appointment day ( 4 days away ). I asked, “You aren’t concerned with this inflammation?”, to which they replied, “No.”. Ok, maybe I am over reacting and it’s all in my head, I can power through 4 more days. The following morning my husband observed fever blisters and the wound site was clearly not right, so once again off we go to the ER. Well… thankfully we did. I was in Sepsis and could have died… After deliberating with the doctor on the course of action for treatment, the doctor accepted our plea to remove the plate, rather than tighten screws and have me drive 100 miles to hospital everyday for iv antibiotics (Umm I don’t have a car!?) So after another 4 day stay I am released into the world, alive and well. I am happy to report, the swelling is almost gone, the pain is minimal, and I am finally healing nicely. I am still in a sling and I have to be super careful and my arm was not fully knitted. So with that I am bummed to say, no traveling for me, no Ubuntu Summit 🙁

I still need help with that car, if it weren’t for our neighbor, this story would have ended much differently.

https://gofund.me/00942f47

Despite my tragic few months for my right arm, my left arm has been quite busy. Thankfully I am a lefty! On to my work progress report.

Kubuntu:

Kubuntu 24.10 shipped with Plasma 6! A big thank you to the Debian KDE/Qt team and Rik Mills; I could not have done it without you!

KDE Snaps:

All release service snaps are done, save a few problematic ones that are still WIP. I have released 24.08.2, which you can find here:

https://snapcraft.io/publisher/kde

I completed the qt6 and KDE frameworks 6 content packs for core24

Snapcraft:

I have a PR in for kde-neon-6 extension core24 support.

That’s all for now. Thanks for stopping by!

on October 14, 2024 08:58 PM

October 13, 2024

In today’s rapidly evolving tech world, the need for fast and efficient data management is more critical than ever. One name that frequently stands out in the NoSQL database world is Redis. Since its introduction in 2009, Redis has become a go-to choice for real-time applications that require exceptional speed and flexibility in handling data.

In this article, we’ll explore the history of Redis, how it’s used, and the benefits it offers to various modern applications.

The History of Redis: Origins and Evolution

Redis, which stands for Remote Dictionary Server, was developed by Salvatore Sanfilippo in 2009. Initially launched as an open-source project to address scalability issues faced by large-scale systems, Redis quickly gained popularity among developers for its ability to process data at lightning speeds.

Redis operates as an in-memory database, meaning it stores all data in RAM rather than on disk. This design enables Redis to deliver significantly faster performance compared to traditional databases, making it ideal for applications that demand real-time speed.

How is Redis Used?

One of the primary reasons Redis is so popular is its flexibility, allowing it to be used in various scenarios. Here are some real-world examples of how Redis is utilized:

  1. Caching
    Redis is well-known for its use in caching due to its speed. By storing data in memory, Redis drastically reduces the time it takes to retrieve data. This is especially useful in web applications where users need instant access to information such as previously loaded pages, images, or API data.
  2. Session Management
    Many large platforms use Redis to store user session information. When users log into a system, Redis can store their session data in memory, ensuring quick access. This is crucial for maintaining a smooth user experience without delays.
  3. Real-Time Analytics
    In a data-driven world, companies need instant analytics to make informed decisions. Redis enables companies to process and analyze data in real time, such as tracking user behavior on websites, monitoring IoT devices, or analyzing financial transactions as they occur.
  4. Message Queuing
    Redis is also widely used for message queuing via its Pub/Sub (Publisher/Subscriber) feature. This is particularly helpful in systems where real-time communication between services or applications is required, such as notification systems or instant messaging services.
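
Tying together the caching and message-queuing use cases above, here is a minimal sketch using the redis-py client. It assumes a Redis server is running on localhost; key names and payloads are invented:

import redis

# Connect to a local Redis server (decode_responses returns str instead of bytes)
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Caching: store an expensive-to-render value with a 60-second expiry
r.set("page:/home", "<html>rendered page...</html>", ex=60)
cached = r.get("page:/home")  # returns None once the key has expired

# Pub/Sub: one side subscribes to a channel, another publishes to it
sub = r.pubsub()
sub.subscribe("notifications")
r.publish("notifications", "new order received")
for message in sub.listen():
    if message["type"] == "message":
        print("got:", message["data"])
        break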

The Benefits of Redis: What Makes It Great?

Incredible Speed
Redis stands out because of its speed. As an in-memory database, Redis delivers sub-millisecond response times, making it one of the fastest technologies available for data management. This is why it is often the preferred choice for real-time applications.

Versatile Data Structures
Another feature that sets Redis apart is its support for various data structures like strings, lists, sets, and hashes. This versatility allows developers to use Redis in a wide range of scenarios, from storing user information to managing complex data in e-commerce systems.
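
As a quick illustration, those data structures map directly to redis-cli commands (the key names here are made up):

SET user:42:name "Alice"                 # string
LPUSH jobs "resize-image"                # list, usable as a simple queue
SADD tags:post:7 "linux" "redis"         # set of unique members
HSET user:42 name "Alice" role "admin"   # hash (field/value map)
HGETALL user:42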

Persistence Options
Even though Redis stores data in memory, it also offers persistence options, allowing users to periodically save data to disk. This provides an added layer of security in case of system failures, ensuring that data is backed up and recoverable.
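
These persistence options are controlled in redis.conf; the values below are illustrative rather than recommended settings:

# RDB snapshotting: write a dump to disk if at least 1 key changed in 900 seconds
save 900 1

# Append-only file: log every write for finer-grained recovery
appendonly yes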

Easy Scalability
Redis is easily scalable, whether vertically (by adding more RAM) or horizontally (by adding more Redis servers). This is essential for growing applications where the need to process more data increases over time.

Conclusion

Redis has proven itself to be one of the most powerful tools in modern data management. Its incredible speed, support for multiple data types, and scalability make it the top choice for real-time applications. Whether you’re a developer building web apps or a company looking to process real-time analytics, Redis is a technology worth exploring.


There you have it—a brief guide to Redis and the benefits it brings. This technology not only accelerates application performance but also provides a flexible and reliable solution for managing data at scale.

The post Redis: The Powerhouse Behind Modern Databases appeared first on 9M2PJU - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

on October 13, 2024 11:33 AM

FreeBSD vs. Ubuntu: A Comparison

Faizul "Piju" 9M2PJU

When it comes to choosing an operating system for your projects, two names often come up: FreeBSD and Ubuntu. Both have unique strengths and characteristics that make them suitable for different tasks. In this post, we’ll dive deep into the differences and similarities between these two powerful systems, helping you determine which one is the best fit for your needs.

Overview of FreeBSD and Ubuntu

FreeBSD

FreeBSD is an operating system that is derived from the Berkeley Software Distribution (BSD). Known for its performance and advanced networking features, FreeBSD provides a robust environment ideal for servers, embedded systems, and networking applications. The entire operating system, from the kernel to the userland tools, is developed from a single source, which helps ensure consistency and stability.

Ubuntu

Ubuntu is a popular Linux distribution based on Debian. It is widely used for both desktop and server environments due to its user-friendliness and extensive software repositories. Ubuntu emphasizes ease of use and regular updates, making it a favorite among beginners and experienced users alike.

Key Comparisons

1. System Base

  • FreeBSD: The entire OS is developed from a single source, providing a consistent and cohesive experience. This unified approach allows for seamless integration between the kernel and userland tools.
  • Ubuntu: As a Linux-based system, Ubuntu relies on the Debian base. While it offers a rich ecosystem of software, the diversity of packages can sometimes lead to compatibility issues.

2. Performance & Efficiency

  • FreeBSD: Renowned for its lightweight and minimal design, FreeBSD excels in server environments where performance is critical. It manages system resources efficiently, making it ideal for high-traffic applications.
  • Ubuntu: While Ubuntu performs well in most situations, its default installation comes with a variety of services and applications that can consume more system resources than necessary.

3. Software Availability

  • FreeBSD: With its Ports Collection and package management system, FreeBSD offers access to over 40,000 software options. However, it may lack some of the more niche applications available on Linux.
  • Ubuntu: As one of the most popular Linux distributions, Ubuntu boasts extensive software repositories, providing compatibility with nearly all Linux applications. This makes it a go-to choice for developers and users looking for variety.
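
To make the tooling difference concrete, installing the same piece of software (nginx, as an example) looks like this on each system:

# FreeBSD: binary packages via pkg, or build from the Ports Collection
pkg install nginx
cd /usr/ports/www/nginx && make install clean

# Ubuntu: APT with the Debian/Ubuntu repositories
sudo apt update && sudo apt install nginx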

4. Security

  • FreeBSD: Security is a core focus of FreeBSD. It features built-in security mechanisms such as jails (which provide a form of lightweight virtualization) and a strong emphasis on minimizing vulnerabilities.
  • Ubuntu: While Ubuntu is secure and receives regular updates, its wider range of installed software can lead to a larger attack surface. However, it also offers tools like AppArmor for enhanced security.
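
As a small, illustrative example of the day-to-day security tooling on each side (commands as documented; verify on your release):

# FreeBSD: audit installed packages against the vulnerability database
pkg audit -F

# Ubuntu: check which AppArmor profiles are loaded and in enforce mode
sudo aa-status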

5. Community & Support

  • FreeBSD: The FreeBSD community may be smaller, but it is dedicated and knowledgeable. Comprehensive documentation is available, ensuring users have access to the resources they need.
  • Ubuntu: Ubuntu has a large and active community, along with professional support available through Canonical. The extensive community means users can find help quickly, whether through forums or official channels.

6. Use Cases

  • FreeBSD: Ideal for servers, network appliances, and scenarios where stability and performance are paramount. Its strong networking capabilities make it a popular choice for firewalls and routers.
  • Ubuntu: Excellent for desktop use, development environments, and general-purpose servers. Its ease of use makes it particularly appealing for users who are new to Linux.

Conclusion

Choosing between FreeBSD and Ubuntu ultimately comes down to your specific needs and goals. If you’re looking for an operating system that excels in performance, security, and stability, especially in server or networking environments, FreeBSD is an excellent choice. On the other hand, if you prefer a user-friendly interface with a wide array of applications for both desktop and server use, Ubuntu may be the way to go.

Both systems have their strengths, and understanding them can help you make an informed decision. Whichever you choose, you’ll be working with powerful tools that are widely respected in the tech community. Happy computing!

The post FreeBSD vs. Ubuntu: A Comparison appeared first on 9M2PJU - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

on October 13, 2024 10:22 AM

October 10, 2024

Xubuntu 24.10, "Oracular Oriole," is now available, featuring many updated applications from Xfce (4.18 and 4.19), GNOME (46 and 47), and MATE (1.26).

The post Xubuntu 24.10 Released appeared first on Sean Davis.

on October 10, 2024 09:19 PM

The Xubuntu team is happy to announce the immediate release of Xubuntu 24.10.

Xubuntu 24.10, codenamed Oracular Oriole, is a regular release and will be supported for 9 months, until July 2025.

Xubuntu 24.10, featuring the latest updates from Xfce 4.19 and GNOME 47.

Xubuntu 24.10 features the latest updates from Xfce 4.19, GNOME 47, and MATE 1.26. For Xfce enthusiasts, you’ll appreciate the new features and improved hardware support found in Xfce 4.19. Xfce 4.19 is the development series for the next release, Xfce 4.20, due later this year. As pre-release software, you may encounter more bugs than usual. Users seeking a stable, well-supported environment should opt for Xubuntu 24.04 “Noble Numbat” instead.

The final release images for Xubuntu Desktop and Xubuntu Minimal are available as torrents and direct downloads from xubuntu.org/download/.

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

We’d like to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues

Highlights

  • Xfce 4.19 is included as a development preview of the upcoming Xfce 4.20. Among several new features, it features early Wayland support and improved scaling.
  • GNOME 47 apps, including Disk Usage Analyzer (baobab) and Sudoku (gnome-sudoku), include a refreshed appearance and usability improvements

Known Issues

  • The shutdown prompt may not be displayed at the end of the installation. Instead you might just see a Xubuntu logo, a black screen with an underscore in the upper left hand corner, or just a black screen. Press Enter and the system will reboot into the installed environment. (LP: #1944519)
  • Xorg crashes and the user is logged out after logging in or switching users on some virtual machines, including GNOME Boxes. (LP: #1861609)
  • You may experience choppy audio or poor system performance while playing audio, but only in some virtual machines (observed in VMware and VirtualBox)
  • OEM installation options are not currently supported or available, but will be included for Xubuntu 24.04.1

For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

The main Ubuntu Release Notes cover many of the other packages we carry and more generic issues.

Support

For support with the release, navigate to Help & Support for a complete list of methods to get help.

on October 10, 2024 09:07 PM

The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 24.10 code-named “Oracular Oriole”. This marks Ubuntu Studio’s 35th release. This release is a Regular release and as such, it is supported for 9 months, until July 2025.

Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a more complete list of changes and known issues. Listed here are some of the major highlights.

You can download Ubuntu Studio 24.10 from our download page.

Special Notes

The Ubuntu Studio 24.10 disk image (ISO) exceeds 4 GB and cannot be downloaded to some file systems such as FAT32 and may not be readable when burned to a standard DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick with the ISO image or burning to a Dual-Layer DVD.

Minimum installation media requirements: Dual-Layer DVD or 8GB USB drive.

Images can be obtained from this link: https://cdimage.ubuntu.com/ubuntustudio/releases/24.10/release/

Full updated information, including Upgrade Instructions, are available in the Release Notes.

Upgrades from 24.04 LTS should be enabled within a month after release, so we appreciate your patience.

New This Release

Minimal Installation

We have now implemented minimal installations in the system installer. This will let you install a minimal desktop to get going and then install what you need via Ubuntu Studio Installer. This will make a faster installation process and lets you customize what you need for your personal Studio.

Unfortunately, at least for the time being, we also had to get rid of the default shortcuts in the panel, since they would cause an error when loading without the applications being installed. A solution for this is coming in 25.04.

Generic Kernel

The Generic Ubuntu Kernel is now fully capable of low-latency workloads. As such, with this release, we have switched from the LowLatency Kernel to the Generic Kernel with the boot options to enable the low-latency configuration enabled by default.

These options can be changed via Ubuntu Studio Audio Configuration and customized depending on your use-case and your workload. If you don’t need the low-latency and wish to have a computer that is more energy-efficient, you may wish to turn off all three options. The choice is yours.

Plasma 6

Ubuntu Studio, in cooperation with Kubuntu, switched to Plasma 6 this cycle. This switch was not without issues, so we expect many of the issues to be Plasma 6 related, especially when it comes to the default configuration and theming.

New Look

Ubuntu Studio had been using the same theming, “Materia” (except for the 22.04 LTS release which was a re-colored Breeze theme) since 19.04. However, Materia has gone dead upstream. To stay consistent, we found a fork called “Orchis” which seems to match closely and have switched to that.

As you can see from the screenshot, it has more vivid colors, round corners, and a more modern look. We hope you enjoy it. We are aware of a bug involving a dark bar under windows which may be an issue, but sometimes switching the window decorations to another variation of the theme is a solution.

PipeWire 1.2.4

This release contains PipeWire 1.2. With PipeWire 1.2, FireWire devices requiring FFADO are supported. Do note that the Ubuntu Studio team does not have any FireWire devices and could not test this.

PipeWire’s JACK compatibility is configured for use out-of-the-box and is zero-latency internally. System latency is configurable via Ubuntu Studio Audio Configuration.

However, if you would rather use straight JACK 2 instead, that’s also possible. Ubuntu Studio Audio Configuration can disable and enable PipeWire’s JACK compatibility on-the-fly. From there, you can simply use JACK via QJackCtl.

Complete Deprecation of PulseAudio/JACK setup/Studio Controls

Due to the maturity of PipeWire, the traditional PulseAudio/JACK setup, where JACK would be started/stopped by Studio Controls and bridged to PulseAudio, is now fully deprecated and the option is not offered anymore. This configuration is no longer installable via Ubuntu Studio Audio Configuration. Studio Controls may return someday as a PipeWire fine-tuning solution, but for now it is unsupported by the developer.

Ardour 8.6

While this does not represent the latest release of Ardour, Ardour 8.6 is a great release. If you would like the latest release, we highly recommend purchasing one-time or subscribing to Ardour directly from the developers to help support this wonderful application.

To help support Ardour’s funding, you may obtain later versions directly from ardour.org. To do so, please one-time purchase or subscribe to Ardour from their website. If you wish to get later versions of Ardour from us, you will have to wait until the next regular release of Ubuntu Studio, due in April 2025.

Ubuntu Studio Audio Configuration

Ubuntu Studio Audio Configuration’s Dummy Audio Device now also has a much-requested Dummy Audio Input.

Additionally as described above, Ubuntu Studio Audio Configuration has an option to configure the default boot parameters that are commonly used to enable the low-latency capabilities of the Linux kernel used in Ubuntu. For more information about that, see the Ubuntu Studio Audio Configuration page.

We’re back on Matrix

You’ll notice that the menu links to our support chat and on our website will now take you to a Matrix chat. This is due to the Ubuntu community carving its own space within the Matrix federation.

However, this is not only a support chat. This is also a creativity discussion chat. You can pass ideas to each other and you’re welcome to it if the topic remains within those confines. However, if a moderator or admin warns you that you’re getting off-topic (or the intention for the chat room), please heed the warning.

This is a persistent connection, meaning if you close the window (or chat), it won’t lose your place as you may only need to sign back in to resume the chat.

Frequently Asked Questions

Q: Does Ubuntu Studio contain snaps?
A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.

Thunderbird also became a snap so that the maintainers can get security patches delivered faster.

Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories in that they cannot be packaged in a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.

We have additional snaps that are Ubuntu-specific, such as the Firmware Updater and the Security Center. Contrary to popular myth, Ubuntu does not have any plans to switch all packages to snaps, nor do we.

Q: Will you make an ISO with {my favorite desktop environment}?
A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.

Q: What if I don’t want all these packages installed on my machine?
A: Simply use the Ubuntu Studio Installer to remove the features of Ubuntu Studio you don’t want or need!

Get Involved!

A wonderful way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!

Our project leader, Erich Eickmeyer, is now working on Ubuntu Studio at least part-time, and is hoping that the users of Ubuntu Studio can give enough to generate a monthly part-time income. We’re not there, but if every Ubuntu Studio user donated monthly, we’d be there! Your donations are appreciated! If other distributions can do it, surely we can! See the sidebar for ways to give!

Special Thanks

Huge special thanks for this release go to:

  • Eylul Dogruel: Artwork, Graphics Design
  • Ross Gammon: Upstream Debian Developer, Testing, Email Support
  • Sebastien Ramacher: Upstream Debian Developer
  • Dennis Braun: Upstream Debian Developer
  • Rik Mills: Kubuntu Council Member, help with Plasma desktop
  • Scarlett Moore: Kubuntu Project Lead, help with Plasma desktop
  • Cristian Delgado: Translations for Ubuntu Studio Menu
  • Dan Bungert: Subiquity, seed fixes
  • Len Ovens: Testing, insight
  • Wim Taymans: Creator of PipeWire
  • Mauro Gaspari: Tutorials, Promotion, and Documentation, Testing, keeping Erich sane
  • Krytarik Raido: IRC Moderator, Mailing List Moderator
  • Erich Eickmeyer: Project Leader, Packaging, Development, Direction, Treasurer
on October 10, 2024 04:21 PM

The Kubuntu Team is happy to announce that Kubuntu 24.10 has been released, featuring the new and beautiful KDE Plasma 6.1 simple by default, powerful when needed.

Codenamed “Oracular Oriole”, Kubuntu 24.10 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

Under the hood, there have been updates to many core packages, including a new 6.11 based kernel, KDE Frameworks 5.116 and 6.6.0, KDE Plasma 6.1 and many updated KDE gear applications.

Kubuntu 24.10 with Plasma 6.1

Kubuntu has seen many updates for other applications, both in our default install, and installable from the Ubuntu archive.

Applications for core day-to-day usage are included and updated, such as Firefox, and LibreOffice.

For a list of other application updates, and known bugs be sure to read our release notes.

Wayland as default Plasma session.

The Plasma Wayland session is now the default option in SDDM (the display manager login screen). An X11 session can be selected instead if desired. The last used session type will be remembered, so you do not have to switch type on each login.

Download Kubuntu 24.10, or learn how to upgrade from 24.04 LTS.

Note: For upgrades from 24.04, there may be a delay of a few hours to days between the official release announcements and the Ubuntu Release Team enabling upgrades.

on October 10, 2024 03:05 PM
Wake up and hear the birds sing! Thanks to the hard work from our contributors, Lubuntu 24.10 has been released. With the codename Oracular Oriole, Lubuntu 24.10 is the 27th release of Lubuntu, the 13th release of Lubuntu with LXQt as the default desktop environment. Download and Support Lifespan With Lubuntu 24.10 being an interim […]
on October 10, 2024 02:46 PM

October 08, 2024

Ubuntu MATE 24.10 is more of what you like, stable MATE Desktop on top of current Ubuntu. Read on to learn more 👓️

Ubuntu MATE 24.10 Ubuntu MATE 24.10

Thank you! 🙇

My sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏 I’d like to acknowledge the close collaboration with the Ubuntu Foundations team and the Ubuntu flavour teams, in particular Erich Eickmeyer who pushed critical fixes while I was travelling. Thank you! 💚

What changed since the Ubuntu MATE 24.04 LTS?

Here are the highlights of what’s changed since the release of Ubuntu MATE 24.04

  • Ships stable MATE Desktop 1.26.2 with a handful of bug fixes 🐛
  • Switched back to Slick Greeter (replacing Arctica Greeter) due to a race condition in the boot process which resulted in the display manager failing to initialise.
    • Returning to Slick Greeter reintroduces the ability to easily configure the login screen via a graphical application, something users have been requesting be re-instated 👍
  • Ubuntu MATE 24.10 .iso 📀 is now 3.3GB 🤏 Down from 4.1GB in the 24.04 LTS release.
    • This is thanks to some fixes in the installer that no longer require as many packages in the live-seed.

Login Window Configuration Login Window

What didn’t change since the Ubuntu MATE 24.04 LTS?

If you follow upstream MATE Desktop development, then you’ll have noticed that Ubuntu MATE 24.10 doesn’t ship with the recently released MATE Desktop 1.28 🧉

I have prepared packaging for MATE Desktop 1.28, along with the associated components, but encountered some bugs and regressions 🐞 I wasn’t able to get things to a standard I’m happy to ship by default, so it is the tried and true MATE 1.26.2 one last time 🪨

Major Applications

Accompanying MATE Desktop 1.26.2 🧉 and Linux 6.11 🐧 are Firefox 131 🔥🦊, Celluloid 0.27 🎥, Evolution 3.54 📧, LibreOffice 24.8.2 📚

See the Ubuntu 24.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 24.10

Ubuntu MATE 24.10 (Oracular Oriole) is available for PC/Mac users.

Download

Upgrading to Ubuntu MATE 24.10

The upgrade process to Ubuntu MATE 24.10 is the same as for Ubuntu.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.
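
As a rough sketch of what that looks like on the command line (the graphical Software Updater route also works; check the official upgrade instructions for the exact steps), a release upgrade from 24.04 LTS would be something like:

sudo apt update && sudo apt full-upgrade    # bring 24.04 fully up to date first
sudo do-release-upgrade                     # then upgrade to 24.10

Note that upgrading an LTS to an interim release like 24.10 may require setting Prompt=normal in /etc/update-manager/release-upgrades so that non-LTS releases are offered.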

on October 08, 2024 12:35 PM

October 07, 2024

Ubuntu Budgie 24.10 (Oracular Oriole) is a standard release with 9 months of support from your distro maintainers and Canonical, from October 2024 to July 2025. These release notes showcase the key takeaways for 24.04 users upgrading to 24.10. The key focus for the team this cycle has been the conversion of our distro to a Wayland-based distro.

Source

on October 07, 2024 05:35 PM

September 29, 2024

A networking guide for Incus

Simos Xenitellis

Incus is a hypervisor/manager for virtual machines and application/system containers. Get community support here.

A virtual machine (VM) is an instance of an operating system that runs on a computer, along with the main operating system. A virtual machine uses hardware virtualization features for the separation from the main operating system.

A system container is an instance of an operating system that also runs on a computer, along with the main operating system. A system container, instead, uses security primitives of the Linux kernel for the separation from the main operating system. The system container follows the lifecycle of a computer system. You can think of system containers as software virtual machines.

An application container is a container that has an application or service. It follows the lifecycle of the application instead of a system. That is, here you start and stop the application instead of booting and shutting down a system. Incus supports Open Container Initiative (OCI) images such as Docker images. When Incus launches an OCI image, it uses its own runtime, not Docker’s. That is, Incus consumes images from any OCI image repositories.
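
As a quick illustration of that OCI support (the remote name oci-docker below is an arbitrary choice, and the --protocol=oci flag requires a reasonably recent Incus release), launching an application container from a Docker Hub image could look roughly like this:

$ incus remote add oci-docker https://docker.io --protocol=oci
$ incus launch oci-docker:nginx mynginx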

In virtual machines and system/application containers we can attach virtual networking devices:

  • none (i.e. an instance without networking),
  • one (the most common and simple case), or
  • more than one.

In addition to the virtual networking devices, we can also attach real hardware networking devices. Those devices can be taken away from the host and get pushed into a virtual machine or system container.
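
For example, passing a physical network card from the host into an instance can be done with a nic device of nictype physical; the interface name enp6s0 below is just a placeholder for whatever spare interface your host has. While the instance is running, the host loses that interface, and it is given back when the instance stops.

$ incus config device add myinstance eth1 nic nictype=physical parent=enp6s0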

You may use a combination of those networking devices in the same instance. It is left as an exercise to the reader to explore that road. In these tutorials we look at at most one networking device per instance.

There will be attempts to generalize and explain in practical terms. If I get something wrong, please correct me in the comments so that it gets fixed and we all learn something new. Note that I will be editing this content along the way, adding material, troubleshooting cases, etc.

In this post we list tutorials for the different Incus devices of type nic (network interface controller). Whatever we write in this post and the linked tutorials is covered in that documentation URL!

The list of tutorials per networking:

  1. bridge (the default, the local network bridge), it’s in this post below.
  2. bridged, (pending)
  3. macvlan, (pending)
  4. none,
  5. physical,
  6. ipvlan,
  7. routed,

The setup

When demonstrating these network configurations, we will be using an Incus VM. When learning, try things out in your Incus VM first before applying them on your host or your server.

We launch an Incus VM, called tutorial, with Ubuntu 24.04 LTS, then get a shell with the default non-root account ubuntu. I am impatient and keep typing the incus exec command repeatedly to get a shell. The VM takes a few moments to boot up, and I get interesting error messages until the VM is actually running. Not really relevant to this tutorial, but you get educated at every opportunity.

$ incus launch images:ubuntu/24.04/cloud tutorial --vm
Launching tutorial
$ incus exec tutorial -- su -l ubuntu
Error: VM agent isn't currently running
$ incus exec tutorial -- su -l ubuntu
su: user ubuntu does not exist or the user entry does not contain all the required fields
$ incus exec tutorial -- su -l ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@tutorial:~$ 

We got a shell in the VM. Then we install Incus, which is available in the default repositories of Ubuntu 24.04 LTS. We also install zfsutils-linux, which provides the client utilities for using ZFS with Incus. We are advised to add our non-root account to the incus-admin group in order to have access to Incus; without that, we would have to use sudo all the time. When you add a user to a group, you need to log out and then log in again for the change to take effect. And this is what we do (unless you know about newgrp).

ubuntu@tutorial:~$ sudo apt install -y incus zfsutils-linux
...
Creating group 'incus' with GID 989.
Creating group 'incus-admin' with GID 988.
Created symlink /etc/systemd/system/multi-user.target.wants/incus-startup.service → /usr/lib/systemd/system/incus-startup.service.
Created symlink /etc/systemd/system/sockets.target.wants/incus-user.socket → /usr/lib/systemd/system/incus-user.socket.
Created symlink /etc/systemd/system/sockets.target.wants/incus.socket → /usr/lib/systemd/system/incus.socket.
incus.service is a disabled or a static unit, not starting it.
incus-user.service is a disabled or a static unit, not starting it.

Incus has been installed. You must run `sudo incus admin init` to
perform the initial configuration of Incus.
Be sure to add user(s) to either the 'incus-admin' group for full
administrative access or the 'incus' group for restricted access,
then have them logout and back in to properly setup their access.

...
ubuntu@tutorial:~$ sudo usermod -a -G incus-admin ubuntu
ubuntu@tutorial:~$ logout
$ incus exec tutorial -- su -l ubuntu
ubuntu@tutorial:~$ 

Now we initialize Incus with sudo incus admin init.

Default Incus networking

When you install and set up Incus with incus admin init, you are prompted whether you want to create a local network bridge. We press Enter at all prompts, which means that we accept all the defaults that are presented to us. The last question is whether to show the initialization configuration. If you missed it, you can get it after the fact by running incus admin init --dump (which dumps the configuration).

ubuntu@tutorial:~$ incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (zfs, dir) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: 
Size in GiB of the new loop device (1GiB minimum) [default=5GiB]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=incusbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like the server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: incusbr0
  type: ""
  project: default
storage_pools:
- config:
    size: 5GiB
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: incusbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null

ubuntu@tutorial:~$

If you accept the defaults (i.e. press Enter in each) or type them explicitly, you get a local bridge named incusbr0 that is managed by Incus, and gives private IPv4 and IPv6 IP addresses to your newly created instances.

Let’s see these in practice in your Incus installation. You have configured Incus, and Incus created a default profile, called default, for you. This profile is applied by default to all newly created instances and holds the networking configuration. In that profile there are two devices, and one of them is the networking device. In Incus the device is called eth0, and in the instance it will also appear as eth0. On the host, the bridge will appear with the name incusbr0. It’s a networking device, hence of type nic.

ubuntu@tutorial:~$ incus profile list
+---------+-----------------------+---------+
|  NAME   |      DESCRIPTION      | USED BY |
+---------+-----------------------+---------+
| default | Default Incus profile | 0       |
+---------+-----------------------+---------+
ubuntu@tutorial:~$ incus profile show default
config: {}
description: Default Incus profile
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []
ubuntu@tutorial:~$ 

incusbr0 was created by Incus. Let’s see its details through the incus network commands. We first list the network interfaces and then show the incusbr0 network interface. incusbr0 is a managed network interface: Incus takes care of the networking and provides DHCP services as well as access to the upstream network (i.e. the Internet). incusbr0 is a network bridge. An instance that requests network configuration from incusbr0 will get an IP address from the range 10.180.234.1-254. Network Address Translation (NAT) is enabled, which means there is access to the upstream network, and likely the Internet.

ubuntu@tutorial:~$ incus network list
+----------+----------+---------+-----------------+---------+---------+
|   NAME   |   TYPE   | MANAGED |      IPV4       | USED BY |  STATE  |    
+----------+----------+---------+-----------------+---------+---------+
| enp5s0   | physical | NO      |                 | 0       |         |    
+----------+----------+---------+-----------------+---------+---------+
| incusbr0 | bridge   | YES     | 10.180.234.1/24 | 1       | CREATED |
+----------+----------+---------+-----------------+---------+---------+
ubuntu@tutorial:~$ incus network show incusbr0
config:
  ipv4.address: 10.180.234.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:7:7dfe:75cf::1/64
  ipv6.nat: "true"
description: ""
name: incusbr0
type: bridge
used_by:
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
ubuntu@tutorial:~$ 

Let’s launch a container and test this out. The instance gets an IP address that is within the range of the network bridge above.

ubuntu@tutorial:~$ incus launch images:alpine/edge/cloud myalpine
Launching myalpine
ubuntu@tutorial:~$ incus list -c ns4t     
+----------+---------+----------------------+-----------+
|   NAME   |  STATE  |         IPV4         |   TYPE    |
+----------+---------+----------------------+-----------+
| myalpine | RUNNING | 10.180.234.24 (eth0) | CONTAINER |
+----------+---------+----------------------+-----------+
ubuntu@tutorial:~$ 

The IP address is OK, but could it look better? It’s private anyway, and we can select anything from the range 10.x.y.z. Let’s change it so that it uses 10.10.10.1-254 instead. We set the configuration of incusbr0 for ipv4.address (see earlier) to a new value, 10.10.10.1/24. Each number separated by dots is 8 bits in length, and /24 means that the first 3 * 8 = 24 bits (the network part) stay the same. We make the change, but the instance still has the old IP address. We restart the instance, and it automatically gets the new IP address from the new range.

ubuntu@tutorial:~$ incus network set incusbr0 ipv4.address=10.10.10.1/24
ubuntu@tutorial:~$ incus list -c ns4t
+----------+---------+----------------------+-----------+
|   NAME   |  STATE  |         IPV4         |   TYPE    |
+----------+---------+----------------------+-----------+
| myalpine | RUNNING | 10.180.234.24 (eth0) | CONTAINER |
+----------+---------+----------------------+-----------+
ubuntu@tutorial:~$ incus restart myalpine
ubuntu@tutorial:~$ incus list -c ns4t
+----------+---------+--------------------+-----------+
|   NAME   |  STATE  |        IPV4        |   TYPE    |
+----------+---------+--------------------+-----------+
| myalpine | RUNNING | 10.10.10.24 (eth0) | CONTAINER |
+----------+---------+--------------------+-----------+
ubuntu@tutorial:~$ 
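
If you prefer an instance to always receive the same address from the bridge, you can pin it on the instance’s nic device. A minimal sketch, assuming the instance uses the eth0 device inherited from the default profile:

$ incus config device override myalpine eth0 ipv4.address=10.10.10.5
$ incus restart myalpine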

We have created incusbr0. Are we allowed to create another private bridge? Sure we are. We will call it incusbr1, and we will also disable IPv6 networking. IPv6 addresses are too wide and mess up the formatting on my blog. If you noticed earlier, there were no IPv6 addresses shown although IPv6 was configured on incusbr0; I cheated and removed the IPv6 addresses from some command outputs.

ubuntu@tutorial:~$ incus network create incusbr1 ipv4.address=10.10.20.1/24 ipv6.address=none
Network incusbr1 created
ubuntu@tutorial:~$ incus network show incusbr1
config:
  ipv4.address: 10.10.20.1/24
  ipv6.address: none
description: ""
name: incusbr1
type: bridge
used_by: []
managed: true
status: Created
locations:
- none
ubuntu@tutorial:~$ 

We have created incusbr1. Can we now launch an instance onto that private bridge? We launch an instance called myalpine1 and use the incus launch parameter --network incusbr1 to specify a different network from the default network in the default Incus profile. We verify below that myalpine1 is served by incusbr1.

ubuntu@tutorial:~$ incus launch images:alpine/edge/cloud myalpine1 --network incusbr1
Launching myalpine1
ubuntu@tutorial:~$ incus list -c ns4t
+-----------+---------+--------------------+-----------+
|   NAME    |  STATE  |        IPV4        |   TYPE    |
+-----------+---------+--------------------+-----------+
| myalpine  | RUNNING | 10.10.10.24 (eth0) | CONTAINER |
+-----------+---------+--------------------+-----------+
| myalpine1 | RUNNING | 10.10.20.85 (eth0) | CONTAINER |
+-----------+---------+--------------------+-----------+
ubuntu@tutorial:~$ incus network show incusbr1
config:
  ipv4.address: 10.10.20.1/24
  ipv6.address: none
description: ""
name: incusbr1
type: bridge
used_by:
- /1.0/instances/myalpine1
managed: true
status: Created
locations:
- none
ubuntu@tutorial:~$ 

Technical details

The instances that use the Incus private bridge have access to the Internet. How is this achieved? It’s achieved with either iptables or nftables rules. In recent versions of Linux distributions, you would be using nftables by default (command: nft, no relation to NFTs). To view the firewall ruleset that was created by Incus, run sudo nft list ruleset. Here is my ruleset; yours should be similar. There is one table for Incus and four chains: a postrouting (pstrt), a forward, an in, and an out chain. More is available in the nftables documentation.

ubuntu@tutorial:~$ sudo nft list ruleset
table inet incus {
	chain pstrt.incusbr0 {
		type nat hook postrouting priority srcnat; policy accept;
		ip saddr 10.57.39.0/24 ip daddr != 10.57.39.0/24 masquerade
		ip6 saddr fd42:e7b:739c:7117::/64 ip6 daddr != fd42:e7b:739c:7117::/64 masquerade
	}

	chain fwd.incusbr0 {
		type filter hook forward priority filter; policy accept;
		ip version 4 oifname "incusbr0" accept
		ip version 4 iifname "incusbr0" accept
		ip6 version 6 oifname "incusbr0" accept
		ip6 version 6 iifname "incusbr0" accept
	}

	chain in.incusbr0 {
		type filter hook input priority filter; policy accept;
		iifname "incusbr0" tcp dport 53 accept
		iifname "incusbr0" udp dport 53 accept
		iifname "incusbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
		iifname "incusbr0" udp dport 67 accept
		iifname "incusbr0" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-solicit, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
		iifname "incusbr0" udp dport 547 accept
	}

	chain out.incusbr0 {
		type filter hook output priority filter; policy accept;
		oifname "incusbr0" tcp sport 53 accept
		oifname "incusbr0" udp sport 53 accept
		oifname "incusbr0" icmp type { destination-unreachable, time-exceeded, parameter-problem } accept
		oifname "incusbr0" udp sport 67 accept
		oifname "incusbr0" icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, mld2-listener-report } accept
		oifname "incusbr0" udp sport 547 accept
	}
}
ubuntu@tutorial:~$ 
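
If the full ruleset on your system is noisy because of other firewall rules, you can limit the output to the table that Incus manages:

ubuntu@tutorial:~$ sudo nft list table inet incus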

Future considerations

  1. Network isolation.

on September 29, 2024 02:12 PM

September 21, 2024


The beta of Kubuntu Oracular Oriole (to become 24.10 in October) has now been released, and is available for download.

This milestone features images for Kubuntu and other Ubuntu flavours.

Pre-releases of Kubuntu Oracular Oriole are not recommended for:

  • Anyone needing a stable system
  • Regular users who are not aware of pre-release issues
  • Anyone in a production environment with data or workflows that need to be reliable

They are, however, recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Kubuntu, KDE, and Qt developers
  • Other Ubuntu flavour developers

The Beta includes some software updates that are ready for broader testing. However, it is an early set of images, so you should expect some bugs.

We STRONGLY advise testers to read the Kubuntu 24.10 Beta release notes before installing, and in particular the section on ‘Known issues‘.

You can also find more information about the entire 24.10 release (base, kernel, graphics etc) in the main Ubuntu Beta release notes and announcement.



To enable Flatpaks in KDE’s Discover in Kubuntu 24.10, run this command:

sudo apt install flatpak plasma-discover-backend-flatpak


To enable the largest Flatpak repository, run this command:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo


Log out and log back in (or restart) to re-initialize the XDG_DATA_DIRS variable; otherwise, newly installed Flatpak apps will not run or appear in the start menu.
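
After logging back in, you can optionally confirm that the Flathub remote is configured and that searching it works, for example:

flatpak remotes
flatpak search krita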

on September 21, 2024 10:38 PM

The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 24.10, codenamed “Oracular Oriole”.

While this beta is reasonably free of any showstopper installer bugs, you will find some bugs within. This image is, however, mostly representative of what you will find when Ubuntu Studio 24.10 is released on October 10, 2024.

Special Notes

The Ubuntu Studio 24.10 image (ISO) exceeds 4 GB, so it cannot be stored on some file systems such as FAT32 and will not fit on a single-layer DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick from the ISO image or burning it to a dual-layer DVD.
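
On Linux, one common way to write the ISO to a USB stick is with dd. The file name and target device below are placeholders; writing to the wrong device will destroy its data, so double-check the device name with lsblk first.

lsblk
sudo dd if=ubuntustudio-24.10-beta-desktop-amd64.iso of=/dev/sdX bs=4M status=progress oflag=sync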

Images can be obtained from this link: https://cdimage.ubuntu.com/ubuntustudio/releases/24.10/beta/

Full updated information, including upgrade instructions, is available in the Release Notes.

New Features This Release

  • Plasma 6.1 is now the default desktop environment, an upgrade from Plasma 5.27. This may have some unknown bugs, along with theming issues, that we’re ironing out as we go along.
  • Ubuntu’s Generic Kernel is now capable of the same low-latency processing as Ubuntu’s lowlatency kernel when certain boot parameters are used. Additionally, the lowlatency kernel is eventually going to be deprecated. With this in mind, we have switched to the generic kernel with the low-latency boot parameters enabled by default. These boot parameters can be tweaked in Ubuntu Studio Audio Configuration (see the illustrative sketch after this list).
  • Minimal Install Option for new installations. This allows users to install Ubuntu Studio and customize what they need later with Ubuntu Studio Installer.
  • Orchis is now our default theme, replacing Materia, our default theme since 19.04. Materia has stopped development, so we decided to move to an actively maintained theme.
  • PipeWire continues to improve with every release and now includes FFADO support (version 1.2.3).
  • Ubuntu Studio Installer‘s included Ubuntu Studio Audio Configuration utility for fine-tuning the PipeWire setup now includes the ability to create or remove a dummy audio input device (version 1.30).
  • The legacy PulseAudio/JACK has been deprecated and discontinued, is no longer supported, and is no longer an option to use. Going forward, PipeWire or JACK are the only options. PipeWire’s JACK integration can be disabled from Ubuntu Studio Audio Configuration to use JACK by itself with QJackCtl, or via other means.
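
For the curious, these boot parameters are managed for you by Ubuntu Studio Audio Configuration, so you normally never edit them by hand. Purely as an illustration (the exact set of parameters is chosen by the tool and may differ), enabling threaded IRQs and full preemption on the generic kernel via GRUB would look roughly like this:

# /etc/default/grub (illustrative only)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash threadirqs preempt=full"

sudo update-grub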

Major Package Upgrades

  • Ardour version 8.6.0
  • Qtractor version 1.1
  • OBS Studio version 30.2.3
  • Audacity version 3.6.1
  • digiKam version 8.4.0
  • Kdenlive version 24.08.1
  • Krita version 5.2.3

There are many other improvements, too numerous to list here. We encourage you to look around the freely-downloadable ISO image.

Known Issues

  • Due to the transition to Plasma 6 and Qt6, there may be some theming inconsistencies, especially for those upgrading. To work around these issues, reapply the default theme using System Settings and select “Orchis-dark” from Kvantum Manager.
  • Some graphics cards might find the transparency in the Orchis theme difficult to work with. For that reason, you can switch to “Orchis-dark-solid” in the Kvantum Manager. Feedback is welcome, and if the transparency becomes too burdensome, we can switch to the solid theme by default.
  • The new minimal install mode will not load the desktop properly with the extra icons (gimp, krita, patchance, etc.) in the top bar, so those had to be removed by default. If you find them useful, you can add them by right-clicking in the menu and clicking “Pin to Task Manager”. We apologize for the inconvenience.

Official Ubuntu Studio release notes can be found at https://ubuntustudio.org/ubuntu-studio-24-10-release-notes/

Further known issues, mostly pertaining to the desktop environment, can be found at https://wiki.ubuntu.com/OracularOriole/ReleaseNotes/Kubuntu

Additionally, the main Ubuntu release notes contain more generic issues: https://discourse.ubuntu.com/t/oracular-oriole-release-notes/44878

How You Can Help

Please test using the test cases on https://iso.qa.ubuntu.com. All you need is a Launchpad account to get started.

Additionally, we need financial contributions. Our project lead, Erich Eickmeyer, is working long hours on this project and trying to generate a part-time income. Go here to see how you can contribute financially (options are also in the sidebar).

Frequently Asked Questions

Q: Does Ubuntu Studio contain snaps?
A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.

Thunderbird is also a snap this cycle in order for the maintainers to get security patches delivered faster.

Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be packaged as a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.

Also, to keep theming consistent, all included themes are snapped in addition to the included .deb versions so that snaps stay consistent with our themes.

We are working with Canonical to make sure that the quality of snaps goes up with each release, so we ask that you please give snaps a chance instead of writing them off completely.

Q: If I install this Beta release, will I have to reinstall when the final release comes out?
A: No. If you keep it updated, your installation will automatically become the final release. However, if Audacity returns to the Ubuntu repositories before the final release, then you might end up with a double installation of Audacity. Instructions for removing one or the other will be made available in a future post.

Q: Will you make an ISO with {my favorite desktop environment}?
A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.

Q: What if I don’t want all these packages installed on my machine?
A: We now include a minimal install option. Install using the minimal install option, then use Ubuntu Studio Installer to install what you need for your very own content creation studio.

on September 21, 2024 12:02 AM

September 13, 2024

Parasocial chat

On Linux Matters we have a friendly and active public Telegram channel linked on our Contact page, along with a Discord channel. We also have links to Mastodon, Twitter (not that we use it that much) and email.

At the time of writing there are roughly this ⬇️ number of people (plus bots, sockpuppets and duplicates) in or following each Linux Matters “official” presence:

Channel Number
Telegram 796
Discord 683
Mastodon 858
Twitter 9919

Preponderance of chat

We chose to have a presence in lots of places, but the presenters (Martin, Mark, and myself (and Joe)) primarily only really hang out to chat on Telegram and Mastodon.

I originally created the Telegram channel on November 20th, 2015, when we were publishing the Ubuntu Podcast (RIP in Peace) A.K.A. Ubuntu UK Podcast. We co-opted and renamed the channel when Linux Matters launched in 2023.

Prior to the channel’s existence, we used the Ubuntu UK Local Community (LoCo) Team IRC channel on Freenode (also, RIP in Peace).

We also re-branded our existing Mastodon accounts from the old Ubuntu Podcast to Linux Matters.

We mostly continue using Telegram and Mastodon as our primary methods of communication because on the whole they’re fast, reliable, stay synced across devices, have the features we enjoy, and at least one of them isn’t run by a weird billionaire.

Other options

We link to a lot of other places at the top of the Linux Matters home page, where our listeners can chat, mostly to each other and not us.

Being over 16, I’m not a big fan of Discord, and I know Mark doesn’t even have an account there. None of us use Twitter much anymore, either.

Periodically I ponder if we (Linux Matters) should use something other than Telegram. I know some listeners really don’t like the platform, but prefer other places like Signal, Matrix or even IRC. I know for sure some non-listeners don’t like Telegram, but I care less about their opinions.

Part of the problem is that I don’t think any of us really enjoy the other realtime chat alternatives. Both Matrix and Signal have terrible user experience, and other flaws. Which is why you don’t tend to find us hanging out in either of those places.

There are further options I haven’t even considered, like Wire, WhatsApp, and likely more I don’t even know or care about.

So we kept using Telegram over any of the above alternative options.

Pondering Posting Polls

I have repeatedly considered asking the listeners about their preferred chat platforms via our existing channels. But that seems flawed, because we use what we like, and no matter how many people prefer something else, we’re unlikely to move. Unless something strange happens 👀 .

Plus, oftentimes, especially on decentralised platforms, the audience can be somewhat “over-enthusiastic” about their preferred way being The Way™️ over the alternatives. It won’t do us any favours to get data saying 40% report we should use Signal, 40% suggest Matrix and 20% choose XMPP, if the four of us won’t use any of them.

Pursue Podcast Palaver Proposals

So rather than ask our audience, I thought I’d see what other podcasters promote for feedback and chatter on their websites.

I picked a random set from shows I have heard of, and may have listened to, plus a few extra ones I haven’t. None of this is endorsement or approval; I wanted the facts, just the fax, ma’am.

I collated the data in a json file for some reason, then generated the tables below. I don’t know what to do with this information, but it’s a bit of data we may use if we ever decide to move away from Telegram.
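
For what it’s worth, the field names below are made up, but turning a small JSON file like that into a plain-text table can be done with jq and column, roughly like this:

# assumes podcasts.json is an array of objects such as {"show": "...", "email": true, "mastodon": false}
jq -r '.[] | [.show, (.email|tostring), (.mastodon|tostring)] | @tsv' podcasts.json | column -t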

Presenting Pint-Sized Payoff

The table shows some nerdy podcasts along with their primary means (as far as I can tell) of community engagement. Data was gathered manually from podcast home pages and “about” pages. I generally didn’t go into the page content for each episode. I made an exception for “Dot Social” and “Linux OTC” because there’s nothing but episodes on their home page.

It doesn’t matter for this research, I just thought it was interesting that some podcasters don’t feel the need to break out their contact details to a separate page, or make it more obvious. Perhaps they feel that listeners are likely to be viewing an episode page, or looking at a specific show metadata, so it’s better putting the contact details there.

I haven’t included YouTube, where many shows publish and discuss, in addition to a podcast feed.

I am also aware that some people exclusively, or perhaps primarily publish on YouTube (or other video platforms). Those aren’t podcasts IMNSHO.

Key to the tables below. Column names have been shortened because it’s a w i d e table. The numbers indicate how many podcasts use that communication platform.

  • EM - Email address (13/18)
  • MA - Mastodon account (9/18)
  • TW - Twitter account (8/18)
  • DS - Discord server (8/18)
  • TG - Telegram channel (4/18)
  • IR - IRC channel (5/18)
  • DW - Discourse website (2/18)
  • SK - Slack channel (3/18)
  • LI - LinkedIn (2/18)
  • WF - Web form (2/18)
  • SG - Signal group (3/18)
  • WA - WhatsApp (1/18)
  • FB - FaceBook (1/18)

Linux

Show EM MA TW DS TG IR DW SK MX LI WF SG WA FB
Linux Matters
Ask The Hosts
Destination Linux
Linux Dev Time
Linux After Dark
Linux Unplugged
This Week in Linux
Ubuntu Security Podcast
Linux OTC

Open Source Adjunct

Show EM MA TW DS TG IR DW SK MX LI WF SG WA FB
2.5 Admins
Bad Voltage
Coffee and Open Source
Dot Social
Open Source Security
localfirst.fm

Other Tech

Show EM MA TW DS TG IR DW SK MX LI WF SG WA FB
ATP
BBC Newscast
The Rest is Entertainment

Point

Not entirely sure what to do with this data. But there it is.

Is Linux Matters going to move away from Telegram to something else? No idea.

on September 13, 2024 04:00 PM

September 12, 2024

git revert name and Akademy

Jonathan Riddell

I reverted my name back to Jonathan Riddell and have now made a new uid for my PGP key, you can get the updated one on keyserver.ubuntu.com or my contact page or my Launchpad page.

Here’s some pics from Akademy

on September 12, 2024 02:33 PM

September 11, 2024

Incus is a manager for virtual machines, system containers and application containers. Get Incus support here.

When you initially set up Incus, you create a storage pool where Incus will put everything. There are several options for storage pools; in this post we focus on ZFS storage pools, specifically those that are stored on a separate block device (like /dev/sdb).

We are dealing with two cases. One, your installation of Incus has somehow been removed, but the storage pool is still there intact and you want to recover it by installing Incus again. Two, you want to move the disk with the storage pool from one computer to another, for example reconnecting the storage pool on a new server.

This type of task is quite risky if you have a lot of important data on your system. Obviously, before doing this on an actual system, you should take backups of your most important instances with incus export. Then you should perform this tutorial several times so that you get the gist of recovering Incus installations. This tutorial shows you how to do a dry run of creating an Incus installation, killing it off, and then miraculously recovering it.
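
As a reminder of what that backup looks like (the instance and file names here are just examples), exporting an instance to a tarball and importing it back later is:

$ incus export myinstance mybackup.tar.gz
$ incus import mybackup.tar.gz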

Prerequisites

You should have a running Incus installation.

Setting up Incus, using a block storage volume

We launch an Incus virtual machine (VM) that will act as our Incus server. We then (on the host) create a storage volume of type block. Next, we attach that block storage volume to the VM. In the VM it can be found as /dev/sdb. Subsequently, we run incus admin init to initialize Incus, and configure Incus to use the block device /dev/sdb when creating the storage pool. When we run incus admin init, we press Enter whenever we want to accept the default value.

$ incus launch images:ubuntu/24.04/cloud --vm incusserver
Launching incusserver
$ incus storage volume create default IncusStorage --type=block size=6GiB
Storage volume IncusStorage created
$ incus storage volume attach default IncusStorage incusserver
$ incus shell incusserver
root@incusserver:~# fdisk -l /dev/sdb
Disk /dev/sdb: 6 GiB, 6442450944 bytes, 12582912 sectors
Disk model: QEMU HARDDISK   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@incusserver:~# sudo apt install -y incus zfsutils-linux
...
root@incusserver:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, zfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: yes
Path to the existing block device: /dev/sdb
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=incusbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like the server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: incusbr0
  type: ""
  project: default
storage_pools:
- config:
    source: /dev/sdb
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: incusbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null

root@incusserver:~#

Next we populate the Incus installation with a few Alpine containers. We do this because we want to see these containers again after we recover the storage pool.

root@incusserver:~# incus launch images:alpine/edge alpine1
Launching alpine1
root@incusserver:~# incus launch images:alpine/edge alpine2
Launching alpine2
root@incusserver:~# incus launch images:alpine/edge alpine3
Launching alpine3
root@incusserver:~#

This is where the interesting stuff starts. We now want to shut down the Incus server and remove it. However, the block storage volume will still be there and in good condition, as the server has been shut down cleanly. Note that a block storage volume should only be attached to one system at a time.

root@incusserver:~# shutdown -h now
root@incusserver:~# Error: websocket: close 1006 (abnormal closure): unexpected EOF
$ incus storage volume show default IncusStorage
config:
  size: 6GiB
description: ""
name: IncusStorage
type: custom
used_by:
- /1.0/instances/incusserver
location: none
content_type: block
project: default
created_at: ...
$ incus delete incusserver
$ incus storage volume show default IncusStorage
config:
  size: 6GiB
description: ""
name: IncusStorage
type: custom
used_by: []
location: none
content_type: block
project: default
created_at: ...
$

Next, we launch a new VM that will be used as a new Incus server, then attach the block storage volume back with incus storage volume attach, and install Incus along with the necessary ZFS client utilities.

$ incus launch images:ubuntu/24.04/cloud --vm incusserver
Launching incusserver
$ incus storage volume attach default IncusStorage incusserver
$ incus shell incusserver
Error: Instance is not running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
root@incusserver:~# apt install -y zfsutils-linux incus
...
root@incusserver:~#

Finally, we bring back the old installation data with those three Alpine containers. We run zpool import, which is a ZFS command that will look for potential ZFS pools and list them by name. The command zpool import default is the one that does the actual import. The ZFS pool name default was the name given by Incus earlier, when we were initializing Incus. Subsequently, we run incus admin recover to recover the ZFS pool and reconnect it with this new installation of Incus.

root@incusserver:~# zfs list
no datasets available
root@incusserver:~# zpool list
no pools available
root@incusserver:~# zpool import
   pool: default
     id: 8311839500301555365
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	default     ONLINE
	  sdb       ONLINE
root@incusserver:~# zpool import default
root@incusserver:~# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
default  5.50G  6.80M  5.49G        -         -     0%     0%  1.00x    ONLINE  -
root@incusserver:~# 
root@incusserver:~# incus admin recover
This server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: default
Name of the storage backend (zfs, dir): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): /dev/sdb
Additional storage pool configuration property (KEY=VALUE, empty when done): 
Would you like to recover another storage pool? (yes/no) [default=no]: 
The recovery process will be scanning the following storage pools:
 - NEW: "default" (backend="zfs", source="/dev/sdb")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: 
Scanning for unknown volumes...
The following unknown storage pools have been found:
 - Storage pool "default" of type "zfs"
The following unknown volumes have been found:
 - Container "alpine2" on pool "default" in project "default" (includes 0 snapshots)
 - Container "alpine3" on pool "default" in project "default" (includes 0 snapshots)
 - Container "alpine1" on pool "default" in project "default" (includes 0 snapshots)
Would you like those to be recovered? (yes/no) [default=no]: yes
Starting recovery...
root@incusserver:~# incus list
+---------+---------+------+------+-----------+-----------+
|  NAME   |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+---------+---------+------+------+-----------+-----------+
| alpine1 | STOPPED |      |      | CONTAINER | 0         |
+---------+---------+------+------+-----------+-----------+
| alpine2 | STOPPED |      |      | CONTAINER | 0         |
+---------+---------+------+------+-----------+-----------+
| alpine3 | STOPPED |      |      | CONTAINER | 0         |
+---------+---------+------+------+-----------+-----------+
root@incusserver:~#

Those alpines are in a STOPPED state. Will they start? Sure they will.

root@incusserver:~# incus start alpine1 alpine2 alpine3
root@incusserver:~# incus list -c ns4t
+---------+---------+----------------------+-----------+
|  NAME   |  STATE  |         IPV4         |   TYPE    |
+---------+---------+----------------------+-----------+
| alpine1 | RUNNING | 10.36.146.69 (eth0)  | CONTAINER |
+---------+---------+----------------------+-----------+
| alpine2 | RUNNING | 10.36.146.101 (eth0) | CONTAINER |
+---------+---------+----------------------+-----------+
| alpine3 | RUNNING | 10.36.146.248 (eth0) | CONTAINER |
+---------+---------+----------------------+-----------+
root@incusserver:~#

In this tutorial we saw how to recover an Incus installation while the storage pool is intact. We covered the case where the storage pool is ZFS on a block device.

on September 11, 2024 02:05 PM