June 27, 2024

You might be tempted to think that running an app on a public cloud means you don’t need to maintain it. While that would be wonderful, it would require help from the public cloud providers and app developers themselves, and possibly a range of mythological creatures with magic powers. This is because any app, regardless of the infrastructure on which it runs or its output, requires maintenance in order to yield accurate and reliable outputs. 

Consequently, even though you likely saved a fortune on upfront costs by choosing public cloud infrastructure, you still need to spend on ensuring your application stack operates efficiently and as intended. Neglecting that maintenance could significantly harm your business, your projects, or your broader goals. For that reason, outsourcing the operations of apps running on public cloud infrastructure is a good idea. Let’s take a closer look at why.

Choosing Public Cloud 

When to pick public cloud infrastructure versus on-premises deployments is a conundrum that has long puzzled DevOps and GitOps engineers. Both have their upsides and downsides, but ultimately the choice comes down to your use case.

Let’s take resource utilisation as an example. A private cloud would need to run at near capacity almost constantly to be cost-efficient – meaning you would need to know what you’re going to run on it and have an accurate estimate of how many resources those workloads will require. That is what justifies the purchase of hardware: the resources it offers will be consistently used.

In comparison, high-performance workloads (such as those found in machine learning) benefit more from a highly flexible environment that can quickly increase the number of GPUs engaged in a computation to meet demand, and then decrease this number to a relatively low baseline for interpretation and optimisation. This can be achieved more easily with public clouds, with the added benefit of no upfront costs for hardware and installation. 

For these reasons, public cloud becomes the favoured environment for deploying apps such as AI/ML automation platforms (like Kubeflow), stream processing platforms (like Kafka), or any other application that handles sporadic spikes in data and computation which eventually settle back to a relatively low baseline.

Public Cloud App Operations

From an operational perspective, public cloud deployments require much the same ongoing maintenance as on-prem deployments. There is a persistent fallacy in the market that deploying an app on a public cloud cluster automatically makes operations easier. This may be true in some isolated instances, but it often leads to negligence that undermines the app’s functionality. What truly differentiates public cloud from private deployments is the aim of your operations: on-premises, you want stability and cohesion; on public cloud, you want to ensure flexibility and speed. Let us take a look at what operational excellence would look like in a public cloud deployment:

Performance monitoring 

You want to be able to keep an eye on your applications and their performance. The trick is that this is a highly manual process, because each app comes with its own metrics. While most open source applications will have key indicators that are quantifiable (for example, in a Grafana dashboard), there are exceptions depending on both your needs and the app’s key functionality. 

In general, performance metrics will include:

  • Capacity reports (i.e. how many resources your app is using out of the cluster in which it’s deployed)
  • Workload reports (i.e. how quickly your app does what it’s supposed to do)
  • Error reports (i.e. what causes incidents to happen). 

When setting up your deployment, it is key to ensure that you know what metrics you need to follow for each component, and make sure you’re implementing proper protocols that maintain constant and consistent performance measurement. This step can at times be manual, but has potential for automation in the hands of the right engineer. 
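
To make this slightly more concrete, here is a minimal sketch (assuming the Python prometheus_client library, with made-up metric names and a placeholder collect_stats() helper) of how the three report types above could be exposed to a Grafana/Prometheus-style dashboard:

# Minimal sketch: expose capacity, workload and error metrics for scraping.
# Assumes the prometheus_client library; metric names and the collect_stats()
# helper are made up for illustration.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

CAPACITY = Gauge("app_cluster_cpu_share", "Share of cluster CPU used by the app")
WORKLOAD = Gauge("app_job_duration_seconds", "Time taken by the last processing job")
ERRORS = Counter("app_incidents_total", "Incidents observed, labelled by cause", ["cause"])


def collect_stats():
    """Stand-in for whatever your app actually measures."""
    return {"cpu_share": random.random(), "job_seconds": random.uniform(0.1, 5.0)}


if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes http://host:9100/metrics
    while True:
        try:
            stats = collect_stats()
            CAPACITY.set(stats["cpu_share"])    # capacity report
            WORKLOAD.set(stats["job_seconds"])  # workload report
        except Exception:
            ERRORS.labels(cause="collection_failure").inc()  # error report
        time.sleep(15)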

Updates

It is no secret that you need to update all your apps – yes, even those on your phone – to ensure their optimal performance. The difference between the apps on your phone and the apps in your public cloud stack is that while not updating the former can result in decreased battery life and some delays, not updating the latter will result in your business losing money. And sometimes, lots of it. 

Efficient updates can help you in multiple ways. First, they give you access to the newest and greatest features of your apps; you already pay for them, so you might as well make the most of your money and maximise your tools’ functionality. Second, the efficiency and performance of your ecosystem may increase with each update, because performance bugs are almost exclusively tackled in version updates (leaving critical bugs for patches, which we will discuss later). Third, and most importantly, updates reinforce the security of your entire environment, because outdated app versions are often vulnerable to breaches or attacks.

It is therefore essential to make sure that all your applications are consistently updated. However, a fresh release does not mean you should apply it immediately. Each release needs to be scrutinised for compatibility, checked for vulnerabilities, and integrated holistically within your larger stack. Furthermore, updates can take a long time, might cause downtime to your workloads, and need to be implemented with extensive backup protocols to minimise data loss. In short, updates demand significant time and attention, but remain essential for anything deployed in public cloud clusters.

Security 

The word “security” has a tendency to be incredibly nebulous. Do you imagine a high wall that keeps any invader at bay? A super-secure gate which only opens upon multiple scans of one’s iris, fingerprints, and tone of voice? You’d be partly right in both cases, but there is so much more to security when it comes to public cloud instances. 

If there were one concept that connects all aspects of cybersecurity in multi-cloud environments, it would be ‘access.’ In effect, all the security practices you need to implement ensure that access is restricted only to those who need it, and even then only for the duration of their need. Any malicious or dangerous operation begins with a party obtaining unauthorised access to a certain part of your system. From there, they can either break what they’ve accessed, steal it, or try to make their way to even more components, placing you at higher and higher risk. When it comes to public cloud app security, once you’re in, you’re in. So you need to make sure that whoever is in is supposed to be there. We will discuss security on the public cloud in more detail in a standalone blog post in the next few weeks, but for now let me briefly outline what it entails from an operational standpoint by breaking it down into four pillars:

  1. Configuration 

Good security begins with the configuration of your cluster. Each component that makes up your public cloud stack must be correctly identified and documented, along with its variables and initial set-up parameters. You need a constantly updated, highly detailed map of your entire infrastructure, along with all the parameters associated with each component.

The work, however, does not stop there. Public cloud workloads are supposed to be constantly dynamic (if they were static, you’d be better off with a private cloud deployment). This means that, at any point during your deployment’s lifecycle, there may be changes to its infrastructure. You must ensure that changes are well scrutinised and considered before implementation, proactively announced to all the necessary stakeholders, and finally implemented by the correct agents, in a highly secure way that is mindful of the larger stack. These processes can be automated, but the automation itself requires extensive expertise, as missing even the smallest detail can compromise your work and leave you vulnerable to attacks.
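
As a purely illustrative sketch (the JSON file names and component fields are assumptions; in practice the live state would come from your cloud provider’s APIs or IaC tooling), an automated drift check against that documented map could look like this:

# Minimal configuration-drift check: compare the documented infrastructure map
# against the live state. Both inputs are hypothetical JSON files.
import json


def load(path):
    with open(path) as handle:
        return json.load(handle)


def find_drift(documented, live):
    """Return a list of human-readable differences between the two maps."""
    drift = []
    for component, expected in documented.items():
        actual = live.get(component)
        if actual is None:
            drift.append(f"{component}: documented but not found in live state")
            continue
        for key, value in expected.items():
            if actual.get(key) != value:
                drift.append(f"{component}.{key}: expected {value!r}, found {actual.get(key)!r}")
    for component in live:
        if component not in documented:
            drift.append(f"{component}: running but missing from the documented map")
    return drift


if __name__ == "__main__":
    for line in find_drift(load("documented_map.json"), load("live_state.json")):
        print(line)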

  2. Vulnerability management

Because we’re discussing open source app ecosystems in public cloud deployments, vulnerability management can be a delicate process. The developer of each app will likely have structurally similar, but logistically mismatched, protocols for the identification, triage, and mitigation of vulnerabilities in their products. This means that, should you manage your operations yourself, you will have to follow several calendars and work to varying timescales when it comes to vulnerability mitigation.

Mitigations can come in the form of updates, downgrades, or patches. But before applying anything, you must ensure that you yourself have a good system in place to identify and flag vulnerabilities, and either report them to app developers or fix them in-house. You can automate mitigations to a level, but you will require expertise in order to both develop and maintain this automation. 
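
To illustrate the in-house side of this, here is a small sketch (the advisory feed format, IDs and deployed inventory are invented for the example) that flags deployed component versions appearing in a vulnerability feed:

# Minimal vulnerability triage sketch: flag deployed components whose versions
# appear in an advisory feed. The feed format and the deployed inventory are
# invented for illustration; real feeds (OSV, vendor advisories, etc.) differ.
DEPLOYED = {"kafka": "3.5.0", "kubeflow": "1.7.0", "nginx": "1.24.0"}

ADVISORIES = [
    {"component": "kafka", "affected_versions": ["3.5.0"], "id": "EXAMPLE-2024-0001"},
    {"component": "nginx", "affected_versions": ["1.23.1"], "id": "EXAMPLE-2024-0002"},
]


def triage(deployed, advisories):
    """Return advisories that apply to the currently deployed versions."""
    hits = []
    for advisory in advisories:
        version = deployed.get(advisory["component"])
        if version in advisory["affected_versions"]:
            hits.append((advisory["id"], advisory["component"], version))
    return hits


if __name__ == "__main__":
    for advisory_id, component, version in triage(DEPLOYED, ADVISORIES):
        print(f"{advisory_id}: {component} {version} needs a mitigation (update, downgrade or patch)")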

Note: Using a portfolio of apps developed or maintained by a single company can make vulnerability management much easier and create cohesion within your ecosystem. Canonical is an example of such a company, with an extensive portfolio of highly interoperable applications.

  3. Monitoring

Even the most successful and carefully deployed environments have incidents. An incident could be an external attack, or simply something going wrong within the amalgamation of components – whatever the situation, you need a detailed and highly performant monitoring system in place to tell you when it happens.

In essence, monitoring is a set of practices and procedures that ‘keep an eye’ on your systems and ensure that everything does only what it’s supposed to do. Efficient monitoring will also help you keep track of who does what on your clusters, and even what happens during incidents, successful or otherwise. The benefits of monitoring are endless, and resemble those of documenting anything else: you have a chance to look back on the progression of your system, and you also offer any interested stakeholder the ability to evaluate what is happening (or has been happening) at any given time on your infrastructure.
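
For a flavour of the ‘who does what’ side of monitoring, here is a minimal sketch; the log path, line format and approved-user list are assumptions made for the example:

# Minimal audit sketch: flag actions performed by identities that are not on
# the approved list. The log path and line format ("timestamp user action")
# are assumptions for the sake of the example.
APPROVED = {"deploy-bot", "alice", "bob"}
AUDIT_LOG = "/var/log/example-cluster/audit.log"


def unexpected_actions(log_path, approved):
    findings = []
    with open(log_path) as handle:
        for line in handle:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip malformed lines
            timestamp, user, action = parts[0], parts[1], " ".join(parts[2:])
            if user not in approved:
                findings.append(f"{timestamp}: unexpected user {user!r} performed {action!r}")
    return findings


if __name__ == "__main__":
    for finding in unexpected_actions(AUDIT_LOG, APPROVED):
        print(finding)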

A more stringent form of monitoring is the security audit, which entails a trusted third party performing a deep-dive into your systems and their security protocols. The third party can be either a separate team within your company (in the case of larger enterprises) or a completely different entity. Audits are the best way to ensure that your security protocols are compliant and do not fall prey to human bias. 

While there are plenty of things that can be done to automate monitoring, it is one of the security components that does, at this point at least, still require some manual operation. It will, therefore, consume a significant amount of operational resources, and cannot be overlooked. 

  4. Incident management and recovery

As I mentioned above, even the most successful deployments have incidents. You cannot control this; things can go wrong at any time. You may think of worst-case scenarios: security breaches, horrible errors, or your entire system going down. But even losing half of an application’s core functionality because of an incorrect configuration or update can cause significant disruption to your business processes, and ultimately to your profitability. This is why it is essential to establish and enforce a set of predefined incident management processes.

An incident management process is, in essence, a handbook of hypothetical scenarios and the predefined actions to take in order to address them. To borrow some coding lingo, it is an extensive collection of ‘if’ statements that assume the worst; this ensures that no time is wasted on decision-making and the implementation of security measures when an incident happens, which shortens the time needed to restore your systems and improves the efficiency of your team during crisis situations.
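
To stretch the coding metaphor a little, such a handbook can be sketched as nothing more than a lookup table of predefined responses; the scenarios and actions below are invented placeholders, not a real runbook:

# The 'extensive collection of if statements' idea, sketched as a lookup table.
# Scenarios and actions are invented placeholders, not a real runbook.
RUNBOOK = {
    "unauthorised_access": [
        "revoke the affected credentials",
        "isolate the affected component",
        "notify the security lead and affected customers",
    ],
    "data_corruption": [
        "switch traffic to the standby replica",
        "restore the last verified backup",
        "open a post-incident review ticket",
    ],
    "service_down": [
        "roll back the most recent change",
        "scale out the healthy instances",
        "update the public status page",
    ],
}


def respond(incident_type):
    """Return the predefined actions so no time is wasted deciding during a crisis."""
    if incident_type in RUNBOOK:
        return RUNBOOK[incident_type]
    return ["escalate to the on-call engineer", "document the new scenario for the handbook"]


if __name__ == "__main__":
    for step in respond("unauthorised_access"):
        print("-", step)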

Bear in mind that both your own team and your customers need to benefit from the incident management handbook. You want to make sure that every stakeholder who is in any way involved in your public cloud deployment has the option to report an incident and, if necessary, fix it.

Currently, the International Organisation for Standardisation (ISO) proposes several standardised incident management processes that support companies in remaining compliant with local and global legislation. To achieve compliance with ISO standards, you need to dedicate substantial engineering resources and tailor both the configuration and maintenance of your clusters to perfection. In many situations, you will have no choice, as adherence to standards is the only way to legally operate within certain industries like healthcare, telecommunications, or national security. It is useful to note that even without ISO certification, developing and following internal incident and recovery protocols is an essential security practice. 

All in all, operational security can be an arduous task that eats into your resources and energy. It is also mandatory, regardless of the scope and size of your organisation: you simply cannot operate with an insecure environment. Otherwise, you risk not only being attacked and losing competitive advantage, but perhaps even receiving heavy fines from regulators – or losing the right to operate in certain industries altogether. Consequently, you must dedicate the utmost attention to your operational security.

Patching 

Patching, or patch management, refers to the distribution and application of local updates, or in other words updates dedicated to individual servers or clusters. These updates are often relatively urgent, as they are essential to the correction of security vulnerabilities or performance errors. 

Operationally speaking, several actions are required in order to maintain proper patching practices. Firstly, you must know as soon as possible when patches are available, and what your vulnerabilities are. It is also essential to have full visibility of your infrastructure, so you can know which endpoints are vulnerable and need patching. It is common practice for administrators to maintain reports on which patches were applied and where, as well as for which bugs. As your company grows, it would be wise to consider the adoption of standardised patching protocols as an internal rule, so that any new member of your team can join in with full transparency on how you patch. 
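
Such a patch report does not need to be elaborate; a minimal sketch of a ledger (the file name and fields are assumptions) might look like this:

# Minimal patch ledger: record which patch was applied where, when, and for
# which bug. The CSV file name and fields are assumptions for illustration.
import csv
from datetime import datetime, timezone

LEDGER = "patch_ledger.csv"
FIELDS = ["applied_at", "host", "component", "patch", "fixes"]


def record_patch(host, component, patch, fixes):
    """Append one row per applied patch so the history stays auditable."""
    with open(LEDGER, "a", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=FIELDS)
        if handle.tell() == 0:
            writer.writeheader()
        writer.writerow({
            "applied_at": datetime.now(timezone.utc).isoformat(),
            "host": host,
            "component": component,
            "patch": patch,
            "fixes": fixes,
        })


if __name__ == "__main__":
    record_patch("web-01", "nginx", "1.24.0-1", "EXAMPLE-2024-0002")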

Patching takes time, and each patch must be scrutinised. Generally, you will be able to automate a lot of the processes, but you will still need engineering wisdom and time in order to ensure both that the automation runs correctly and that the applied patches fit well within your larger public or multi-cloud ecosystem. 

What else?

I know the above sounds like a lot – and the truth is, for a lot of companies, it is. Especially if your primary focus is not software or hardware, you will likely want to concentrate on what really matters to you. Undoubtedly, it can be daunting to know that in order to stay competitive, you will almost certainly have to go digital and choose some level of cloud infrastructure. This inevitably means you’ll be required to operate it to the standards mentioned above. However, there is a solution that many companies select every day to avoid this unnecessary headache: outsourcing.

With regards to public cloud infrastructure, outsourcing your operations means starting a collaboration with a managed service provider (also known as an MSP), who will handle everything for you under predefined contract terms. This can have many benefits, from giving you more predictability over your costs to freeing you to focus on your core business.

In the next part of this blog post, we will look more in depth at the benefits and key considerations you should know before, during, and after you’ve selected an MSP for your public cloud deployments. If you are eager to find out more, get in touch with our team using this link.

on June 27, 2024 11:30 AM

We are thrilled to announce that we will exhibit at SIGGRAPH 2024 in Denver, Colorado, from July 30th to August 1st. 

Visit us at booth #317 to discover how Ubuntu and other Canonical solutions can drive innovation in animation and enable secure usage of open-source software.

As the backbone of many modern animation studios, Ubuntu provides a robust and secure platform for creating stunning visual effects and animations.

At our booth, we will showcase how our solutions can help you use the open-source software you need and get enterprise-grade support, security, and compliance coverage.

Would you like to meet our team in person? Let us know, and we will take it from there.

Book a meeting

Open-source software in Animation

Before open-source, traditional stop-motion animation often needed expensive, specialized equipment and software. But today, almost every modern animation studio runs open-source in some capacity.

For example, your studio probably regularly uses tools like Blender, Dragonframe, Unity, Unreal, or DaVinci Resolve.

All of these tools can run on Ubuntu, the world’s most popular Linux distribution. 

Ubuntu is an open-source operating system that allows animators to use powerful animation tools on affordable hardware, making sophisticated stop-motion animation accessible even on a budget. 

Ubuntu also ensures stability and compatibility with a wide range of animation hardware and software, so animators can use tools that integrate well with digital cameras and motion control rigs, enabling smooth and efficient animation processes.

Although open-source tools can offer flexibility, cost-effectiveness and a collaborative development model that drives innovation, it can still be challenging to ensure they are secure and will be supported in the future. 

That’s where Ubuntu Pro comes in.

Why Ubuntu Pro for animation studios:

  • Security and Compliance: With Ubuntu Pro, you get 10 years of maintenance for over 25,000 open-source packages, including Blender. This reduces your average exposure to serious security vulnerabilities from 98 days to just 1 day, keeping your systems secure.
  • Empower your creative team and upgrade your software on your schedule, all while keeping your entire production pipeline secure. With our infrastructure management tool, Landscape, you’ll benefit from unparalleled compatibility and stability across a wide range of animation hardware and software.
  • Affordable Animation Setup & System Management. Ubuntu is and always will be free. Ubuntu Pro is a single subscription that overcomes traditional barriers of cost, complexity, and data management for open-source animation and VFX tools. 
  • Enterprise-grade support when you need it. Whether you need 24/7/365 worldwide assistance, managed services, or specialized training, Canonical, the company behind Ubuntu, has you covered. We can design a tailored support plan to ensure business continuity and keep your infrastructure resilient.

About SIGGRAPH 2024

SIGGRAPH 2024 is the 51st International Conference and Exhibition on Computer Graphics and Interactive Techniques. It features diverse sessions and exhibits, showcasing the latest advancements in computer graphics, visual effects, interactive techniques, and digital art.

Gaming developers, visual effects/animation studios, and VFX tool creators, we want to meet you all and learn how you leverage open-source software today and how we can make this journey secure and efficient.

Did you know that we deliver and maintain the majority of open-source packages on the public cloud (25,000+ open-source packages)? 

You can hire the engineers behind Ubuntu to help you build, maintain, and operate secure clouds or subscribe to Ubuntu Pro to cover all aspects of open infrastructure and architecture.

Interested to learn more? Book a meeting with our booth team.

on June 27, 2024 09:52 AM

E305 NAS vs Backup: Massacre Em Haia

Podcast Ubuntu Portugal

After a two-week hiatus, General Montgomery returned visibly inebriated and fermenting unworkable ideas and projects. We talked about the Raspberry Pi 5, aviation champions, craft beer, passive cooling, Raspberry Pi OS versus Ubuntu, and the obvious (or not so obvious) difference between backups and a NAS built out of chewing-gum wrappers, a paper clip and a rubber band. A very important Summit is about to take place in The Hague, where it turns out Canonical will not be tried at the International Criminal Court for allegedly wanting to mercilessly exterminate deb packages. But some people are taking holidays to go and watch.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get the whole bundle for 15 dollars, or different portions depending on whether you pay 1 or 8. We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying whatever you like. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will be supporting us as well.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT Licence. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on June 27, 2024 12:00 AM

June 24, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 845 for the week of June 16 – 22, 2024. The full version of this issue is available here.

In this issue we cover:

  • Developer Membership Board restaffing
  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • Come contribute to Ubuntu-pt
  • Ubuntu Packaging Workshop – July 6th @Busan
  • UbuCon Asia: Accommodations
  • LoCo Events
  • Ubuntu Cloud News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • In Other News
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, 23.10, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on June 24, 2024 08:58 PM

June 21, 2024

As the tech world comes together to celebrate FreeBSD Day 2024, we are thrilled to bring you an exclusive interview with none other than Beastie, the iconic mascot of BSD! In a rare and exciting appearance, Beastie joins Kim McMahon to share insights about their journey, their role in the BSD community, and some fun personal preferences. Here’s a sneak peek into the life of the beloved mascot that has become synonymous with BSD.

From Icon to Legend: How Beastie Became the BSD Mascot

Beastie, with their distinct and endearing devilish charm, has been the face of BSD for decades. But how did they land this coveted role? During the interview, Beastie reveals that their journey began back in the early days of BSD. The character was originally drawn by John Lasseter of Pixar fame, and quickly became a symbol of the BSD community’s resilience and innovation. Beastie’s playful yet formidable appearance captured the spirit of BSD, making them an instant hit among developers and users alike.

A Day in the Life of Beastie

What does a typical day look like for the BSD mascot? Beastie shares that their role goes beyond just being a symbol. They actively participate in community events, engage with developers, and even help in promoting BSD at various conferences around the globe. Beastie’s presence is a source of inspiration and motivation for the BSD community, reminding everyone of the project’s rich heritage and vibrant future.

Beastie’s Favorite Tools and Editors

No interview with a tech mascot would be complete without delving into their favorite tools. Beastie is an advocate of keeping things simple and efficient. When asked about their preferred text editor, Beastie enthusiastically endorsed Vim, praising its versatility and powerful features. They also shared their admiration for the classic Unix philosophy, which aligns perfectly with the minimalist yet powerful nature of Vim.

Engaging with the BSD Community

Beastie’s role is not just about representation; it’s about active engagement. They spoke about the importance of community in the BSD ecosystem and how it has been pivotal in driving the project forward. From organizing hackathons to participating in mailing lists, Beastie is deeply involved in fostering a collaborative and inclusive environment. They highlighted the incredible contributions of the BSD community, acknowledging that it’s the collective effort that makes BSD a robust and reliable operating system.

Looking Ahead: The Future of BSD

As we look to the future, Beastie remains optimistic about the path ahead for BSD. They emphasized the ongoing developments and the exciting projects in the pipeline that promise to enhance the BSD experience. Beastie encouraged new users and seasoned developers alike to explore BSD, contribute to its growth, and be a part of its dynamic community.

Join the Celebration

To mark FreeBSD Day 2024, the community is hosting a series of events, including workshops, Q&A sessions, and more. Beastie’s interview with Kim McMahon is just one of the highlights. Be sure to tune in and catch this rare glimpse into the life of BSD’s beloved mascot.

Final Thoughts

Beastie’s interview is a testament to the enduring legacy and vibrant community of BSD. As we celebrate FreeBSD Day 2024, let’s take a moment to appreciate the contributions of everyone involved and look forward to an exciting future for BSD.

Don’t miss out on this exclusive interview—check it out on YouTube and join the celebration of FreeBSD Day 2024!

Watch the interview here

The post Celebrating FreeBSD Day 2024: An Exclusive Interview with Beastie appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

on June 21, 2024 06:03 PM

June 20, 2024

When embarking on a new software project, one of the critical decisions you’ll face is selecting the right open-source license. The choice can significantly impact how your software can be used, modified, and distributed. Two of the most popular open-source licenses are the BSD (Berkeley Software Distribution) license and the GPL (General Public License). Each has its own philosophy and implications for your code. In this blog post, we’ll compare these licenses to help you determine which one might be the best fit for your project.

The BSD License: Freedom with Few Restrictions

The BSD license is known for its permissiveness. It originated with the Berkeley Software Distribution, a Unix operating system derivative. The license is minimalistic and designed to provide maximum freedom to users. Here are the key characteristics:

Key Features of BSD License

  1. Permissiveness: The BSD license is very permissive, allowing almost unrestricted use of the code. Users can modify, distribute, and even incorporate the code into proprietary products without much obligation.
  2. Minimal Requirements: The main requirements are to include the original copyright notice and disclaimers of liability in all copies or substantial portions of the software.
  3. Compatibility: Because of its permissiveness, the BSD license is highly compatible with other licenses, both open-source and proprietary.

Pros and Cons of BSD License

  • Pros:
    • High flexibility for developers and companies.
    • Encourages widespread use and adoption.
    • Simplifies integration with other projects.
  • Cons:
    • Limited protection against proprietary use of the code.
    • No guarantee that improvements will be shared back with the community.

The GPL License: Protecting Freedom Through Copyleft

The GPL, created by the Free Software Foundation, is designed to ensure that software remains free and open. The GPL uses a concept known as “copyleft” to require that any derivative work must also be distributed under the GPL.

Key Features of GPL License

  1. Copyleft: The defining feature of the GPL is its copyleft provision, which mandates that any modified versions of the software must be distributed under the same license.
  2. Source Code Availability: Any distributed software under the GPL must make the source code available to recipients.
  3. Compatibility: GPL’s strict requirements can make it incompatible with some other licenses, especially proprietary ones.

Pros and Cons of GPL License

  • Pros:
    • Ensures that software and its derivatives remain free and open.
    • Encourages contributions back to the community.
    • Provides strong legal protection for the rights of users and developers.
  • Cons:
    • Can be restrictive for commercial use and integration with proprietary software.
    • May limit adoption in environments where mixing with proprietary software is necessary.

Choosing Between BSD and GPL

The choice between BSD and GPL depends largely on your goals and philosophy as a developer or organization.

When to Choose BSD

  • Flexibility: If you want maximum flexibility for how your software can be used, modified, and distributed, the BSD license is a good choice.
  • Broad Adoption: If you aim for your software to be widely adopted, including in proprietary environments, BSD’s permissiveness can be beneficial.
  • Minimal Restrictions: For projects where minimal legal overhead is preferred, the simplicity of the BSD license is advantageous.

When to Choose GPL

  • Protecting Freedom: If your priority is to ensure that your software and its derivatives remain free and open, the GPL’s copyleft provision is crucial.
  • Community Contributions: If you want to foster a collaborative environment where improvements are shared back with the community, the GPL encourages this through its requirements.
  • Legal Protection: For projects that need strong legal protections for user freedoms, the GPL provides a robust framework.

Conclusion

Both the BSD and GPL licenses have their strengths and ideal use cases. The BSD license offers flexibility and minimal restrictions, making it suitable for projects that aim for broad adoption and integration with proprietary software. On the other hand, the GPL license ensures that software remains free and open, fostering community contributions and protecting user rights.

Carefully consider your project’s goals, your philosophy on software freedom, and the potential implications for your code’s use and distribution when choosing between these licenses. By doing so, you can ensure that your project aligns with your values and achieves its intended impact.

The post Comparing BSD and GPL Licenses: Which One Fits Your Project? appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

on June 20, 2024 01:10 PM

E304 Arquivo De Bugs (Bug Archive)

Podcast Ubuntu Portugal

This week the working-class podcasters united in a day of struggle and went on strike, for an end to the high cost of living. But don’t worry: Bação played strike-breaker and came to help the bosses, represented by Diogo. They talked about the Remarkable, a snap package that kills itself; Omnivore, the software that eats everything; yet another Fairphone in the boss’s house; and they also held forth on digital-literacy education for children in the family context, especially when they are surrounded by Free Software.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get the whole bundle for 15 dollars, or different portions depending on whether you pay 1 or 8. We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying whatever you like. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will be supporting us as well.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT Licence. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on June 20, 2024 12:00 AM

June 18, 2024

 

Download and Use the Latest Stable Version of Nginx

To ensure you receive the latest security updates and bug fixes for Nginx, configure your system's repository specifically for it. Detailed instructions on how to achieve this can be found on the Nginx website. Setting up the repository allows your system to automatically download and install future Nginx updates, keeping your web server running optimally and securely.

Visit these websites for information on how to configure your repository for Nginx.

https://nginx.org/en/linux_packages.html

https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/ 

Installing Nginx on different Linux distributions

Example from https://docs.bunkerweb.io/latest/integrations/#linux 

Ubuntu

sudo apt install -y curl gnupg2 ca-certificates lsb-release debian-archive-keyring && \
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
| sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null && \
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/debian `lsb_release -cs` nginx" \
| sudo tee /etc/apt/sources.list.d/nginx.list

# Latest stable (pick either latest stable or by version)

sudo apt update && \
sudo apt install -y nginx

# By version (pick one only, latest stable or by version)

sudo apt update && \
sudo apt install -y nginx=1.24.0-1~$(lsb_release -cs)

AlmaLinux / Rocky Linux (Redhat)

Create the following file at /etc/yum.repos.d/nginx.repo

[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

# Latest Stable (pick either latest stable or by version)

sudo dnf install nginx

# By version (pick one only, latest stable or by version)

sudo dnf install nginx-1.24.0

Nginx Fork (for reference only – 2024)

https://thenewstack.io/freenginx-a-fork-of-nginx/ 

https://github.com/freenginx/ 

Use this web tool to configure Nginx.

https://www.digitalocean.com/community/tools/nginx

https://github.com/digitalocean/nginxconfig.io 

Example

https://www.digitalocean.com/community/tools/nginx?domains.0.server.domain=songketmail.linuxmalaysia.lan&domains.0.server.redirectSubdomains=false&domains.0.https.hstsPreload=true&domains.0.php.phpServer=%2Fvar%2Frun%2Fphp%2Fphp8.2-fpm.sock&domains.0.logging.redirectAccessLog=true&domains.0.logging.redirectErrorLog=true&domains.0.restrict.putMethod=true&domains.0.restrict.patchMethod=true&domains.0.restrict.deleteMethod=true&domains.0.restrict.connectMethod=true&domains.0.restrict.optionsMethod=true&domains.0.restrict.traceMethod=true&global.https.portReuse=true&global.https.sslProfile=modern&global.https.ocspQuad9=true&global.https.ocspVerisign=true&global.security.limitReq=true&global.security.securityTxt=true&global.logging.errorLogEnabled=true&global.logging.logNotFound=true&global.tools.modularizedStructure=false&global.tools.symlinkVhost=false 

Harisfazillah Jamel - LinuxMalaysia - 20240619
on June 18, 2024 10:18 PM

June 17, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 844 for the week of June 9 – 15, 2024. The full version of this issue is available here.

In this issue we cover:

  • Developer Membership Board restaffing
  • Ubuntu Stats
  • Hot in Support
  • UbuCon Asia – Call for Global Committee members
  • Call For Speaker For Mini UbuCon Malaysia 2024
  • News! UbuCon Latin America 2024, 9th Edition
  • Flisol Barranquilla 2024, How it went!
  • LoCo Events
  • FOSSCOMM 2024 – Thessaloniki, Greece
  • Ubuntu themes are now on the Chrome Web Store!
  • Data science stack Beta is here. Try it now!
  • Ubuntu Desktop’s 24.10 Dev Cycle – June Update
  • Canonical News
  • In the Blogosphere
  • In Other News
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, 23.10, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the list!


on June 17, 2024 10:58 PM

June 16, 2024

The previous release of uCareSystem, version 24.05.0, introduced enhanced maintenance and cleanup capabilities for flatpak packages. The uCareSystem 24.06.0 has a special treat for desktop users 🙂 This new version includes: The people who supported the previous development cycle ( version 24.05 ) with their generous donations and are mentioned in the uCareSystem app: Where […]
on June 16, 2024 08:49 PM

June 09, 2024

Call For Speaker for Mini UbuCon Malaysia 2024

Mini UbuCon Malaysia 2024: Call for Speakers

Share your Ubuntu expertise!

The Ubuntu Malaysia LoCo Team is thrilled to invite you to speak at Mini UbuCon Malaysia 2024 on August 7, 2024, during Siber Siaga and CyberDSA. This is your chance to share your knowledge and experiences with a passionate community of Ubuntu users.

What is Ubuntu?

Ubuntu is a free and open-source operating system based on Linux. It's popular for its user-friendly interface, wide range of software applications, and strong community support. Whether you're a developer, system administrator, or everyday computer user,  Ubuntu offers a powerful and customizable computing experience.

What We're Looking For

  • Technical Talks
    • Case studies, development experiences, implementations, and applications related to Ubuntu.
    • Focus on practical, knowledge-sharing content.
    • Avoid marketing materials.
  • Ubuntu-Centric Topics
    • Priority given to subjects directly related to Ubuntu, its ecosystem, and the community.
    • Be specific! Broad topics like "What is Security?" are less likely to be accepted. For instance, a talk on "Advanced Security Configurations in Ubuntu" is more likely to be accepted than a general overview of security.
  • Unique Insights
    • Share your hard-won experiences and hard-to-find information, not easily searchable content.
  • Community-Oriented Talks
    • Generic topics are okay if directly connected to Ubuntu projects or the community (e.g., managing budget for UbuCon events).

Please note that this is a volunteer opportunity and speaker participation is at your own expense.

Get Inspired

Browse past UbuCon sessions on the Ubuntu wiki for ideas: https://wiki.ubuntu.com/Ubucon

Speaker Requirements

Important Dates

  • Speaker Application Deadline: July 5, 2024
  • Selection Notification: After selection process by the committee
  • Presentation Slide Submission: August 1, 2024 (if selected)

How to Apply

About Mini UbuCon Malaysia 2024

Mark your calendars for August 7, 2024, as Mini UbuCon Malaysia 2024 will take place during the prestigious Siber Siaga and CyberDSA events. These events provide a dynamic platform for cybersecurity and digital transformation, making them the perfect backdrop for our Mini UbuCon.

For those of you who want to attend Mini UbuCon Malaysia 2024, you still need to register through SIBER SIAGA 2024. Visit Facebook https://www.facebook.com/sibersiaga  and other links https://linktr.ee/sibersiaga24 for more info.

Online Registration

  • Visit: https://form.evenesis.com/cyberdsa2024
  • Select "General Visitors" by pressing + for total (at least 1).
  • Click "Continue" and fill out the required information.
  • In the field "Any specific vendors / exhibitors you are interested in meeting?", enter "Mini UbuCon Malaysia 2024".
  • Press "Register".

Anything regarding Mini UbuCon 2024 can be asked on Telegram Ubuntu Malaysia Loco Team: https://t.me/ubuntumalaysia

Don't miss this opportunity to connect with the Ubuntu community in Malaysia.

Privacy Notice

We take your privacy seriously. Any personal information collected through the online speaker application form, including email addresses, telephone numbers, names, and profiles, will only be used for purposes related to Mini UbuCon Malaysia 2024 speaker selection and communication. This information will not be shared with third parties without your consent.

FAQs

  1. What topics are best suited for Mini UbuCon Malaysia 2024? We are looking for technical talks, Ubuntu-centric topics, unique insights, and community-oriented talks that provide value and knowledge to the Ubuntu community.

  2. How do I submit my application to speak? Submit a 50-100 word abstract through the online form. Ensure you meet the application deadline.

  3. Can I attend Mini UbuCon Malaysia 2024 without speaking? Yes, you can attend by registering through Siber Siaga 2024. In the field "Any specific vendors / exhibitors you are interested in meeting?", enter "Mini UbuCon Malaysia 2024".

  4. What is the Creative Commons license requirement? All presentation slides must be published under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license to encourage sharing and collaboration.

  5. Where can I get more information about past UbuCon sessions? You can browse past sessions on the Ubuntu wiki for ideas and inspiration.


    Ubuntu Meetup Release Party 24.04 - 25 May 2024

on June 09, 2024 10:02 PM

June 06, 2024

This year’s edition of the annual Linux Plumbers Conference will be in Vienna, Austria between September 18th and 20th.

I’ll once again be one of the organizers for the Containers and Checkpoint/Restore micro-conference where I’m looking forward to a half-day of interesting topics on containers, namespacing, resource limits, security and the ability to serialize and restore all of that stuff!

We just published our CFP for that micro-conference with a deadline of July 15th for anyone interested in presenting their work. You may also want to look at the extensive list of other micro-conferences and tracks.

As usual for this conference, presenting within one of the many micro-conferences doesn’t provide you a ticket to attend the conference. So anyone interested in attending or presenting should be looking at getting their registration done now while early bird tickets remain!

LPC runs as a hybrid event with remote participation possible through video-conferencing and accessing shared notes. While it’s technically possible to present remotely too, it’s usually preferred to do that in person.

See you all in Vienna!

on June 06, 2024 08:23 PM

June 05, 2024

I am still here, busy as ever; I just haven’t found the inspiration to blog. So soon after the loss of my son, I lost my only brother a couple of weeks ago. It has been a tough year for our family. Thank you, everyone, for your love and support during this difficult time. I will do my best in recapping my work; there has been quite a bit, as I am “keeping busy with work” so I don’t dwell too much on the sadness.

KDE Snaps:

Trying to debug the “unable to save files” breakage in the latest Krita builds, without luck so far.

KisOpenGLCanvasRenderer::reportFailedShaderCompilation: Shader Compilation Failure: "Failed to add vertex shader source from file: matrix_transform.vert - Cause: "

I have implemented everything from https://snapcraft.io/docs/gpu-support; it has worked for years and now it has suddenly stopped. I have had to put it on hold for now, as it is unpaid work and I simply don’t have the time.

With the help of my GSOC student we are improving the Qt6 snap MR: https://invent.kde.org/neon/snap-packaging/kde-qt6-core-sdk/-/merge_requests/3 and many improvements on top of that. This exposed many issues with the kf6 snap and the linking to static libs. Those are being worked on now.

Updated qt to 6.7.1

Qt6 apps in the works: okular, ark, gwenview, kwrited, elisa

Kubuntu:

So many SRUs for the Noble release that I will probably miss a few.

https://bugs.launchpad.net/ubuntu/+source/ark/+bug/2068491 Ark cannot open 7-zip files. Sadly the patches were for qt6, waiting for a qt5 port upstream.

https://bugs.launchpad.net/ubuntu/noble/+source/merkuro/+bug/2065063 Crash due to missing qml. Fix is in git, no upload rights. Requested sponsor.

https://bugs.launchpad.net/ubuntu/+source/tellico/+bug/2065915 Several applications no longer work on architectures that are not amd64 due to hard coded paths. All fixed in git. Several uploaded to oracular, several sponsorship has been requested. Noble updates rejected despite SRU, going to retry.

https://bugs.launchpad.net/ubuntu/+source/sddm/+bug/2066275 The dreaded black screen on second boot bug is fixed in git and oracular. Noble was rejected despite the SRU. Will retry.

https://bugs.launchpad.net/ubuntu/+source/kubuntu-meta/+bug/2066028 Broken systray submenus. Fixed in git and oracular. Noble rejected despite SRU. Will retry.

https://bugs.launchpad.net/ubuntu/+source/plasma-workspace/+bug/2067747 Long standing bug with plasma not loading with lightdm. Fixed in git and oracular. Noble rejected… will retry.

https://bugs.launchpad.net/ubuntu/+source/plasma-workspace/+bug/2067742 CVE-2024-36041. Fixed in git and oracular; Noble rejected, will retry.

And many more…

I am applying for MOTU in hopes it will reduce all of my uploading issues. https://wiki.ubuntu.com/scarlettmoore/MOTUApplication

Debian:

kf6-knotifications and kapidox. Will jump into Plasma 6 next week!

Misc:

Went to LinuxFest Northwest with Valorie! We had a great time and it was a huge success; many people stopped by our booth.

As usual, if you like my work and want to see Plasma 6 in Kubuntu it all depends on you!

Kubuntu will be out of funds soon and needs donations! Thank you for your consideration.

https://kubuntu.org/donate/

Personal:

Support for my grandson: https://www.gofundme.com/f/in-loving-memory-of-william-billy-dean-scalf

on June 05, 2024 05:30 PM

June 03, 2024

Announcing Incus 6.2

Stéphane Graber

This release is the second one to feature contributions from students at the University of Texas at Austin. There are a couple more student-contributed features that will most likely make it into Incus 6.3, at which point we’ll have wrapped up all of those for this year.

If you’d like to try your hand at contributing some code to Incus, we maintain a list of issues for newcomers, primarily issues and features that are well understood and on which we’d be happy to provide mentoring and assistance.

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

on June 03, 2024 08:07 PM

June 02, 2024

My Debian contributions this month were all sponsored by Freexian.

The bulk of my Debian time this month went towards trying to haul more Python packages up to current versions, but I got a few other bits and pieces done as well.

  • I did a little work on improving debbugs’ autopkgtest status.
  • openssh:
    • I fixed an OpenSSL version mismatch error in openssh-ssh1.
    • I finally tracked down a baffling CI issue in openssh, unblocking several contributed merge requests that I’d been sitting on until I could get CI to pass for them. (Special thanks to Andreas Hasenack; GSS-API integration tests will make my life much easier.)
    • I removed the user_readenv=1 option from openssh’s PAM configuration, and did some work on release notes to document this change for affected users.
    • I started work on the first stage of my plan to split out GSS-API key exchange support to separate packages.
  • Python team:
    • I upgraded bitstruct, flufl.enum, flufl.testing, gunicorn, langtable, psycopg3, pygresql, pylint-flask, python-click-didyoumean, python-gssapi, python-httplib2, python-json-log-formatter, python-persistent, python-pgspecial, python-pyld, python-repoze.tm2, python-serializable, python-tenacity, python-typing-extensions, python-unidiff, responses, shortuuid (including an upstream packaging tweak), sqlparse, vulture, zc.lockfile, and zope.interface to new upstream versions.
    • I cherry-picked an upstream PR to fix a pytest 8 incompatibility in ipywidgets.
  • I decided that fixing my old troffcvt package to support groff 1.23.0 wasn’t worth the time investment, and filed a removal request instead.
  • I NMUed bidentd and linuxtv-dvb-apps to declare Architecture: linux-any (and in the latter case also to fix a build failure due to 64-bit time), and worked with the buildd team to remove several of the other remaining entries from Packages-arch-specific.

You can support my work directly via Liberapay.

on June 02, 2024 10:53 AM

June 01, 2024

I’ve got a self-hosted Nextcloud installation which is, frankly, a pain: there’s a good chance updates will break everything.

Plesk is a fairly intuitive interface for self-hosting but is best described as non-standard Ubuntu – it handles PHP oddly in particular. Try running any of the php occ commands in its built-in ssh terminal and you’ll experience an exercise in frustration.

It’s so much easier to ssh from an Ubuntu terminal and run occ commands directly – giving a clear error message that you can actually do something with.

So it’s the Circles app; let’s disable it:

php occ app:disable circles

Then repair the installation:

php occ maintenance:repair

Finally turn off maintenance mode:

php occ maintenance:mode --off

I don’t even use that app but we’re back.

on June 01, 2024 03:37 PM

May 24, 2024

In my previous blog, I explored The New APT 3.0 solver. Since then I have been at work in the test suite making tests pass and fixing some bugs.

You see, for all intents and purposes, the new solver is a very stupid, naive DPLL SAT solver (it just so happens we don’t actually have any pure literals in there). We can control it in a bunch of ways:

  1. We can mark packages as “install” or “reject”
  2. We can order actions/clauses. When backtracking the action that came later will be the first we try to backtrack on
  3. We can order the choices of a dependency - we try them left to right.

This is about all that we really want to do. We can’t, if we reach a conflict, say “oh, but this conflict was introduced by that upgrade, and it seems more important, so let’s not backtrack on the upgrade request but on this dependency instead.”

This forces us to think about lowering the dependency problem into this form, such that not only do we get formally correct solutions, but also semantically correct ones. This is nice because we can apply a systematic way to approach the issue rather than introducing ad-hoc rules in the old solver which had a “which of these packages should I flip the opposite way to break the conflict” kind of thinking.

Now our test suite has a whole bunch of these semantics encoded in it, and I’m going to share some problems and ideas for how to solve them. I can’t wait to fix these and the error reporting and then turn it on in Ubuntu and later Debian (the defaults change is a post-trixie change, let’s be honest).

apt upgrade is hard

The apt upgrade command implements a safe version of dist-upgrade that essentially calculates the dist-upgrade and then undoes anything that would cause a package to be removed, but (unlike its apt-get counterpart) it allows the solver to install new packages.

Now, consider the following package is installed:

X Depends: A (= 1) | B

An upgrade from A=1 to A=2 is available. What should happen?

The classic solver would choose to remove X in a dist-upgrade and then upgrade A, so its answer is quite clear: keep back the upgrade of A.

The new solver however sees two possible solutions:

  1. Install B to satisfy X Depends A (= 1) | B.
  2. Keep back the upgrade of A

Which one does it pick? This depends on the order in which it sees the upgrade action for A and the dependency, as it will backjump chronologically. So

  1. If it gets to the dependency first, it marks A=1 for install to satisfy A (= 1). Then it gets to the upgrade request, which is just A Depends A (= 2) | A (= 1) and sees it is satisfied already and is content.

  2. If it gets to the upgrade request first, it marks A=2 for install to satisfy A (= 2). Then later it gets to X Depends: A (= 1) | B, sees that A (= 1) is not satisfiable, and picks B.
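
To make the order-dependence concrete, here is a deliberately naive backtracking sketch (not APT’s actual code; package versions are encoded as plain string literals) that reproduces both outcomes purely by reordering the clauses:

# Naive chronological backtracking over ordered clauses (not APT's real code).
# A clause is a list of (literal, wanted_truth_value) pairs, tried left to right.
def solve(clauses, assignment=None):
    assignment = {} if assignment is None else assignment
    for clause in clauses:
        if any(assignment.get(lit) == want for lit, want in clause):
            continue  # clause already satisfied
        candidates = [(lit, want) for lit, want in clause if lit not in assignment]
        if not candidates:
            return None  # conflict: every literal already falsified
        for lit, want in candidates:
            assignment[lit] = want
            result = solve(clauses, assignment)
            if result is not None:
                return result
            del assignment[lit]  # chronological backtracking
        return None
    return assignment


dependency = [("A=1", True), ("B", True)]       # X Depends: A (= 1) | B
upgrade = [("A=2", True), ("A=1", True)]        # upgrade request: A (= 2) | A (= 1)
one_version = [("A=1", False), ("A=2", False)]  # at most one version of A

# Dependency seen first: A=1 is kept, the upgrade is held back.
print(solve([dependency, one_version, upgrade]))  # {'A=1': True, 'A=2': False}
# Upgrade seen first: A=2 is chosen, so B is pulled in to satisfy X.
print(solve([upgrade, one_version, dependency]))  # {'A=2': True, 'A=1': False, 'B': True}

The only difference between the two runs is the position of the upgrade clause.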

We have two ways to approach this issue:

  1. We always order upgrade requests last, so they will be kept back in case of conflicting dependencies
  2. We require that, for apt upgrade a currently satisfied dependency must be satisfied by currently installed packages, hence eliminating B as a choice.

Recommends are hard too

See, if you have an X Recommends: A (= 1) and a new version of A, A (= 2), the solver currently will silently break the Recommends in some cases.

But let’s explore what the behavior of a X Recommends: A (= 1) in combination with an available upgrade of A (= 2) should be. We could say the rule should be:

  • An upgrade should keep back A instead of breaking the Recommends
  • A dist-upgrade should either keep back A or remove X (if it is obsolete)

This essentially leaves us the same choices as for the previous problem, but with an interesting twist. We can change the ordering (and we already did), but we could also introduce a new rule, “promotions”:

A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently satisfied, must continue to be satisfied, that is, it effectively is promoted to a Depends.

This neatly solves the problem for us. We will never break Recommends that are satisfied.

Likewise, we already have a Recommends demotion rule:

A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently unsatisfied, will not be further evaluated (it is treated like a Suggests is in the default configuration).

Whether we should be allowed to break Suggests with our decisions or not (the old autoremover did not, for instance) is a different decision. Should we promote currently satisfied Suggests to Depends as well? Should we follow currently satisfied Suggests so the solver sees them and doesn’t autoremove them, but treat them as optional?

tightening of versioned dependencies

Another case of versioned dependencies with alternatives that has complex behavior is something like

X Depends: A (>= 2) | B
X Recommends: A (>= 2) | B

In both cases, installing X should upgrade an installed A < 2 rather than install B. But a naive SAT solver might not. If your request to keep A installed is encoded as A (= 1) | A (= 2), then it first picks A (= 1). When it sees the Depends/Recommends it will switch to B.

We can solve this again as in the previous example by ordering the “keep A installed” requests after any dependencies. Notably, we will enqueue the common dependencies of all A versions first before selecting a version of A, so something may select a version for us.

version narrowing instead of version choosing

A different approach to dealing with the issue of version selection is to not select a version until the very last moment. So instead of selecting a version to satisfy A (>= 2), we translate

Depends: A (>= 2)

into two rules:

  1. The package selection rule:

     Depends: A
    

    This ensures that any version of A is installed (i.e. it adds a version choice clause, A (= 1) | A (= 2) in an example with two versions of A).

  2. The version narrowing rule:

     Conflicts: A (<< 2)
    

    This would outright reject a choice of A (= 1).

So now we have 3 kinds of clauses:

  1. package selection
  2. version narrowing
  3. version selection

If we process them in that order, we should surely be able to find the solution that best matches the semantics of our Debian dependency model, i.e. selecting earlier choices in a dependency before later choices in the face of version restrictions.
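As a rough illustration of this lowering (an invented helper with integer versions, not the real implementation), Depends: A (>= 2) could be split into the two clause kinds like so:

# Sketch: lower "Depends: A (>= 2)" into a package selection clause plus a
# version narrowing clause, deferring the actual version choice.
def lower_versioned_depends(package, known_versions, minimum):
    selection = [f"{package} (= {v})" for v in known_versions]
    narrowing = [f"Conflicts: {package} (= {v})" for v in known_versions if v < minimum]
    return selection, narrowing

selection, narrowing = lower_versioned_depends("A", [1, 2], minimum=2)
print(" | ".join(selection))   # A (= 1) | A (= 2)      -> package selection
print(narrowing)               # ['Conflicts: A (= 1)'] -> version narrowing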

This still leaves one issue: What if our maintainer did not use Depends: A (>= 2) | B but e.g. Depends: A (= 3) | B | A (= 2)? He’d expect us to fall back to B if A (= 3) is not installable, and not to A (= 2). But we’d like to enqueue A and reject all choices other than 3 and 2. I think it’s fair to say: “Don’t do that, then” here.

Implementing strict pinning correctly

APT knows a single candidate version per package; this makes the solver relatively deterministic: it will only ever pick the candidate, or an installed version. This also happens to significantly reduce the search space, which is good - less backtracking. An up-to-date system will only ever have one version per package that can be installed, so we never actually have to choose versions.

But of course, APT allows you to specify a non-candidate version of a package to install, for example:

apt install foo/oracular-proposed

The way this works is that the core component of the previous solver, the pkgDepCache, maintains what essentially amounts to an overlay of the policy that you can see with apt-cache policy.

The solver currently, however, validates allowed version choices against the policy directly, and hence finds these versions are not allowed and craps out. This is an interesting problem because the solver should not be dependent on the pkgDepCache: the pkgDepCache initialization (Building dependency tree...) accounts for about half of the runtime of APT (until the Y/n prompt), and I’d really like to get rid of it.

But currently the frontend does go via the pkgDepCache. It marks the packages in there, building up what you could call a transaction, and then we translate it to the new solver, and once it is done, it translates the result back into the pkgDepCache.

The “allowed version” rule is currently implemented by reducing the search space, i.e. in every dependency we outright ignore any non-allowed versions. So if version 3 of A is not allowed, a Depends: A would be translated into A (= 2) | A (= 1).

However, this has two disadvantages: (1) it means that if we show you why A could not be installed, you don’t even see A (= 3) in the list of choices, and (2) you would need to keep the pkgDepCache around for the temporary overrides.

So instead of actually enforcing the allowed version rule by filtering, a more reasonable model is to apply it by just marking every other version as not allowed when discovering the package in the translation layer from the pkgDepCache. This doesn’t really increase the search space either, and it solves both our problem of making overrides work and that of giving you a reasonable error message that lists all versions of A.
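A sketch of that idea, with invented structures standing in for the real translation layer: disallowed versions are marked rejected once, while the dependency clauses keep listing every version so the error messages can too.

# Sketch: apply the allowed-version rule by rejecting versions up front
# instead of filtering them out of every clause.
def translate_package(package, versions, allowed, reject):
    for version in versions:
        if version not in allowed:
            reject(f"{package} (= {version})", reason="not an allowed version")
    return [f"{package} (= {version})" for version in versions]

rejected = []
clause = translate_package("A", [1, 2, 3], allowed={1, 2},
                           reject=lambda lit, reason: rejected.append((lit, reason)))
print(clause)    # ['A (= 1)', 'A (= 2)', 'A (= 3)'] -- all versions stay visible
print(rejected)  # [('A (= 3)', 'not an allowed version')]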

pulling up common dependencies to minimize backtracking cost

One of the common issues we have is that when we have a dependency group

`A | B | C | D`

we try them in order, and if one fails, we undo everything it did, and move on to the next one. However, this is perhaps not the best way to go about it.

I explained before that one thing we do is queue the common dependencies of a package (i.e. dependencies shared in all versions) when marking a package for install, but we don’t do this here: We have already lowered the representation of the dependency group into a list of versions, so we’d need to extract the package back out of it.

This can of course be done, but there may be a more interesting solution to the problem, in that we simply enqueue all the common dependencies. That is, we add n backtracking levels for n possible solutions:

  1. We enqueue the common dependencies of all possible solutions deps(A)&deps(B)&deps(C)&deps(D)
  2. We decide (adding a decision level) not to install D right now and enqueue deps(A)&deps(B)&deps(C)
  3. We decide (adding a decision level) not to install C right now and enqueue deps(A)&deps(B)
  4. We decide (adding a decision level) not to install B right now and enqueue A

Now if we need to backtrack from our choice of A we hopefully still have a lot of common dependencies queued that we do not need to redo. While we have more backtracking levels, each backtracking level would be significantly cheaper, especially if you have cheap backtracking (which admittedly we do not have, yet anyway).
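A toy sketch of that scheme, with the enqueue, decision-level and exclusion operations passed in as stand-ins for the real solver internals (and assuming the group has at least two alternatives):

# Lay down the shared work first, then peel off alternatives one decision
# level at a time, so backtracking out of A keeps the common dependencies.
def enqueue_or_group(alternatives, common_deps, enqueue, add_decision_level, exclude):
    remaining = list(alternatives)          # e.g. ["A", "B", "C", "D"]
    enqueue(common_deps(remaining))         # deps(A) & deps(B) & deps(C) & deps(D)
    while len(remaining) > 2:
        add_decision_level()
        exclude(remaining.pop())            # decide not to install D, then C ...
        enqueue(common_deps(remaining))     # ... and enqueue what is still shared
    add_decision_level()
    exclude(remaining.pop())                # decide not to install B right now
    enqueue(remaining[0])                   # finally enqueue A itself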

The caveat though is: It may be pretty expensive to find the common dependencies. We need to iterate over all dependency groups of A and see if they are in B, C, and D, so we have a complexity of roughly

#A * (#B+#C+#D)

Each dependency group check (i.e. is X|Y in B?) meanwhile has linear cost: we need to compare the memory content of two pointer arrays containing the list of possible versions that solve the dependency group. This means that X|Y and Y|X are different dependencies of course, but that is to be expected – they are. But any dependency with its alternatives in the same order will have the same memory layout.

So really the cost is roughly N^4. This isn’t nice.

You can apply various heuristics here on how to improve that, or you can even apply binary logic:

  1. Enqueue common dependencies of A|B|C|D
  2. Move into the left half and enqueue the common dependencies of A|B
  3. Again divide and conquer and select A.

This has a significant advantage in long lists of choices, and also in the common case, where the first solution should be the right one.

Or again, if you enqueue the package and a version restriction instead, you already get the common dependencies enqueued for the chosen package at least.

on May 24, 2024 08:57 AM

May 20, 2024

OR...

Aaron Rainbolt

Contrary to what you may be thinking, this is not a tale of an inexperienced coder pretending to know what they’re doing. I have something even better for you.

It all begins in the dead of night, at my workplace. In front of me is a typical programmer’s desk - two computers, three monitors (one of which isn’t even plugged in), a mess of storage drives, SD cards, 2FA keys, and an arbitrary RPi 4, along with a host of items that most certainly don’t belong on my desk, and a tangle of cables that would give even a rat a migraine. My dev laptop is sitting idle on the desk, while I stare intently at the screen of a system running a battery of software tests. In front of me are the logs of a failed script run.


Generally when this particular script fails, it gives me some indication as to what went wrong. There are thorough error catching measures (or so I thought) throughout the code, so that if anything goes wrong, I know what went wrong and where. This time though, I’m greeted by something like this:

$ systemctl status test-sh.service
test-sh.service - does testing things
...
May 20 23:00:00 desktop-pc systemd[1]: Starting test-sh.service - does testing things
May 20 23:00:00 desktop-pc systemd[1]: test-sh.service: Failed with result ‘exit-code’.
May 20 23:00:00 desktop-pc systemd[1]: Failed to start test-sh.service.

I stare at the screen in bewilderment for a few seconds. No debugging info, no backtraces, no logs, not even an error message. It’s as if the script simply decided it needed some coffee before it would be willing to keep working this late at night. Having heard the tales of what happens when you give a computer coffee, I elected to try a different approach.

$ vim /usr/bin/test-sh
1 #!/bin/bash
2 #
3 # Copyright 2024 ...
4 set -u;
5 set -e;

Before I go into what exactly is wrong with this picture, I need to explain a bit about how Bash handles the ultimate question of life, “what is truth?”

(RED ALERT: I do not know if I’m correct about the reasoning behind the design decisions I talk about in the rest of this article. Don’t use me as a reference for why things work like this, and please correct me if I’ve botched something. Also, a lot of what I describe here is simplified, so don’t be surprised if you notice or discover that things are a bit more complex in reality than I make them sound like here.)

Bash, as many of you probably know, is primarily a “glue” language - it glues applications to each other, it glues the user to the applications, and it glues one’s sanity to the ceiling, far out of the user’s reach. As such, it features a bewildering combination of some of the most intuitive and some of the least intuitive behaviors one can dream up, and the handling of truth and falsehood is one of these bewildering things.

Every command you run in Bash reports back whether or not what it did “worked”. (“Worked” is subjective and depends on the command, but for the most part if a command says “It worked”, you can trust that it did what you told it to, at least mostly.) This is done by means of an “exit code”, which is nothing more than a number between 0 and 255. If a program exits and hands the shell an exit code of 0, it usually means “it worked”, whereas a non-zero exit code usually means “something went wrong”. (This makes sense if you know a bit about how programs written in C work - if your program is written to just “do things” and then exit, it will default to exiting with code zero.)

Because zero = good and non-zero = not good, it makes sense to treat zero as meaning “true” and non-zero as meaning “false”. That’s exactly what Bash does - if you do something like “if command; then commandIfTrue; else commandIfFalse; fi”, Bash will run “commandIfTrue” if “command” exits with 0, and will run “commandIfFalse” if “command” exits with 1 or higher.

Now since Bash is a glue language, it has to be able to handle it if a command runs and fails. This can be done with some amount of difficulty by testing (almost) every command the script runs, but that can be quite tedious. There’s a (generally) easier way however, which is to tell the script to immediately exit if any command exits with a non-zero exit code. This is done by using the command “set -e” at or near the top of the script. Once “set -e” is active, any command that fails will cause the whole script to stop.

So back to my script. I’m using “set -e” so that if anything goes wrong, the script stops. What could go wrong other than a failed command? To answer that question, we have to take a look at how some things work in C.

C is a very different language than Bash. Whereas Bash is designed to take a bunch of pieces and glue them together, C is designed to make the pieces themselves. You can think of Bash as being a glue gun and C as being a 3d printer. As such, C does not concern itself nearly as much with things like return codes and exiting when a command fails. It focuses on taking data and doing stuff with it.

Since C is more data- and algorithm-oriented, true and false work significantly differently here. C sees 0 as meaning “none, empty, all bits set to 0, etc.” and thus treats it as meaning “false”. Any non-zero number has a value, and can be treated as “on” or “true”. An astute reader will notice this is exactly the opposite of how Bash works, where 0 is true and non-zero is false. (In my opinion this is a rather lamentable design decision, but sadly these behaviors have been standardized for longer than I’ve been alive, so there’s not much point in trying to change them. But I digress.)

C also of course has features for doing math, called “operators”. One of the most common operators is the assignment operator, “=”. The assignment operator’s job is to take whatever you put on the right side of it, and store it in whatever you put on the left side. If you say “a = 0”, the value “0” will be stored in the variable “a” (assuming things work right). But the assignment operator has a trick up its sleeve - not only does it assign the value to the variable, it also returns the value. Basically what that means is that the statement “a = 0” spits out an extra value that you can do things with. This allows you to do things like “a = b = 0”, which will assign 0 to “b”, return zero, and then assign that returned zero to "a”. (The assignment of the second zero to “a” also returns a zero, but that simply gets ignored by the program since there’s nothing to do with it.)

You may be able to see where I’m going with this. Assigning a value to a variable also returns that value… and 0 means “false”… so “a = 0” succeeds, but also returns what is effectively “false”. That means if you do something like “if (a = 0) { ... } else { explodeComputer(); }”, the computer will explode. “a = 0” returns “false”, thus the “if” condition does not run and the “else” condition does. (Coincidentally, this is also a good example of the “world’s last programming bug” - the comparison operation in C is “==”, which is awfully easy to mistype as the assignment operator, “=”. Using an assignment operator in an “if” statement like this will almost always result in the code within the “if” being executed, as the value being stored in the variable will usually be non-zero and thus will be seen as “true” by the “if” statement. This also corrupts the variable you thought you were comparing something to. Some fear that a programmer with access to nuclear weapons will one day write something like “if (startWar = 1) { destroyWorld(); }” and thus the world will be destroyed by a missing equals sign.)

“So what,” you say. “Bash and C are different languages.” That’s true, and in theory this would mean that everything here is fine. Unfortunately theory and practice are the same in theory but much different in practice, and this is one of those instances where things go haywire because of weird differences like this. There’s one final piece of the puzzle to look at first though - how to do math in Bash.

Despite being a glue language, Bash has some simple math capabilities, most of which are borrowed from C. Yes, including the behavior of the assignment operator and the values for true and false. When you want to do math in Bash, you write “(( do math here... ))”, and everything inside the double parentheses is evaluated. Any assignment done within this mode is executed as expected. If I want to assign the number 5 to a variable, I can do “(( var = 5 ))” and it shall be so.

But wait, what happens with the return value of the assignment operator?

Well, take a guess. What do you think Bash is going to do with it?

Let’s look at it logically. In C (and in Bash’s math mode), 0 is false and non-zero is true. In Bash, 0 is true and non-zero is false. Clearly if whatever happens within math mode fails and returns false (0), Bash should not misinterpret this as true! Things like “(( 5 == 6 ))” shouldn’t be treated as being true, right? So what do we do with this conundrum? Easy solution - convert the return value to an exit code so that its semantics are retained across the C/Bash barrier. If the return value of the math mode statement is false (0), it should be converted to Bash’s concept of false (non-zero), therefore the return value of 0 is converted to an exit code of 1. On the other hand, if the return value of the math mode statement is true (non-zero), it should be converted to Bash’s concept of true (0), therefore the return value of anything other than 0 is converted to an exit code of 0. (You probably see the writing on the wall at this point. Spoiler, my code was weighed in the balances and found wanting.)
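A quick way to see this conversion in action (assuming bash is on your PATH) is to check the exit status of a few math-mode statements:

import subprocess

# Math-mode results of 0 become exit code 1; anything non-zero becomes 0.
for expr in ["(( 5 == 6 ))", "(( 5 == 5 ))", "(( var = 5 ))", "(( var = 0 ))"]:
    code = subprocess.run(["bash", "-c", expr]).returncode
    print(f"{expr!r} -> exit code {code}")
# (( 5 == 6 )) -> 1, (( 5 == 5 )) -> 0, (( var = 5 )) -> 0, (( var = 0 )) -> 1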

So now we can put all this nice, logical, sensible behavior together and make a glorious mess with it. Guess what happens if you run “(( var = 0 ))” in a script where “set -e” is enabled.

  • “0” is assigned to “var”.

  • The statement returns 0.

  • Bash dutifully converts that to a 1 (false/failure).

  • Bash now sees the command as having failed.

  • “set -e” says the script should immediately stop if anything fails.

  • The script crashes.

You can try this for yourself - pop open a terminal and run “set -e; (( var = 0 ));” and watch in awe as your terminal instantly closes (or otherwise shows an indication that Bash has exited).

So back to the code. In my script, I have a function that helps with generating random numbers within any specified bounds. Basically it just grabs the value of “$RANDOM” (which is a special variable in Bash that always returns an integer between 0 and 32767) and does some manipulations on it so that it becomes a random number between a “lower bound” and an “upper bound” parameter. In the guts of that function’s code I have many “math mode” statements for getting those numbers into shape. Those statements include variable assignments, and those variable assignments were throwing exit codes into the script. I had written this before enabling “set -e”, so everything was fine before, but now “set -e” was enabled and Bash was going to enforce it as ruthlessly as possible.

While I will never know what line of code triggered the failure, it’s a fairly safe bet that the culprit was:

88 (( _val = ( _val % ( _adj_upper_bound + 1 ) ) ));

This basically takes whatever is in “_val”, divides it by “_adj_upper_bound + 1”, and then assigns the remainder of that operation to “_val”. This makes sure that “_val” is lower than “_adj_upper_bound + 1”. (This is typically known as “getting the modulus”, and the “%” operator here is the “modulo operator”. For the math people reading this, don’t worry, I did the requisite gymnastics to ensure this code didn’t have modulo bias.) If “_val” happens to be an exact multiple of “_adj_upper_bound + 1” (or zero to begin with), the code on the right side of the assignment operator will evaluate to 0, which will become an exit code of 1, thus exploding my script because of what appeared to be a failed command.

Sigh.

So there’s the problem. What’s the solution? Turns out it’s pretty simple. Among Bash’s feature set, there is the profoundly handy “logical or operator”, “||”. This operator lets us say “if this OR that is true, return true.” In other words, “Run whatever’s on the left hand of the ||. If it exits 0, move on. If it exits non-zero, run whatever’s on the right hand of the ||. If it exits 0, move on and ignore the earlier failure. Only return non-zero if both commands fail.” There’s also a handy command in Bash called “true” that does nothing except for give an exit code of 0. That means that if you ever have a line of code in Bash that is liable to exit non-zero but it’s no big deal if it does, you can just slap an “|| true” on the end and it will magically make everything work by pretending that nothing went wrong. (If only this worked in real life!) I proceeded to go through and apply this bandaid to every standalone math mode call in my script, and it now seems to be behaving itself correctly again. For now anyway.
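For what it’s worth, here is a minimal before/after demonstration of the failure and the bandaid (again assuming bash is on your PATH):

import subprocess

# Under "set -e" the bare assignment of 0 kills the script; "|| true" masks
# the bogus failure and lets the script continue.
dies     = 'set -e; (( var = 0 )); echo "never printed"'
survives = 'set -e; (( var = 0 )) || true; echo "still alive"'
for script in (dies, survives):
    result = subprocess.run(["bash", "-c", script], capture_output=True, text=True)
    print(result.returncode, result.stdout.strip())
# prints: 1 ""   then   0 "still alive"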

tl;dr: Faking success is sometimes a perfectly valid way to solve a computing problem. Just don’t live the way you code and you’ll be alright.


on May 20, 2024 08:06 AM

May 14, 2024

The new APT 3.0 solver

Julian Andres Klode

APT 2.9.3 introduces the first iteration of the new solver codenamed solver3, now available with the --solver 3.0 option. The new solver works fundamentally differently from the old one.

How does it work?

Solver3 is a fully backtracking dependency solving algorithm that defers choices to as late as possible. It starts with an empty set of packages, then adds the manually installed packages, and then installs packages automatically as necessary to satisfy the dependencies.

Deferring the choices is implemented in multiple ways:

First, all install requests recursively mark dependencies with a single solution for install, and any packages that are being rejected due to conflicts or user requests will cause their reverse dependencies to be transitively marked as rejected, provided their OR group cannot be solved by a different package.

Second, any dependency with more than one choice is pushed to a priority queue that is ordered by the number of possible solutions, such that we resolve a|b before a|b|c.

Not just by the number of solutions, though. One important point to note is that optional dependencies, that is, Recommends, always sort after mandatory dependencies. Another note on that: Recommended packages do not “nest” in backtracking - dependencies of a Recommended package themselves are not optional, so they will have to be resolved before the next Recommended package is seen in the queue.
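As a rough illustration only (this is not APT’s actual data structure), such an ordering could be modelled as a heap keyed by “is it a Recommends?” first and the number of alternatives second:

import heapq

# Mandatory dependencies sort before Recommends; fewer alternatives sort first.
def push_dependency(queue, alternatives, is_recommends):
    heapq.heappush(queue, (is_recommends, len(alternatives), alternatives))

work = []
push_dependency(work, ["a", "b", "c"], is_recommends=False)
push_dependency(work, ["a", "b"], is_recommends=False)
push_dependency(work, ["x", "y"], is_recommends=True)   # sorts after all Depends
while work:
    print(heapq.heappop(work)[2])
# ['a', 'b'], then ['a', 'b', 'c'], then ['x', 'y']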

Another important step in deferring choices is extracting the common dependencies of a package across its versions and then installing them before we even decide which of its versions we want to install - one of the dependencies might cycle back to a specific version after all.

Decisions about packages are recorded at a certain decision level; if we reach a conflict, we backtrack to the previous decision level, mark the decision we made (install X) in the inverse (DO NOT INSTALL X), reset all the state of all decisions made at the higher level, and restore any dependencies that are no longer resolved to the work queue.
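A simplified sketch of that backtracking step, using an invented trail structure rather than solver3’s internals:

# Chronological backtracking: undo the newest decision level, assert the
# inverse of its decision, and hand back the work it covered for re-queuing.
class Trail:
    def __init__(self):
        self.entries = []        # (level, literal, is_decision)
        self.level = 0

    def decide(self, literal):
        self.level += 1
        self.entries.append((self.level, literal, True))

    def propagate(self, literal):
        self.entries.append((self.level, literal, False))

    def backtrack(self):
        undone = [(lit, dec) for lvl, lit, dec in self.entries if lvl == self.level]
        self.entries = [e for e in self.entries if e[0] < self.level]
        self.level -= 1
        decision = next(lit for lit, dec in undone if dec)
        self.propagate(("not", decision))                 # install X -> do NOT install X
        return [lit for lit, dec in undone if not dec]    # dependencies to restore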

Comparison to SAT solver design

If you have studied SAT solver design, you’ll find that essentially this is a DPLL solver without pure literal elimination. A pure literal elimination phase would not work for a package manager: first, negative pure literals (packages that everything conflicts with) do not exist, and positive pure literals (packages nothing conflicts with) we do not want to mark for install - we want to install as little as possible (well, subject to policy).

As part of the solving phase, we also construct an implication graph, albeit a partial one: The first package installing another package is marked as the reason (A -> B), the same thing for conflicts (not A -> not B).

Once we have added the ability to have multiple parents in the implication graph, it stands to reason that we can also implement the much more advanced method of conflict-driven clause learning; where we do not jump back to the previous decision level but exactly to the decision level that caused the conflict. This would massively speed up backtracking.

What changes can you expect in behavior?

The most striking difference to the classic APT solver is that solver3 always keeps manually installed packages around; it never offers to remove them. We will relax that in a future iteration so that it can replace packages with new ones, that is, if your package is no longer available in the repository (obsolete), but there is one that Conflicts+Replaces+Provides it, solver3 will be allowed to install that and remove the other.

Implementing that policy is rather trivial: We just need to queue obsolete | replacement as a dependency to solve, rather than mark the obsolete package for install.

Another critical difference is the change in the autoremove behavior: The new solver currently only knows the strongest dependency chain to each package, and hence it will not keep around any packages that are only reachable via weaker chains. A common example is when gcc-<version> packages accumulate on your system over the years. They all have Provides: c-compiler and the libtool Depends: gcc | c-compiler is enough to keep them around.

New features

The new option --no-strict-pinning instructs the solver to consider all versions of a package and not just the candidate version. For example, you could use apt install foo=2.0 --no-strict-pinning to install version 2.0 of foo and upgrade - or downgrade - packages as needed to satisfy foo=2.0 dependencies. This mostly comes in handy in use cases involving Debian experimental or the Ubuntu proposed pockets, where you want to install a package from there, but try to satisfy from the normal release as much as possible.

The implication graph building allows us to implement an apt why command that, while not as nicely detailed as aptitude’s, at least tells you the exact reason why a package is installed. It will only show the strongest dependency chain at first, of course, since that is what we record.

What is left to do?

At the moment, error information is not stored across backtracking in any way, but we generally will want to show you the first conflict we reach as it is the most natural one; or all conflicts. Currently you get the last conflict, which may not be particularly useful.

Likewise, errors currently are just rendered as implication graphs of the form [not] A -> [not] B -> ..., and we need to put in some work to present those nicely.

The test suite is not passing yet; I haven’t really started working on it. A challenge is that most packages in the test suite are manually installed as they are mocked, and the solver now doesn’t remove those.

We plan to implement the replacement logic such that foo can be replaced by foo2 Conflicts/Replaces/Provides foo without needing to be automatically installed.

Improving the backtracking to be non-chronological conflict-driven clause learning would vastly enhance our backtracking performance. Not that it seems to be an issue right now in my limited testing (mostly noble 64-bit-time_t upgrades). A lot of the complexity you would normally have is not there, because the manually installed packages and the resulting unit propagation (single-solution Depends/Reverse-Depends for Conflicts) already constrain fairly tightly what changes we can actually make.

Once all the stuff has landed, we need to start rolling it out and gather feedback. On Ubuntu I’d like automated feedback on regressions (running solver3 in parallel, checking if the result is worse and then submitting an error to the error tracker); on Debian this could just be a role email address to send solver dumps to.

At the same time, we can also incrementally start rolling this out. Like phased updates in Ubuntu, we can also roll out the new solver as the default to 10%, 20%, 50% of users before going to the full 100%. This will allow us to capture regressions early and fix them.

on May 14, 2024 11:26 AM

May 12, 2024

The Kubuntu Team are thrilled to announce significant updates to KubuQA, our streamlined ISO testing tool that has now expanded its capabilities beyond Kubuntu to support Ubuntu and all its other flavors. With these enhancements, KubuQA becomes a versatile resource that ensures a smoother, more intuitive testing process for upcoming releases, including the 24.04 Noble Numbat and the 24.10 Oracular Oriole.

What is KubuQA?

KubuQA is a specialized tool developed by the Kubuntu Team to simplify the process of ISO testing. Utilizing the power of Kdialog for user-friendly graphical interfaces and VirtualBox for creating and managing virtual environments, KubuQA allows testers to efficiently evaluate ISO images. Its design focuses on accessibility, making it easy for testers of all skill levels to participate in the development process by providing clear, guided steps for testing ISOs.

New Features and Extensions

The latest update to KubuQA marks a significant expansion in its utility:

  • Broader Coverage: Initially tailored for Kubuntu, KubuQA now supports testing ISO images for Ubuntu and all other Ubuntu flavors. This broadened coverage ensures that any Ubuntu-based community can benefit from the robust testing framework that KubuQA offers.
  • Support for Latest Releases: KubuQA has been updated to include support for the newest Ubuntu release cycles, including the 24.04 Noble Numbat and the upcoming 24.10 Oracular Oriole. This ensures that communities can start testing early and often, leading to more stable and polished releases.
  • Enhanced User Experience: With improvements to the Kdialog interactions, testers will find the interface more intuitive and responsive, which enhances the overall testing experience.

Call to Action for Ubuntu Flavor Leads

The Kubuntu Team is keen to collaborate closely with leaders and testers from all Ubuntu flavors to adopt and adapt KubuQA for their testing needs. We believe that by sharing this tool, we can foster a stronger, more cohesive testing community across the Ubuntu ecosystem.

We encourage flavor leads to try out KubuQA, integrate it into their testing processes, and share feedback with us. This collaboration will not only improve the tool but also ensure that all Ubuntu flavors can achieve higher quality and stability in their releases.

Getting Involved

For those interested in getting involved with ISO testing using KubuQA:

  • Download the Tool: You can find KubuQA on the Kubuntu Team Github.
  • Join the Community: Engage with the Kubuntu community for support and to connect with other testers. Your contributions and feedback are invaluable to the continuous improvement of KubuQA.

Conclusion

The enhancements to KubuQA signify our commitment to improving the quality and reliability of Ubuntu and its derivatives. By extending its coverage and simplifying the testing process, we aim to empower more contributors to participate in the development cycle. Whether you’re a seasoned tester or new to the community, your efforts are crucial to the success of Ubuntu.

We look forward to seeing how different communities will utilise KubuQA to enhance their testing practices. And by the way, have you thought about becoming a member of the Kubuntu Community? Join us today to make a difference in the world of open-source software!

on May 12, 2024 09:28 PM
I am happy to announce the availability of SysGlance, a simple and universal, Linux utility for generating a report for the host system. Imagine encountering a problem with a Linux system service or device. Typically, you would search for a solution by Googling the issue, hoping to find a fix. In most cases, you would […]
on May 12, 2024 08:39 PM

May 08, 2024

I recently discovered that there's an old software edition of the Oxford English Dictionary (the second edition) on archive.org for download. Not sure how legal this is, mind, but I thought it would be useful to get it running on my Ubuntu machine. So here's how I did that.

Firstly, download the file; that will give you a file called Oxford English Dictionary (Second Edition).iso, which is a CD image. We want to unpack that, and usefully there is 7zip in the Ubuntu archives which knows how to unpack ISO files.1 So, unpack the ISO with 7z x "Oxford English Dictionary (Second Edition).iso". That will give you two more files: OED2.DAT and SETUP.EXE. The .DAT file is, I think, all the dictionary entries in some sort of binary format (and is 600MB, so be sure you have the space for it). You can then run wine SETUP.EXE, which will install the software using wine, and that's all good.2 Choose a folder to install it in (I chose the same folder that SETUP.EXE is in, at which point it will create an OED subfolder in there and unpack a bunch of files into it, including OED.EXE).

That's the easy part. However, it won't quite work yet. You can see this by running wine OED/OED.EXE. It should start up OK, and then complain that there's no CDROM.

a Windows dialog box reading 'CD-ROM not found'

This is because it expects there to be a CDROM drive with the OED2.DAT file on it. We can set one up, though; we tell Wine to pretend that there's a CD drive connected, and what's on it. Run winecfg, and in the Drives tab, press Add… to add a new drive. I chose D: (which is a common Windows drive letter for a CD drive), and OK. Select your newly added D: drive and set the Path to be the folder where OED2.DAT is (which is wherever you unpacked the ISO file). Then say Show Advanced and change the drive Type to CD-ROM to tell Wine that you want this new drive to appear to be a CD. Say OK.


Now, when you wine OED/OED.EXE again, it should start up fine! Hooray, we're done! Except…

the OED Windows app, except that all the text is little squares rather than actual text, which looks like a font rendering error

…that's not good. The app runs, but it looks like it's having font issues. (In particular, you can select and copy the text, even though it looks like a bunch of little squares, and if you paste that text into somewhere else it's real text! So this is some sort of font display problem.)

Fortunately, the OED app does actually come with the fonts it needs. Unfortunately, it seems to unpack them to somewhere (C:\WINDOWS\SYSTEM)3 that Wine doesn't appear to actually look at. What we need to do is to install those font files so Linux knows about them. You could click them all to install them, but there's a quicker way; copy them, from where the installer puts them, into our own font folder.

To do this...

  • first make a new folder to put them in: mkdir ~/.local/share/fonts/oed.
  • Then find out where the installer put the font files, as a real path on our Linux filesystem: winepath -u "C:/WINDOWS/SYSTEM". Let's say that that ends up being /home/you/.wine/dosdevices/c:/windows/system
  • Copy the TTF files from that folder (remembering to change the first path to the one that winepath output just now): cp /home/you/.wine/dosdevices/c:/windows/system/*.TTF ~/.local/share/fonts/oed
  • And tell the font system that we've added a bunch of new fonts: fc-cache

And now it all ought to work! Run wine OED/OED.EXE one last time…

the OED Windows app in all its glory

  1. and using 7zip is much easier than mounting the ISO file as a loopback thing
  2. There's a Microsoft Word macro that it offers to install; I didn't want that, and I have no idea whether it works
  3. which we can find out from OED/INSTALL.LOG
on May 08, 2024 10:18 PM

May 03, 2024

Many years ago (2012!) I was invited to be part of "The Pastry Box Project", which described itself thus:

Each year, The Pastry Box Project gathers 30 people who are each influential in their field and asks them to share thoughts regarding what they do. Those thoughts are then published every day throughout the year at a rate of one per day, starting January 1st and ending December 31st.

It was interesting. Sadly, it's dropped off the web (as has its curator, Alex Duloz, as far as I can tell), but thankfully the Wayback Machine comes to the rescue once again.1 I was quietly proud of some of the things I wrote there (and I was recently asked for a reference to a thing I said which the questioner couldn't find, which is what made me realise that the site's not around any more), so I thought I'd republish the stuff I wrote there, here, for ease of finding. This was all written in 2012, and the world has moved on in a few ways since then, a dozen years ago at time of writing, but... I think I'd still stand by most of this stuff. The posts are still at archive.org and you can get to and read other people's posts from there too, some of which are really good and worth your time. But here are mine, so I don't lose them again.

Tuesday, 18 December 2012

My daughter’s got a smartphone, because, well, everyone has. It has GPS on it, because, well, every one does. What this means is that she will never understand the concept of being lost.

Think about that for a second. She won’t ever even know what it means to be lost.

Every argument I have in the pub now goes for about ten minutes before someone says, right, we’ve spent long enough arguing now, someone look up the correct answer on Wikipedia. My daughter won’t ever understand the concept of not having a bit of information available, of being confused about a matter of fact.

A while back, it was decreed that telephone directories are not subject to copyright, that a list of phone numbers is “information alone without a minimum of original creativity” and therefore held no right of ownership.

What instant access to information has provided us is a world where all the simple matters of fact are now yours; free for the asking. Putting data on the internet is not a skill; it is drudgery, a mechanical task for robots. Ask yourself: why do you buy technical books? It’s not for the information inside: there is no tech book anywhere which actually reveals something which isn’t on the web already. It’s about the voice; about the way it’s written; about how interesting it is. And that is a skill. Matters of fact are not interesting — they’re useful, right enough, but not interesting. Making those facts available to everyone frees up authors, creators, makers to do authorial creative things. You don’t have to spend all your time collating stuff any more: now you can be Leonardo da Vinci all the time. Be beautiful. Appreciate the people who do things well, rather than just those who manage to do things at all. Prefer those people who make you laugh, or make you think, or make you throw your laptop out of a window with annoyance: who give you a strong reaction to their writing, or their speaking, or their work. Because information wanting to be free is what creates a world of creators. Next time someone wants to build a wall around their little garden, ask yourself: is what you’re paying for, with your time or your money or your personal information, something creative and wonderful? Or are they just mechanically collating information? I hope to spend 2013 enjoying the work of people who do something more than that.

Wednesday, 31 October 2012

Not everyone who works with technology loves technology. No, really, it’s true! Most of the people out there building stuff with web tech don’t attend conferences, don’t talk about WebGL in the pub, don’t write a blog with CSS3 “experiments” in it, don’t like what they do. It’s a job: come in at 9, go home at 5, don’t think about HTML outside those hours. Apparently 90% of the stuff in the universe is “dark matter”: undetectable, doesn’t interact with other matter, can’t be seen even with a really big telescope. Our “dark matter developers”, who aren’t part of the community, who barely even know that the community exists… how are we to help them? You can write all the A List Apart articles you like but dark matter developers don’t read it. And so everyone’s intranet is horrid and Internet-Explorer-specific and so the IE team have to maintain backwards compatibility with that and that hurts the web. What can we do to reach this huge group of people? Everyone’s written a book about web technologies, and books help, but books are dying. We want to get the word out about all the amazing things that are now possible to everyone: do we know how? Do we even have to care? The theory is that this stuff will “trickle down”, but that doesn’t work for economics: I’m not sure it works for @-moz-keyframes either.

Monday, 8 October 2012

The web moves really fast. How many times have you googled for a tutorial on or an example of something and found that the results, written six months or a year or two years ago, no longer work? The syntax has changed, or there’s a better way now, or it never worked right to begin with. You’ll hear people bemoaning this: trying to stop the web moving so quickly in order that knowledge about it doesn’t go out of date. But that ship’s sailed. This is the world we’ve built: it moves fast, and we have to just hat up and deal with it. So, how? How can we make sure that old and wrong advice doesn’t get found? It’s a difficult question, and I don’t think anyone’s seriously trying to answer it. We should try and think of a way.

Tuesday, 18 September 2012

Software isn’t always a solution to problems. If you’re a developer, everything generally looks like a nail: a nail which is solved by making a new bit of code. I’ve got half-finished mobile apps done for tracking my running with GPS, for telling me when to switch between running and walking, and… I’m still fat, because I’m writing software instead of going running. One of the big ideas behind computers was to automate repetitive and boring tasks, certainly, which means that it should work like this: identify a thing that needs doing, do it for a while, think “hm, a computer could do this more easily”, write a bit of software to do it. However, there’s too much premature optimisation going on, so it actually looks like this: identify a thing that needs doing, think “hm, I’m sure a computer would be able to do this more easily”, write a bit of software to do it. See the difference? If the software never gets finished, then in the first approach the thing still gets done. Don’t always reach for the keyboard: sometimes it’s better to reach for Post-It notes, or your running shoes.

Saturday, 18 August 2012

Changing the world is within your grasp.

This is not necessarily a good thing.

If you go around and talk to normal people, it becomes clear that, weirdly, they don’t ever imagine how to get ten million dollars. They don’t think about new ways to redesign a saucepan or the buttons in their car. They don’t contemplate why sending a parcel is slow and how it could be a slicker process. They don’t think about ways to change the world.

I find it hard to talk to someone who doesn’t think like that.

To an engineer, the world is a toy box full of sub-optimized and feature-poor toys, as Scott Adams once put it. To a designer, the world is full of bad design. And to both, it is not only possible but at a high level obvious how to (a) fix it (b) for everyone (c) and make a few million out of doing so.

At first, this seems a blessing: you can see how the world could be better! And make it happen!

Then it’s a curse. Those normal people I mentioned? Short of winning the lottery or Great Uncle Brewster dying, there’s no possibility of becoming a multi-millionaire, and so they’re not thinking about it. Doors that have a handle on them but say “Push” are not a source of distress. Wrong kerning in signs is not like sandpaper on their nerves.

The curse of being able to change the world is… the frustration that you have so far failed to do so.

Perhaps there is a Zen thing here. Some people have managed it. Maybe you have. So the world is better, and that’s a good thing all by itself, right?

Friday, 27 July 2012

The best systems are built by people who can accept that no-one will ever know how hard it was to do, and who therefore don’t seek validation by explaining to everyone how hard it was to do.

Tuesday, 12 June 2012

The most poisonous idea in the world is when you’re told that something which achieved success through lots of hard work actually got there just because it was excellent.

Friday, 18 May 2012

Ever notice how the things you slave over and work crushingly hard on get less attention, sometimes, than the amusing things you threw together in a couple of evenings?

I can't decide whether this is a good thing or not.

Thursday, 5 April 2012

It's OK to not want to build websites for everybody and every browser. Making something which is super-dynamic in Chrome 18 and also works excellently in w3m is jolly hard work, and a lot of the time you might well be justified in thinking it's not worth it. If your site stats, or your belief, or your prediction of the market's direction, or your favourite pundit tell you that the best use of your time is to only support browsers with querySelector, or only support browsers with JavaScript, or only support WebKit, or only support iOS Safari, then that's a reasonable decision to make; don't let anyone else tell you what your relationship with your users and customers and clients is, because you know better than them.

Just don't confuse what you're doing with supporting "the web". State your assumptions up front. Own your decisions, and be prepared to back them up, for your project. If you're building something which doesn't work in IE6, that requires JavaScript, that requires mobile WebKit, that requires Opera Mobile, then you are letting some people down. That's OK; you've decided to do that. But your view's no more valid than theirs, for a project you didn't build. Make your decisions, and state what the axioms you worked from were, and then everyone else can judge whether what you care about is what they care about. Just don't push your view as being what everyone else should do, and we'll all be fine.

Sunday, 18 March 2012

Publish and be damned, said the Duke of Wellington; these days, in between starting wars in France and being sick of everyone repeating the jokes about his name from Blackadder, he’d probably say that we should publish or be damned. If you’re anything like me, you’ve got folders full of little experiments that you never got around to finishing or that didn’t pan out. Put ’em up somewhere. These things are useful.

Twitter, autobiographies, collections of letters from authors, all these have shown us that the minutiae can be as fascinating as carefully curated and sieved and measured writings, and who knows what you’ll inspire the next person to do from the germ of one of your ideas?

Monday, 27 February 2012

There's a lot to think about when you're building something on the web. Is it accessible? How do I handle translations of the text? Is the design OK on a 320px-wide screen? On a 2320px-wide screen? Does it work in IE8? In Android 4.0? In Opera Mini? Have I minimized the number of HTTP requests my page requires? Is my JavaScript minified? Are my images responsive? Is Google Analytics hooked up properly? AdSense? Am I handling Unicode text properly? Avoiding CSRF? XSS? Have I encoded my videos correctly? Crushed my pngs? Made a print stylesheet?

We've come a long way since:

<HEADER>
<TITLE>The World Wide Web project</TITLE>
<NEXTID N="55">
</HEADER>
<BODY>
<H1>World Wide Web</H1>The WorldWideWeb (W3) is a wide-area<A
NAME=0 HREF="WhatIs.html">
hypermedia</A> information retrieval
initiative aiming to give universal
access to a large universe of documents.

Look at http://html5boilerplate.com/—a base level page which helps you to cover some (nowhere near all) of the above list of things to care about (and the rest of the things you need to care about too, which are the other 90% of the list). A year in development, 900 sets of changes and evolutions from the initial version, seven separate files. That's not over-engineering; that's what you need to know to build things these days.

The important point is: one of the skills in our game is knowing what you don't need to do right now but still leaving the door open for you to do it later. If you become the next Facebook then you will have to care about all these things; initially you may not. You don't have to build them all on day one: that is over-engineering. But you, designer, developer, translator, evangelist, web person, do have to understand what they all mean. And you do have to be able to layer them on later without having to tear everything up and start again. Feel guilty that you're not addressing all this stuff in the first release if necessary, but you should feel a lot guiltier if you didn't think of some of it.

Wednesday, 18 January 2012

Don't be creative. Be a creator. No one ever looks back and wishes that they'd given the world less stuff.

  1. Also, the writing is all archived at Github!
on May 03, 2024 06:08 PM

Playing with rich

Colin Watson

One of the things I do as a side project for Freexian is to work on various bits of business automation: accounting tools, programs to help contributors report their hours, invoicing, that kind of thing. While it’s not quite my usual beat, this makes quite a good side project as the tools involved are mostly rather sensible and easy to deal with (Python, git, ledger, that sort of thing) and it’s the kind of thing where I can dip into it for a day or so a week and feel like I’m making useful contributions. The logic can be quite complex, but there’s very little friction in the tools themselves.

A recent case where I did run into some friction in the tools was with some commands that need to present small amounts of tabular data on the terminal, using OSC 8 hyperlinks if the terminal supports them: think customer-related information with some links to issues. One of my colleagues had previously done this using a hack on top of texttable, which was perfectly fine as far as it went. However, now I wanted to be able to add multiple links in a single table cell in some cases, and that was really going to stretch the limits of that approach: working out the width of the displayed text in the cell was going to take an annoying amount of bookkeeping.

I started looking around to see whether any other approaches might be easier, without too much effort (remember that “a day or so a week” bit above). ansiwrap looked somewhat promising, but it isn’t currently packaged in Debian, and it would have still left me with the problem of figuring out how to integrate it into texttable, which looked like it would be quite complicated. Then I remembered that I’d heard good things about rich, and thought I’d take a look.

rich turned out to be exactly what I wanted. Instead of something like this based on the texttable hack above:

import shutil
from pyxian.texttable import UrlTable

termsize = shutil.get_terminal_size((80, 25))
table = UrlTable(max_width=termsize.columns)
table.set_deco(UrlTable.HEADER)
table.set_cols_align(["l"])
table.set_cols_dtype(["u"])
table.add_row(["Issue"])
table.add_row([(issue_url, f"#{issue_id}")])
print(table.draw())

… now I can do this instead:

import rich
from rich import box
from rich.table import Table

table = Table(box=box.SIMPLE)
table.add_column("Issue")
table.add_row(f"[link={issue_url}]#{issue_id}[/link]")
rich.print(table)

While this is a little shorter, the real bonus is that I can now just put multiple [link] tags in a single string, and it all just works. No ceremony. In fact, once the relevant bits of code passed type-checking (since the real code is a bit more complex than the samples above), it worked first time. It’s a pleasure to work with a library like that.
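For instance, continuing that example with made-up URLs and IDs (and repeating the imports so it runs on its own), two hyperlinks can share a cell with no extra bookkeeping:

import rich
from rich import box
from rich.table import Table

# Placeholder data; in the real code these come from the business tooling.
bug_url, bug_id = "https://example.org/bugs/123", 123
mr_url, mr_id = "https://example.org/merge_requests/45", 45

table = Table(box=box.SIMPLE)
table.add_column("References")
table.add_row(f"[link={bug_url}]#{bug_id}[/link], [link={mr_url}]!{mr_id}[/link]")
rich.print(table)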

It looks like I’ve only barely scratched the surface of rich, but I expect I’ll reach for it more often now.

on May 03, 2024 03:09 PM
<figcaption> The “Secure Rollback Prevention” entry in the UEFI BIOS configuration </figcaption>

The bottom line is that there is a new configuration called “AMD Secure Processor Rollback protection” on recent AMD systems, in addition to “Secure Rollback Prevention” (BIOS rollback protection). If it’s enabled by a vendor, you cannot downgrade the UEFI BIOS revision once you install one with security vulnerability fixes.

https://fwupd.github.io/libfwupdplugin/hsi.html#org.fwupd.hsi.Amd.RollbackProtection

This feature prevents an attacker from loading an older firmware onto the part after a security vulnerability has been fixed.
[…]
End users are not able to directly modify rollback protection, this is controlled by the manufacturer.

Previously I had installed revision 1.49 (R23ET73W), but it’s gone from Lenovo’s official page with the notice below. I’ve been annoyed by a symptom that likely comes from the firmware, so I wanted to try multiple revisions for bisecting, and I also thought I should downgrade to the latest official revision, 1.40 (R23ET70W), since the withdrawal clearly indicates that there is something wrong with 1.49.

This BIOS version R23UJ73W is reported Lenovo cloud not working issue, hence it has been withdrawn from support site.

First, I turned off Secure Rollback Prevention and tried downgrading with fwupdmgr as follows. However, it failed to apply, with “Secure Flash Authentication Failed” shown on reboot.

$ fwupdmgr downgrade
0.	Cancel
1.	b0fb0282929536060857f3bd5f80b319233340fd (Battery)
2.	6fd62cb954242863ea4a184c560eebd729c76101 (Embedded Controller)
3.	0d5d05911800242bb1f35287012cdcbd9b381148 (Prometheus)
4.	3743975ad7f64f8d6575a9ae49fb3a8856fe186f (SKHynix HFS256GDE9X081N)
5.	d77c38c163257a2c2b0c0b921b185f481d9c1e0c (System Firmware)
6.	6df01b2df47b1b08190f1acac54486deb0b4c645 (TPM)
7.	362301da643102b9f38477387e2193e57abaa590 (UEFI dbx)
Choose device [0-7]: 5
0.	Cancel
1.	0.1.46
2.	0.1.41
3.	0.1.38
4.	0.1.36
5.	0.1.23
Choose release [0-5]: 

Next, I tried their ISO image r23uj70wd.iso, but no luck with another error.

Error

The system program file is not correct for this system.

Also, Windows failed to apply it so I became convinced it was impossible. However, I didn’t have a clear idea why at that point and bumped into a handy command in fwupdmgr.

$ fwupdmgr security
Host Security ID: HSI:1! (v1.9.16)

HSI-1
✔ BIOS firmware updates:         Enabled
✔ Fused platform:                Locked
✔ Supported CPU:                 Valid
✔ TPM empty PCRs:                Valid
✔ TPM v2.0:                      Found
✔ UEFI bootservice variables:    Locked
✔ UEFI platform key:             Valid
✔ UEFI secure boot:              Enabled

HSI-2
✔ SPI write protection:          Enabled
✔ IOMMU:                         Enabled
✔ Platform debugging:            Locked
✔ TPM PCR0 reconstruction:       Valid
✘ BIOS rollback protection:      Disabled

HSI-3
✔ SPI replay protection:         Enabled
✔ CET Platform:                  Supported
✔ Pre-boot DMA protection:       Enabled
✔ Suspend-to-idle:               Enabled
✔ Suspend-to-ram:                Disabled

HSI-4
✔ Processor rollback protection: Enabled
✔ Encrypted RAM:                 Encrypted
✔ SMAP:                          Enabled

Runtime Suffix -!
✔ fwupd plugins:                 Untainted
✔ Linux kernel lockdown:         Enabled
✔ Linux kernel:                  Untainted
✘ CET OS Support:                Not supported
✘ Linux swap:                    Unencrypted

This system has HSI runtime issues.
 » https://fwupd.github.io/hsi.html#hsi-runtime-suffix

Host Security Events
  2024-05-01 15:06:29:  ✘ BIOS rollback protection changed: Enabled → Disabled

As you can see, the BIOS rollback protection in the HSI-2 section is “Disabled” as intended. But Processor rollback protection in HSI-4 is “Enabled”. I found a commit suggesting that there was a system with the config disabled and it was able to be enabled when OS Optimized Defaults is turned on.

https://github.com/fwupd/fwupd/commit/52d6c3cb78ab8ebfd432949995e5d4437569aaa6

Update documentation to indicate that loading “OS Optimized Defaults”

may enable security processor rollback protection on Lenovo systems.

I hoped that Processor rollback protection might be disabled by turning off OS Optimized Defaults instead.

<figcaption> Tried OS Optimized Defaults turned off but no luck </figcaption>
$ fwupdmgr security
Host Security ID: HSI:1! (v1.9.16)

...

✘ BIOS rollback protection:      Disabled

...

HSI-4
✔ Processor rollback protection: Enabled

...

Host Security Events
  2024-05-02 03:24:45:  ✘ Kernel lockdown disabled
  2024-05-02 03:24:45:  ✘ Secure Boot disabled
  2024-05-02 03:24:45:  ✘ Pre-boot DMA protection is disabled
  2024-05-02 03:24:45:  ✘ Encrypted RAM changed: Encrypted → Not supported

Some configurations were overridden, but the Processor rollback protection stayed the same. This confirmed that it’s really impossible to downgrade firmware that contains the vulnerability fixes. I learned the hard way that there is a clear difference between “a vendor doesn’t support downgrading” and “it can’t be downgraded”, as per the release notes.

https://download.lenovo.com/pccbbs/mobiles/r23uj73wd.txt

CHANGES IN THIS RELEASE

Version 1.49 (UEFI BIOS) 1.32 (ECP)

[Important updates]

  • Notice that BIOS can’t be downgraded to older BIOS version after upgrade to r23uj73w(1.49).

[New functions or enhancements]

  • Enhancement to address security vulnerability, CVE-2023-5058,LEN-123535,LEN-128083,LEN-115697,LEN-123534,LEN-118373,LEN-119523,LEN-123536.
  • Change to permit fan rotation after fan error happen.

I have to wait for a new and better firmware.

on May 03, 2024 03:20 AM

May 01, 2024

Thanks to a colleague who introduced me to Nim during last week’s SUSE Labs conference, I became a man with a dream, and after fiddling with compiler flags and obviously not reading documentation, I finally made it.

This is something that shouldn’t exist; from the list of ideas that should never have happened.

But it does. It’s a Perl interpreter embedded in Rust. Get over it.

Once cloned, you can run the following commands to see it in action:

  • cargo run --verbose -- hello.pm showtime
  • cargo run --verbose -- hello.pm get_quick_headers

How it works

There is a lot of autogenerated code, mainly for two things:

  • bindings.rs and wrapper.h: I made a lot of assumptions here, and perlxsi.c may or may not be necessary in the future (see main::xs_init_rust), depending on how bad or terrible my C knowledge is by the time you’re reading this.
  • The xs_init_rust function is the one that does the magic: as far as my understanding goes, it hooks up boot_DynaLoader to Perl’s DynaLoader via ffi.

With those two bits in place, thanks to the magic of the bindgen crate, and after some initialization, I decided to use Perl_call_argv. Note that Perl_ in this case comes from bindgen; I might later change the convention to ruperl or something, to avoid confusion with perl_parse or perl_alloc, which (if I understand correctly) are exposed directly by the ffi interface.

What I ended up doing, for now (or at least for this PoC), is passing the same list of arguments directly to Perl_call_argv, which will in turn take the third argument and pass it verbatim to call_argv:

        Perl_call_argv(myperl, perl_sub, flags_ptr, perl_parse_args.as_mut_ptr());

Right now hello.pm defines two subroutines: one opens a file, writes something, and prints the time to stdout; the second queries my blog and shows the headers. This is only example code, but it is enough to demonstrate that the DynaLoader works, and that the embedding works too :)

itsalive

I got most of this working by following the perlembed guide.

Why?

Why not?

I want to see if I can also embed Python in the same binary, so I can call native Perl from native Python, and see how I can fiddle all that into os-autoinst.

Where to find the code?

On github: https://github.com/foursixnine/ruperl or under https://crates.io/crates/ruperl

on May 01, 2024 12:00 AM

April 30, 2024

Ubuntu 24.04 LTS OneDrive

Dougie Richardson

I’ve not had much time to play around with the latest release but this is cool – OneDrive Nautilus integration.

Go to Settings > Online Accounts > Microsoft 365, leave everything blank and hit “Sign in…”. A web page opens to authenticate, and then you can mount OneDrive in Nautilus.

on April 30, 2024 08:22 PM

April 27, 2024

The Joy of Code

Alan Pope

A few weeks ago, in episode 25 of Linux Matters Podcast I brought up the subject of ‘Coding Joy’. This blog post is an expanded follow-up to that segment. Go and listen to that episode - or not - it’s all covered here.

The Joy of Linux Torture

Not a Developer

I’ve said this many times - I’ve never considered myself a ‘Developer’. It’s not so much imposter syndrome, but plain facts. I didn’t attend university to study software engineering, and have never held a job with ‘Engineer’ or ‘Developer’ in the title.

(I do have Engineering Manager and Developer Advocate roles in my past, but in popey’s weird set of rules, those don’t count.)

I have written code over the years. Starting with BASIC on the Sinclair ZX81 and Sinclair Spectrum, I wrote stuff for fun and no financial gain. I also coded in Z80 & 6502 assembler, taught myself Pascal on my Epson 8086 PC in 1990, then QuickBasic and years later, BlitzBasic, Lua (via LÖVE) and more.

In the workplace, I wrote some alarmingly complex utilities in Windows batch scripts and later Bash shell scripts on Linux. In a past career, I would write ABAP in SAP - which turned into an internal product mildly amusingly called “Alan’s Tool”.

These were pretty much all coding for fun, though. Nobody specced up a project and assigned me as a developer on it. I just picked up the tools and started making something, whether that was a sprite routine in Z80 assembler, an educational CPU simulator in Pascal, or a spreadsheet uploader for SAP BiW.

In 2003, three years before Twitter launched in 2006, I made a service called ‘Clunky.net’. It was a bunch of PHP and Perl smashed together and published online with little regard for longevity or security. Users could sign up and send ’tweet’ style messages from their phone via SMS, which would be presented in a reverse-chronological timeline. It didn’t last, but I had fun making it while it did.

They were all fun side-quests.

None of this makes me a developer.

Volatile Memories

It’s rapidly approaching fifty years since I first wrote any code on my first computer. Back then, you’d typically write code and then either save it on tape (if you were patient) or disk (if you were loaded). Maybe you’d write it down - either before or after you typed it in - or perhaps you’d turn the computer off and lose it all.

When I studied for a BTEC National Diploma in Computer Studies at college, one of our classes was on the IBM PC with two floppy disc drives. The lecturer kept hold of all the floppies because we couldn’t be trusted not to lose, damage or forget them. Sometimes the lecturer was held up at the start of class, so we’d be sat twiddling our thumbs for a bit.

In those days, when you booted the PC with no floppy inserted, it would go directly into BASICA, like the 8-bit microcomputers before it. I would frequently start writing something, anything, to pass the time.

With no floppy disks on hand, the code - beautiful as it was - would be lost. The lecturer often reset the room when they entered, hitting a big red ‘Stop’ button, which instantly powered down all the computers, losing whatever ‘work’ you’d done.

I was probably a little irritated in the moment, just as I would be when the RAM pack wobbled on my ZX81, losing everything. You move on, though, and make something else, or get on with your college work, and soon forget about it.

Or you bitterly remember it and write a blog post four decades later. Each to their own.

Sharing is Caring

This part was the main focus of the conversation when we talked about this on the show.

In the modern age, over the last ten to fifteen years or so, I’ve not done so much of the kind of coding I wrote about above. I certainly have done some stuff for work, mostly around packaging other people’s software as snaps or writing noddy little shell scripts. But I lost a lot of the ‘joy’ of coding recently.

Why?

I think a big part is the expectation that I’d make the code available to others. The public scrutiny others give your code may have been a factor. The pressure I felt that I should put my code out and continue to maintain it rather than throw it over the wall wouldn’t have helped.

I think I was so obsessed with doing the ‘right’ thing that coding ‘correctly’ or following standards and making it all maintainable became a cognitive roadblock.

I would start writing something and then begin wondering, ‘How would someone package this up?’ and ‘Am I using modern coding standards, toolkits, and frameworks?’ This held me back from the joy of coding in the first place. I was obsessing too much over other people’s opinions of my code and whether someone else could build and run it.

I never used to care about this stuff for personal projects, and it was a lot more joyful an experience - for me.

I used to have an idea, pick up a text editor and start coding. I missed that.

Realisation

In January this year, Terence Eden wrote about his escapades making a FourSquare-like service using ActivityPub and OpenStreetMap. When he first mentioned this on Mastodon, I grabbed a copy of the code he shared and had a brief look at it.

The code was surprisingly simple, scrappy, kinda working, and written in PHP. I was immediately thrown back twenty years to my terrible ‘Clunky’ code and how much fun it was to throw together.

In February, I bumped into Terence at State of Open Con in London and took the opportunity to quiz him about his creation. We discussed his choice of technology (PHP), and the simple ’thrown together in a day’ nature of the project.

At that point, I had a bit of a light-bulb moment, realising that I could get back to joyful coding. I don’t have to share everything; not every project needs to be an Open-Source Opus.

I can open a text editor, type some code, and enjoy it, and that’s enough.

Joy Rediscovered

I had an idea for a web application and wanted to prototype something without too much technological research or overhead. So I created a folder on my home server, ran php -S 0.0.0.0:9000 in a terminal there, made a skeleton index.php and pointed a browser at the address. Boom! Application created!

I created some horribly insecure and probably unmaintainable PHP that will almost certainly never see the light of day.

I had fun doing it though. Which is really the whole point.

More side-quests, fewer grand plans.

on April 27, 2024 08:00 AM

April 26, 2024

Over coffee this morning, I stumbled upon simone, a fledgling Open-Source tool for repurposing YouTube videos as blog posts. The Python tool creates a text summary of the video and extracts some contextual frames to illustrate the text.

A neat idea! In my experience, software engineers are often tasked with making demonstration videos, but other engineers commonly prefer consuming the written word over watching a video. I took simone for a spin, to see how well it works. Scroll down and tell me what you think!

I was sat in front of my work laptop, which is a mac, so roughly speaking, this is what I did:

  • Install host pre-requisites
$ brew install ffmpeg tesseract virtualenv
$ git clone https://github.com/rajtilakjee/simone
  • Get a free API key from OpenRouter
  • Put the API key in .env
GEMMA_API_KEY=sk-or-v1-0000000000000000000000000000000000000000000000000000000000000000
  • Install python requisites
$ cd simone
$ virtualenv .venv
$ source .venv/bin/activate
(.venv) $ pip install -r requirements.txt
  • Run it!
(.venv) $ python src/main.py
Enter YouTube URL: https://www.youtube.com/watch?v=VDIAHEoECfM
/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/whisper/transcribe.py:115: UserWarning: FP16 is not supported on CPU; using FP32 instead
 warnings.warn("FP16 is not supported on CPU; using FP32 instead")
Traceback (most recent call last):
 File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 255, in run_tesseract
 proc = subprocess.Popen(cmd_args, **subprocess_args())
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 File "/opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/subprocess.py", line 1026, in __init__
 self._execute_child(args, executable, preexec_fn, close_fds,
 File "/opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/subprocess.py", line 1955, in _execute_child
 raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'C:/Program Files/Tesseract-OCR/tesseract.exe'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
 File "/Users/alan/Work/rajtilakjee/simone/src/main.py", line 47, in <module>
 blogpost(url)
 File "/Users/alan/Work/rajtilakjee/simone/src/main.py", line 39, in blogpost
 score = scores.score_frames()
 ^^^^^^^^^^^^^^^^^^^^^
 File "/Users/alan/Work/rajtilakjee/simone/src/utils/scorer.py", line 20, in score_frames
 extracted_text = pytesseract.image_to_string(
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 423, in image_to_string
 return {
 ^
 File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 426, in <lambda>
 Output.STRING: lambda: run_and_get_output(*args),
 ^^^^^^^^^^^^^^^^^^^^^^^^^
 File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 288, in run_and_get_output
 run_tesseract(**kwargs)
 File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 260, in run_tesseract
 raise TesseractNotFoundError()
pytesseract.pytesseract.TesseractNotFoundError: C:/Program Files/Tesseract-OCR/tesseract.exe is not installed or it's not in your PATH. See README file for more information.
  • Oof!
  • File a bug (like a good Open Source citizen)
  • Locally patch the file and try again
(.venv) $ python src/main.py
Enter YouTube URL: https://www.youtube.com/watch?v=VDIAHEoECfM
/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/whisper/transcribe.py:115: UserWarning: FP16 is not supported on CPU; using FP32 instead
 warnings.warn("FP16 is not supported on CPU; using FP32 instead")
  • Look for results
(.venv) $ ls -l generated_blogpost.txt *.jpg
-rw-r--r-- 1 alan staff 2163 26 Apr 09:26 generated_blogpost.txt
-rw-r--r--@ 1 alan staff 132984 26 Apr 09:27 top_frame_4_score_106.jpg
-rw-r--r-- 1 alan staff 184705 26 Apr 09:27 top_frame_5_score_105.jpg
-rw-r--r-- 1 alan staff 126148 26 Apr 09:27 top_frame_9_score_101.jpg

In my test I pointed simone at a short demo video from the YouTube channel of my employer, Anchore. The results are below, with no editing; I even included the typos. The images at the bottom of this post are frames from the video that simone selected.


Ancors Static Stick Checker Tool Demo: Evaluating and Resolving Security Findings

Introduction

Static stick checker tool helps developers identify security vulnerabilities in Docker images by running open-source security checks and generating remediation recommendations. This blog post summarizes a live demo of the tool’s capabilities.

How it works

The tool works by:

  • Downloading and analyzing the Docker image.
  • Detecting the base operating system distribution and selecting the appropriate stick profile.
  • Running open-source security checks on the image.
  • Generating a report of identified vulnerabilities and remediation actions.

Demo Walkthrough

The demo showcases the following steps:

  • Image preparation: Uploading a Docker image to a registry.
  • Tool execution: Running the static stick checker tool against the image.
  • Results viewing: Analyzing the generated stick results and identifying vulnerabilities.
  • Remediation: Implementing suggested remediation actions by modifying the Dockerfile.
  • Re-checking: Running the tool again to verify that the fixes have been effective.

Key findings

  • The static stick checker tool identified vulnerabilities in the Docker image in areas such as:
    • Verifying file hash integrity.
    • Configuring cryptography policy.
    • Verifying file permissions.
  • Remediation scripts were provided to address each vulnerability.
  • By implementing the recommended changes, the security posture of the Docker image was improved.

Benefits of using the static stick checker tool

  • Identify security vulnerabilities early in the development process.
  • Automate the remediation process.
  • Shift security checks leftward in the development pipeline.
  • Reduce the burden on security teams by addressing vulnerabilities before deployment.

Conclusion

The Ancors static stick checker tool provides a valuable tool for developers to improve the security of their Docker images. By proactively addressing vulnerabilities during the development process, organizations can ensure their applications are secure and reduce the risk of security incidents


Here’s the images it pulled out:

First image taken from the video

Second image taken from the video

Third image taken from the video

Not bad! It could be better - getting the company name wrong, for one!

I can imagine using this to create a YouTube description, or using it as a skeleton from which a blog post could be written. I certainly wouldn’t just pipe the output of this into blog posts! But so many videos need better descriptions, and this could help!

on April 26, 2024 09:00 AM

April 25, 2024

The Kubuntu Team is happy to announce that Kubuntu 24.04 has been released, featuring the ‘beautiful’ KDE Plasma 5.27: simple by default, powerful when needed.

Codenamed “Noble Numbat”, Kubuntu 24.04 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

Under the hood, there have been updates to many core packages, including a new 6.8-based kernel, KDE Frameworks 5.115, KDE Plasma 5.27 and KDE Gear 23.08.

Kubuntu 24.04 with Plasma 5.27.11

Kubuntu has seen many updates for other applications, both in our default install, and installable from the Ubuntu archive.

Haruna, Krita, Kdevelop, Yakuake, and many many more applications are updated.

Applications for core day-to-day usage are included and updated, such as Firefox, and LibreOffice.

For a list of other application updates and known bugs, be sure to read our release notes.

Download Kubuntu 24.04, or learn how to upgrade from 23.10 or 22.04 LTS.

Note: For upgrades from 23.10, there may be a delay of a few hours to days between the official release announcements and the Ubuntu Release Team enabling upgrades.

on April 25, 2024 04:16 PM

The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 24.04 LTS, code-named “Noble Numbat”. This marks Ubuntu Studio’s 34th release. This release is a Long-Term Support release and as such, it is supported for 3 years (36 months, until April 2027).

Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a more complete list of changes and known issues. Listed here are some of the major highlights.

You can download Ubuntu Studio 24.04 LTS from our download page.

Special Notes

The Ubuntu Studio 24.04 LTS disk image (ISO) exceeds 4 GB and cannot be downloaded to some file systems such as FAT32 and may not be readable when burned to a standard DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick with the ISO image or burning to a Dual-Layer DVD.

Minimum installation media requirements: Dual-Layer DVD or 8GB USB drive.

Images can be obtained from this link: https://cdimage.ubuntu.com/ubuntustudio/releases/24.04/beta/

Full updated information, including Upgrade Instructions, is available in the Release Notes.

Please note that upgrading from 22.04 before the release of 24.04.1, due August 2024, is unsupported.

Upgrades from 23.10 should be enabled within a month after release, so we appreciate your patience.

New This Release

All-New System Installer

In cooperation with the Ubuntu Desktop Team, we have an all-new Desktop installer. This installer uses the underlying code of the Ubuntu Server installer (“Subiquity”), which has been in use for years, with a frontend coded in “Flutter”. This took a large amount of work for this release, and we were able to help a lot of other official Ubuntu flavors transition to this new installer.

Be on the lookout for a special easter egg when the graphical environment for the installer first starts. For those of you who have been long-time users of Ubuntu Studio since our early days (even before Xfce!), you will notice exactly what it is.

PipeWire 1.0.4

Now for the big one: PipeWire is now mature, and this release contains PipeWire 1.0. With PipeWire 1.0 comes the stability and compatibility you would expect from multimedia audio. In fact, at this point, we recommend PipeWire for Professional, Prosumer, and Everyday audio needs alike. At Ubuntu Summit 2023 in Riga, Latvia, our project leader Erich Eickmeyer used PipeWire to demonstrate live audio mixing with much success and has since done some audio mastering work using it. JACK developers even consider it to be “JACK 3”.

PipeWire’s JACK compatibility is configured for use out-of-the-box and is zero-latency internally. System latency is configurable via Ubuntu Studio Audio Configuration.

However, if you would rather use straight JACK 2 instead, that’s also possible. Ubuntu Studio Audio Configuration can disable and enable PipeWire’s JACK compatibility on-the-fly. From there, you can simply use JACK via QJackCtl.

With this, we consider audio production with Ubuntu Studio so mature that it can now rival operating systems such as macOS and Windows in ease-of-use since it’s ready to go out-of-the-box.

Deprecation of PulseAudio/JACK setup/Studio Controls

Due to the maturity of PipeWire, we now consider the traditional PulseAudio/JACK setup, where JACK would be started/stopped by Studio Controls and bridged to PulseAudio, deprecated. This configuration is still installable via Ubuntu Studio Audio Configuration, but we do not recommend it. Studio Controls may return someday as a PipeWire fine-tuning solution, but for now it is unsupported by the developer. For that reason, we recommend users not use this configuration. If you do, it is at your own risk and no support will be given. In fact, it’s likely to be dropped for 24.10.

Ardour 8.4

While this does not represent the latest release of Ardour, Ardour 8.4 is a great release. If you would like the latest release, we highly recommend making a one-time purchase or subscribing to Ardour directly from the developers to help support this wonderful application. Also, for that reason, this will be an application we will not directly backport. More on that later.

Ubuntu Studio Audio Configuration

Ubuntu Studio Audio Configuration has undergone a UI overhaul and now contains the ability to start and stop a Dummy Audio Device, which can also be configured to start or stop upon login. When the dummy device is assigned as the default, channels that would normally be assigned to your system audio are freed up, with that audio routed to a null device instead.

Meta Package for Music Education

In cooperation with Edubuntu, we have created a metapackage for music education. This package is installable from Ubuntu Studio Installer and includes the following packages:

  • FMIT: Free Musical Instrument Tuner, a tool for tuning musical Instruments (also included by default)
  • GNOME Metronome: Exactly what it sounds like (pun unintended): a metronome.
  • Minuet: Ear training for intervals, chords, scales, and more.
  • MuseScore: Create, playback, and print sheet music for free (this one is no stranger to the Ubuntu Studio community)
  • Piano Booster: MIDI player/game that displays musical notes and teaches you how to play piano, optionally using a MIDI keyboard.
  • Solfege: Ear training program for harmonic and melodic intervals, chords, scales, and rhythms.

New Artwork

Thanks to the work of Eylul and the submissions to the Ubuntu Studio Noble Numbat Wallpaper Contest, we have a number of wallpapers to choose from and a new default wallpaper.

Deprecation of Ubuntu Studio Backports Is In Effect

As stated in the Ubuntu 23.10 Release Announcement, the Ubuntu Studio Backports PPA is now deprecated in favor of the official Ubuntu Backports repository. However, the Backports repository only works for LTS releases and for good reason. There are a few requirements for backporting:

  • It must be an application which already exists in the Ubuntu repositories
  • It must be an application which would not otherwise qualify for a simple bugfix, which would then qualify it to be a Stable Release Update. This means it must have new features.
  • It must not rely on new libraries or new versions of libraries.
  • It must exist within a later supported release or the development release of Ubuntu.

If you have a suggestion for an application to backport that meets those requirements, feel free to join and email the Ubuntu Studio Users Mailing List with your suggestion, using the tag “[BPO]” at the beginning of the subject line. Backports to 22.04 LTS are now closed and backports to 24.04 LTS are now open. Additionally, suggestions must pertain to Ubuntu Studio and should preferably be applications included with Ubuntu Studio. Suggestions can be rejected at the Project Leader’s discretion.

One package that is exempt from backporting is Ardour. To help support Ardour’s funding, you may obtain later versions directly from them. To do so, please make a one-time purchase or subscribe to Ardour on their website. If you wish to get later versions of Ardour from us, you will have to wait until the next regular release of Ubuntu Studio, due in October 2024.

We’re back on Matrix

You’ll notice that the menu links to our support chat and on our website will now take you to a Matrix chat. This is due to the Ubuntu community carving its own space within the Matrix federation.

However, this is not only a support chat. This is also a creativity discussion chat. You can pass ideas to each other, and you’re welcome to do so as long as the topic remains within those confines. However, if a moderator or admin warns you that you’re getting off-topic (or away from the intention of the chat room), please heed the warning.

This is a persistent connection, meaning that if you close the window (or chat), you won’t lose your place; you may only need to sign back in to resume the chat.

Frequently Asked Questions

Q: Does Ubuntu Studio contain snaps?
A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.

Thunderbird also became a snap during this cycle for the maintainers to get security patches delivered faster.

Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be packaged as a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.

Q: Will you make an ISO with {my favorite desktop environment}?
A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.

Q: What if I don’t want all these packages installed on my machine?
A: Simply use the Ubuntu Studio Installer to remove the features of Ubuntu Studio you don’t want or need!

Looking Toward the Future

Plasma 6

Ubuntu Studio, in cooperation with Kubuntu, will be switching to Plasma 6 during the 24.10 development cycle. Likewise, Lubuntu will be switching to LXQt 2.0 and Qt 6, so the three flavors will be cooperating to do the move.

New Look

Ubuntu Studio has been using the same theming, “Materia” (except for the 22.04 LTS release which was a re-colored Breeze theme) since 19.04. However, Materia has gone dead upstream. To stay consistent, we found a fork called “Orchis” which seems to match closely and will be switching to that. More on that soon.

Minimal Installation

The new system installer has the capability to do minimal installations. This was something we did not have time to implement this cycle but intend to do for 24.10. This will let users install a minimal desktop to get going and then install what they need via Ubuntu Studio Installer. This will make a faster installation process but will not make the installation .iso image smaller. However, we have an idea for that as well.

Minimal Installation .iso Image

We are going to research what it will take to create a minimal installer .iso image that will function much like the regular .iso image, minus the ability to install everything, and instead allow the user to customize the installation via Ubuntu Studio Installer. This should lead to a much smaller initial download. Unlike creating a version with a different desktop environment, the Ubuntu Technical Board is on record as saying this would not require going through the new flavor creation process. Our friends at Xubuntu recently did something similar.

Get Involved!

A wonderful way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!

Our project leader, Erich Eickmeyer, is now working on Ubuntu Studio at least part-time, and is hoping that the users of Ubuntu Studio can give enough to generate a monthly part-time income. Your donations are appreciated! If other distributions can do it, surely we can! See the sidebar for ways to give!

Special Thanks

Huge special thanks for this release go to:

  • Eylul Dogruel: Artwork, Graphics Design
  • Ross Gammon: Upstream Debian Developer, Testing, Email Support
  • Sebastien Ramacher: Upstream Debian Developer
  • Dennis Braun: Upstream Debian Developer
  • Rik Mills: Kubuntu Council Member, help with Plasma desktop
  • Scarlett Moore: Kubuntu Project Lead, help with Plasma desktop
  • Zixing Liu: Simplified Chinese translations in the installer
  • Simon Quigley: Lubuntu Release Manager, help with Qt items, Core Developer stuff, keeping Erich sane and focused
  • Steve Langasek: Help with livecd-rootfs changes to make the new installer work properly.
  • Dan Bungert: Subiquity, seed fixes
  • Dennis Loose: Ubuntu Desktop Provision (installer)
  • Lukas Klingsbo: Ubuntu Desktop Provision (installer)
  • Len Ovens: Testing, insight
  • Wim Taymans: Creator of PipeWire
  • Mauro Gaspari: Tutorials, Promotion, and Documentation, Testing, keeping Erich sane
  • Krytarik Raido: IRC Moderator, Mailing List Moderator
  • Erich Eickmeyer: Project Leader, Packaging, Development, Direction, Treasurer

A Note from the Project Leader

When I started out working on Ubuntu Studio six years ago, I had a vision of making it not only the easiest Linux-based operating system for content creation, but the easiest content creation operating system… full-stop.

With the release of Ubuntu Studio 24.04 LTS, I believe we have achieved that goal. No longer do we have to worry about whether an application is JACK or PulseAudio or… whatever. It all just works! Audio applications can be patched to each other!

If an audio device doesn’t depend on complex drivers (i.e. if the device is class-compliant), it will just work. If a user wishes to lower the latency or change the sample rate, we have a utility that does that (Ubuntu Studio Audio Configuration). If a user wants to have finer control and use pure JACK via QJackCtl, they can do that too!

I honestly don’t know how I would replicate this on Windows, and replicating on macOS would be much harder without downloading all sorts of applications. With Ubuntu Studio 24.04 LTS, it’s ready to go and you don’t have to worry about it.

Where we are now is a dream come true for me, and something I’ve been hoping to see Ubuntu Studio become. And now, we’re finally here, and I feel like it can only get better.

-Erich Eickmeyer

on April 25, 2024 03:16 PM

Ubuntu MATE 24.04 is more of what you like, stable MATE Desktop on top of current Ubuntu. This release rolls up some fixes and more closely aligns with Ubuntu. Read on to learn more 👓️

Ubuntu MATE 24.04 LTS

Thank you! 🙇

I’d like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏 I’d like to acknowledge the close collaboration with all the Ubuntu flavour teams and the Ubuntu Foundations and Desktop Teams. The assistance and support provided by Erich Eickmeyer (Ubuntu Studio), Simon Quigley (Lubuntu) and David Muhammed (Ubuntu Budgie) have been invaluable. Thank you! 💚

What changed since Ubuntu MATE 23.10?

Here are the highlights of what’s changed since the release of Ubuntu MATE 23.10:

  • Ships stable MATE Desktop 1.26.2 with a selection of bug fixes 🐛 and minor improvements 🩹 to associated components.
  • Integrated the new ✨ Ubuntu Desktop Bootstrap installer 📀
  • Added GNOME Firmware, which replaces Firmware Updater.
  • Added App Center, which replaces Software Boutique.
  • Retired Ubuntu MATE Welcome, although it is still available for Ubuntu MATE 23.10 and earlier.

Major Applications

Accompanying MATE Desktop 1.26.2 🧉 and Linux 6.8 🐧 are Firefox 125 🔥🦊, Celluloid 0.26 🎥, Evolution 3.52 📧, LibreOffice 24.2.2 📚

See the Ubuntu 24.04 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 24.04

This new release will be first available for PC/Mac users.

Download

Upgrading to Ubuntu MATE 24.04

The upgrade process to Ubuntu MATE 24.04 LTS from either Ubuntu MATE 22.04 LTS or 23.10 is the same as Ubuntu.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.

on April 25, 2024 02:57 PM

We are pleased to announce the release of the next version of our distro, 24.04 Long Term Support. The LTS version is supported for 3 years, while the regular releases are supported for 9 months. The new release rolls up various fixes and optimizations that the Ubuntu Budgie team has released since the 22.04 release in April 2022. We also inherit hundreds of stability…

Source

on April 25, 2024 01:37 PM

Thanks to the hard work from our contributors, Lubuntu 24.04 LTS has been released. With the codename Noble Numbat, Lubuntu 24.04 is the 26th release of Lubuntu, the 12th release of Lubuntu with LXQt as the default desktop environment. Download and Support Lifespan With Lubuntu 24.04 being a long-term support release, it will follow […]

on April 25, 2024 01:31 PM

The Xubuntu team is happy to announce the immediate release of Xubuntu 24.04.

Xubuntu 24.04, codenamed Noble Numbat, is a long-term support (LTS) release and will be supported for 3 years, until 2027.

Xubuntu 24.04, featuring the latest updates from Xfce 4.18 and GNOME 46.

Xubuntu 24.04 features the latest updates from Xfce 4.18, GNOME 46, and MATE 1.26. For new users and those coming from Xubuntu 22.04, you’ll appreciate the performance, stability, and improved hardware support found in Xubuntu 24.04. Xfce 4.18 is stable, fast, and full of user-friendly features. Enjoy frictionless bluetooth headphone connections and out-of-the-box touchpad support. Updates to our icon theme and wallpapers make Xubuntu feel fresh and stylish.

The final release images for Xubuntu Desktop and Xubuntu Minimal are available as torrents and direct downloads from xubuntu.org/download/.

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

We’d like to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues

Highlights

  • Xfce 4.18 is included and well-polished since its initial release in December 2022
  • Xubuntu Minimal is included as an officially supported subproject
  • GNOME Software has been replaced by Snap Store and GDebi
  • Snap Desktop Integration is now included for improved snap package support
  • Firmware Updater has been added to support firmware updates from the Linux Vendor Firmware Service (LVFS)
  • Thunderbird is now distributed as a Snap package
  • Ubiquity has been replaced by the Flutter-based Ubuntu Installer to provide fast and user-friendly installation
  • Pipewire (and wireplumber) are now included in Xubuntu
  • Improved hardware support for bluetooth headphones and touchpads
  • Color emoji is now included and supported in Firefox, Thunderbird, and newer Gtk-based apps
  • Significantly improved screensaver integration and stability

Known Issues

  • The shutdown prompt may not be displayed at the end of the installation. Instead you might just see a Xubuntu logo, a black screen with an underscore in the upper left hand corner, or just a black screen. Press Enter and the system will reboot into the installed environment. (LP: #1944519)
  • Xorg crashes and the user is logged out after logging in or switching users on some virtual machines, including GNOME Boxes. (LP: #1861609)
  • You may experience choppy audio or poor system performance while playing audio, but only in some virtual machines (observed in VMware and VirtualBox)
  • OEM installation options are not currently supported or available, but will be included for Xubuntu 24.04.1

For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

The main Ubuntu Release Notes cover many of the other packages we carry and more generic issues.

Support

For support with the release, navigate to Help & Support for a complete list of methods to get help.

on April 25, 2024 12:00 PM

With the work that has been done in the debian-installer/netcfg merge-proposal !9 it is possible to install a standard Debian system, using the normal Debian-Installer (d-i) mini.iso images, that will come pre-installed with Netplan and all network configuration structured in /etc/netplan/.

In this write-up, I’d like to run you through a list of commands for experiencing the Netplan enabled installation process first-hand. For now, we’ll be using a custom ISO image, while waiting for the above-mentioned merge-proposal to be landed. Furthermore, as the Debian archive is going through major transitions, builds of the “unstable” branch of d-i don’t currently work. So I implemented a small backport, producing updated netcfg and netcfg-static for Bookworm, which can be used as localudebs/ during the d-i build.

Let’s start with preparing a working directory and installing the software dependencies for our virtualized Debian system:

$ mkdir d-i_bookworm && cd d-i_bookworm
$ apt install ovmf qemu-utils qemu-system-x86

Now let’s download the custom mini.iso, linux kernel image and initrd.gz containing the Netplan enablement changes, as mentioned above.

$ wget https://people.ubuntu.com/~slyon/d-i/bookworm/mini.iso
$ wget https://people.ubuntu.com/~slyon/d-i/bookworm/linux
$ wget https://people.ubuntu.com/~slyon/d-i/bookworm/initrd.gz

Next, we’ll prepare a VM by copying the EFI firmware files, preparing a persistent EFIVARs file (to boot from FS0:\EFI\debian\grubx64.efi), and creating a virtual disk for our machine:

$ cp /usr/share/OVMF/OVMF_CODE_4M.fd .
$ cp /usr/share/OVMF/OVMF_VARS_4M.fd .
$ qemu-img create -f qcow2 ./data.qcow2 5G

Finally, let’s launch the installer using a custom preseed.cfg file that will automatically install Netplan for us in the target system. A minimal preseed file could look like this:

# Install minimal Netplan generator binary
d-i preseed/late_command string in-target apt-get -y install netplan-generator

For this demo, we’re installing the full netplan.io package (incl. the Python CLI), as the netplan-generator package was not yet split out as an independent binary in the Bookworm cycle. You can choose from a set of different preseed variants to test the different configurations.

We’re using the custom linux kernel and initrd.gz here to be able to pass the preseed URL as a parameter to the kernel’s cmdline directly. Launching this VM should bring up the normal debian-installer in its netboot/gtk form:

$ export U=https://people.ubuntu.com/~slyon/d-i/bookworm/netplan-preseed+networkd.cfg
$ qemu-system-x86_64 \
	-M q35 -enable-kvm -cpu host -smp 4 -m 2G \
	-drive if=pflash,format=raw,unit=0,file=OVMF_CODE_4M.fd,readonly=on \
	-drive if=pflash,format=raw,unit=1,file=OVMF_VARS_4M.fd,readonly=off \
	-device qemu-xhci -device usb-kbd -device usb-mouse \
	-vga none -device virtio-gpu-pci \
	-net nic,model=virtio -net user \
	-kernel ./linux -initrd ./initrd.gz -append "url=$U" \
	-hda ./data.qcow2 -cdrom ./mini.iso;

Now you can click through the normal Debian-Installer process, using mostly default settings. Optionally, you could play around with the networking settings, to see how those get translated to /etc/netplan/ in the target system.

After you have confirmed your partitioning changes, the base system gets installed. I suggest not selecting any additional components, like desktop environments, to speed up the process.

During the final step of the installation (finish-install.d/55netcfg-copy-config) d-i will detect that Netplan was installed in the target system (due to the preseed file provided) and opt to write its network configuration to /etc/netplan/ instead of /etc/network/interfaces or /etc/NetworkManager/system-connections/.

Done! After the installation has finished, you can reboot into your virgin Debian Bookworm system.

To do that, quit the current Qemu process by pressing Ctrl+C, and make sure to copy over the EFIVARS.fd file that was written by grub during the installation, so Qemu can find the new system. Then reboot into the new system, no longer using the mini.iso image:

$ cp ./OVMF_VARS_4M.fd ./EFIVARS.fd
$ qemu-system-x86_64 \
        -M q35 -enable-kvm -cpu host -smp 4 -m 2G \
        -drive if=pflash,format=raw,unit=0,file=OVMF_CODE_4M.fd,readonly=on \
        -drive if=pflash,format=raw,unit=1,file=EFIVARS.fd,readonly=off \
        -device qemu-xhci -device usb-kbd -device usb-mouse \
        -vga none -device virtio-gpu-pci \
        -net nic,model=virtio -net user \
        -drive file=./data.qcow2,if=none,format=qcow2,id=disk0 \
        -device virtio-blk-pci,drive=disk0,bootindex=1 \
        -serial mon:stdio

Finally, you can play around with your Netplan enabled Debian system! As you will find, /etc/network/interfaces exists but is empty; it could still be used (optionally/additionally). Netplan was configured in /etc/netplan/ according to the settings given during the d-i installation process.

In our case, we also installed the Netplan CLI, so we can play around with some of its features, like netplan status.

Thank you for following along the Netplan enabled Debian installation process and happy hacking! If you want to learn more, join the discussion at Salsa:installer-team/netcfg and find us at GitHub:netplan.

on April 25, 2024 10:19 AM

April 24, 2024

Ubuntu MATE 23.10 is more of what you like, stable MATE Desktop on top of current Ubuntu. This release rolls up a number of bug fixes and updates that continue to build on recent releases, where the focus has been on improving stability 🪨

Ubuntu MATE 23.10

Thank you! 🙇

I’d like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏 From reporting bugs, submitting translations, providing patches, contributing to our crowd-funding, developing new features, creating artwork, offering community support, actively testing and providing QA feedback to writing documentation or creating this fabulous website. Thank you! 💚

What changed since Ubuntu MATE 23.04?

Here are the highlights of what’s changed since the release of Ubuntu MATE 23.04:

MATE Desktop

MATE Desktop has been updated to 1.26.2 with a selection of bug fixes 🐛 and minor improvements 🩹 to associated components.

  • caja-rename 23.10.1-1 has been ported from Python to C.
  • libmatemixer 1.26.0-2+deb12u1 resolves heap corruption and application crashes when removing USB audio devices.
  • mate-desktop 1.26.2-1 improves portals support.
  • mate-notification-daemon 1.26.1-1 fixes several memory leaks.
  • mate-system-monitor 1.26.0-5 now picks up libexec files from /usr/libexec
  • mate-session-manager 1.26.1-2 set LIBEXECDIR to /usr/libexec/ for correct interaction with mate-system-monitor ☝️
  • mate-user-guide 1.26.2-1 is a new upstream release.
  • mate-utils 1.26.1-1 fixes several memory leaks.

Yet more AI Generated wallpaper

My friend Simon Butcher 🇬🇧 is Head of Research Platforms at Queen Mary University of London, managing the Apocrita HPC cluster service. Once again, Simon has created a stunning AI-generated 🤖🧠 wallpaper for Ubuntu MATE using bleeding edge diffusion models 🖌 The sample below is 1920x1080, but the version included in Ubuntu MATE 23.10 is 3840x2160.

Here’s what Simon has to say about the process of creating this new wallpaper for Mantic Minotaur:

Since Minotaurs are imaginary creatures, interpretations tend to vary widely. I wanted to produce an image of a powerful creature in a graphic novel style, although not gruesome like many depictions. The latest open source Stable Diffusion XL base model was trained at a higher resolution and the difference in quality has been noticeable, particularly at better overall consistency and detail, while reducing anatomical irregularities in images. The image was produced locally using Linux and an NVIDIA A100 80GB GPU, starting from an initial text prompt and refined using img2img, inpainting and upscaling features.

Major Applications

Accompanying MATE Desktop 1.26.2 🧉 and Linux 6.5 🐧 are Firefox 118 🔥🦊, Celluloid 0.25 🎥, Evolution 3.50 📧, LibreOffice 7.6.1 📚

See the Ubuntu 23.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 23.10

This new release will be first available for PC/Mac users.

Download

Upgrading from Ubuntu MATE 23.04

You can upgrade to Ubuntu MATE 23.10 from Ubuntu MATE 23.04. Ensure that you have all updates installed for your current version of Ubuntu MATE before you upgrade.

  • Open the “Software & Updates” from the Control Center.
  • Select the 3rd Tab called “Updates”.
  • Set the “Notify me of a new Ubuntu version” drop down menu to “For any new version”.
  • Press Alt+F2 and type in update-manager -c -d into the command box.
  • Update Manager should open up and tell you: New distribution release ‘23.10’ is available.
    • If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click “Upgrade” and follow the on-screen instructions.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on April 24, 2024 09:55 PM

April 15, 2024

Ubuntu Budgie 24.04 LTS (Noble Numbat) is a Long Term Support release with 3 years of support by your distro maintainers, from April 2024 to May 2027. These release notes showcase the key takeaways for 22.04 upgraders to 24.04. In these release notes the areas covered are: Quarter & half tiling is pretty much self-explanatory. Dragging a window to the…

Source

on April 15, 2024 09:02 PM

April 13, 2024

Domo Arigato, Mr. debugfs

Paul Tagliamonte

Years ago, at what I think I remember was DebConf 15, I hacked for a while on debhelper to write build-ids to debian binary control files, so that the build-id (more specifically, the ELF note .note.gnu.build-id) wound up in the Debian apt archive metadata. I’ve always thought this was super cool, and seeing as how Michael Stapelberg blogged some great pointers around the ecosystem, including the fancy new debuginfod service, and the find-dbgsym-packages helper, which uses these same headers, I don’t think I’m the only one.

At work I’ve been using a lot of rust, specifically, async rust using tokio. To try and work on my style, and to dig deeper into the how and why of the decisions made in these frameworks, I’ve decided to hack up a project that I’ve wanted to do ever since 2015 – write a debug filesystem. Let’s get to it.

Back to the Future

Time to admit something. I really love Plan 9. It’s just so good. So many ideas from Plan 9 are just so prescient, and everything just feels right. Not just right like, feels good – like, correct. The bit that I’ve always liked the most is 9p, the network protocol for serving a filesystem over a network. This leads to all sorts of fun programs, like the Plan 9 ftp client being a 9p server – you mount the ftp server and access files like any other files. It’s kinda like if fuse were more fully a part of how the operating system worked, but fuse is all running client-side. With 9p there’s a single client, and different servers that you can connect to, which may be backed by a hard drive, remote resources over something like SFTP, FTP, HTTP or even purely synthetic.

The interesting (maybe sad?) part here is that 9p wound up outliving Plan 9 in terms of adoption – 9p is in all sorts of places folks don’t usually expect. For instance, the Windows Subsystem for Linux uses the 9p protocol to share files between Windows and Linux. ChromeOS uses it to share files with Crostini, and qemu uses 9p (virtio-p9) to share files between guest and host. If you’re noticing a pattern here, you’d be right; for some reason 9p is the go-to protocol to exchange files between hypervisor and guest. Why? I have no idea, except maybe that it’s designed well, simple to implement, and makes it a lot easier to validate the data being shared and validate security boundaries. Simplicity has its value.

As a result, there’s a lot of lingering 9p support kicking around. Turns out Linux can even handle mounting 9p filesystems out of the box. This means that I can deploy a filesystem to my LAN or my localhost by running a process on top of a computer that needs nothing special, and mount it over the network on an unmodified machine – unlike fuse, where you’d need client-specific software to run in order to mount the directory. For instance, let’s mount a 9p filesystem running on my localhost machine, serving requests on 127.0.0.1:564 (tcp) that goes by the name “mountpointname” to /mnt.

$ mount -t 9p \
-o trans=tcp,port=564,version=9p2000.u,aname=mountpointname \
127.0.0.1 \
/mnt

Linux will mount away, and attach to the filesystem as the root user, and by default, attach to that mountpoint again for each local user that attempts to use it. Nifty, right? I think so. The server is able to keep track of per-user access and authorization along with the host OS.

WHEREIN I STYX WITH IT

Since I wanted to push myself a bit more with rust and tokio specifically, I opted to implement the whole stack myself, without third party libraries on the critical path where I could avoid it. The 9p protocol (sometimes called Styx, the original name for it) is incredibly simple. It’s a series of client to server requests, which receive a server to client response. These are, respectively, “T” messages, which transmit a request to the server, which trigger an “R” message in response (Reply messages). These messages are TLV payload with a very straight forward structure – so straight forward, in fact, that I was able to implement a working server off nothing more than a handful of man pages.

Later on after the basics worked, I found a more complete spec page that contains more information about the unix specific variant that I opted to use (9P2000.u rather than 9P2000) due to the level of Linux specific support for the 9P2000.u variant over the 9P2000 protocol.
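
To make “TLV payload with a very straight forward structure” concrete, here is a tiny sketch of the framing (my own illustration, not arigato’s API): every message, T or R, starts with a little-endian size[4] that covers the whole message including itself, then a type[1] byte and a tag[2], followed by the type-specific fields.

fn parse_9p_header(buf: &[u8]) -> Option<(u32, u8, u16, &[u8])> {
    if buf.len() < 7 {
        return None;
    }
    // size[4] is little-endian and includes these four bytes themselves.
    let size = u32::from_le_bytes([buf[0], buf[1], buf[2], buf[3]]) as usize;
    if size < 7 || buf.len() < size {
        return None;
    }
    let mtype = buf[4]; // message type, e.g. 100 is Tversion, 101 is Rversion
    let tag = u16::from_le_bytes([buf[5], buf[6]]); // pairs an R reply with its T request
    // Everything after the 7 header bytes is the type-specific body.
    Some((size as u32, mtype, tag, &buf[7..size]))
}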

MR ROBOTO

The backend stack over at zoo is rust and tokio running i/o for an HTTP and WebRTC server. I figured I’d pick something fairly similar to write my filesystem with, since 9P can be implemented on basically anything with I/O. That means tokio tcp server bits, which construct and use a 9p server, which has an idiomatic Rusty API that partially abstracts the raw R and T messages, but not so much as to cause issues with hiding implementation possibilities. At each abstraction level, there’s an escape hatch – allowing someone to implement any of the layers if required. I called this framework arigato which can be found over on docs.rs and crates.io.

/// Simplified version of the arigato File trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.File.html
trait File {
    /// OpenFile is the type returned by this File via an Open call.
    type OpenFile: OpenFile;

    /// Return the 9p Qid for this file. A file is the same if the Qid is
    /// the same. A Qid contains information about the mode of the file,
    /// version of the file, and a unique 64 bit identifier.
    fn qid(&self) -> Qid;

    /// Construct the 9p Stat struct with metadata about a file.
    async fn stat(&self) -> FileResult<Stat>;

    /// Attempt to update the file metadata.
    async fn wstat(&mut self, s: &Stat) -> FileResult<()>;

    /// Traverse the filesystem tree.
    async fn walk(&self, path: &[&str]) -> FileResult<(Option<Self>, Vec<Self>)>;

    /// Request that a file's reference be removed from the file tree.
    async fn unlink(&mut self) -> FileResult<()>;

    /// Create a file at a specific location in the file tree.
    async fn create(
        &mut self,
        name: &str,
        perm: u16,
        ty: FileType,
        mode: OpenMode,
        extension: &str,
    ) -> FileResult<Self>;

    /// Open the File, returning a handle to the open file, which handles
    /// file i/o. This is split into a second type since it is genuinely
    /// unrelated -- and the fact that a file is Open or Closed can be
    /// handled by the `arigato` server for us.
    async fn open(&mut self, mode: OpenMode) -> FileResult<Self::OpenFile>;
}

/// Simplified version of the arigato OpenFile trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.OpenFile.html
trait OpenFile {
    /// iounit to report for this file. The iounit reported is used for Read
    /// or Write operations to signal, if non-zero, the maximum size that is
    /// guaranteed to be transferred atomically.
    fn iounit(&self) -> u32;

    /// Read some number of bytes up to `buf.len()` from the provided
    /// `offset` of the underlying file. The number of bytes read is
    /// returned.
    async fn read_at(
        &mut self,
        buf: &mut [u8],
        offset: u64,
    ) -> FileResult<u32>;

    /// Write some number of bytes up to `buf.len()` from the provided
    /// `offset` of the underlying file. The number of bytes written
    /// is returned.
    fn write_at(
        &mut self,
        buf: &mut [u8],
        offset: u64,
    ) -> FileResult<u32>;
}

Thanks, decade ago paultag!

Let’s do it! Let’s use arigato to implement a 9p filesystem we’ll call debugfs that will serve all the debug files shipped according to the Packages metadata from the apt archive. We’ll fetch the Packages file and construct a filesystem based on the reported Build-Id entries. For those who don’t know much about how an apt repo works, here’s the 2-second crash course on what we’re doing. The first is to fetch the Packages file, which is specific to a binary architecture (such as amd64, arm64 or riscv64). That architecture is specific to a component (such as main, contrib or non-free). That component is specific to a suite, such as stable, unstable or any of its aliases (bullseye, bookworm, etc). Let’s take a look at the Packages.xz file for the unstable-debug suite, main component, for all amd64 binaries.

$ curl \
https://deb.debian.org/debian-debug/dists/unstable-debug/main/binary-amd64/Packages.xz \
| unxz

This will return the Debian-style rfc2822-like headers, which is an export of the metadata contained inside each .deb file which apt (or other tools that can use the apt repo format) use to fetch information about debs. Let’s take a look at the debug headers for the netlabel-tools package in unstable – which is a package named netlabel-tools-dbgsym in unstable-debug.

Package: netlabel-tools-dbgsym
Source: netlabel-tools (0.30.0-1)
Version: 0.30.0-1+b1
Installed-Size: 79
Maintainer: Paul Tagliamonte <paultag@debian.org>
Architecture: amd64
Depends: netlabel-tools (= 0.30.0-1+b1)
Description: debug symbols for netlabel-tools
Auto-Built-Package: debug-symbols
Build-Ids: e59f81f6573dadd5d95a6e4474d9388ab2777e2a
Description-md5: a0e587a0cf730c88a4010f78562e6db7
Section: debug
Priority: optional
Filename: pool/main/n/netlabel-tools/netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
Size: 62776
SHA256: 0e9bdb087617f0350995a84fb9aa84541bc4df45c6cd717f2157aa83711d0c60

So here, we can parse the package headers in the Packages.xz file and store, for each Build-Id, the Filename from which we can fetch the .deb. Each .deb contains a number of files – but we’re only really interested in the files inside the .deb located at or under /usr/lib/debug/.build-id/. The parsing lives in debugfs under rfc822.rs. It’s crude, and very single-purpose, but I’m feeling a bit lazy.
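
Something in the spirit of rfc822.rs might look like the sketch below – a simplified stand-in, not the actual code, which assumes the stanza is already decompressed to a string and ignores continuation lines:

// Simplified sketch (not the real rfc822.rs): walk the decompressed
// Packages output stanza-by-stanza and map each Build-Id to the Filename
// of the .deb that contains it.
use std::collections::HashMap;

fn build_id_index(packages: &str) -> HashMap<String, String> {
    let mut index = HashMap::new();
    // Stanzas are separated by blank lines; fields are "Key: Value" pairs.
    for stanza in packages.split("\n\n") {
        let mut build_ids: Vec<String> = vec![];
        let mut filename: Option<String> = None;
        for line in stanza.lines() {
            if let Some(v) = line.strip_prefix("Build-Ids:") {
                build_ids = v.split_whitespace().map(str::to_owned).collect();
            } else if let Some(v) = line.strip_prefix("Filename:") {
                filename = Some(v.trim().to_owned());
            }
        }
        if let Some(filename) = filename {
            for id in build_ids {
                index.insert(id, filename.clone());
            }
        }
    }
    index
}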

Who needs dpkg?!

For folks who haven’t seen it yet, a .deb file is a special type of .ar file that contains (usually) three files inside – debian-binary, control.tar.xz and data.tar.xz. The core of an .ar file is a fixed-size (60 byte) entry header, followed by the number of bytes given in the header’s size field.

[8 byte .ar file magic]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
...
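
Decoding that 60-byte entry header is mostly a matter of slicing out fixed-width ASCII fields. The real parser lives in ar.rs; a rough sketch of the idea (using the common ar(5) field offsets) could look like this:

// Rough sketch of decoding one 60-byte ar entry header; only the fields
// we actually care about (name and size) are pulled out.
struct ArEntry {
    name: String,
    size: u64,
}

fn parse_entry_header(header: &[u8; 60]) -> Option<ArEntry> {
    // name: bytes 0..16, ASCII decimal size: bytes 48..58,
    // trailing magic "`\n": bytes 58..60.
    if &header[58..60] != b"`\n" {
        return None;
    }
    let name = String::from_utf8_lossy(&header[0..16]).trim_end().to_string();
    let size = String::from_utf8_lossy(&header[48..58])
        .trim()
        .parse::<u64>()
        .ok()?;
    // The entry's data follows: `size` bytes, padded out to an even offset.
    Some(ArEntry { name, size })
}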

First up was to implement a basic ar parser in ar.rs. Before we get into using it to parse a deb, as a quick diversion, let’s break apart a .deb file by hand – something that is a bit of a rite of passage (or at least it used to be? I’m getting old) during the Debian nm (new member) process – to take a look at where exactly the .debug file lives inside the .deb file.

$ ar x netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ ls
control.tar.xz debian-binary
data.tar.xz netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ tar --list -f data.tar.xz | grep '.debug$'
./usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug

Since we know quite a bit about the structure of a .deb file, and I had to implement support from scratch anyway, I opted to implement a (very!) basic debfile parser using HTTP Range requests. HTTP Range requests, if supported by the server (denoted by an accept-ranges: bytes HTTP header in response to an HTTP HEAD request to that file), mean that we can add a header such as range: bytes=8-68 to specifically request that the returned GET body be the byte range provided (in the above case, the bytes starting from byte offset 8 until byte offset 68). This means we can fetch just the ar entry headers from the .deb file until we get to the file inside the .deb we are interested in (in our case, the data.tar.xz file) – at which point we can request the body of that file with a final range request. I wound up writing a struct to handle a read_at-style API surface in hrange.rs, which we can pair with ar.rs above and start to find our data in the .deb remotely without downloading and unpacking the .deb at all.
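
The actual hrange.rs may look nothing like this, but as a minimal sketch of the ranged-read idea (assuming reqwest for the HTTP client, which is my assumption, not necessarily what the real code uses):

// Minimal sketch of a ranged read; the real read_at-style wrapper lives in
// hrange.rs and may be structured quite differently.
use reqwest::header::RANGE;

async fn read_range(url: &str, offset: u64, len: u64) -> reqwest::Result<Vec<u8>> {
    let client = reqwest::Client::new();
    let resp = client
        .get(url)
        // e.g. "bytes=8-67" covers the first 60-byte entry header right
        // after the 8-byte "!<arch>\n" magic.
        .header(RANGE, format!("bytes={}-{}", offset, offset + len - 1))
        .send()
        .await?
        .error_for_status()?;
    Ok(resp.bytes().await?.to_vec())
}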

After we have the body of the data.tar.xz coming back through the HTTP response, we get to pipe it through an xz decompressor (this kinda sucked in Rust, since a tokio AsyncRead is not the same as an http Body response is not the same as std::io::Read, is not the same as an async (or sync) Iterator is not the same as what the xz2 crate expects; leading me to read blocks of data into a buffer and stuff them through the decoder, looping over the buffer for each lzma2 packet) and a tarfile parser (similarly troublesome). From there we get to iterate over all entries in the tarfile, stopping when we reach our file of interest. Since we can’t seek, but gdb needs to, we’ll pull it out of the stream into a Cursor<Vec<u8>> in-memory and pass a handle to it back to the user.
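
Stripped of all that async plumbing, the “decompress, walk the tar, copy the member we want into memory” step boils down to something like the following synchronous sketch (assuming the xz2 and tar crates; the real code has to bridge the async HTTP body into this, which is where the pain lives):

// Simplified, synchronous sketch: decompress data.tar.xz, find the member
// whose path ends with the wanted suffix, and buffer it in a seekable Cursor.
use std::io::{Cursor, Read};

fn extract_member(
    data_tar_xz: &[u8],
    wanted_suffix: &str, // e.g. the ".build-id/e5/...debug" path we want
) -> std::io::Result<Option<Cursor<Vec<u8>>>> {
    let decoder = xz2::read::XzDecoder::new(data_tar_xz);
    let mut archive = tar::Archive::new(decoder);
    for entry in archive.entries()? {
        let mut entry = entry?;
        let path = entry.path()?.to_string_lossy().into_owned();
        if path.ends_with(wanted_suffix) {
            // gdb needs to seek, and we can't seek the stream, so buffer
            // the whole member in memory.
            let mut buf = Vec::new();
            entry.read_to_end(&mut buf)?;
            return Ok(Some(Cursor::new(buf)));
        }
    }
    Ok(None)
}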

From here on out it’s a matter of gluing together a File-traited struct in debugfs and serving the filesystem over TCP using arigato. Done deal!

A quick diversion about compression

I was originally hoping to avoid transferring the whole tar file over the network (and therefore also reading the whole debug file into ram, which objectively sucks), but quickly hit issues with figuring out a way around seeking around an xz file. What’s interesting is that xz has a great primitive to solve this specific problem (specifically, use a block size that lets you seek to the block boundary just before your desired position, discarding at most block size - 1 bytes), but data.tar.xz files generated by dpkg appear to have a single mega-huge block for the whole file. I don’t know why I would have expected any different, in retrospect. That means this now devolves into the base case of “How do I seek around an lzma2 compressed data stream?”, which is a much more complex question.

Thankfully, notoriously brilliant tianon was nice enough to introduce me to Jon Johnson who did something super similar – adapted a technique to seek inside a compressed gzip file, which lets his service oci.dag.dev seek through Docker container images super fast based on some prior work such as soci-snapshotter, gztool, and zran.c. He also pulled this party trick off for apk based distros over at apk.dag.dev, which seems apropos. Jon was nice enough to publish a lot of his work on this specifically in a central place under the name “targz” on his GitHub, which has been a ton of fun to read through.

The gist is that, by dumping the decompressor’s state (window of previous bytes, in-memory data derived from the last N-1 bytes) at specific “checkpoints” along with the compressed data stream offset in bytes and decompressed offset in bytes, one can seek to that checkpoint in the compressed stream and pick up where you left off – creating a similar “block” mechanism against the wishes of gzip. It means you’d need to do an O(n) run over the file, but every request after that will be sped up according to the number of checkpoints you’ve taken.
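
As a sketch of the idea (not Jon’s actual code), the state you’d need to squirrel away at each checkpoint, and the lookup you’d do before resuming decompression, is roughly:

// Conceptual sketch of the checkpointing trick: store periodic snapshots of
// the decompressor state so a later read can resume from the nearest
// checkpoint at or before the target offset.
struct Checkpoint {
    compressed_offset: u64,   // where to resume reading the compressed stream
    uncompressed_offset: u64, // how much plaintext precedes this point
    window: Vec<u8>,          // the decompressor's history window (e.g. 32 KiB for gzip)
}

/// Find the latest checkpoint at or before the requested uncompressed offset;
/// decompression resumes there and discards bytes up to the target.
fn nearest_checkpoint(checkpoints: &[Checkpoint], target: u64) -> Option<&Checkpoint> {
    checkpoints
        .iter()
        .filter(|c| c.uncompressed_offset <= target)
        .max_by_key(|c| c.uncompressed_offset)
}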

Given the complexity of xz and lzma2, I don’t think this is possible for me at the moment – especially given most of the files I’ll be requesting will not be loaded from again, and given that I can “just” cache the debug header by Build-Id. I want to implement this (because I’m generally curious and Jon has a way of getting someone excited about compression schemes, which is not a sentence I thought I’d ever say out loud), but for now I’m going to move on without this optimization. Such a shame, since it kills a lot of the work that went into seeking around the .deb file in the first place, given the debian-binary and control.tar.xz members are so small.

The Good

First, the good news, right? It works! That’s pretty cool. I’m positive my younger self would be amused and happy to see this working; as is current-day paultag. Let’s take debugfs out for a spin! First, we need to mount the filesystem – and it even works on an entirely unmodified, stock Debian box on my LAN, which is huge:

$ mount \
-t 9p \
-o trans=tcp,version=9p2000.u,aname=unstable-debug \
192.168.0.2 \
/usr/lib/debug/.build-id/

And, let’s prove to ourselves that this actually mounted before we go trying to use it:

$ mount | grep build-id
192.168.0.2 on /usr/lib/debug/.build-id type 9p (rw,relatime,aname=unstable-debug,access=user,trans=tcp,version=9p2000.u,port=564)

Slick. We’ve got an open connection to the server – our host will keep that connection alive as root, attached to the filesystem provided in aname. Let’s take a look at it.

$ ls /usr/lib/debug/.build-id/
00 0d 1a 27 34 41 4e 5b 68 75 82 8E 9b a8 b5 c2 CE db e7 f3
01 0e 1b 28 35 42 4f 5c 69 76 83 8f 9c a9 b6 c3 cf dc E7 f4
02 0f 1c 29 36 43 50 5d 6a 77 84 90 9d aa b7 c4 d0 dd e8 f5
03 10 1d 2a 37 44 51 5e 6b 78 85 91 9e ab b8 c5 d1 de e9 f6
04 11 1e 2b 38 45 52 5f 6c 79 86 92 9f ac b9 c6 d2 df ea f7
05 12 1f 2c 39 46 53 60 6d 7a 87 93 a0 ad ba c7 d3 e0 eb f8
06 13 20 2d 3a 47 54 61 6e 7b 88 94 a1 ae bb c8 d4 e1 ec f9
07 14 21 2e 3b 48 55 62 6f 7c 89 95 a2 af bc c9 d5 e2 ed fa
08 15 22 2f 3c 49 56 63 70 7d 8a 96 a3 b0 bd ca d6 e3 ee fb
09 16 23 30 3d 4a 57 64 71 7e 8b 97 a4 b1 be cb d7 e4 ef fc
0a 17 24 31 3e 4b 58 65 72 7f 8c 98 a5 b2 bf cc d8 E4 f0 fd
0b 18 25 32 3f 4c 59 66 73 80 8d 99 a6 b3 c0 cd d9 e5 f1 fe
0c 19 26 33 40 4d 5a 67 74 81 8e 9a a7 b4 c1 ce da e6 f2 ff

Outstanding. Let’s try using gdb to debug a binary that was provided by the Debian archive, and see if it’ll load the ELF by build-id from the right .deb in the unstable-debug suite:

$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)

Yes! Yes it will!

$ file /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
/usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter *empty*, BuildID[sha1]=e59f81f6573dadd5d95a6e4474d9388ab2777e2a, for GNU/Linux 3.2.0, with debug_info, not stripped

The Bad

Linux’s support for 9p is mainline, which is great, but it’s not robust. Network issues or server restarts will wedge the mountpoint (Linux can’t reconnect when the TCP connection breaks), and things that work fine on local filesystems get translated in a way that causes a lot of network chatter – for instance, just due to the way the syscalls are translated, doing an ls will result in a stat call for each file in the directory, even though Linux had just received a stat entry for every file while it was resolving directory names. On top of that, Linux will serialize all I/O with the server, so there are no concurrent requests for file information, writes, or reads pending at the same time; and read and write throughput will degrade as latency increases due to increasing round-trip time, even though there are offsets included in the read and write calls. It works well enough, but is frustrating to run up against, since there’s not a lot you can do server-side to help with this beyond implementing the 9P2000.L variant (which maybe is worth it).

The Ugly

Unfortunately, we don’t know the file size(s) until we’ve actually opened the underlying tar file and found the correct member, so for most files, we don’t know the real size to report when getting a stat. We can’t parse the tarfiles for every stat call, since that’d make ls even slower (bummer). The only hiccup is that when I report a file size of zero, gdb throws a bit of a fit; let’s try with a size of 0 to start:

$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 0 Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
warning: Discarding section .note.gnu.build-id which has a section size (24) larger than the file size [in module /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug]
[...]

This obviously won’t work, since gdb will throw away all our hard work because of stat’s output – and looking up the real size of the underlying file won’t work either. That only leaves us with hardcoding a file size and hoping nothing else breaks significantly as a result. Let’s try it again:

$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 954M Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)

Much better. I mean, terrible but better. Better for now, anyway.

Kilroy was here

Do I think this is a particularly good idea? I mean, kinda. I’m probably going to make some fun 9p arigato-based filesystems for use around my LAN, but I don’t think I’ll be moving to use debugfs until I can figure out how to make the connection more resilient to changing networks and server restarts, and fix the I/O performance. I think it was a useful exercise and is a pretty great hack, but I don’t think this’ll be shipping anywhere anytime soon.

Along with publishing this post, I’ve pushed up all my repos, so you should be able to play along at home! There’s a lot more work to be done on arigato, but it does handshake and successfully export a working 9P2000.u filesystem. Check it out on my GitHub at arigato and debugfs, and also on crates.io and docs.rs.

At least I can say I was here and I got it working after all these years.

on April 13, 2024 01:27 PM

April 12, 2024

It has been a very busy couple of weeks as we worked through some major transitions and a security fix that required a rebuild of the $world. I am happy to report that against all odds we have a beta release! You can read all about it here: https://kubuntu.org/news/kubuntu-24-04-beta-released/ Post beta freeze, I have already begun pushing our fixes for known issues. A big one being our new branding! Very exciting times in the Kubuntu world.

In the snap world, I will be using my free time to start knocking out KDE applications (not covered by the project). I have also recruited some help, so you should start seeing these pop up in the edge channel very soon!

Now that we are nearing the release of Noble Numbat, my contract is coming to an end with Kubuntu. If you would like to see Plasma 6 in the next release and in a PPA for Noble, please consider donating to extend my contract at https://kubuntu.org/donate !

On a personal level, I am still looking to help with my grandson and you can find that here: https://www.gofundme.com/f/in-loving-memory-of-william-billy-dean-scalf

Thanks for stopping by,

Scarlett

on April 12, 2024 07:29 PM