August 11, 2024

There are a lot of privileges most of us probably take for granted. Not everyone is gifted with the ability to do basic things like talk, walk, see, and hear. Those of us (like myself) who can do all of these things don’t really think about them much. Those who can’t have to think about them a lot, because our world is largely not designed for them. Modern-day things are designed for a fully-functional human being, and then have stuff tacked onto them to make them easier to use. Not easy, just “not quite totally impossible.”

Issues of accessibility plague much of modern-day society, but I want to focus on one pain point in particular: visually-impaired accessibility.


Now I’m not blind, so I am not qualified to say how exactly a blind person would use their computer. But I have briefly tried using a computer with my monitor turned off to test visually-impaired accessibility, so I know a bit about how it works. The basic idea seems to be that you launch a screen reader using a keyboard shortcut. That screen reader proceeds to try to describe various GUI elements to you at a rapid speed, from which you have to divine the right combination of Tab, Space, Enter, and arrow keys to get to the various parts of the application you want to use. Using these arcane sequences of keys, you can make the program do… something. Hopefully it’s what you wanted, but based on reports I’ve seen from blind users, oftentimes the computer reacts to your commands in highly unexpected ways.

The first thing here that jumps out to most people is probably the fact that using a computer blind is like trying to use magic. That’s a problem, but that’s not what I’m focusing on. I'm focusing on two words in particular.

Screen. Reader.

Wha…?

I want you to stop and take a moment to imagine the following scenario. You want to go to a concert, but can’t, so you send your friend to the concert in your place and ask them to record it for you. They do so, and come back with a video of the whole thing. They’ve transcribed every word of each song, and made music sheets detailing every note and chord the concert played. There’s even some cool colors and visualizer stuff that conveys the feeling and tempo of each song. They then proceed to lay all this glorious work out in front of you, claiming it conveys everything about the concert perfectly. Confronted with this onslaught of visual data, what’s the first thing you’re going to ask?

“Didn’t you record the audio?”

Of course that’s the first thing you’re going to ask, because it’s a concert for crying out loud, 90% of the point of it is the audio. I can listen to a simple, relatively low-quality recording of a concert’s audio and be satisfied. I get to hear the emotion, the artistry, everything. I don’t need a single pixel of images to let me experience it in a satisfactory way. On the other hand, I don’t care how detailed your video analysis of the audio is - if it doesn’t include the audio, I’m going to be upset. Potentially very upset.

Now let’s go back to the topic at hand, visually-impaired accessibility. What does a screen reader do? It takes a user interface, one designed for seeing users, and tries to describe it as best it can to the user via audio. You then have to use keyboard shortcuts to navigate the UI, which the screen reader continues to describe bits of as you move around. For someone who’s looking at the app, this is all fine and dandy. For someone who can kinda see, maybe it’s sufficient. But for someone who’s blind or severely visually impaired, this is difficult to use if you’re lucky. Chances are you’re not going to be lucky, and the app you’re working with might as well not exist.

Why is this so hard? Why have decades of computer development not led to breakthroughs in accessibility for blind people? Because we’re doing the whole thing wrong! We’re taking a user interface designed specifically and explicitly for seeing users, and trying to convey it over audio! It’s as ridiculous as trying to convey a concert over video. A user who’s listening to their computer shouldn’t need to know how an app is visually laid out in order to figure out whether they need to press up arrow, right arrow, or Tab to get to their desired UI element. They shouldn’t have to think in terms of buttons and check boxes. These are inherently visual user interface elements. Forcing a blind person to use these is tantamount to torture.

On top of all of this, half the time screen readers don’t even work! People who design software are usually able to see. You just don’t think about how to make software usable for blind people when you can see. It’s not something that easily crosses your mind. But try turning your screen off and navigating your system with a screen reader, and suddenly you’ll understand what’s lacking about the accessibility features. I tried doing this once, and I went and turned the screen back on after about five minutes of futile keyboard bashing. I can’t imagine the frustration I would have experienced if I had literally no other option than to work with a screen reader. Add on top of that the possibility that the user of your app has never even seen a GUI element in their lives before because they can’t see at all, and now you have essentially a language barrier in the way too.

So what’s the solution to this? Better screen reader compatibility might be helpful, but I don’t think that’s ultimately the correct solution here. I think we need to collectively recognize that blind people shouldn’t have to work with graphical user interfaces, and design something totally new.

One of the advantages of Linux is that it’s just a bunch of components that work together to provide a coherent and usable set of features for working on your computer. You aren’t locked into using a UI that you don’t like - just use or create some other UI. All current desktop environments are based around a screen that the user can see, but there are no rules that say it has to be that way. Imagine if instead, your computer just talked to you, telling you what app you were using, what keys to press to accomplish certain actions, etc. In response, you talked back to it using the keyboard or voice recognition. There would be no buttons, check boxes, menus, or other graphical elements - instead you’d have actions, options, feature lists, and other conceptual elements that can be conveyed over audio. Switching between UI elements with the keyboard would be intuitive, predictable, and simple, since the app would be designed from step one to work that way. Such an audio-centric user interface would be easy for a blind or vision-impaired person to use. If well-designed, it could even be pleasant. A seeing person might have a learning curve to get past, but it would be usable enough for them too. Taking things a step further, support for Braille displays would be very handy, though as I have never used one I don’t know how hard that would be to implement.

A lot of work would be needed in order to get to the point of having a full desktop environment that worked this way. We’d need toolkits for creating apps with intuitive, uniform user interface controls. Many sounds would be needed to create a rich sound scheme for conveying events and application status to the user. Just like how graphical apps need a display server, we’d also need an audio user interface server that would tie all the apps together, letting users multitask without their apps talking over each other or otherwise interfering. We’d need plenty of apps that would actually be designed to work in an audio-only environment. A text editor, terminal, and web browser are the first things that spring to mind, but email, chat, and file management applications would also be very important. There might even be an actually good use for AI here, in the form of an image “viewer” that could describe an image to the user. And of course, we’d need an actually good text-to-speech engine (Piper seems particularly promising here).
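To make the idea a little more concrete, here is a tiny, purely illustrative sketch of an audio-first “menu”: my own toy example, not part of any existing project. It announces named actions over speech and takes keyboard input, using the espeak command-line tool as a stand-in for a proper engine like Piper. Everything in it (the action names, the handlers) is hypothetical.

#!/usr/bin/env python3
# Toy sketch of an audio-first menu. Assumes the espeak command-line TTS tool
# is installed (a stand-in for a proper engine such as Piper). All names here
# are hypothetical and for illustration only.
import subprocess

def say(text):
    # Speak a short phrase. A real audio UI server would queue and mix speech
    # from multiple apps instead of calling a TTS binary directly.
    subprocess.run(["espeak", text], check=False)

# Conceptual elements: named actions rather than buttons or check boxes.
ACTIONS = {
    "1": ("read your new mail", lambda: say("You have no new mail.")),
    "2": ("open the text editor", lambda: say("Opening the text editor.")),
    "q": ("quit", None),
}

def main():
    say("Main menu.")
    while True:
        for key, (label, _handler) in ACTIONS.items():
            say("Press {} to {}.".format(key, label))
        choice = input("> ").strip().lower()
        if choice not in ACTIONS:
            say("Unknown option.")
            continue
        label, handler = ACTIONS[choice]
        if handler is None:
            say("Goodbye.")
            return
        handler()

if __name__ == "__main__":
    main()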

This is a pretty rough overview of how I imagine we could make the world better for visually impaired computer users. Much remains to be designed and thought about, but I think this would work well. Who knows, maybe Linux could end up being easier for blind users to use than Windows is!

Interested in helping make this happen? Head over to the Aurora Aural User Interface project on GitHub, and offer ideas!


on August 11, 2024 08:41 AM

August 10, 2024

For Additional Confusion

Benjamin Mako Hill

The Wikipedia article on antipopes can be pretty confusing! If you’d like to be even more confused, it can help with that!

on August 10, 2024 03:56 PM

August 09, 2024

Introduction

The telecommunications industry relies heavily on its core networks, which are essential for enabling communication and data transfer. As technology advances and data demand increases, these core networks come under significant strain. Telcos may face immense difficulties in ensuring fast data transfer, maintaining network reliability, and securing communications. At the same time, the transition to technologies like 5G adds to these challenges, requiring substantial infrastructure changes. In this post, we will analyse the primary pain points faced by the industry and consider how different core network designs can address these challenges.

This discussion aims to provide a clear understanding of core network requirements and design choices, offering insights for telecommunications professionals and stakeholders. By examining these critical aspects, we can better understand how to develop core networks that meet current and future demands, ultimately improving network performance and user experience.

Core network requirements

The core network is the backbone of any telecommunications system, as it ensures the smooth and efficient transmission of data and communication signals. For a core network to function optimally, it must meet several key requirements that cater to both current and future demands.

First is reliability. Reliability is a non-negotiable requirement for the core network: it must provide consistent and uninterrupted service. This involves robust fault-tolerance mechanisms, redundancy, and fail-over capabilities to ensure that the network remains operational even in the event of hardware or software failures. High reliability is critical for maintaining user trust and meeting service level agreements (SLAs).

The second key requirement is security, an equally essential factor. As cyber threats become more sophisticated, the core network must incorporate advanced security measures to protect sensitive data and prevent unauthorised access. This includes encryption, intrusion detection systems, and regular security audits to identify and mitigate vulnerabilities.

Scalability is another crucial requirement. With the rapid growth of data traffic and the increasing number of connected devices, the core network must be able to expand and accommodate this growth without compromising performance. Scalability ensures that the network can handle peak loads and future expansion without requiring a complete overhaul.

Telco operators also need to prioritise low latency, as it is vital for real-time applications such as voice calls or video conferencing, and emerging technologies like autonomous vehicles and augmented reality. The core network must be designed to minimise delays and ensure fast data transmission, enhancing the user experience.

Interoperability is also important, as it enables the core network to work seamlessly with various technologies and systems. This is particularly relevant in a multi-vendor environment, where equipment from different manufacturers must operate together without issues. Standardisation and adherence to industry protocols facilitate this interoperability.

Finally, cost-effectiveness cannot be overlooked. The core network must provide a balance between performance and cost, ensuring that it delivers high-quality service without excessive expenditure. Efficient resource management and innovative technologies can help achieve this balance, making it feasible for operators to maintain and upgrade their networks economically.

Design options

The design of a core network significantly impacts its performance, scalability, and reliability. Telco operators have several architectural options, each with distinct benefits and challenges.

Traditional core network architectures

Traditional core network designs rely on dedicated hardware and proprietary systems. These architectures are robust and well-tested but can be inflexible and costly to upgrade. This approach relies on physical infrastructure, which often leads to higher operational costs and longer deployment times.

Virtualisation and Software-Defined Networking (SDN)

Virtualisation and SDN represent the first modernisation approach to decoupling hardware and software in the core network design. Virtualisation abstracts network functions from hardware, allowing them to run on standard servers. This increases flexibility and reduces costs. SDN separates the control plane from the data plane, enabling centralised network management and more efficient resource utilisation. However, transitioning to virtualised and SDN-based networks can be complex and require significant investment in new skills and technologies.

Cloud-based core networks

Cloud-based solutions leverage cloud computing to host core network functions. This approach offers scalability, agility and cost savings by utilising cloud infrastructure. Operators can quickly scale resources up or down based on demand, therefore improving their efficiency. The main challenges of this approach are data security and the potential for increased latency, depending on cloud provider performance and network configuration.

Private cloud and cloud-ready apps

Private cloud solutions offer a balance between the scalability of public clouds and the control of traditional infrastructure. They allow operators to manage resources securely within their own environment, using automation tools that are similar to the ones used in public clouds for provisioning compute, storage and networking. Cloud-ready applications are designed to run efficiently in cloud environments, ensuring better performance and easier management.

Cloud-native core network architecture

Cloud-native architecture focuses on building and running applications that exploit the advantages of cloud computing models. These applications are typically built as micro-services, deployed in containers, and managed by orchestration platforms like Kubernetes. This approach enhances agility, scalability, and resilience.

Hybrid approaches

Many operators adopt a hybrid approach, combining traditional and modern design elements. This allows for a gradual migration to newer technologies while maintaining the stability of legacy systems. Hybrid networks can provide a balance between innovation and reliability, making the transition smoother and more manageable.

5G Non-Standalone vs Standalone

In the context of 5G, non-standalone (NSA) and standalone (SA) architectures offer different pathways for deployment. NSA uses existing 4G infrastructure for control functions, with 5G providing enhanced data capabilities. This allows for faster deployment and lower initial costs. Conversely, SA architecture utilises a completely new 5G core, delivering full 5G benefits such as ultra-low latency and advanced network slicing. While SA promises superior performance, it requires significant investment in new infrastructure.

By understanding these design options, telco operators can choose the best approach for their specific needs, ensuring their core networks are robust, scalable, and future-ready.

Major pain points in core network design and implementation

Designing and implementing a core network involves several challenges that telco operators must navigate.

Technical challenges

Scalability remains a major issue. As the number of connected devices and data traffic increases, networks must expand efficiently without performance loss. Reducing latency is also critical, especially for real-time applications like video calls and gaming. Integrating new technologies with existing legacy systems can be complex and costly. Operators need solutions that allow for smooth integration without extensive overhauls.

Operational challenges

Provisioning, maintaining and upgrading the network poses significant challenges. Regular maintenance is necessary to ensure reliability and performance, but it can be disruptive and expensive. Telcos often encounter difficulties in automating and streamlining the process of setting up network services for new customers. Inefficient provisioning can lead to delays, increased operational costs, and customer dissatisfaction. Security and privacy concerns are ever-present, as networks must protect against increasingly sophisticated cyber threats. Ensuring network reliability and uptime is crucial, as outages can lead to substantial financial losses and damage to reputation.

Strategic challenges

Cost management is a key concern. Building and upgrading core networks require substantial investment, and operators must balance performance with cost-efficiency. Keeping up with rapid technological advancements demands continuous learning and adaptation, which can strain resources. Regulatory and compliance requirements add another layer of complexity, as operators must adhere to various standards and regulations.

Energy costs and environmental footprint

Energy consumption is a significant pain point for telco operators. Running and cooling network infrastructure requires substantial amounts of energy, contributing to high operational costs. Additionally, the environmental footprint of these energy demands is considerable, leading to increased scrutiny from regulatory bodies and environmentally conscious consumers. Operators must seek energy-efficient solutions and renewable energy sources to mitigate these impacts, balancing performance with sustainability.

Leveraging open source solutions

Telco operators can leverage open source solutions to address some of these challenges. Open source software offers flexibility and cost savings by reducing reliance on proprietary systems. It allows operators to customise solutions to fit their specific needs and integrate them more easily with existing systems. Open source communities provide a collaborative environment where operators can share knowledge and resources, accelerating innovation and problem-solving.

Adopting open source technologies also presents an opportunity for telcos to transform into techcos. By embracing a tech-first approach, telcos can adopt DevSecOps practices, integrating development, security, and operations to drive continuous innovation. DevSecOps fosters a culture of collaboration and efficiency, enabling faster deployment of new features and improvements while maintaining high security standards.

This transformation allows telcos to participate more actively in the growth of the tech sector, contributing to and benefiting from the collective advancements of the open source community. By leveraging open source solutions and adopting DevSecOps practices, telco operators can enhance scalability, improve security, and reduce costs. This approach helps manage the complexities of core network design and implementation more effectively, ensuring robust and future-ready networks.

Conclusion

In the rapidly evolving telecommunications landscape, addressing the challenges of core network design and implementation is critical. Telco operators must ensure scalability, reliability, security, and cost-effectiveness to meet growing demands. Canonical offers a robust portfolio of open-source infrastructure software that can help telcos overcome these challenges and drive innovation.

Canonical’s software, including Ubuntu, OpenStack, Kubernetes, and MicroCloud, provides flexible and scalable infrastructure options. Ubuntu is renowned for its stability and security, making it a reliable choice for core network operations. OpenStack offers a powerful platform for building private clouds, enabling operators to efficiently manage and scale their resources. Kubernetes facilitates container orchestration, allowing for the deployment and management of applications in a consistent and automated manner. MicroCloud extends these capabilities to edge environments, ensuring consistent performance across diverse locations.

Additionally, Canonical’s Ubuntu Pro enhances security and support with features such as Expanded Security Maintenance (ESM) and live kernel updates. Ubuntu Pro provides up to 12 years of security coverage for over 30,000 packages, ensuring compliance with standards like HIPAA, FIPS, and GDPR. It reduces average CVE exposure time from 98 days to just one, offering peace of mind with enterprise-grade support and long-term maintenance. Canonical’s Secure Software Development Lifecycle (SSDLC) also helps telcos comply with the latest telecommunications and security regulations, including the UK’s Telecommunications Security Code of Practice (TSCP), the EU’s Cyber Resilience Act, and the US Federal Information Processing Standards (FIPS).

Canonical’s open source solutions and comprehensive support services empower telco operators to build efficient, scalable, and secure core networks. By embracing these technologies, operators can stay ahead of technological advancements, reduce costs, and deliver enhanced services to their customers.

Learn more about Canonical’s portfolio of software and services solutions for the telco core.

on August 09, 2024 04:23 PM

August 08, 2024

DebConf24 – Busan

Jonathan Carter

I’m finishing typing up this blog entry hours before my last 13 hour leg back home, after I spent 2 weeks in Busan, South Korea for DebCamp24 and DebConf24. I had a rough year and decided to take it easy this DebConf. So this is the first DebConf in a long time where I didn’t give any talks. I mostly caught up on a bit of packaging, worked on DebConf video stuff, attended a few BoFs and talked to people. Overall it was a very good DebConf, which also turned out to be more productive than I expected it would be.

In the welcome session on the first day of DebConf, Nicolas Dandrimont mentioned that a benefit of DebConf is that it provides a sort of caffeine for your Debian motivation. I could certainly feel that effect swell as the days went past, and it’s nice to be excited about some ideas again that would otherwise be fading.

Recovering DPL

It’s a bit of a gear shift, having been DPL for 4 years and on the DebConf Committee for nearly 5 years before that, to then be at DebConf while some issue arises (as one always does during a conference). At first I jump into high alert mode, but then I have to remind myself “it’s not your problem anymore” and let others deal with it.

It was nice spending a little in-person time with Andreas Tille, our new DPL, we did some more handover and discussed some current issues. I still have a few dozen emails in my DPL inbox that I need to collate and forward to Andreas, I hope to finish all that up by the end of August.

During the Bits from the DPL talk, the usual question came up whether Andreas will consider running for DPL again, to which he just responded in a slide “Maybe”. I think it’s a good idea for a DPL to do at least two terms if it all works out for everyone, since it takes a while to get up to speed on everything.

Also, having been DPL for four years, I have a lot to say about it, and I think there’s a lot we can fix in the role, or at least discuss it. If I had the bandwidth for it I would have scheduled a BoF for it, but I’ll very likely do that for the next DebConf instead!

Video team

I set up the standby loop for the video streaming setup. We call it loopy; it’s a bunch of OBS scenes that show announcements, sponsors, the schedule and some social content. I wrote about it back in 2020, but it’s evolved quite a bit since then, so I’m probably due to write another blog post with a bunch of updates on it. I hope to organise a video team sprint in Cape Town in the first half of next year, so I’ll summarize everything before then.

It would’ve been great if we could have had some displays in social areas that could show talks, the loop and other content, but we were just too pressed for time for that. This year’s DebConf had a very compressed timeline, and there was just too much that had to be done and figured out at the last minute. This put quite a lot of strain on the organisers, but I was glad to see how, for the most part, most attendees were very sympathetic to some rough edges (but I digress…).

I added more of the OBS machine setup to the videoteam’s ansible repository, so as of now it just needs an ansible setup and the OBS data and it’s good to go. The loopy data is already in the videoteam git repository, so I could probably just add a git pull and create some symlinks in ansible and then that machine can be installed from 0% to 100% by just installing via debian-installer with our ansible hooks.

This DebConf I volunteered quite a bit for actual video roles during the conference, something I didn’t have much time for in recent DebConfs, and it’s been fun, especially in a session or two where nearly none of the other volunteers showed up. Sometimes chaos is just fun :-)

Baekyongee is the university mascot, who’s visible throughout the university. So of course we included this four-legged whale creature on the loop too!

Packaging

I was hoping to do more packaging during DebCamp, but at least it was a non-zero amount:

  • Uploaded gdisk 1.0.10-2 to unstable (previously tested effects of adding dh-sequence-movetousr) (Closes: #1073679).
  • Worked a bit on bcachefs-tools (updating git to 1.9.4), but it has a build failure that I need to look into (we might need a newer bindgen) – update: I’m probably going to ROM this package soon, it doesn’t seem suitable for packaging in Debian.
  • Calamares: Tested a fix for encrypted installs, and uploaded it.
  • Calamares: Uploaded (3.3.8-1) to backports (at the time of writing it’s still in backports-NEW).
  • Backported obs-gradient-source for bookworm.
  • Did some initial packaging on Cambalache, I’ll upload to unstable once gtk-4-dev (4.14.0) is in unstable (currently in experimental).
  • Pixelorama 1.0 – I did some initial packaging for Pixelorama back when we did the MiniDebConf Gaming Edition, but it had a few stoppers back then. Version 1.0 seems to fix all of that, but it depends on Godot 4.2 and we’re still on the 3 series in Debian, so I’ll upload this once Godot 4.2 hits at least experimental. Godot software/games is otherwise quite easy to run, it’s basically just source code / data that is installed and then run via godot-runner (godot3-runner package in Debian).

BoFs

Python Team BoF

Link to the etherpad / pad archive link and video can be found on the talk page: https://debconf24.debconf.org/talks/31-python-bof/

The session ended up being extended to a second part, since all the issues didn’t fit into the first session.

I was distracted by too many things during the Python 3.12 transition (to the point where I thought that 3.11 was still new in Debian), so it was very useful listening to the retrospective of that transition.

There was a discussion about whether Python 3.13 could still make it into testing in time for the freeze, and there seems to be consensus that it can, although likely with its new experimental features disabled, such as the free-threaded (no global interpreter lock) mode and the just-in-time compiler.

I learned for the first time about the “dead batteries” project, PEP-0594, which removes ancient modules that have mostly been superseded from the Python standard library.

There was some talk about the process for changing team policy, and a policy discussion on whether we should require autopkgtests as a SHOULD or a MUST for migration to testing. As with many things, the devil is in the details and in my opinion you could go either way and achieve a similar result (the original MUST proposal allowed exceptions which imho made it the same as the SHOULD proposal).

There’s an idea to do some ongoing remote sprints, like having co-ordinated days for bug squashing / working on stuff together. This is a nice idea and probably a good way to energise the team and also to gain some interest from potential newcomers.

Louis-Philippe Véronneau was added as a new team admin and there was some discussion on various Sphinx issues and which Lintian tags might be needed for Python 3.13. If you want to know more, you probably have to watch the videos / read the notes :)

Debian.net BoF

Link to the etherpad / pad archive link can be found on the talk page: https://debconf24.debconf.org/talks/37-debiannet-team-bof

Debian Developers can set up services on subdomains on debian.net, but a big problem we’ve had before was that developers were on their own for hosting those services. This meant that they either hosted it on their DSL/fiber connection at home, paid for the hosting themselves, or hosted it at different services, which became an accounting nightmare to claim back the used funds. So, a few of us started the debian.net hosting project (sometimes we just call it debian.net, this is probably a bit of a bug) so that Debian has accounts with cloud providers, and as admins we can create instances there that get billed directly to Debian.

We had an initial rush of services, but requests have slowed down since (not really a bad thing, we don’t want lots of spurious requests). Last year we did a census, to check which of the instances were still used, whether they received system updates and to ask whether they are performing backups. It went well and some issues were found along the way, so we’ll be doing that again.

We also gained two potential volunteers to help run things, which is great.

Debian Social BoF

Link to the etherpad / pad archive link can be found on the talk page: https://debconf24.debconf.org/talks/34-debiansocial-bof

We discussed the services we run, you can view the current state of things at: https://wiki.debian.org/Teams/DebianSocial

Pleroma has shown some cracks over the last year or so, and there are some forks that seem promising. At the same time, it might be worthwhile considering Mastodon too. So we’ll do some comparison of features and maintenance and find a way forward. At the time when Pleroma was installed, it was way ahead in terms of moderation features.

Pixelfed is doing well and chugging along nicely, we should probably promote it more.

Peertube is working well, although we learned that we still don’t have all the recent DebConf videos on there. A bunch of other issues should be fixed once we move it to a new machine that we plan to set up.

We’re removing writefreely and plume. Nice concepts, but they didn’t get much traction yet, and no one who signed up for these actually used them, which is fine; some experimentation with services is good, and sometimes they prove to be very popular and other times not.

The WordPress multisite instance has some mild use, otherwise haven’t had any issues.

Matrix ended up being much, much bigger than we thought, both in usage and in its requirements. It’s very stateful and remembers discussions for as long as you let it, so its Postgres database is continuously expanding. This will also be a lot easier to manage once we have it on the new host.

Jitsi is also quite popular, but it could probably be on jitsi.debian.net instead (we created this on debian.social during the initial height of COVID-19 where we didn’t have the debian.net hosting yet), although in practice it doesn’t really matter where it lives.

Most of our current challenges will be solved by moving everything to a new big machine that has a few public IPs available for some VMs, so we’ll be doing that shortly.

Debian Foundation Discussion BoF

This was some brainstorming about the future structure of Debian, and what steps might be needed to get there. It’s way too big a problem to take on in a BoF, but we made some progress in figuring out some smaller pieces of the larger puzzle. The DPL is going to get in touch with some legal advisors and our trusted organisations so that we can aim to formalise our relationships a bit more by the time it’s DebConf again.

I also introduced my intention to join the Debian Partners delegation. When I was DPL, I enjoyed talking with external organisations who wanted to help Debian, but helping external organisations help Debian turned out to be too much additional load on the usual DPL roles, so I’m pursuing this with the Debian Partners team, more on that some other time.

This session wasn’t recorded, but if you feel like you missed something, don’t worry, all intentions will be communicated and discussed with project members before anything moves forward. There was a strong agreement in the room though that we should push forward on this, and not reach another DebConf where we didn’t make progress on formalising Debian’s structure more.

Social

Conference Dinner

Conference Dinner Photo from Santiago

The conference dinner took place in the university gymnasium. I hope not many people do sports there in the summer, because it got HOT. There were also some interesting observations on the thermodynamics of the attempted cooling solutions, which was amusing. On the plus side, the food was great, the company was good, and the speeches were kept to a minimum, so it was a great conference dinner, even though it was probably cut a bit short due to the heat.

Cheese and Wine

Cheese and Wine happened on 1 August, which happens to be the date I became a DD at DebConf17 in Montréal seven years before, so this was a nice accidental celebration of my Debiversary :)

Since I’m running out of time, I’ll add some more photos to this post some time after publishing it :P

Group Photo

As per DebConf tradition, Aigars took the group photo. You can find the high resolution version on Debian’s GitLab instance.

Debian annual conference Debconf 24, Busan, South Korea
Photography: Aigars Mahinovs aigarius@debian.org
License: CC-BYv3+ or GPLv2+

Talking

Ah yes, talking to people is a big part of DebConf, but I didn’t keep track of it very well.

  • I mostly listened to Alper a bit about his ideas for his talk about debian installer.
  • I talked to Rhonda a bit about ActivityPub and MQTT and whether they could be useful for publicising Debian activity.
  • Listened to Gunnar and Julian have a discussion about GPG and APT which was interesting.
  • I learned that you can learn Hangul, the Korean alphabet, in about an hour or so (I wish I knew that in all my years of playing StarCraft II).
  • We had the usual continuous keysigning party. Besides its intended function, this is always a good ice breaker and a way for shy people to meet other shy people.
  • … and many other fly-by discussions.

Stuff that didn’t happen this DebConf

  • loo.py – A simple Python script that could eventually replace the obs-advanced-scene-switcher sequencer in OBS. It would also be extremely useful if we’d ever replace OBS for loopy. I was hoping to have some time to hack on this, and try to recreate the current loopy in loo.py, but didn’t have the time.
  • toetally – This year the videoteam had to scramble to get a bunch of resistors to assemble some tally lights. Even when assembled, they were a bit troublesome. It would’ve been nice to hack on toetally and get something ready for testing, but it mostly relies on having something like a Raspberry Pi Zero with an attached screen in order to work on further. I’ll try to have something ready for the next mini conf though.
  • extrepo on debian live – I think we should have extrepo installed by default on desktop systems, I meant to start a discussion on this, but perhaps it’s just time I go ahead and do it and announce it.
  • Live stream to peertube server – It would’ve been nice to live stream DebConf to PeerTube, but the dependency tree to get this going got a bit too huge. Following our plans discussed in the Debian Social BoF, we should have this safely ready before the next MiniDebConf and should be able to test it there.
  • Desktop Egg – there was this idea to get a stand-in theme for Debian testing/unstable until the artwork for the next release is finalized (Debian bug: #1038660), I have an idea that I meant to implement months ago, but too many things got in the way. It’s based on Juliette Taka’s Homeworld theme, and basically transforms the homeworld into an egg. Get it? Something that hasn’t hatched yet? I also only recently noticed that we never used the actual homeworld graphics (featuring the world image) in the final bullseye release. lol.

So, another DebConf and another new plush animal. Last but not least, thanks to PKNU for being such a generous and fantastic host to us! See you again at DebConf25 in Brest, France next year!

on August 08, 2024 12:29 PM

The Xubuntu development update for August 2024 covers Xubuntu 24.10, "Oracular Oriole," featuring Xfce 4.19, and many more updates.

The post Xubuntu Development Update August 2024 appeared first on Sean Davis.

on August 08, 2024 12:27 PM

E310 Till Kamppeter

Podcast Ubuntu Portugal

You know that feeling of satisfaction when you use GNU/Linux, plug in a printer and it just works on the first try? It isn’t magic. Till Kamppeter is a Debian contributor and a member of the Ubuntu desktop team. He is a great and dedicated pioneer of modernising printing on modern, accessible Linux distributions, as well as a promoter of snaps, and he kindly came to talk with Podcast Ubuntu Portugal to give us a better picture of the state of the art and the road that was travelled to get here.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the terms of the MIT Licence (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on August 08, 2024 12:00 AM

August 07, 2024

This blog post dives into the world of AI on the edge, and how to deploy TensorFlow Lite models on edge devices. We’ll explore the challenges of managing dependencies and updates for these models, and how containerisation with Ubuntu Core and Snapcraft can streamline the process.

Let’s start by defining what TensorFlow and its Lite variant are.

TensorFlow and its sibling TensorFlow Lite

TensorFlow is a machine learning platform that implements the current best practices. It provides tools for creating ML models, running them, monitoring and improving them. TensorFlow aims to assist beginners and professionals deploying to production environments on desktop, mobile, web and cloud.

TensorFlow Lite is a library meant for running ML models on the edge or on platforms where resource constraints are greater, for example microcontrollers, embedded systems, mobile phones and so on. TF Lite is ideal when the only thing you need to do is to run an ML model. The TensorFlow Lite runtime is a fraction of the size of the full TensorFlow package and includes the bare minimum features to run inference while saving disk space. TF Lite can also optimise existing TensorFlow models by using quantization methods. This reduces the required computing resources, while only incurring a negligible loss in accuracy.
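As a rough illustration, converting a trained TensorFlow model into a quantized TF Lite model only takes a few lines with the standard tf.lite.TFLiteConverter API (a minimal sketch; the SavedModel path is a placeholder):

import tensorflow as tf

# Load a trained model from a SavedModel directory (placeholder path).
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")

# Enable the default optimisations, which apply post-training quantization.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write out the much smaller .tflite flatbuffer for deployment on the edge.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)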

There are two main challenges in bringing TF Lite inference to production: dependency and update management.

Dependency issues

TensorFlow is a large machine learning framework with hundreds of dependencies. In comparison, the TensorFlow Lite runtime has a smaller set of dependencies. Let’s take a look at a few of the dependencies for tflite-runtime and tflite-support, the two main libraries required in many deployments: 

  • tflite-runtime: a simplified library for running machine learning models on mobile and embedded devices, without having to include all TensorFlow packages. 
    wheel ~ 2MB
  • tflite-support: a toolkit that helps users develop and deploy TFLite models onto mobile devices. Even though tflite-runtime should be enough for most use cases, tflite-support adds extra functionality to customise how the model is run. This is especially useful when using a hardware accelerator.
    wheel ~ 270MB

TensorFlow is a Python framework, so its most important dependency is obviously the Python runtime. Both tflite-runtime and tflite-support depend on Python <=3.11, while tflite-support on ARM64 works on Python <=3.9 only. Ubuntu 24.04, the current Ubuntu release with the longest support, ships Python 3.12.

Another key dependency is NumPy. The tflite-runtime library requires NumPy 1.x, while the latest version is v2.x. If you naively install tflite-runtime, it will install the latest NumPy resulting in the following runtime error:

A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.0 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.

If you want to work with image files, you also need the Python image library called Pillow.

One of the powerful features of TF Lite is its ability to use Tensor Processing Units (TPUs) to offload computations. TPUs are chips designed for efficient neural network calculations. Using TPUs from TF Lite on Linux requires drivers and supporting libraries. For example, to use a Coral EdgeTPU USB Accelerator, you need the Edge TPU runtime and the PyCoral Python library (see the sketch after this list):

  • Edge TPU runtime provides the core programming interface for the Edge TPU. There are two Debian packages for the drivers: libedgetpu1-std (slower clock rate, lower power, less heat), and libedgetpu1-max (higher clock rate, higher TPU temperature).
  • PyCoral is a Python library built on top of the TensorFlow Lite library to speed up your development and provide extra functionality for the Edge TPU. PyCoral currently supports only Python 3.6 through 3.9.
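To give an idea of how this fits together in code, here is a minimal classification sketch using PyCoral (assuming a Coral USB Accelerator, an Edge-TPU-compiled model and the pycoral library are available; the file paths are placeholders):

from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.edgetpu import make_interpreter

# make_interpreter() loads the Edge TPU delegate from libedgetpu under the hood.
interpreter = make_interpreter("model_edgetpu.tflite")  # placeholder model path
interpreter.allocate_tensors()

# Resize the input image to the model's expected input size and run inference.
image = Image.open("input.jpg").resize(common.input_size(interpreter))
common.set_input(interpreter, image)
interpreter.invoke()

# Print the top three predicted classes with their scores.
for klass in classify.get_classes(interpreter, top_k=3):
    print(klass.id, klass.score)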

Bundling the correct version of all these drivers and libraries into your software can be a daunting task, especially if you have multiple software packages running on the same system, requiring different versions of the same library. For most of the TensorFlow examples we need Python 3.8 for full compatibility with all the dependencies, but some other system software, such as the Raspberry Pi 5 fan controller, might require Python 3.12.

Update management challenges

Machine learning models are created based on training datasets. In many applications, the machine learning models need to evolve to maintain their prediction accuracy based on current observations and trends. Continuous machine learning often relies on feedback loops and incremental training pipelines; this isn’t easily possible on resource-constrained devices on the edge. That’s why in many scenarios, it makes sense to develop a workflow to remotely update the machine learning models, similar to every other software components and dependencies.

Another limiting factor is the device deployment location. Edge devices that perform ML inference could be located in remote areas, far apart, or inside factories or data centres with strict access control. This makes physical updates of these devices difficult and expensive.

Usually, the initial developments take over-the-air updates and remote access into account. However, the delivery mechanisms aren’t atomic and transactional. Uploading a new model or new software to the edge is always risky. There could be connection issues during the update, or the startup may fail due to an unknown bug that surfaced only in the field. Updates that result in failures can be expensive to resolve, if not catastrophic.

An industry standard for dependency and update management

Dependency and update management are widespread challenges in this industry. There are many ways to tackle the various issues. Here, we leverage Ubuntu Core and the snap ecosystem, designed with security and remote management in mind. Ubuntu Core is fully containerised, making it possible to reliably update key building blocks of the operating system, ranging from the kernel itself to system packages. The native packaging format on Ubuntu Core is snap. Snaps are not only used for applications but also the operating system components. This offers an end-to-end and consistent update mechanism for the entire software stack. Let’s continue by creating a snap for TF Lite application.

Snapping a TF Lite application

A snap is a software package that creates a sandboxed environment for the software to run in. Inside this sandbox, only the required dependencies are available for the software to run correctly. It also limits access to the host operating system, with rules to allow certain capabilities such as talking to USB hardware or displaying a GUI on the screen. This solves dependency issues, while drastically improving security.

Let’s package a TensorFlow Lite application which classifies an input image. We assume you already know the basics of building snaps, as explained in the Create a new snap tutorial.

Starting with a base

Our snap uses core24 for its base. A base is the file system and basic set of utilities that are available inside the snap. Core24 inherits this from Ubuntu 24.04 LTS, thus providing up to 12 years of support and security updates.

base: core24

Sorting out Python

The base bundles Python 3.12, which is incompatible with the application. Instead, we are interested in Python 3.8, which is compatible with our example and with many of the existing upstream examples. We use the deadsnakes PPA to include the Python 3.8 interpreter inside the snap. The deadsnakes PPA provides older and newer versions of Python for current Ubuntu releases.

package-repositories:
  - type: apt
    ppa: deadsnakes/ppa

A snap is assembled from different parts. These can be seen as the steps to follow to put the final snap together. Our first step is to install the Python 3.8 Debian package.

parts:
  python38:
    plugin: nil
    stage-packages:
      - python3.8-full

In another part we use the snapcraft python plugin to install our Python dependencies. We tell it to run after the part that installed Python 3.8, and then specify Python 3.8 as the interpreter the plugin should use. Lastly, we install the packages for the latest Pillow and tflite-runtime, as well as a specific version of NumPy.

python-dependencies:
    after: [python38]
    plugin: python
    build-environment:
      - PARTS_PYTHON_INTERPRETER: python3.8
    python-packages:
      - numpy<2
      - pillow
      - tflite-runtime

Adding your application code

Our business logic, in this case a Python script running the ML workload, goes into its own part. This script is based on the TensorFlow Lite Python image classification demo. It takes an image file as input, runs it through the ML model, which recognises objects in the image, and prints a list of labels and their certainty out to the terminal.

scripts:
    plugin: dump
    source: .
    override-build: |
      cp label_image_lite.py $CRAFT_PART_INSTALL/
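For reference, a stripped-down sketch of what such a script can look like follows. This is not the actual label_image_lite.py from the repository, just an illustration of the tflite_runtime API; the file names are placeholders and the model is assumed to take quantized uint8 input.

import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

# Load the model and allocate its input and output tensors.
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the image to the model's expected input shape, e.g. 224x224.
_, height, width, _ = input_details[0]["shape"]
image = Image.open("grace_hopper.jpg").resize((width, height))
input_data = np.expand_dims(np.asarray(image, dtype=np.uint8), axis=0)

# Run inference and print the five best matching labels with their scores.
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()
scores = np.squeeze(interpreter.get_tensor(output_details[0]["index"]))
labels = [line.strip() for line in open("labels.txt")]
for i in scores.argsort()[-5:][::-1]:
    print("{}: {}".format(scores[i], labels[i]))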

Downloading and including the model and labels also get their own parts.

  model:
    plugin: dump
    source: <url>/model.tgz
    source-type: tar
    override-build: |
      cp model.tflite $CRAFT_PART_INSTALL/
  labels:
    ...

You might be wondering why we use so many different parts that do so little, while everything can be done in a single part. We specifically do this to improve build caching and delta updates. We will discuss this in the next section.

In the apps section we define which command is executed when the snap is called. In our case the command is the Python interpreter with the script as its only argument. The plug defines that this application needs access to the user’s home directory.

apps:
  tf-label-image:
    plugs:
      - home
    command: bin/python3 $SNAP/label_image_lite.py

This snap only exposes an app that needs to be interactively called. It is also possible to define a service app. If it is defined as a daemon, the app runs constantly in the background. It can also be automatically started after installation and at boot. This can be useful on an edge device that needs to perform a persistent task, without being interacted with by a user. To change an app to become a service, one needs to update the code to continuously process a stream, like a video feed, and add daemon: simple to the app definition.

apps:
  tf-label-image:
    plugs:
      - home
    command: bin/python3 $SNAP/daemon_script.py
    daemon: simple
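As a very rough sketch of what the hypothetical daemon_script.py above might do, the loop below polls a drop directory for new images and classifies each one. The directory name and the classify() helper are made up for illustration; a real service would more likely consume a camera or video stream.

import time
from pathlib import Path

WATCH_DIR = Path.home() / "inbox"   # hypothetical drop directory
seen = set()

def classify(path):
    # Placeholder: run the same tflite_runtime inference as in the earlier sketch.
    print("classifying", path)

while True:
    for image in sorted(WATCH_DIR.glob("*.jpg")):
        if image not in seen:
            classify(image)
            seen.add(image)
    time.sleep(5)   # a real daemon might use inotify instead of polling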

Running the snap

We build our snap using snapcraft -v and then install it using snap install --dangerous ./tf-label-image_*.snap. After it’s installed the example can be run. If no input image is provided, it will use an included photo of Grace Hopper. The script should print out a list of labels.

Image source: Wikimedia Commons
$ tf-label-image
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
0.919721: 653:military uniform
0.017762: 907:Windsor tie
0.007507: 668:mortarboard
0.005419: 466:bulletproof vest
0.003828: 458:bow tie, bow-tie, bowtie
time: 28.502ms

You can also provide an image to be labelled.

Image source: Wikimedia Commons, Adrian Pingstone
$ tf-label-image -i ~/Downloads/Parrot.red.macaw.1.arp.750pix.jpg
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
0.939399: 89:macaw
0.060436: 91:lorikeet
0.000062: 90:sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita
0.000057: 24:vulture
0.000023: 88:African grey, African gray, Psittacus erithacus
time: 28.257ms

The complete source code for this example is available on GitHub:
https://github.com/canonical/tf-lite-examples-snap/tree/tf-lite-blogpost/image-labelling

You can find more complex TensorFlow Lite examples packaged with snap at: https://github.com/canonical/tf-lite-examples-snap 

Delta updates

The most likely scenario of a production ML workload would be a regular update of the model. These models could be quite large, but still only a fraction of the size of the entire snap. 

Many edge devices are connected using an expensive LTE or satellite internet connection. Data costs are a huge concern in some of these cases. Taking the size of the update into account is therefore important.

Snaps can update via binary delta updates – that is, only sending the binary changes to the edge device, without having to download the entire new package. To make full use of this, one needs to make sure your snapcraft.yaml file is written in such a way that only the required parts change when you update something.

This is the reason for splitting the dependencies, the Python script, and the model and labels into separate parts. In our example, if the model gets updated, only the changes in the model will be sent to the edge device during an update. The dependencies and the Python script won’t be sent again if they do not change. Keep in mind that deltas are calculated on the compressed package, which means that the deltas aren’t equal to the changed bytes. This also depends on the selected compression algorithm for the snap.

Testing and rolling out your updates

In a production environment, you should not blindly roll out updates to all your devices without testing them properly first. Snapcraft and the Snap Store have features to assist with this. Two of the valuable snap features are Channels and Progressive Release management.

Channels are used to publish stable and pre-release versions of your software. Your production devices will be subscribed to updates from the stable channel, while your test devices will be looking for updates on the edge channel. You can have multiple channels for various levels of confidence in a version. As a new update gets tested on the edge channel, it can be moved to beta or candidate as confidence increases, and be tested there by a wider audience.

The progressive release mechanism allows incremental update roll-outs to a predefined percentage of your devices: for example, the testers subscribed to updates on the beta channel. You do not push the update to all of them at the same time. Rather, start with a small percentage, say 20%. After a day, if there are no issues, you increase the percentage to 50%, and after another day to 100%. After a week of testing with no reported issues, you can promote the update to the candidate channel and follow a similar progressive release strategy. Eventually, the update can be released to the stable channel for the entire target device set.

Release management is a complex task, but the Snap Store offers various features to simplify it; read more here. As your deployments scale, it would become beneficial to rely on more powerful tools to manage updates to your fleets. Landscape offers a good collection of remote management and monitoring features.

A fully containerised solution

Since we created a snap, our TF Lite application can be easily deployed on Ubuntu Core and integrated into a software stack that is fully containerised and maintainable. The snap can easily be installed on a pre-built Ubuntu Core image; however, this isn’t what you would do in production.

Ubuntu Core comes with tooling that makes it possible to create your production images that bundle your stack’s building blocks, including the ML software. Refer to this documentation to get started with building your Ubuntu Core image.

on August 07, 2024 01:00 PM

This is another blog post lifted wholesale out of my weekly newsletter. I do this when I get a bit verbose to keep the newsletter brief. The newsletter is becoming a blog incubator, which I’m okay with.

A reminder about that newsletter

The newsletter is emailed every Friday - subscribe here, and is archived and available via RSS a few days later.

I talked a bit about the process of setting up the newsletter on episode 34 of Linux Matters Podcast. Have a listen if you’re interested.

Linux Matters 34

Patreon supporters of Linux Matters can get the show a day or so early and without adverts. 🙏

Multiple kind offers

Good news, everyone! I now have a crack team of fine volunteers who proofread the text that lands in your inbox/browser cache/RSS reader. Crucially, they’re doing that review before I send the mail, not after, as was previously the case. Thank you for volunteering, mystery proofreaders.

popey dreamland

Until now, my newsletter “workflow” (such as it was) involved hoping that I’d get it done and dusted by Friday morning. Then, ideally, it would spend some time “in review”, followed by saving to disk. But if necessary, it would be ready to be opened in an emergency text editor at a moment’s notice before emails were automatically sent by lunchtime.

I clearly don’t know me very well.

popey reality

What actually happened is that I would continue editing right up until the moment I sent it out, then bash through the various “post-processing” steps and schedule the emails for “5 minutes from now.” Boom! Done.

This often resulted in typos or other blemishes in my less-than-lovingly crafted emails to fabulous people. A few friends would ping me with corrections. But once the emails are sent, reaching out and fixing those silly mistakes is problematic.

Someone should investigate over-the-air updates to your email. Although zero-day patches and DLC for your inbox sound horrendous. Forget that.

In theory, I could tweak the archived version, but that is not straightforward.

Tool refresh?

Aside: Yes, I know it’s not the tools, but I should slow down, be more methodical and review every change to my document before publishing. I agree. Now, let’s move on.

While preparing the newsletter, I would initially write in Sublime Text (my desktop text editor of choice), with a Grammarly† (affiliate link) LSP extension, to catch my numerous blunders, and re-word my clumsy English.

Unfortunately, the Grammarly extension for Sublime broke a while ago, so I no longer have that available while I prepare the newsletter.

I could use Google Docs, I suppose, where Grammarly still works, augmenting the built-in spell and grammar checker. But I really like typing directly as Markdown in a lightweight editor, not a big fat browser. So I guess I need to figure something else out to check my spelling and grammar prior to the awesome review team getting it, to save at least some of my blushes.

I’m not looking for suggestions for a different text editor—or am I? Maybe I am. I might be.

Sure, that’ll fix it.

ZX81 -> Spectrum -> CPC -> edlin -> Edit -> Notepad -> TextPad -> Sublime -> ?

I’ve used a variety of text editors over the years. Yes, the ZX81 and Sinclair Spectrum count as text editors. Yes, I am old.

I love Sublime’s minimalism, speed, and flexibility. I use it for all my daily work notes, personal scribblings, blog posts, and (shock) even authoring (some) code.

I also value Sublime’s data-recovery features. If the editor is “accidentally” terminated or a power-loss event occurs, Sublime reliably recovers its state, retaining whatever you were previously editing.

I regularly use Windows, Linux, and macOS on any given day across multiple computers. So, a cross-platform editor is also essential for me, but only on the laptop/desktop, as I never edit on mobile‡ devices.

I typically just open a folder as a “workspace” in a window or an additional tab in one window. I frequently open many folders, each full of files across multiple displays and machines.

All my notes are saved in folders that use Syncthing to keep in sync across machines. I leave all of those notes open for days, perhaps weeks, so having a robust sync tool combined with an editor that auto-reloads when files change is key.

The notes are separately backed up, so cloud storage isn’t essential for my use case.

Something else?

Whatever else I pick, it’s really got to fit that model and requirements, or it’ll be quite a stretch for me to change. One option I considered and test-drove is NotepadNext. It’s an open-source re-implementation of Notepad++, written in C++ and Qt.

A while back, I packaged up and published it as a snap, to make it easy to install and update. It fits many of the above requirements already, with the bonus of being open-source, but sadly, there is no Grammarly support there either.

I’d prefer no :::: W I D E - L O A D :::: electron monsters. Also, not Notion or Obsidian, as I’ve already tried them, and I’m not a fan. In addition, no, not Vim or Emacs.

Bonus points if you have a suggestion where one of the selling points isn’t “AI”§.

Perhaps there isn’t a great plain text editor that fulfills all my requirements. I’m open to hearing suggestions from readers of this blog or the newsletter. My contact details are here somewhere.


† - Please direct missives about how terrible Grammarly is to /dev/null. Thanks. Further, suggestions that I shouldn’t rely on Grammarly or other tools and should just “Git Gud” (as the youths say) may be inserted into the A1481 on the floor.

‡ - I know a laptop is technically a “mobile” device.

§ - Yes, I know that “Not wanting AI” and “Wanting a tool like Grammarly” are possibly conflicting requirements.

◇ - For this blog post I copied and pasted the entire markdown source into a Google doc, used all the spelling and grammar tools, then pasted it back into Sublime, pushed to git, and I’m done. Maybe that’s all I need to do? Keep my favourite editor, and do all the grammar in one chunk at the end in a tab of a browser I already had open anyway. Beat that!

on August 07, 2024 09:00 AM

August 05, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 851 for the week of July 28 – August 3, 2024. The full version of this issue is available here.

In this issue we cover:

  • State of the Ubuntu mailing lists
  • Technical Board: Feedback requested – draft policy on third party software sources included by Ubuntu
  • Ubuntu Stats
  • Hot in Support
  • LoCo Events
  • Exploring O3 Optimization for Ubuntu
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Blogosphere
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on August 05, 2024 10:53 PM

August 04, 2024

The Freedesktop.org Specifications directory contains a list of common specifications that have accumulated over the decades and define how common desktop environment functionality works. The specifications are designed to increase interoperability between desktops. Common specifications make the life of both desktop-environment developers and especially application developers (who will almost always want their app to run and behave as expected on as many Linux DEs as possible, to increase its target audience) a lot easier.

Unfortunately, building the HTML specifications and maintaining the directory of available specs has become a bit of a difficult chore, as the pipeline for building the site has become fairly old and unmaintained (parts of it still depended on Python 2). In order to make my life of maintaining this part of Freedesktop easier, I aimed to carefully modernize the website. I do have bigger plans to maybe eventually restructure the site to make it easier to navigate and not just a plain alphabetical list of specifications, and to integrate it with the Wiki, but in the interest of backwards compatibility and to get anything done in time (rather than taking on a mega-project that can’t be finished), I decided to just do the minimum modernization first to get a viable website, and do the rest later.

So, long story short: Most Freedesktop specs are written in DocBook XML. Some were plain HTML documents, some were DocBook SGML, a few were plaintext files. To make things easier to maintain, almost every specification is written in DocBook now. This also simplifies the review process, and we may be able to switch to something else like AsciiDoc later if we want to. Of course, one could have switched to something other than DocBook, but that would have been a much bigger chore with a lot more broken links, and I did not want this to become an even bigger project than it already was; I wanted to keep its scope somewhat narrow.

DocBook is a markup language for documentation which has been around for a very long time, and therefore has older tooling around it. But fortunately our friends at openSUSE created DAPS (DocBook Authoring and Publishing Suite) as a modern way to render DocBook documents to HTML and other file formats. DAPS is now used to generate all Freedesktop specifications on our website. The website index and the specification revisions are also now defined in structured TOML files, to make them easier to read and to extend. A bunch of specifications that had been missing from the original website are also added to the index and rendered on the website now.
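
As an illustration, rendering a single specification with DAPS looks roughly like this (the DC file name is hypothetical; each spec in the repository ships its own DocBook "DC" configuration file):

# Build the HTML output for one specification
daps -d DC-desktop-entry-spec html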

Originally, I wanted to put the website live in a temporary location and solicit feedback, especially since some links have changed and not everything may have redirects. However, due to how GitLab Pages worked (and due to me not knowing GitLab CI well enough…) the changes went live before their MR was actually merged. Rather than reverting the change, I decided to keep it (as the old website did not build properly anymore) and to see if anything breaks. So far, no dead links or bad side effects have been observed, but:

If you notice any broken link to specifications.fd.o or anything else weird, please file a bug so that we can fix it!

Thank you, and I hope you enjoy reading the specifications with better rendering and a more coherent look! 😃

on August 04, 2024 06:54 PM

Thankfully, no tragedies to report this week! I thank each and every one of you who has donated to my car fund. I still have a ways to go and could use some more help so that we can go to the funeral: https://gofund.me/033eb25d . I am between contracts and work packages, so all of my work is currently done for free. Thanks for your consideration.

Another very busy week getting qt6 updates in Debian, Kubuntu, and KDE snaps.

Kubuntu:

  • Merkuro and Neochat SRUs have made progress.
  • See Debian for the qt6 Plasma / applications work.

Debian:

  • qtmpv – in NEW
  • arianna – in NEW
  • kamera – experimental
  • libkdegames – experimental
  • kdenetwork-filesharing – experimental
  • xwaylandvideobridge – NEW
  • futuresql – NEW
  • kpat WIP
  • Tokodon – Done, but needs qtmpv to pass NEW
  • Gwenview – WIP needs kamera, kio-extras
  • kio-extras – Blocked on kdsoap in which the maintainer is not responding to bug reports or emails. Will likely fork in Kubuntu as our freeze quickly approaches.

KDE Snaps:

Updated Qt to 6.7.2, which required a rebuild of all our snaps. Also found an issue with mismatched ffmpeg libraries; we have to bundle them for now until the versioning issues are resolved.

Made new theme snaps for KDE Breeze: gtk-theme-breeze and icon-theme-breeze. If you use the Plasma Breeze theme, please install these and run:

for PLUG in $(snap connections | grep gtk-common-themes:icon-themes | awk '{print $2}'); do sudo snap connect ${PLUG} icon-theme-breeze:icon-themes; done
for PLUG in $(snap connections | grep gtk-common-themes:gtk-3-themes | awk '{print $2}'); do sudo snap connect ${PLUG} gtk-theme-breeze:gtk-3-themes; done
for PLUG in $(snap connections | grep gtk-common-themes:gtk-2-themes | awk '{print $2}'); do sudo snap connect ${PLUG} gtk-theme-breeze:gtk-2-themes; done

This should resolve most theming issues. We are still waiting for kdeglobals support to be merged in snapd to fix colorscheme issues; it is set for the next release. I am still working on Qt6 themes and working out how to implement them in snaps, as they are more complex than GTK themes, with shared libraries and file structures.

Please help test the --edge snaps so I can promote them to stable.

WIP Snaps or MR’s made

  • Juk (WIP)
  • Kajongg (WIP problem with pyqt)
  • Kalgebra (in store review)
  • Kdevelop (WIP)
  • Kdenlive (MR)
  • KHangman (WIP)
  • Ruqola (WIP)
  • Picmi (building)
  • Kubrick (WIP)
  • lskat (building)
  • Palapeli (MR)
  • Kanagram (WIP)
  • Labplot (WIP)
  • Ktuberling (building)
  • Ksudoku (building)
  • Ksquares (MR)
on August 04, 2024 12:35 PM

August 03, 2024

Gogh

Dougie Richardson

Check out these awesome terminal themes at http://gogh-co.github.io/Gogh/

on August 03, 2024 11:57 AM

August 02, 2024

My Debian contributions this month were all sponsored by Freexian.

You can also support my work directly via Liberapay.

OpenSSH

At the start of the month, I uploaded a quick fix (via Salvatore Bonaccorso) for a regression from CVE-2006-5051, found by Qualys; this was because I expected it to take me a bit longer to merge OpenSSH 9.8, which had the full fix.

This turned out to be a good guess: it took me until the last day of the month to get the merge done. OpenSSH 9.8 included some substantial changes to split the server into a listener binary and a per-session binary, which required some corresponding changes in the GSS-API key exchange patch. At this point I was very grateful for the GSS-API integration test contributed by Andreas Hasenack a little while ago, because otherwise I might very easily not have noticed my mistake: this patch adds some entries to the key exchange algorithm proposal, and on the server side I’d accidentally moved that to after the point where the proposal is sent to the client, which of course meant it didn’t work at all. Even with a failing test, it took me quite a while to spot the problem, involving a lot of staring at strace output and comparing debug logs between versions.

There are still some regressions to sort out, including a problem with socket activation, and problems in libssh2 and Twisted due to DSA now being disabled at compile-time.

Speaking of DSA, I wrote a release note for this change, which is now merged.

GCC 14 regressions

I fixed a number of build failures with GCC 14, mostly in my older packages: grub (legacy), imaptool, kali, knews, and vigor.

autopkgtest

I contributed a change to allow maintaining Incus container and VM images in parallel. I use both of these regularly (containers are faster, but some tests need full machine isolation), and the build tools previously didn’t handle that very well.

I now have a script that just does this regularly to keep my images up to date (although for now I’m running this with PATH pointing to autopkgtest from git, since my change hasn’t been released yet):

RELEASE=sid autopkgtest-build-incus images:debian/trixie
RELEASE=sid autopkgtest-build-incus --vm images:debian/trixie

Python team

I fixed dnsdiag’s uninstallability in unstable, and contributed the fix upstream.

I reverted python-tenacity to an earlier version due to regressions in a number of OpenStack packages, including octavia and ironic. (This seems to be due to #486 upstream.)

I fixed a build failure in python3-simpletal due to Python 3.12 removing the old imp module.

I added non-superficial autopkgtests to a number of packages, including httmock, py-macaroon-bakery, python-libnacl, six, and storm.

I switched a number of packages to build using PEP 517 rather than calling setup.py directly, including alembic, constantly, hyperlink, isort, khard, python-cpuinfo, and python3-onelogin-saml2. (Much of this was by working through the missing-prerequisite-for-pyproject-backend Lintian tag, but there’s still lots to do.)

I upgraded frozenlist, ipykernel, isort, langtable, python-exceptiongroup, python-launchpadlib, python-typeguard, pyupgrade, sqlparse, storm, and uncertainties to new upstream versions. In the process, I added myself to Uploaders for isort, since the previous primary uploader has retired.

Other odds and ends

I applied a suggestion by Chris Hofstaedtler to create /etc/subuid and /etc/subgid in base-passwd, since the login package is no longer essential.

I fixed a wireless-tools regression due to iproute2 dropping its (/usr)/sbin/ip compatibility symlink.

I applied a suggestion by Petter Reinholdtsen to add AppStream metainfo to pcmciautils.

on August 02, 2024 12:27 PM

August 01, 2024

E309 O Verdadeiro 309, Levante-Se

Podcast Ubuntu Portugal

Going on holiday can be very stressful, but Free Software lets you relax and browse beautiful landscapes at your leisure, without fear of handing your data over to strangers - Diogo tells us all about it. Miguel has already bought litres and litres of hydrogen peroxide at the pharmacy, but he still needs to bathe in tetraacetylethylenediamine. Among ourselves, Crowdstrike was the butt of the joke, but we came to understand a little better how global IT catastrophes happen and how not to deal with them. None of us received a 10-euro UberEats voucher to make this episode.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to the Podcast Ubuntu Portugal. And you can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think this is worth a lot more than 15 dollars, so if you can, pay a little more, since you have the option of paying however much you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open-source code is licensed under the terms of the MIT License. The theme music “Suspense Theme” used in this episode was created by Tramos (https://freemusicarchive.org/music/tramos/), under the CC BY 4.0 licence (https://creativecommons.org/licenses/by/4.0/). The intro music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on August 01, 2024 12:00 AM

July 30, 2024

With the work that has been done in the debian-installer/netcfg merge-proposal !9 it is possible to install a standard Debian system, using the normal Debian-Installer (d-i) mini.iso images, that will come pre-installed with Netplan and all network configuration structured in /etc/netplan/.

In this write-up, I’d like to run you through a list of commands for experiencing the Netplan enabled installation process first-hand. Let’s start with preparing a working directory and installing the software dependencies for our virtualized Debian system:

$ mkdir d-i_tmp && cd d-i_tmp
$ apt install ovmf qemu-utils qemu-system-x86

Now let’s download the official (daily) mini.iso, linux kernel image and initrd.gz containing the Netplan enablement changes:

$ wget https://d-i.debian.org/daily-images/amd64/daily/netboot/gtk/mini.iso
$ wget https://d-i.debian.org/daily-images/amd64/daily/netboot/gtk/debian-installer/amd64/initrd.gz
$ wget https://d-i.debian.org/daily-images/amd64/daily/netboot/gtk/debian-installer/amd64/linux

Next we’ll prepare a VM by copying the EFI firmware files, preparing a persistent EFIVARS file (to boot from FS0:\EFI\debian\grubx64.efi), and creating a virtual disk for our machine:

$ cp /usr/share/OVMF/OVMF_CODE_4M.fd .
$ cp /usr/share/OVMF/OVMF_VARS_4M.fd .
$ qemu-img create -f qcow2 ./data.qcow2 20G

Finally, let’s launch the debian-installer using a preseed.cfg file that will automatically install Netplan (netplan-generator) for us in the target system. A minimal preseed file could look like this:

# Install minimal Netplan generator binary
d-i preseed/late_command string in-target apt-get -y install netplan-generator

For this demo, we’re installing the full netplan.io package (incl. the interactive Python CLI), as well as the netplan-generator package and systemd-resolved, to show the full Netplan experience. You can choose the preseed file from a set of different variants to test the different configurations:
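
For illustration, the "full" variant could look roughly like this (an illustrative sketch, using the packages named above):

# Install the full Netplan stack in the target system
d-i preseed/late_command string in-target apt-get -y install netplan.io netplan-generator systemd-resolved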

We’re using the linux kernel and initrd.gz here to be able to pass the preseed URL as a parameter to the kernel’s cmdline directly. Launching this VM should bring up the official debian-installer in its netboot/gtk form:

$ export U=https://people.ubuntu.com/~slyon/d-i/netplan-preseed+full.cfg
$ qemu-system-x86_64 \
	-M q35 -enable-kvm -cpu host -smp 4 -m 2G \
	-drive if=pflash,format=raw,unit=0,file=OVMF_CODE_4M.fd,readonly=on \
	-drive if=pflash,format=raw,unit=1,file=OVMF_VARS_4M.fd,readonly=off \
	-device qemu-xhci -device usb-kbd -device usb-mouse \
	-vga none -device virtio-gpu-pci \
	-net nic,model=virtio -net user \
	-kernel ./linux -initrd ./initrd.gz -append "url=$U" \
	-hda ./data.qcow2 -cdrom ./mini.iso;

Now you can click through the normal Debian-Installer process, using mostly default settings. Optionally, you could play around with the networking settings, to see how those get translated to /etc/netplan/ in the target system.

After you confirmed your partitioning changes, the base system gets installed. I suggest not to select any additional components, like desktop environments, to speed up the process.

During the final step of the installation (finish-install.d/55netcfg-copy-config) d-i will detect that Netplan was installed in the target system (due to the preseed file provided) and opt to write its network configuration to /etc/netplan/ instead of /etc/network/interfaces or /etc/NetworkManager/system-connections/.
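
For a simple DHCP setup, the configuration written by the installer might look roughly like this (file name and interface name are illustrative):

$ cat /etc/netplan/*.yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: true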

Done! After the installation finished, you can reboot into your virgin Debian Sid/Trixie system.

To do that, quit the current QEMU process by pressing Ctrl+C, and make sure to copy the EFI variables file that was modified by GRUB during the installation over to EFIVARS.fd, so QEMU can find the new system. Then reboot into the new system, no longer using the mini.iso image:

$ cp ./OVMF_VARS_4M.fd ./EFIVARS.fd
$ qemu-system-x86_64 \
        -M q35 -enable-kvm -cpu host -smp 4 -m 2G \
        -drive if=pflash,format=raw,unit=0,file=OVMF_CODE_4M.fd,readonly=on \
        -drive if=pflash,format=raw,unit=1,file=EFIVARS.fd,readonly=off \
        -device qemu-xhci -device usb-kbd -device usb-mouse \
        -vga none -device virtio-gpu-pci \
        -net nic,model=virtio -net user \
        -drive file=./data.qcow2,if=none,format=qcow2,id=disk0 \
        -device virtio-blk-pci,drive=disk0,bootindex=1 \
        -serial mon:stdio

Finally, you can play around with your Netplan enabled Debian system! As you will find, /etc/network/interfaces exists but is empty; it could still be used (optionally/additionally). Netplan was configured in /etc/netplan/ according to the settings given during the d-i installation process.

In our case, we also installed the Netplan CLI, so we can play around with some of its features, like netplan status:
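
For example (output omitted here; it describes each interface as Netplan and its backends see it):

$ netplan get           # dump the merged YAML configuration
$ netplan status --all  # show the live state of all interfaces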

Thank you for following along the Netplan enabled Debian installation process and happy hacking! If you want to learn more, find us at GitHub:netplan.

on July 30, 2024 04:24 AM

July 29, 2024

Welcome to the Ubuntu Weekly Newsletter, Issue 850 for the week of July 21 – 27, 2024. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu Stats
  • Hot in Support
  • LoCo Events
  • Ubuntu Summit 2024 Call for Abstracts extended!
  • Announcing the Multipass 1.14.0 release
  • Call For Testing: FFmpeg SDK for core24
  • Canonical to present keynote session at Kubecon China 2024
  • Other Community News
  • Canonical News
  • In the Blogosphere
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Giulia Zanchi
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


on July 29, 2024 11:08 PM

Updates for July 2024

Ubuntu Studio

The Road to 24.10

We have quite a few exciting changes going on for Ubuntu Studio 24.10, including one that some might find controversial. However, this is not without a lot of thought and foresight, and even research, testing, and coordination.

With that, let’s just dive right into the controversial change.

Switching to Ubuntu’s Generic Kernel

This is the one that’s going to come as a shock. However, with the release of 24.04 LTS, the generic kernel is now fully capable of preemptible low-latency workloads. Because of this, the lowlatency kernel in Ubuntu will eventually be deprecated.

Rather than take a reactive approach to this, we at Ubuntu Studio decided to be proactive and switch to the generic kernel starting with 24.10. To facilitate this, we will be enabling not only threadirqs, as we had done before, but also preempt=full by default.

If you read the first link above, you’ll also notice that nohz_full=all was recommended as well. However, we noticed that it created performance degradation in heavy video workloads, so we decided to leave it off by default and instead give users a GUI option in Ubuntu Studio Audio Configuration to enable and disable these three kernel parameters as needed.

This has been tested on 24.04 LTS with results equivalent to or better than the lowlatency kernel. The Ubuntu Kernel Team has also mentioned even more improvements coming to the kernel in 24.10, including the potential ability to change these settings, and more, on the fly without a reboot.
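
For readers who want to try these parameters on an existing install before 24.10 lands, one manual way (which the Ubuntu Studio Audio Configuration GUI is meant to handle for you) is to add them to the kernel command line:

# /etc/default/grub (excerpt) - append the parameters to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash threadirqs preempt=full"

# Regenerate the GRUB configuration and reboot for the change to take effect
sudo update-grub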

There have also been numerous improvements for gaming with these settings, for those of you that like to game. You can explore more of that on the Ubuntu Discourse.

Plasma 6

We are cooperating with the Kubuntu team, doing what we can to help with the transition to KDE Plasma Desktop 6. The work is going along slowly but surely, and we hope to have more information on this in the future. For right now, most testing of new stuff is being done on Ubuntu Studio 24.04 LTS for this reason: desktop environment breakages can be catastrophic for application testing. Hence, any screenshots will be of Plasma 5.

New Theming for Ubuntu Studio

We’ve been using the Materia theme for the past five years, since 19.04, with a brief break for 22.04 LTS. Unfortunately, that is coming to an end as the Materia theme is no longer maintained. Its successor has been found in Orchis, which was forked from Materia. Here’s a general screenshot our Project Leader, Erich Eickmeyer, made from his personal desktop using Ubuntu Studio 24.04 LTS and the Orchis theme:

Message from Erich: “Yes, that’s Microsoft Edge and yes, my system needs a reboot. Don’t @ me. XD”

Contributions Needed, and Help a Family in Need!

Ubuntu Studio is a community-run project, and donations are always welcome. If you find Ubuntu Studio useful and want to support its ongoing development, please contribute!

Erich’s wife, Edubuntu Project Leader Amy Eickmeyer, lost her full-time job two weeks ago and the family is in desperate need of help in this time of hardship. If you could find it in your heart to donate extra to Ubuntu Studio, those funds will help the Eickmeyer family at this time.

Contribution options are on the sidebar to the right or at ubuntustudio.org/contribute.

on July 29, 2024 08:30 PM

July 28, 2024

This week our family suffered another loss with my brother-in-law. We will miss him dearly. On our way down to Phoenix to console our nephew, who just lost his dad, our car blew up. Last week we were in a rollover accident that totaled our truck and left me with a broken arm. We are now in great need of a new vehicle. Please consider donating to this fund: https://gofund.me/033eb25d . Kubuntu is out of money and I am between work packages with the ‘project’. We are 50 miles away from the closest town for supplies; essentials such as water require a vehicle.

I have had bad years before (COVID), in which I lost my beloved job at Blue Systems. I made a vow to myself to never let my personal life affect my work again. I have so far kept that promise, and without further ado I present to you my work.

Kubuntu:

  • Many SRUs awaiting verification stage including the massive apparmor policy bug.
  • sddm fix for the black screen on second boot has passed verification and should make .1 release.
  • See Debian for the qt6 Plasma / applications work.

Debian:

  • qtmpv – in NEW
  • arianna – in NEW
  • kamera – uploading today
  • kcharselect – Experimental
  • Tokodon – Done, but needs qtmpv to pass NEW
  • Gwenview – WIP needs kamera, kio-extras
  • kio-extras – WIP

KDE Snaps:

Please note: for the most part, the Qt6 snaps are in --edge, except the few in the ‘project’ that are heavily tested. Please help test the --edge snaps so I can promote them.

  • Elisa
  • Okular
  • Konsole (please note this is a confined terminal for the ‘project’ and not very useful except to ssh to the host system)
  • Kwrite
  • Gwenview
  • Kate (--classic)
  • Gcompris
  • Alligator
  • Ark
  • Blinken
  • Bomber
  • Bovo
  • Calindori
  • Digikam
  • Dragon
  • Falkon
  • Filelight

WIP Snaps or MR’s made

  • KSpaceDuel
  • Ksquares
  • KSudoku
  • KTuberling
  • Kubrick
  • lskat
  • Palapeli
  • Kajongg
  • Kalzium
  • Kanagram
  • Kapman
  • Katomic
  • KBlackBox
  • KBlocks
  • KBounce
  • KBreakOut
  • KBruch

Please note that 95% of the snaps are free-time work; the project covers 5 of them. I am going as fast as I can between Kubuntu, Debian, and the project commitments. Not to mention I have only one arm! My GSoC student is also helping, which you can read all about here: https://soumyadghosh.github.io/website/interns/gsoc-2024/gsoc-week-3-week-7/

There is still much work to do in Kubuntu to be Plasma 6 ready for Oracular and they are out of funds. I will still continue my work regardless, but please consider donating until we can procure a consistent flow of funding : https://kubuntu.org/donate/

Thank you for reading and have a blessed day!

on July 28, 2024 04:24 PM

I initially started typing this as a short -[ Contrafibularities ]- segment for my free, weekly newsletter. But it got a bit long, so I broke it out into a blog post instead.

About that newsletter

The newsletter is emailed every Friday (subscribe here) and is archived and available via RSS a few days later. I talked a bit about the process of setting up the newsletter on episode 34 of Linux Matters Podcast. Have a listen if you’re interested.

Linux Matters 34

Patreon supporters of Linux Matters can get the show a day or so early, and without adverts. 🙏

Going live!

I have a work-supplied M3 MacBook Pro. It’s a lovely device with ludicrous battery endurance, a great screen and keyboard, and decent connectivity. As an ex-Windows user at work and a predominantly-Linux enthusiast at home, I get curveballs thrown at me by macOS on a weekly basis. This week: screenshots.

I wrote a ‘going live’ shell script for my personal live streams. For the title card, I wanted the script to take a screenshot of the running terminal, Alacritty. I went looking for ways to do this on the command line, and learned that macOS has shipped a screencapture command-line tool for some time now. Amusingly, the man page for it says:

DESCRIPTION
 The screencapture utility is not very well documented to date.
 A list of options follows.

and..

BUGS
 Better documentation is needed for this utility.

This is 100% correct.

How hard can it be?

Perhaps I’m too used to scrot on X11, which I have used for over 20 years. If I want a screenshot of the currently running system, I just run scrot and bang, there’s a PNG in the current directory showing what’s on screen. Easy peasy.

On macOS, run screencapture image.png and you’ll get a screenshot alright, of the desktop, your wallpaper. Not the application windows carefully arranged on top. To me, this is somewhat obtuse. However, it is also possible to screenshot a window, if you know the <windowid>.

From the screencapture man page:

 -l <windowid> Captures the window with windowid.

There appears to be no straightforward way to actually get the <windowid> on macOS, though. So, to discover the <windowid> you might want the GetWindowID utility from smokris (easily installed using Homebrew).
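
If you use Homebrew, installing it is a one-liner (the tap path below is quoted from the project's README from memory, so double-check it there):

brew install smokris/getwindowid/getwindowid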

That’s fine and relatively straightforward if there’s only one Window of the application, but a tiny bit more complex if the app reports multiple windows - even when there’s only one. Alacritty announces multiple windows, for some reason.

$ GetWindowID Alacritty --list
"" size=500x500 id=73843
"(null)" size=0x0 id=73842
"alan@Alans-MacBook-Pro.local (192.168.1.170) - byobu" size=1728x1080 id=73841

FINE. We can deal with that:

$ GetWindowID Alacritty --list | grep byobu | awk -F '=' '{print $3}'
73841

You may then encounter the mysterious could not create image from window error. This threw me off a little, initially. Thankfully I’m far from the first to encounter this.

Big thanks to this rancher-sandbox, rancher-desktop pull request against their screenshot docs. Through that I discovered there’s a macOS security permission I had to enable, for the terminal application to be able to initiate screenshots of itself.

A big thank-you to both of the above projects for making their knowledge available. Now I have this monstrosity in my script, to take a screenshot of the running Alacritty window:

screencapture -l$(GetWindowID Alacritty --list | \
 grep byobu | \
 awk -F '=' '{print $3}') titlecard.png

If you watch any of my live streams, you may notice the title card. Now you know how it’s made, or at least how the screenshot is created, anyway.

on July 28, 2024 10:28 AM

July 22, 2024

Introduction

When managing Unix-like operating systems, understanding permission settings and security practices is crucial for maintaining system integrity and protecting data. FreeBSD and Linux, two popular Unix-like systems, offer distinct approaches to permission settings and security. This article delves into these differences, providing a comprehensive comparison to help system administrators and users navigate these systems effectively.

1. Overview of FreeBSD and Linux

FreeBSD is a Unix-like operating system derived from the Berkeley Software Distribution (BSD), renowned for its stability, performance, and advanced networking features. It is widely used in servers, network appliances, and embedded systems.

Linux, on the other hand, is a free and open-source operating system kernel created by Linus Torvalds. It is the foundation of numerous distributions (distros) like Ubuntu, Fedora, and CentOS. Linux is known for its flexibility, broad hardware support, and extensive community-driven development.

2. File System Hierarchy

Both FreeBSD and Linux follow the Unix file system hierarchy but with slight variations. Understanding these differences is key to grasping permission settings on each system.

  • FreeBSD: Follows the traditional BSD layout documented in hier(7), which resembles the FHS but has its own nuances. The /usr directory contains user programs and data, while /var holds variable data like logs and databases. FreeBSD also utilizes /usr/local for locally installed software.
  • Linux: Generally adheres to the FHS. Important directories include /bin for essential binaries, /etc for configuration files, /home for user directories, and /var for variable files.

3. Permissions and Ownership

Both systems use a similar model for file permissions but have some differences in implementation and additional features.

3.1 Basic File Permissions

  • FreeBSD:
  • Owner: The user who owns the file.
  • Group: A group of users with shared permissions.
  • Others: All other users.
  • Permissions are represented as read (r), write (w), and execute (x) for each category. Commands to manage permissions:
  • ls -l: Lists files with permissions.
  • chmod: Changes file permissions.
  • chown: Changes file ownership.
  • chgrp: Changes group ownership.
  • Linux:
  • Similar to FreeBSD, Linux file permissions are also divided into owner, group, and others.
  • Commands are the same: ls -l, chmod, chown, chgrp (see the combined example after this list).
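
A minimal example that works the same on both systems (file, user and group names are made up):

# Inspect, then tighten, a file's permissions and ownership
ls -l report.txt
chmod 640 report.txt           # owner: read/write, group: read, others: none
chown alice:staff report.txt   # change the owner to alice and the group to staff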

3.2 Special Permissions

  • FreeBSD:
  • Setuid: Allows users to execute a file with the file owner’s permissions.
  • Setgid: When applied to a directory, new files inherit the directory’s group.
  • Sticky Bit: Ensures only the file owner can delete the file.
  • Linux:
  • Setuid: Allows a user to execute a file with the permissions of the file owner.
  • Setgid: When set on a directory, files created within inherit the directory’s group.
  • Sticky Bit: Similar to FreeBSD, it restricts file deletion to the file’s owner (see the example after this list).
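
A short sketch of setting these bits with chmod (the paths are hypothetical):

chmod u+s /usr/local/bin/mytool   # setuid: run with the file owner's privileges
chmod g+s /srv/shared             # setgid: new files inherit the directory's group
chmod +t /srv/shared              # sticky bit: only a file's owner may delete it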

4. Extended Attributes and ACLs

4.1 FreeBSD:

FreeBSD supports Extended File Attributes (EAs) and Access Control Lists (ACLs) to provide more granular permission control.

  • Extended Attributes: Used to store metadata beyond the standard attributes. Managed on FreeBSD with the extattr tools, such as setextattr and getextattr.
  • Access Control Lists (ACLs): Allow setting permissions for multiple users and groups. Managed with setfacl and getfacl.

4.2 Linux:

Linux also supports Extended Attributes and ACLs.

  • Extended Attributes: Managed with the setfattr and getfattr tools (built on the setxattr and getxattr system calls).
  • Access Control Lists (ACLs): Managed with setfacl and getfacl (see the short example after this list).
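
For example, granting an extra user read access with a POSIX ACL (user and file names are made up; shown with the Linux tools, while FreeBSD ships its own setfacl/getfacl with slightly different entry syntax, especially for NFSv4 ACLs):

setfacl -m u:bob:r report.txt   # give bob read access on top of the normal mode bits
getfacl report.txt              # show the resulting ACL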

5. Security Models and Practices

5.1 FreeBSD Security Model:

FreeBSD includes several features for enhanced security:

  • Jails: Provide a form of operating system-level virtualization. Each jail has its own filesystem, network configuration, and process space, which helps in isolating applications and services.
  • TrustedBSD Extensions: Enhance FreeBSD’s security by adding Mandatory Access Control (MAC) frameworks, which include fine-grained policies for file and process management.
  • Capsicum: A lightweight, capability-based security framework that allows developers to restrict the capabilities of running processes, minimizing the impact of potential vulnerabilities.

5.2 Linux Security Model:

Linux employs a range of security modules and practices:

  • SELinux (Security-Enhanced Linux): A set of kernel-level security enhancements that provide mandatory access controls. It defines policies that restrict how processes can interact with files and other processes.
  • AppArmor: A security module that restricts programs’ capabilities with per-program profiles. Unlike SELinux, it uses path-based policies.
  • Namespaces and cgroups: Used for containerization, allowing process isolation and resource control. These are the basis for technologies like Docker and Kubernetes.

6. System Configuration and Management

6.1 FreeBSD Configuration:

FreeBSD uses configuration files located in /etc and other directories for system management. The rc.conf file is central for system startup and service configuration. The sysctl command is used for kernel parameter adjustments.

6.2 Linux Configuration:

Linux configurations are distributed across various directories like /etc for system-wide settings and /proc for kernel parameters. Systemd is the most common init system, managing services and their dependencies. The sysctl command is also used in Linux for kernel parameter adjustments.
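
As a quick illustration of the shared sysctl interface (the parameter names differ between the two systems and are only examples):

sysctl kern.hostname              # read a kernel parameter on FreeBSD
sysctl net.ipv4.ip_forward        # read a kernel parameter on Linux
sysctl -w net.ipv4.ip_forward=1   # set a value at runtime (persist it in sysctl.conf)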

7. User Management

7.1 FreeBSD:

FreeBSD manages users and groups through /etc/passwd, /etc/group, and /etc/master.passwd. User and group management commands include adduser, pw, and groupadd.

7.2 Linux:

Linux also uses /etc/passwd and /etc/group for user management. User and group management commands include useradd, usermod, groupadd, and passwd.
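
A minimal sketch of creating a user on each system (the user name, groups and shells are illustrative):

# FreeBSD: scriptable user creation with pw
pw useradd alice -m -G wheel -s /bin/sh

# Linux: the equivalent with useradd, then set a password
useradd --create-home --groups sudo --shell /bin/bash alice
passwd alice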

8. Network Security

8.1 FreeBSD:

FreeBSD offers robust network security features, including:

  • IPFW: A firewall and packet filtering system integrated into the kernel.
  • PF (Packet Filter): A powerful and flexible packet filter that provides firewall functionality and network address translation (NAT).

8.2 Linux:

Linux provides several options for network security:

  • iptables: The traditional firewall utility for configuring packet filtering rules.
  • nftables: The successor to iptables, offering a more streamlined and flexible approach to packet filtering and NAT (a minimal sketch follows this list).
  • firewalld: A front-end for iptables and nftables, providing dynamic firewall management.
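
As a minimal nftables sketch (run as root; this is an illustration, not a hardened ruleset):

nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
nft add rule inet filter input iif lo accept                        # allow loopback
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input tcp dport 22 accept                  # allow SSH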

9. Backup and Recovery

9.1 FreeBSD:

FreeBSD supports several backup and recovery tools:

  • dump/restore: Traditional utilities for file system backups.
  • rsync: For incremental backups and synchronization.
  • zfs snapshots: ZFS filesystem features allow creating snapshots for backup and recovery.

9.2 Linux:

Linux offers a range of backup and recovery tools:

  • tar: A traditional tool for archiving files.
  • rsync: For incremental backups and synchronization.
  • LVM snapshots: Logical Volume Manager features provide snapshot capabilities (see the example after this list).
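
For example (pool, dataset and volume names are made up):

# ZFS (FreeBSD, or Linux with OpenZFS): snapshot a dataset and roll it back
zfs snapshot tank/home@before-upgrade
zfs rollback tank/home@before-upgrade

# LVM (Linux): create a snapshot of a logical volume, then remove it
lvcreate --size 5G --snapshot --name home-snap /dev/vg0/home
lvremove /dev/vg0/home-snap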

10. Conclusion

Both FreeBSD and Linux offer robust permission settings and security features, each with its strengths and specific implementations. FreeBSD provides a comprehensive suite of security features, including jails and Capsicum, while Linux offers a variety of security modules like SELinux and AppArmor. Understanding these differences is crucial for system administrators to effectively manage and secure their systems. By leveraging the unique features of each operating system, administrators can enhance their systems’ security and maintain a robust and reliable computing environment.

The post Understanding Permission Setting and Security on FreeBSD vs. Linux appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

on July 22, 2024 02:09 PM

July 14, 2024

uCareSystem has had the ability to detect if a system reboot is needed after applying maintenance tasks for some time now. With the new release, it will also show you the list of packages that requested the reboot. Additionally, the new release has squashed some annoying bugs. Restart? Why though? uCareSystem has had […]
on July 14, 2024 05:03 PM

July 12, 2024

Announcing Incus 6.3

Stéphane Graber

This release includes the long awaited OCI/Docker image support!
With this, users who previously were either running Docker alongside Incus or Docker inside of an Incus container just to run some pretty simple software that’s only distributed as OCI images can now just do it directly in Incus.

In addition to the OCI container support, this release also comes with:

  • Baseline CPU definition within clusters
  • Filesystem support for io.bus and io.cache
  • Improvements to incus top
  • CPU flags in server resources
  • Unified image support in incus-simplestreams
  • Completion of libovsdb transition

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

on July 12, 2024 05:05 PM

One of the cool new features in Incus 6.3 is the ability to run OCI images (such as those for Docker) directly in Incus.

You can certainly install the Docker packages in an Incus instance but that would put you in a situation of running a container in a container. Why not let Docker be a first-class citizen in the Incus ecosystem?

Note that this feature is new to Incus, which means that if you encounter issues, please discuss and report them.

Launching the docker.io nginx OCI container image in Incus.

Table of Contents

Background

In Incus you typically run system containers, which are containers that have been setup to resemble a virtual machine (VM). That is, you launch a system container in Incus, and this system container keeps running until you stop it. Just like with VMs.

In contrast, with Docker you are running application containers. You launch the Docker container with some configuration to perform a task, the task is performed, and the container stops. The task might also be something long-lived, like a Web server. In that case, the application container will have a longer lifetime. With application containers you are thinking primarily about tasks. You stop the task and the container is gone.

Prerequisites

You need to install Incus 6.3. If you are using Debian or Ubuntu, you would select the stable repository of Incus.

$ incus version
Client version: 6.3
Server version: 6.3
$

Adding the Docker repository to Incus

The container images from Docker follow the Open Container Initiative (OCI) image format. There is also a special way to access those images through the Docker Hub Container Image Repository, which is distinct from the other ways supported by Incus.

We will be adding (once only) a remote for the Docker repository. A remote is a configuration to access images from a particular repository of such images. Let’s see what we already have. We run incus remote list, which invokes the list command of the remote functionality (incus remote). There are two remotes: images, which is the standard repository of container and virtual machine images for Incus, and local, which is the remote of the local installation of Incus. Every installation of Incus has such a default remote.

$ incus remote list
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                URL                 |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                            | incus         | file access | NO     | YES    | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
$ 

The Docker repository has the URL https://docker.io and it is accessed through a different protocol. It is called oci.

Therefore, to add the Docker repository, we need to run incus remote add with the appropriate parameters. The URL is https://docker.io and the --protocol is oci.

$ incus remote add docker https://docker.io --protocol=oci
$

Let’s list again the available Incus remotes. The docker remote has been added successfully.

$ incus remote list
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                URL                 |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| docker          | https://docker.io                  | oci           | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                            | incus         | file access | NO     | YES    | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
$ 

If we ever want to remove a remote called myremote, we would run incus remote remove myremote.

Launching a Docker image in Incus

When you launch (install and run) a container in Incus, you use incus launch with the appropriate parameters. In this first example, we launch the image hello-world, which is one of the Docker official images. When it runs, it prints some text and then stops. In this case we used the parameter --console in order to see the text output. Finally, we use the --ephemeral parameter, which automatically deletes the container as soon as it stops. Ephemeral (εφήμερο) is a Greek word meaning something that lasts only a brief time. Neither of these two additional parameters is essential, but they are helpful in this specific case.

$ incus launch docker:hello-world --console --ephemeral
Launching the instance
Instance name is: best-feature                       

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

$ 

Note that we did not specify a name for the container. Astonishingly, Incus randomly selected the name best-feature with some editorial help. Since this is an ephemeral container and the specific image hello-world is short-lived, it is gone in a flash. Indeed, if you then run incus list, the Docker container is not found because it has been auto-deleted.

Let’s try another Docker image, the official nginx Docker image. This launches the nginx image, which serves a bare nginx Web server. We need to run incus list and look for the IP address that was given to the container. Then, we can view the default page of the Web server in our Web browser.

$ incus launch docker:nginx --console --ephemeral
Launching the instance
Instance name is: best-feature                               
To detach from the console, press: <ctrl>+a q
2024/07/12 16:31:59 [notice] 21#21: using the "epoll" event method
2024/07/12 16:31:59 [notice] 21#21: nginx/1.27.0
2024/07/12 16:31:59 [notice] 21#21: built by gcc 12.2.0 (Debian 12.2.0-14) 
2024/07/12 16:31:59 [notice] 21#21: OS: Linux 6.5.0-41-generic
2024/07/12 16:31:59 [notice] 21#21: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/07/12 16:31:59 [notice] 21#21: start worker processes
2024/07/12 16:31:59 [notice] 21#21: start worker process 45
2024/07/12 16:31:59 [notice] 21#21: start worker process 46
2024/07/12 16:31:59 [notice] 21#21: start worker process 47
2024/07/12 16:31:59 [notice] 21#21: start worker process 48
2024/07/12 16:31:59 [notice] 21#21: start worker process 49
2024/07/12 16:31:59 [notice] 21#21: start worker process 50
2024/07/12 16:31:59 [notice] 21#21: start worker process 51
2024/07/12 16:31:59 [notice] 21#21: start worker process 52
2024/07/12 16:31:59 [notice] 21#21: start worker process 53
2024/07/12 16:31:59 [notice] 21#21: start worker process 54
2024/07/12 16:31:59 [notice] 21#21: start worker process 55
2024/07/12 16:31:59 [notice] 21#21: start worker process 56
10.10.10.1 - - [12/Jul/2024:16:33:29 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36" "-"
...
^C
...
2024/07/12 16:39:10 [notice] 21#21: signal 29 (SIGIO) received
2024/07/12 16:39:10 [notice] 21#21: signal 17 (SIGCHLD) received from 55
2024/07/12 16:39:10 [notice] 21#21: worker process 55 exited with code 0
2024/07/12 16:39:10 [notice] 21#21: exit
Error: stat /proc/-1: no such file or directory
$ 

Discussion

How long is the lifecycle time of a minimal Docker container?

How long does it take to launch the docker:hello-world container? I prepend time to the command, and check the stats at the end of the execution. It takes about four seconds to launch and run a simple Docker container, on my system and with my Internet connection (the image is cached).

$ time incus launch docker:hello-world --console --ephemeral
...
real	0m3,956s
user	0m0,016s
sys	0m0,016s
$ 

How long does it take to repeatedly run a minimal Docker container?

We are removing the --ephemeral option. We launch the container and give it a relevant name, mydocker. This container remains after execution. We can see that after launching it, the container stays in the STOPPED state. We then start the container again with the command time incus start mydocker --console, and we can see that it takes a bit more than half a second to complete the execution.

$ incus launch docker:hello-world mydocker --console
Launching mydocker

Hello from Docker!
...
$ incus list mydocker
+----------+---------+------+------+-----------------+-----------+
|   NAME   |  STATE  | IPV4 | IPV6 |      TYPE       | SNAPSHOTS |
+----------+---------+------+------+-----------------+-----------+
| mydocker | STOPPED |      |      | CONTAINER (APP) | 0         |
+----------+---------+------+------+-----------------+-----------+
$ time incus start mydocker --console

Hello from Docker!
...

Hello from Docker!
...
real	0m0,638s
user	0m0,003s
sys	0m0,013s
$ 

Are the container images cached?

Yes, they are cached. When you launch a container for the first time, you can visibly see the downloading of the image components. Also, if you run incus image list, you can see the cached Docker.io images in the output.

Troubleshooting

Error: Can’t list images from OCI registry

You tried to list the images of the Docker Hub repository. Currently this is not supported. Technically it could be supported, though the repository has more than ten thousand images. Using incus image list without further parameters may not make sense, as the output would require downloading the full list of images from the repository. Searching for images does make sense, but if you can search, you should be able to list as well. I am not sure what the maintainer’s view on this is.

$ incus image list docker:
Error: Can't list images from OCI registry
$ 

As a workaround, you can simply locate the image name from the Website at https://hub.docker.com/search?q=&image_filter=official

Error: stat /proc/-1: no such file or directory

I ran many, many instances of Docker containers and in a few cases I got the above error. I do not know what it is, and I am adding it here in case someone manages to replicate it. It feels like some kind of race condition.

Failed getting remote image info: Image not found

You have configured Incus properly and you definitely have Incus version 6.3 (both the client and the server). But still, Incus cannot find any Docker image, not even hello-world.

This can happen if you are using a packaging of Incus that does not include the skopeo and umoci tools. If you are using the Zabbly distribution of Incus, these programs are included in the Incus packaging. Therefore, if you are using alternative packaging for Incus, you can manually install the versions of those tools provided by your Linux distribution.
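
On Debian or Ubuntu, for example, that would be something like the following (assuming the distribution packages carry the upstream tool names):

sudo apt install skopeo umoci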

on July 12, 2024 04:50 PM

July 11, 2024

As of July 11, 2024, all flavors of Ubuntu 23.10, including Ubuntu Studio 23.10, codenamed “Mantic Minotaur”, have reached end-of-life (EOL). There will be no more updates of any kind, including security updates, for this release of Ubuntu.

If you have not already done so, please upgrade to Ubuntu Studio 24.04 LTS via the instructions provided here. If you do not do so as soon as possible, you will lose the ability to upgrade without additional advanced configuration.

No single release of any operating system can be supported indefinitely, and Ubuntu Studio is no exception to this rule.

Regular Ubuntu releases, meaning those that are between the Long-Term Support releases, are supported for 9 months and users are expected to upgrade after every release with a 3-month buffer following each release.

Long-Term Support releases are identified by an even-numbered year-of-release and a month-of-release of April (04). Hence, the most recent Long-Term Support release is 24.04 (YY.MM = 2024.April), and the next Long-Term Support release will be 26.04 (2026.April). LTS releases for official Ubuntu flavors (unlike Desktop and Server, which are supported for five years) are supported for three years, meaning LTS users are expected to upgrade after every LTS release, with a one-year buffer.

on July 11, 2024 03:27 PM

July 07, 2024

One year of freelancing

Stéphane Graber

Introduction

It was exactly one year ago today that I left my day job as Engineering Manager of LXD at Canonical and went freelance. It’s been quite a busy year but things turned out better than I had hoped and I’m excited about year two!

Zabbly

Zabbly is the company I created for my freelance work. Over the year, a number of my personal projects were transferred over to being part of Zabbly, including the operation of my ASN (as399760.net), my datacenter co-location infrastructure and more.

Through Zabbly I offer a mix of by-the-hour consultation with varying prices depending on the urgency of the work (basic consultation, support, emergency support) as well as fixed-cost services, mostly related to Incus (infrastructure review, migration from LXD, remote or on-site trainings, …).

Other than Incus, Zabbly also provides up-to-date mainline kernel packages for Debian and Ubuntu and associated up-to-date ZFS packages. This grew out of the needs of a number of projects I work on, from being able to test Incus on recent Linux kernels to avoiding Ubuntu kernel bugs on my own and NorthSec’s servers.

Zabbly is also the legal entity for donations related to my open source work, currently supporting:

And lastly, Zabbly also runs a YouTube channel covering the various projects I’m involved with.
A lot of it is currently about Incus, but there is also occasional content on NorthSec or other side projects. The channel grew to a bit over 800 subscribers in the past 10 months or so.

Now, how well is all of that doing? Well enough that I could stop relying on my savings just a few months in and turn a profit by the end of 2023. Zabbly currently has around a dozen active customers from 7 countries and across 3 continents, with sizes ranging from individuals to large governmental agencies.

2024 has also been very good so far and while I’m not back to the level of income I had while at Canonical, I also don’t have to go through 4-5 hours of meetings a day and get to actually contribute to open source again, so I’ll gladly take the (likely temporary) pay cut!

Incus

A lot of my time in the past year has been dedicated to Incus.

This wasn’t exactly what I had planned when leaving Canonical.
I was expecting LXD to keep on going as a proper Open Source project as part of the Linux Containers community. But Canonical had other plans and so things changed a fair bit over the few months following my departure.

For those not aware, the rough timeline of what happened is:

So rather than contributing to LXD while working on some other new projects, a lot of my time has instead gone into setting up the Incus project for success.

And I think I’ve been pretty successful at that as we’re seeing a monthly user base growth (based on image server interactions) of around 25%. Incus is now natively available in most Linux distributions (Alpine, Arch Linux, Debian, Gentoo, Nix, Ubuntu and Void) with more coming soon (Fedora and EPEL).

Incus has 6 maintainers, most of whom were the original LXD maintainers.
We’ve seen over 100 individual contributors since Incus was forked from LXD, including around 20 students from the University of Texas at Austin who contributed to Incus as part of their virtualization class.

I’ve been acting as the release manager for Incus, running all the infrastructure behind the project, mentoring new contributors and reviewing a number of changes, while also contributing a number of new features myself, some sponsored by my customers, some just based on my personal interests.

A big milestone for Incus was its 6.0 LTS release as that made it suitable for production users.
Today we’re seeing around 40% of our users running the LTS release while the rest run the monthly releases.

On top of Incus itself, I’ve also gotten to both create the Incus Deploy project, which is a collection of Ansible playbooks and Terraform modules to make it easy to deploy Incus clusters, and contribute to both the Ansible Incus connection plugin and our Incus Terraform/OpenTofu provider.

The other Linux Containers projects

As mentioned in my recent post about the 6.0.1 LTS releases, the Linux Containers project tries to do coordinated LTS releases on our core projects. This currently includes LXC, LXCFS and Incus.

I didn’t have to do too much work myself on LXC and LXCFS, thanks to Aleksandr Mikhalitsyn from the Canonical LXD team who’s been dealing with most of the review and issues in both LXC and LXCFS alongside other long time maintainers, Serge Hallyn and Christian Brauner.

NorthSec

NorthSec is a yearly cybersecurity conference, CTF and training provider, usually happening in late May in Montreal, Canada. It’s been operating since 2013 and is now one of the largest on-site CTF events in the world along with having a pretty sizable conference too.

I’m the current VP of Infrastructure for the event and have been involved with it from the beginning, designing and running its infrastructure, first on a bunch of old donated hardware and then slowly modernizing that to the environment we have now with proper production hardware both at our datacenter and on-site during the event.

This year, other than transitioning everything from LXD to Incus, the main focus has been on upgrading the OS on our 6 physical servers and dozens of infrastructure containers and VMs from Ubuntu 20.04 LTS to Ubuntu 24.04 LTS.

At the same time, we also significantly reduced the complexity of our infrastructure by operating a single unified Incus cluster, switching to OpenID Connect and OpenFGA for access control, and automating even more of our yearly infrastructure with Ansible and Terraform.

Automation is really key for NorthSec, as it’s a non-profit organization with a lot of staffing changes every year: around 100 year-round contributors and then an additional 50 or so on-site volunteers!

I went over the NorthSec infrastructure in a couple of YouTube videos:

Conferences

I’ve cut down and focused my conference attendance a fair bit over this past year.
Part of it was for budgetary reasons, part of it because, with so many things going on, fitting in another couple of weeks of cross-country travel was difficult.

I decided to keep attending two main events: the Linux Plumbers Conference, where I co-organize the Containers and Checkpoint-Restore Micro-Conference, and FOSDEM, where I co-organize both the Containers and the Kernel devrooms.

With one event usually in September/October and the other in February, this provides two good opportunities to catch up with other developers and users, get to chat a bunch and make plans for the year.

I’m looking forward to catching up with folks at the upcoming Linux Plumbers Conference in Vienna, Austria!

What’s next

I’ve got quite a lot going on, so the remaining half of 2024 and first half of 2025 are going to be quite busy and exciting!

On the Incus front, we’ve got some exciting new features coming in, like the native OCI container support, more storage options, more virtual networking features, improved deployment tooling, full coverage of Incus features in Terraform/OpenTofu and even a small immutable OS image!

NorthSec is currently wrapping up a few last items related to its 2024 edition and then it will be time to set up the development infrastructure and get started on organizing 2025!

For conferences, as mentioned above, I’ll be in Vienna, Austria in September for Linux Plumbers and expect to be in Brussels again for FOSDEM in February.

There’s also more that I’m not quite ready to talk about, but expect some great Incus related news to come out in the next few months!

on July 07, 2024 12:00 PM

July 04, 2024

 

Critical OpenSSH Vulnerability (CVE-2024-6387): Please Update Your Linux

A critical security flaw (CVE-2024-6387) has been identified in OpenSSH, a program widely used for secure remote connections. This vulnerability could allow attackers to completely compromise affected systems (remote code execution).

Who is Affected?

Only specific versions of OpenSSH (8.5p1 to 9.7p1) running on glibc-based Linux systems are vulnerable. Newer versions are not affected.

What to Do?

  1. Update OpenSSH: Check your version by running ssh -V in your terminal. If you're using a vulnerable version (8.5p1 to 9.7p1), update immediately.

  2. Temporary Workaround (Use with Caution): Disabling the login grace timeout (setting LoginGraceTime=0 in sshd_config) can mitigate the risk, but be aware it increases susceptibility to denial-of-service attacks. A configuration sketch follows this list.

  3. Recommended Security Enhancement: Install fail2ban to prevent brute-force attacks. This tool automatically bans IPs with too many failed login attempts.
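
As a sketch of the temporary workaround from step 2, assuming the default sshd_config location (this mitigates the remote code execution risk but makes denial of service easier):

# /etc/ssh/sshd_config  (temporary mitigation only; revert after updating OpenSSH)
LoginGraceTime 0

# Then restart the SSH daemon (the service is "ssh" on Ubuntu, "sshd" on AlmaLinux/Rocky):
sudo systemctl restart ssh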

Optional: IP Whitelisting for Increased Security

Once you have fail2ban installed, consider allowing only specific IP addresses to access your server via SSH. This can be achieved using one of the tools below; example rules follow the list:

  • ufw for Ubuntu

  • firewalld for AlmaLinux or Rocky Linux
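
As a sketch, restricting SSH to a single trusted address might look like this; 203.0.113.10 is a placeholder documentation address, not a recommendation:

# Ubuntu (ufw)
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp

# AlmaLinux / Rocky Linux (firewalld)
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.10" port port="22" protocol="tcp" accept'
sudo firewall-cmd --reload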

Additional Resources

About Fail2ban

Fail2ban monitors log files like /var/log/auth.log and bans IPs with excessive failed login attempts. It updates firewall rules to block connections from these IPs for a set duration. Fail2ban is pre-configured to work with common log files and can be easily customized for other logs and errors.

Installation Instructions:

  • Ubuntu: sudo apt install fail2ban

  • AlmaLinux/Rocky Linux: sudo dnf install fail2ban
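
Once installed, a minimal local override enabling the SSH jail might look like the sketch below; the thresholds shown are illustrative, not recommendations from this post:

# /etc/fail2ban/jail.local (illustrative values)
[sshd]
enabled  = true
maxretry = 5
findtime = 600
bantime  = 3600

# Enable and start the service:
sudo systemctl enable --now fail2ban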


About DevSec Hardening Framework

The DevSec Hardening Framework is a set of tools and resources that helps automate the process of securing your server infrastructure. It addresses the challenges of manually hardening servers, which can be complex, error-prone, and time-consuming, especially when managing a large number of servers. The framework integrates with popular infrastructure automation tools like Ansible, Chef, and Puppet. It provides pre-configured modules that automatically apply secure settings to your operating systems and services such as OpenSSH, Apache and MySQL. This eliminates the need for manual configuration and reduces the risk of errors.
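
As an example of the Ansible integration, the framework’s roles can be installed from Ansible Galaxy; the collection name below is an assumption based on the project’s naming, so check the official documentation for the current name:

ansible-galaxy collection install devsec.hardening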


Prepared by LinuxMalaysia with the help of Google Gemini


5 July 2024

 

In Google Doc Format 

 

https://docs.google.com/document/d/e/2PACX-1vTSU27PLnDXWKjRJfIcjwh9B0jlSN-tnaO4_eZ_0V5C2oYOPLLblnj3jQOzCKqCwbnqGmpTIE10ZiQo/pub 



on July 04, 2024 09:42 PM

July 02, 2024

My Debian contributions this month were all sponsored by Freexian.

  • I switched man-db and putty to Rules-Requires-Root: no, thanks to a suggestion from Niels Thykier.
  • I moved some files in pcmciautils as part of the /usr move.
  • I upgraded libfido2 to 1.15.0.
  • I made an upstream release of multipart 0.2.5.
  • I reviewed some security-update patches to putty.
  • I packaged yubihsm-connector, yubihsm-shell, and python-yubihsm.
  • openssh:
    • I did a bit more planning for the GSS-API package split, though decided not to land it quite yet to avoid blocking other changes on NEW queue review.
    • I removed the user_readenv option from PAM configuration (#1018260), and prepared a release note.
  • Python team:
    • I packaged zope.deferredimport, needed for a new upstream version of python-persistent.
    • I fixed some incompatibilities with pytest 8: ipykernel and ipywidgets.
    • I fixed a couple of RC or soon-to-be-RC bugs in khard (#1065887 and #1069838), since I use it for my address book and wanted to get it back into testing.
    • I fixed an RC bug in python-repoze.sphinx.autointerface (#1057599).
    • I sponsored uploads of python-channels-redis (Dale Richards) and twisted (Florent ‘Skia’ Jacquet).
    • I upgraded babelfish, django-favicon-plus-reloaded, dnsdiag, flake8-builtins, flufl.lock, ipywidgets, jsonpickle, langtable, nbconvert, requests, responses, partd, pytest-mock, python-aiohttp (fixing CVE-2024-23829, CVE-2024-23334, CVE-2024-30251, and CVE-2024-27306), python-amply, python-argcomplete, python-btrees, python-cups, python-django-health-check, python-fluent-logger, python-persistent, python-plumbum, python-rpaths, python-rt, python-sniffio, python-tenacity, python-tokenize-rt, python-typing-extensions, pyupgrade, sphinx-copybutton, sphinxcontrib-autoprogram, uncertainties, zodbpickle, zope.configuration, zope.proxy, and zope.security to new upstream versions.

You can support my work directly via Liberapay.

on July 02, 2024 12:02 PM

June 21, 2024

As the tech world comes together to celebrate FreeBSD Day 2024, we are thrilled to bring you an exclusive interview with none other than Beastie, the iconic mascot of BSD! In a rare and exciting appearance, Beastie joins Kim McMahon to share insights about their journey, their role in the BSD community, and some fun personal preferences. Here’s a sneak peek into the life of the beloved mascot that has become synonymous with BSD.

From Icon to Legend: How Beastie Became the BSD Mascot

Beastie, with their distinct and endearing devilish charm, has been the face of BSD for decades. But how did they land this coveted role? During the interview, Beastie reveals that their journey began back in the early days of BSD. The character was originally drawn by John Lasseter of Pixar fame, and quickly became a symbol of the BSD community’s resilience and innovation. Beastie’s playful yet formidable appearance captured the spirit of BSD, making them an instant hit among developers and users alike.

A Day in the Life of Beastie

What does a typical day look like for the BSD mascot? Beastie shares that their role goes beyond just being a symbol. They actively participate in community events, engage with developers, and even help in promoting BSD at various conferences around the globe. Beastie’s presence is a source of inspiration and motivation for the BSD community, reminding everyone of the project’s rich heritage and vibrant future.

Beastie’s Favorite Tools and Editors

No interview with a tech mascot would be complete without delving into their favorite tools. Beastie is an advocate of keeping things simple and efficient. When asked about their preferred text editor, Beastie enthusiastically endorsed Vim, praising its versatility and powerful features. They also shared their admiration for the classic Unix philosophy, which aligns perfectly with the minimalist yet powerful nature of Vim.

Engaging with the BSD Community

Beastie’s role is not just about representation; it’s about active engagement. They spoke about the importance of community in the BSD ecosystem and how it has been pivotal in driving the project forward. From organizing hackathons to participating in mailing lists, Beastie is deeply involved in fostering a collaborative and inclusive environment. They highlighted the incredible contributions of the BSD community, acknowledging that it’s the collective effort that makes BSD a robust and reliable operating system.

Looking Ahead: The Future of BSD

As we look to the future, Beastie remains optimistic about the path ahead for BSD. They emphasized the ongoing developments and the exciting projects in the pipeline that promise to enhance the BSD experience. Beastie encouraged new users and seasoned developers alike to explore BSD, contribute to its growth, and be a part of its dynamic community.

Join the Celebration

To mark FreeBSD Day 2024, the community is hosting a series of events, including workshops, Q&A sessions, and more. Beastie’s interview with Kim McMahon is just one of the highlights. Be sure to tune in and catch this rare glimpse into the life of BSD’s beloved mascot.

Final Thoughts

Beastie’s interview is a testament to the enduring legacy and vibrant community of BSD. As we celebrate FreeBSD Day 2024, let’s take a moment to appreciate the contributions of everyone involved and look forward to an exciting future for BSD.

Don’t miss out on this exclusive interview—check it out on YouTube and join the celebration of FreeBSD Day 2024!

Watch the interview here

The post Celebrating FreeBSD Day 2024: An Exclusive Interview with Beastie appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

on June 21, 2024 06:03 PM

June 18, 2024

 

Download And Use latest Version Of Nginx Stable

To ensure you receive the latest security updates and bug fixes for Nginx, configure your system's repository specifically for it. Detailed instructions on how to achieve this can be found on the Nginx website. Setting up the repository allows your system to automatically download and install future Nginx updates, keeping your web server running optimally and securely.

Visit these websites for information on how to configure your repository for Nginx.

https://nginx.org/en/linux_packages.html

https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/ 

Installing Nginx on different Linux distributions

Example from https://docs.bunkerweb.io/latest/integrations/#linux 

Ubuntu

sudo apt install -y curl gnupg2 ca-certificates lsb-release debian-archive-keyring && \
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
| sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null && \
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/debian `lsb_release -cs` nginx" \
| sudo tee /etc/apt/sources.list.d/nginx.list

# Latest Stable (pick either latest stable or by version)

sudo apt update && \
sudo apt install -y nginx

# By version (pick one only, latest stable or by version)

sudo apt update && \
sudo apt install -y nginx=1.24.0-1~$(lsb_release -cs)

AlmaLinux / Rocky Linux (Redhat)

Create the following file at /etc/yum.repos.d/nginx.repo

[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

# Latest Stable (pick either latest stable or by version)

sudo dnf install nginx

# By version (pick one only, latest stable or by version)

sudo dnf install nginx-1.24.0
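
Whichever variant you install, you can verify the version and enable the service afterwards; these are generic commands, not taken from the pages linked above:

nginx -v
sudo systemctl enable --now nginx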

Nginx Fork (for reference only - year 2024)

https://thenewstack.io/freenginx-a-fork-of-nginx/ 

https://github.com/freenginx/ 

Use this Web tool to configure nginx.

https://www.digitalocean.com/community/tools/nginx

https://github.com/digitalocean/nginxconfig.io 

Example

https://www.digitalocean.com/community/tools/nginx?domains.0.server.domain=songketmail.linuxmalaysia.lan&domains.0.server.redirectSubdomains=false&domains.0.https.hstsPreload=true&domains.0.php.phpServer=%2Fvar%2Frun%2Fphp%2Fphp8.2-fpm.sock&domains.0.logging.redirectAccessLog=true&domains.0.logging.redirectErrorLog=true&domains.0.restrict.putMethod=true&domains.0.restrict.patchMethod=true&domains.0.restrict.deleteMethod=true&domains.0.restrict.connectMethod=true&domains.0.restrict.optionsMethod=true&domains.0.restrict.traceMethod=true&global.https.portReuse=true&global.https.sslProfile=modern&global.https.ocspQuad9=true&global.https.ocspVerisign=true&global.security.limitReq=true&global.security.securityTxt=true&global.logging.errorLogEnabled=true&global.logging.logNotFound=true&global.tools.modularizedStructure=false&global.tools.symlinkVhost=false 

Harisfazillah Jamel - LinuxMalaysia - 20240619
on June 18, 2024 10:18 PM

June 16, 2024

The previous release of uCareSystem, version 24.05.0, introduced enhanced maintenance and cleanup capabilities for flatpak packages. The uCareSystem 24.06.0 has a special treat for desktop users 🙂 This new version includes: The people who supported the previous development cycle (version 24.05) with their generous donations and are mentioned in the uCareSystem app: Where […]
on June 16, 2024 08:49 PM

June 01, 2024

I’ve a self-hosted Nextcloud installation which is frankly a pain: there’s a good chance updates will break everything.

Plesk is a fairly intuitive interface for self-hosting but is best described as non-standard Ubuntu – it handles PHP oddly in particular. Try running any of the php occ commands in its built-in SSH terminal and you’ll experience an exercise in frustration.

It’s so much easier to ssh from an Ubuntu terminal and run occ commands directly – giving a clear error message that you can actually do something with.

So it’s the Circles app; let’s disable it:

php occ app:disable circles

Then repair the installation:

php occ maintenance:repair

Finally turn off maintenance mode:

php occ maintenance:mode --off

I don’t even use that app but we’re back.
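
For reference, on a more conventional (non-Plesk) setup the same sequence is usually run as the web server user from the Nextcloud directory; the path and user below are assumptions, so adjust them for your install:

cd /var/www/nextcloud
sudo -u www-data php occ app:disable circles
sudo -u www-data php occ maintenance:repair
sudo -u www-data php occ maintenance:mode --off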

on June 01, 2024 03:37 PM

May 24, 2024

In my previous blog, I explored The New APT 3.0 solver. Since then I have been at work in the test suite making tests pass and fixing some bugs.

You see, for all intents and purposes, the new solver is a very stupid, naive DPLL SAT solver (it just so happens we don’t actually have any pure literals in there). We can control it in a bunch of ways:

  1. We can mark packages as “install” or “reject”
  2. We can order actions/clauses. When backtracking the action that came later will be the first we try to backtrack on
  3. We can order the choices of a dependency - we try them left to right.

This is about all that we really want to do; we can’t, if we reach a conflict, go and say “oh but this conflict was introduced by that upgrade, and it seems more important, so let’s not backtrack on the upgrade request but on this dependency instead.”

This forces us to think about lowering the dependency problem into this form, such that not only do we get formally correct solutions, but also semantically correct ones. This is nice because we can apply a systematic way to approach the issue rather than introducing ad-hoc rules in the old solver which had a “which of these packages should I flip the opposite way to break the conflict” kind of thinking.

Now our test suite has a whole bunch of these semantics encoded in it, and I’m going to share some problems and ideas for how to solve them. I can’t wait to fix these and the error reporting and then turn it on in Ubuntu and later Debian (the defaults change is a post-trixie change, let’s be honest).

apt upgrade is hard

The apt upgrade command implements a safe version of dist-upgrade that essentially calculates the dist-upgrade and then undoes anything that would cause a package to be removed, but it (unlike its apt-get counterpart) allows the solver to install new packages.

Now, consider the following package is installed:

X Depends: A (= 1) | B

An upgrade from A=1 to A=2 is available. What should happen?

The classic solver would choose to remove X in a dist-upgrade and then upgrade A, so its answer is quite clear: keep back the upgrade of A.

The new solver however sees two possible solutions:

  1. Install B to satisfy X Depends A (= 1) | B.
  2. Keep back the upgrade of A

Which one does it pick? This depends on the order in which it sees the upgrade action for A and the dependency, as it will backjump chronologically. So

  1. If it gets to the dependency first, it marks A=1 for install to satisfy A (= 1). Then it gets to the upgrade request, which is just A Depends A (= 2) | A (= 1) and sees it is satisfied already and is content.

  2. If it gets to the upgrade request first, it marks A=2 for install to satisfy A (= 2). Then later it gets to X Depends: A (= 1) | B, sees that A (= 1) is not satisfiable, and picks B.

We have two ways to approach this issue:

  1. We always order upgrade requests last, so they will be kept back in case of conflicting dependencies
  2. We require that, for apt upgrade a currently satisfied dependency must be satisfied by currently installed packages, hence eliminating B as a choice.

Recommends are hard too

See, if you have an X Recommends: A (= 1) and a new version of A, A (= 2), the solver currently will silently break the Recommends in some cases.

But let’s explore what the behavior of a X Recommends: A (= 1) in combination with an available upgrade of A (= 2) should be. We could say the rule should be:

  • An upgrade should keep back A instead of breaking the Recommends
  • A dist-upgrade should either keep back A or remove X (if it is obsolete)

This essentially leaves us the same choices as for the previous problem, but with an interesting twist. We can change the ordering (and we already did), but we could also introduce a new rule, “promotions”:

A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently satisfied, must continue to be satisfied, that is, it effectively is promoted to a Depends.

This neatly solves the problem for us. We will never break Recommends that are satisfied.

Likewise, we already have a Recommends demotion rule:

A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently unsatisfied, will not be further evaluated (it is treated like a Suggests is in the default configuration).

Whether we should be allowed to break Suggests with our decisions or not (the old autoremover did not, for instance) is a different decision. Should we promote currently satisfied Suggests to Depends as well? Should we follow currently satisfied Suggests so the solver sees them and doesn’t autoremove them, but treat them as optional?

tightening of versioned dependencies

Another case of versioned dependencies with alternatives that has complex behavior is something like

X Depends: A (>= 2) | B
X Recommends: A (>= 2) | B

In both cases, installing X should upgrade an installed A < 2 rather than install B. But a naive SAT solver might not. If your request to keep A installed is encoded as A (= 1) | A (= 2), then it first picks A (= 1). When it sees the Depends/Recommends it will switch to B.

We can solve this again as in the previous example by ordering the “keep A installed” requests after any dependencies. Notably, we will enqueue the common dependencies of all A versions first before selecting a version of A, so something may select a version for us.

version narrowing instead of version choosing

A different approach to dealing with the issue of version selection is to not select a version until the very last moment. So instead of selecting a version to satisfy A (>= 2) we instead translate

Depends: A (>= 2)

into two rules:

  1. The package selection rule:

     Depends: A
    

    This ensures that any version of A is installed (i.e. it adds a version choice clause, A (= 1) | A (= 2) in an example with two versions for A).

  2. The version narrowing rule:

     Conflicts: A (<< 2)
    

    This outright would reject a choice of A (= 1).

So now we have 3 kinds of clauses:

  1. package selection
  2. version narrowing
  3. version selection

If we process them in that order, we should surely be able to find the solution that best matches the semantics of our Debian dependency model, i.e. selecting earlier choices in a dependency before later choices in the face of version restrictions.

This still leaves one issue: What if our maintainer did not use Depends: A (>= 2) | B but e.g. Depends: A (= 3) | B | A (= 2). He’d expect us to fall back to B if A (= 3) is not installable, and not to A (= 2). But we’d like to enqueue A and reject all choices other than 3 and 2. I think it’s fair to say: “Don’t do that, then” here.

Implementing strict pinning correctly

APT knows a single candidate version per package; this makes the solver relatively deterministic: It will only ever pick the candidate, or an installed version. This also happens to significantly reduce the search space, which is good - less backtracking. An up-to-date system will only ever have one version per package that can be installed, so we never actually have to choose versions.

But of course, APT allows you to specify a non-candidate version of a package to install, for example:

apt install foo/oracular-proposed

The way this works is that the core component of the previous solver, the pkgDepCache, maintains what essentially amounts to an overlay of the policy that you could see with apt-cache policy.

The solver currently however validates allowed version choices against the policy directly, and hence finds these versions are not allowed and craps out. This is an interesting problem because the solver should not be dependent on the pkgDepCache as the pkgDepCache initialization (Building dependency tree...) accounts for about half of the runtime of APT (until the Y/n prompt) and I’d really like to get rid of it.

But currently the frontend does go via the pkgDepCache. It marks the packages in there, building up what you could call a transaction, and then we translate it to the new solver, and once it is done, it translates the result back into the pkgDepCache.

The current implementation of “allowed version” is implemented by reducing the search space, i.e. for every dependency, we outright ignore any non-allowed versions. So if you have a version 3 of A that is ignored, a Depends: A would be translated into A (= 2) | A (= 1).

However this has two disadvantages. (1) It means if we show you why A could not be installed, you don’t even see A (= 3) in the list of choices and (2) you would need to keep the pkgDepCache around for the temporary overrides.

So instead of actually enforcing the allowed version rule by filtering, a more reasonable model is that we apply the allowed version rule by just marking every other version as not allowed when discovering the package in the from-depcache translation layer. This doesn’t really increase the search space either, but it solves both problems: making overrides work and giving you a reasonable error message that lists all versions of A.

pulling up common dependencies to minimize backtracking cost

One of the common issues we have is that when we have a dependency group

`A | B | C | D`

we try them in order, and if one fails, we undo everything it did, and move on to the next one. However, this isn’t perhaps the best choice of operation.

I explained before that one thing we do is queue the common dependencies of a package (i.e. dependencies shared in all versions) when marking a package for install, but we don’t do this here: We have already lowered the representation of the dependency group into a list of versions, so we’d need to extract the package back out of it.

This can of course be done, but there may be a more interesting solution to the problem, in that we simply enqueue all the common dependencies. That is, we add n backtracking levels for n possible solutions:

  1. We enqueue the common dependencies of all possible solutions deps(A)&deps(B)&deps(C)&deps(D)
  2. We decide (adding a decision level) not to install D right now and enqueue deps(A)&deps(B)&deps(C)
  3. We decide (adding a decision level) not to install C right now and enqueue deps(A)&deps(B)
  4. We decide (adding a decision level) not to install B right now and enqueue A

Now if we need to backtrack from our choice of A we hopefully still have a lot of common dependencies queued that we do not need to redo. While we have more backtracking levels, each backtracking level would be significantly cheaper, especially if you have cheap backtracking (which admittedly we do not have, yet anyway).

The caveat though is: It may be pretty expensive to find the common dependencies. We need to iterate over all dependency groups of A and see if they are in B, C, and D, so we have a complexity of roughly

#A * (#B+#C+#D)

Each dependency group we need to check, i.e. “is X|Y in B”, meanwhile has linear cost: we need to compare the memory content of two pointer arrays containing the list of possible versions that solve the dependency group. This means that X|Y and Y|X are different dependencies of course, but that is to be expected – they are. But any dependency of the same order will have the same memory layout.

So really the cost is roughly N^4. This isn’t nice.

You can apply various heuristics here on how to improve that, or you can even apply binary logic:

  1. Enqueue common dependencies of A|B|C|D
  2. Move into the left half, enqueue common dependencies of A|B
  3. Again divide and conquer and select A.

This has a significant advantage in long lists of choices, and also in the common case, where the first solution should be the right one.

Or again, if you enqueue the package and a version restriction instead, you already get the common dependencies enqueued for the chosen package at least.

on May 24, 2024 08:57 AM

May 20, 2024

OR...

Aaron Rainbolt

Contrary to what you may be thinking, this is not a tale of an inexperienced coder pretending to know what they’re doing. I have something even better for you.

It all begins in the dead of night, at my workplace. In front of me is a typical programmer’s desk - two computers, three monitors (one of which isn’t even plugged in), a mess of storage drives, SD cards, 2FA keys, and an arbitrary RPi 4, along with a host of items that most certainly don’t belong on my desk, and a tangle of cables that would give even a rat a migraine. My dev laptop is sitting idle on the desk, while I stare intently at the screen of a system running a battery of software tests. In front of me are the logs of a failed script run.

Generally when this particular script fails, it gives me some indication as to what went wrong. There are thorough error catching measures (or so I thought) throughout the code, so that if anything goes wrong, I know what went wrong and where. This time though, I’m greeted by something like this:

$ systemctl status test-sh.service
test-sh.service - does testing things
...
May 20 23:00:00 desktop-pc systemd[1]: Starting test-sh.service - does testing things
May 20 23:00:00 desktop-pc systemd[1]: test-sh.service: Failed with result ‘exit-code’.
May 20 23:00:00 desktop-pc systemd[1]: Failed to start test-sh.service.

I stare at the screen in bewilderment for a few seconds. No debugging info, no backtraces, no logs, not even an error message. It’s as if the script simply decided it needed some coffee before it would be willing to keep working this late at night. Having heard the tales of what happens when you give a computer coffee, I elected to try a different approach.

$ vim /usr/bin/test-sh
1 #!/bin/bash
2 #
3 # Copyright 2024 ...
4 set -u;
5 set -e;

Before I go into what exactly is wrong with this picture, I need to explain a bit about how Bash handles the ultimate question of life, “what is truth?”

(RED ALERT: I do not know if I’m correct about the reasoning behind the design decisions I talk about in the rest of this article. Don’t use me as a reference for why things work like this, and please correct me if I’ve botched something. Also, a lot of what I describe here is simplified, so don’t be surprised if you notice or discover that things are a bit more complex in reality than I make them sound like here.)

Bash, as many of you probably know, is primarily a “glue” language - it glues applications to each other, it glues the user to the applications, and it glues one’s sanity to the ceiling, far out of the user’s reach. As such, it features a bewildering combination of some of the most intuitive and some of the least intuitive behaviors one can dream up, and the handling of truth and falsehood is one of these bewildering things.

Every command you run in Bash reports back whether or not what it did “worked”. (“Worked” is subjective and depends on the command, but for the most part if a command says “It worked”, you can trust that it did what you told it to, at least mostly.) This is done by means of an “exit code”, which is nothing more than a number between 0 and 255. If a program exits and hands the shell an exit code of 0, it usually means “it worked”, whereas a non-zero exit code usually means “something went wrong”. (This makes sense if you know a bit about how programs written in C work - if your program is written to just “do things” and then exit, it will default to exiting with code zero.)

Because zero = good and non-zero = not good, it makes sense to treat zero as meaning “true” and non-zero as meaning “false”. That’s exactly what Bash does - if you do something like “if command; then commandIfTrue; else commandIfFalse; fi”, Bash will run “commandIfTrue” if “command” exits with 0, and will run “commandIfFalse” if “command” exits with 1 or higher.

Now since Bash is a glue language, it has to be able to handle it if a command runs and fails. This can be done with some amount of difficulty by testing (almost) every command the script runs, but that can be quite tedious. There’s a (generally) easier way however, which is to tell the script to immediately exit if any command exits with a non-zero exit code. This is done by using the command “set -e” at or near the top of the script. Once “set -e” is active, any command that fails will cause the whole script to stop.
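
As a tiny standalone illustration of “set -e” (not from the script in question), the following stops at the failing command and never prints the last line:

#!/bin/bash
set -e
echo "before"
false             # exits non-zero, so the script stops right here
echo "after"      # never reached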

So back to my script. I’m using “set -e” so that if anything goes wrong, the script stops. What could go wrong other than a failed command? To answer that question, we have to take a look at how some things work in C.

C is a very different language than Bash. Whereas Bash is designed to take a bunch of pieces and glue them together, C is designed to make the pieces themselves. You can think of Bash as being a glue gun and C as being a 3d printer. As such, C does not concern itself nearly as much with things like return codes and exiting when a command fails. It focuses on taking data and doing stuff with it.

Since C is more data- and algorithm-oriented, true and false work significantly differently here. C sees 0 as meaning “none, empty, all bits set to 0, etc.” and thus treats it as meaning “false”. Any number greater than 0 has a value, and can be treated as “on” or “true”. An astute reader will notice this is exactly the opposite of how Bash works, where 0 is true and non-zero is false. (In my opinion this is a rather lamentable design decision, but sadly these behaviors have been standardized for longer than I’ve been alive, so there’s not much point in trying to change them. But I digress.)

C also of course has features for doing math, called “operators”. One of the most common operators is the assignment operator, “=”. The assignment operator’s job is to take whatever you put on the right side of it, and store it in whatever you put on the left side. If you say “a = 0”, the value “0” will be stored in the variable “a” (assuming things work right). But the assignment operator has a trick up its sleeve - not only does it assign the value to the variable, it also returns the value. Basically what that means is that the statement “a = 0” spits out an extra value that you can do things with. This allows you to do things like “a = b = 0”, which will assign 0 to “b”, return zero, and then assign that returned zero to "a”. (The assignment of the second zero to “a” also returns a zero, but that simply gets ignored by the program since there’s nothing to do with it.)

You may be able to see where I’m going with this. Assigning a value to a variable also returns that value… and 0 means “false”… so “a = 0” succeeds, but also returns what is effectively “false”. That means if you do something like “if (a = 0) { ... } else { explodeComputer(); }”, the computer will explode. “a = 0” returns “false”, thus the “if” condition does not run and the “else” condition does. (Coincidentally, this is also a good example of the “world’s last programming bug” - the comparison operation in C is “==”, which is awfully easy to mistype as the assignment operator, “=”. Using an assignment operator in an “if” statement like this will almost always result in the code within the “if” being executed, as the value being stored in the variable will usually be non-zero and thus will be seen as “true” by the “if” statement. This also corrupts the variable you thought you were comparing something to. Some fear that a programmer with access to nuclear weapons will one day write something like “if (startWar = 1) { destroyWorld(); }” and thus the world will be destroyed by a missing equals sign.)

“So what,” you say. “Bash and C are different languages.” That’s true, and in theory this would mean that everything here is fine. Unfortunately theory and practice are the same in theory but much different in practice, and this is one of those instances where things go haywire because of weird differences like this. There’s one final piece of the puzzle to look at first though - how to do math in Bash.

Despite being a glue language, Bash has some simple math capabilities, most of which are borrowed from C. Yes, including the behavior of the assignment operator and the values for true and false. When you want to do math in Bash, you write “(( do math here... ))”, and everything inside the double parentheses is evaluated. Any assignment done within this mode is executed as expected. If I want to assign the number 5 to a variable, I can do “(( var = 5 ))” and it shall be so.

But wait, what happens with the return value of the assignment operator?

Well, take a guess. What do you think Bash is going to do with it?

Let’s look at it logically. In C (and in Bash’s math mode), 0 is false and non-zero is true. In Bash, 0 is true and non-zero is false. Clearly if whatever happen within math mode fails and returns false (0), Bash should not misinterpret this as true! Things like “(( 5 == 6 ))” shouldn’t be treated as being true, right? So what do we do with this conundrum? Easy solution - convert the return value to an exit code so that its semantics are retained across the C/Bash barrier. If the return value of the math mode statement is false (0), it should be converted to Bash’s concept of false (non-zero), therefore the return value of 0 is converted to an exit code of 1. On the other hand, if the return value of the math mode statement is true (non-zero), it should be converted to Bash’s concept of true (0), therefore the return value of anything other than 0 is converted to an exit code of 0. (You probably see the writing on the wall at this point. Spoiler, my code was weighed in the balances and found wanting.)
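
You can watch this conversion happen (without “set -e” in the picture yet) by checking “$?” right after a math mode statement:

$ (( x = 5 )); echo $?    # assignment returns 5 (true), so the exit code becomes 0
0
$ (( x = 0 )); echo $?    # assignment returns 0 (false), so the exit code becomes 1
1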

So now we can put all this nice, logical, sensible behavior together and make a glorious mess with it. Guess what happens if you run “(( var = 0 ))” in a script where “set -e” is enabled.

  • “0” is assigned to “var”.

  • The statement returns 0.

  • Bash dutifully converts that to a 1 (false/failure).

  • Bash now sees the command as having failed.

  • “set -e” says the script should immediately stop if anything fails.

  • The script crashes.

You can try this for yourself - pop open a terminal and run “set -e; (( var = 0 ));” and watch in awe as your terminal instantly closes (or otherwise shows an indication that Bash has exited).

So back to the code. In my script, I have a function that helps with generating random numbers within any specified bounds. Basically it just grabs the value of “$RANDOM” (which is a special variable in Bash that always returns an integer between 0 and 32767) and does some manipulations on it so that it becomes a random number between a “lower bound” and an “upper bound” parameter. In the guts of that function’s code I have many “math mode” statements for getting those numbers into shape. Those statements include variable assignments, and those variable assignments were throwing exit codes into the script. I had written this before enabling “set -e”, so everything was fine before, but now “set -e” was enabled and Bash was going to enforce it as ruthlessly as possible.

While I will never know what line of code triggered the failure, it’s a fairly safe bet that the culprit was:

88 (( _val = ( _val % ( _adj_upper_bound + 1 ) ) ));

This basically takes whatever is in “_val”, divides it by “_adj_upper_bound + 1”, and then assigns the remainder of that operation to “_val”. This makes sure that “_val” is lower than “_adj_upper_bound + 1”. (This is typically known as “getting the modulus”, and the “%” operator here is the “modulo operator”. For the math people reading this, don’t worry, I did the requisite gymnastics to ensure this code didn’t have modulo bias.) If “_val” happens to be an exact multiple of “_adj_upper_bound + 1”, the code on the right side of the assignment operator will evaluate to 0, which will become an exit code of 1, thus exploding my script because of what appeared to be a failed command.

Sigh.

So there’s the problem. What’s the solution? Turns out it’s pretty simple. Among Bash’s feature set, there is the profoundly handy “logical or operator”, “||”. This operator lets us say “if this OR that is true, return true.” In other words, “Run whatever’s on the left hand of the ||. If it exits 0, move on. If it exits non-zero, run whatever’s on the right hand of the ||. If it exits 0, move on and ignore the earlier failure. Only return non-zero if both commands fail.” There’s also a handy command in Bash called “true” that does nothing except for give an exit code of 0. That means that if you ever have a line of code in Bash that is liable to exit non-zero but it’s no big deal if it does, you can just slap an “|| true” on the end and it will magically make everything work by pretending that nothing went wrong. (If only this worked in real life!) I proceeded to go through and apply this bandaid to every standalone math mode call in my script, and it now seems to be behaving itself correctly again. For now anyway.
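
To make the bandaid concrete, here’s a hypothetical sketch of such a random-number helper with “|| true” applied; the function and variable names are mine rather than the author’s, and it skips the modulo-bias handling the author mentions:

# Hypothetical sketch, not the actual script; modulo bias is ignored here.
rand_range() {
    local _lower=$1 _upper=$2 _val=$RANDOM _adj_upper_bound
    (( _adj_upper_bound = _upper - _lower )) || true
    (( _val = ( _val % ( _adj_upper_bound + 1 ) ) )) || true    # the line that can "fail" under set -e
    (( _val += _lower )) || true
    echo "$_val"
}

# Example: a number between 10 and 20 inclusive
rand_range 10 20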

tl;dr: Faking success is sometimes a perfectly valid way to solve a computing problem. Just don’t live the way you code and you’ll be alright.

on May 20, 2024 08:06 AM

May 14, 2024

The new APT 3.0 solver

Julian Andres Klode

APT 2.9.3 introduces the first iteration of the new solver, codenamed solver3 and now available with the --solver 3.0 option. The new solver works fundamentally differently from the old one.

How does it work?

Solver3 is a fully backtracking dependency solving algorithm that defers choices to as late as possible. It starts with an empty set of packages, then adds the manually installed packages, and then installs packages automatically as necessary to satisfy the dependencies.

Deferring the choices is implemented multiple ways:

First, all install requests recursively mark dependencies with a single solution for install, and any packages that are being rejected due to conflicts or user requests will cause their reverse dependencies to be transitively marked as rejected, provided their OR group cannot be solved by a different package.

Second, any dependency with more than one choice is pushed to a priority queue that is ordered by the number of possible solutions, such that we resolve a|b before a|b|c.

Not just by the number of solutions, though. One important point to note is that optional dependencies, that is, Recommends, always sort after mandatory dependencies. Do note that Recommended packages do not “nest” in backtracking - dependencies of a Recommended package themselves are not optional, so they will have to be resolved before the next Recommended package is seen in the queue.

Another important step in deferring choices is extracting the common dependencies of a package across its versions and then installing them before we even decide which of its versions we want to install - one of the dependencies might cycle back to a specific version after all.

Decisions about packages are recorded at a certain decision level; if we reach a conflict we backtrack to the previous decision level, mark the decision we made (install X) in the inverse (DO NOT INSTALL X), reset all the state of all decisions made at the higher level, and restore any dependencies that are no longer resolved to the work queue.

Comparison to SAT solver design

If you have studied SAT solver design, you’ll find that essentially this is a DPLL solver without pure literal elimination. A pure literal elimination phase would not work for a package manager: first, negative pure literals (packages that everything conflicts with) do not exist, and positive pure literals (packages nothing conflicts with) we do not want to mark for install - we want to install as little as possible (well, subject to policy).

As part of the solving phase, we also construct an implication graph, albeit a partial one: The first package installing another package is marked as the reason (A -> B), the same thing for conflicts (not A -> not B).

Once we have added the ability to have multiple parents in the implication graph, it stands to reason that we can also implement the much more advanced method of conflict-driven clause learning; where we do not jump back to the previous decision level but exactly to the decision level that caused the conflict. This would massively speed up backtracking.

What changes can you expect in behavior?

The most striking difference to the classic APT solver is that solver3 always keeps manually installed packages around, it never offers to remove them. We will relax that in a future iteration so that it can replace packages with new ones, that is, if your package is no longer available in the repository (obsolete), but there is one that Conflicts+Replaces+Provides it, solver3 will be allowed to install that and remove the other.

Implementing that policy is rather trivial: We just need to queue obsolete | replacement as a dependency to solve, rather than mark the obsolete package for install.

Another critical difference is the change in the autoremove behavior: The new solver currently only knows the strongest dependency chain to each package, and hence it will not keep around any packages that are only reachable via weaker chains. A common example is when gcc-<version> packages accumulate on your system over the years. They all have Provides: c-compiler and the libtool Depends: gcc | c-compiler is enough to keep them around.

New features

The new option --no-strict-pinning instructs the solver to consider all versions of a package and not just the candidate version. For example, you could use apt install foo=2.0 --no-strict-pinning to install version 2.0 of foo and upgrade - or downgrade - packages as needed to satisfy foo=2.0 dependencies. This mostly comes in handy in use cases involving Debian experimental or the Ubuntu proposed pockets, where you want to install a package from there, but try to satisfy from the normal release as much as possible.

The implication graph building allows us to implement an apt why command that, while not as nicely detailed as aptitude’s, at least tells you the exact reason why a package is installed. It will only show the strongest dependency chain at first of course, since that is what we record.

What is left to do?

At the moment, error information is not stored across backtracking in any way, but we generally will want to show you the first conflict we reach as it is the most natural one; or all conflicts. Currently you get the last conflict which may not be particularly useful.

Likewise, errors currently are just rendered as implication graphs of the form [not] A -> [not] B -> ..., and we need to put in some work to present those nicely.

The test suite is not passing yet, I haven’t really started working on it. A challenge is that most packages in the test suite are manually installed as they are mocked, and the solver now doesn’t remove those.

We plan to implement the replacement logic such that foo can be replaced by foo2 Conflicts/Replaces/Provides foo without needing to be automatically installed.

Improving the backtracking to be non-chronological conflict-driven clause learning would vastly enhance our backtracking performance. Not that it seems to be an issue right now in my limited testing (mostly noble 64-bit-time_t upgrades). A lot of that complexity you have normally is not there because the manually installed packages and resulting unit propagation (single-solution Depends/Reverse-Depends for Conflicts) already ground us fairly far in what changes we can actually make.

Once all the stuff has landed, we need to start rolling it out and gather feedback. On Ubuntu I’d like automated feedback on regressions (running solver3 in parallel, checking if result is worse and then submitting an error to the error tracker), on Debian this could just be a role email address to send solver dumps to.

At the same time, we can also incrementally start rolling this out. Like phased updates in Ubuntu, we can also roll out the new solver as the default to 10%, 20%, 50% of users before going to the full 100%. This will allow us to capture regressions early and fix them.

on May 14, 2024 11:26 AM

May 12, 2024

The Kubuntu Team are thrilled to announce significant updates to KubuQA, our streamlined ISO testing tool that has now expanded its capabilities beyond Kubuntu to support Ubuntu and all its other flavors. With these enhancements, KubuQA becomes a versatile resource that ensures a smoother, more intuitive testing process for upcoming releases, including the 24.04 Noble Numbat and the 24.10 Oracular Oriole.

What is KubuQA?

KubuQA is a specialized tool developed by the Kubuntu Team to simplify the process of ISO testing. Utilizing the power of Kdialog for user-friendly graphical interfaces and VirtualBox for creating and managing virtual environments, KubuQA allows testers to efficiently evaluate ISO images. Its design focuses on accessibility, making it easy for testers of all skill levels to participate in the development process by providing clear, guided steps for testing ISOs.

New Features and Extensions

The latest update to KubuQA marks a significant expansion in its utility:

  • Broader Coverage: Initially tailored for Kubuntu, KubuQA now supports testing ISO images for Ubuntu and all other Ubuntu flavors. This broadened coverage ensures that any Ubuntu-based community can benefit from the robust testing framework that KubuQA offers.
  • Support for Latest Releases: KubuQA has been updated to include support for the newest Ubuntu release cycles, including the 24.04 Noble Numbat and the upcoming 24.10 Oracular Oriole. This ensures that communities can start testing early and often, leading to more stable and polished releases.
  • Enhanced User Experience: With improvements to the Kdialog interactions, testers will find the interface more intuitive and responsive, which enhances the overall testing experience.

Call to Action for Ubuntu Flavor Leads

The Kubuntu Team is keen to collaborate closely with leaders and testers from all Ubuntu flavors to adopt and adapt KubuQA for their testing needs. We believe that by sharing this tool, we can foster a stronger, more cohesive testing community across the Ubuntu ecosystem.

We encourage flavor leads to try out KubuQA, integrate it into their testing processes, and share feedback with us. This collaboration will not only improve the tool but also ensure that all Ubuntu flavors can achieve higher quality and stability in their releases.

Getting Involved

For those interested in getting involved with ISO testing using KubuQA:

  • Download the Tool: You can find KubuQA on the Kubuntu Team Github.
  • Join the Community: Engage with the Kubuntu community for support and to connect with other testers. Your contributions and feedback are invaluable to the continuous improvement of KubuQA.

Conclusion

The enhancements to KubuQA signify our commitment to improving the quality and reliability of Ubuntu and its derivatives. By extending its coverage and simplifying the testing process, we aim to empower more contributors to participate in the development cycle. Whether you’re a seasoned tester or new to the community, your efforts are crucial to the success of Ubuntu.

We look forward to seeing how different communities will utilise KubuQA to enhance their testing practices. And by the way, have you thought about becoming a member of the Kubuntu Community? Join us today to make a difference in the world of open-source software!

on May 12, 2024 09:28 PM

May 08, 2024

I recently discovered that there's an old software edition of the Oxford English Dictionary (the second edition) on archive.org for download. Not sure how legal this is, mind, but I thought it would be useful to get it running on my Ubuntu machine. So here's how I did that.

Firstly, download the file; that will give you a file called Oxford English Dictionary (Second Edition).iso, which is a CD image. We want to unpack that, and usefully there is 7zip in the Ubuntu archives which knows how to unpack ISO files.1 So, unpack the ISO with 7z x "Oxford English Dictionary (Second Edition).iso". That will give you two more files: OED2.DAT and SETUP.EXE. The .DAT file is, I think, all the dictionary entries in some sort of binary format (and is 600MB, so be sure you have the space for it). You can then run wine SETUP.EXE, which will install the software using wine, and that's all good.2 Choose a folder to install it in (I chose the same folder that SETUP.EXE is in, at which point it will create an OED subfolder in there and unpack a bunch of files into it, including OED.EXE).

That's the easy part. However, it won't quite work yet. You can see this by running wine OED/OED.EXE. It should start up OK, and then complain that there's no CDROM.

a Windows dialog box reading 'CD-ROM not found'

This is because it expects there to be a CDROM drive with the OED2.DAT file on it. We can set one up, though; we tell Wine to pretend that there's a CD drive connected, and what's on it. Run winecfg, and in the Drives tab, press Add… to add a new drive. I chose D: (which is a common Windows drive letter for a CD drive), and OK. Select your newly added D: drive and set the Path to be the folder where OED2.DAT is (which is wherever you unpacked the ISO file). Then say Show Advanced and change the drive Type to CD-ROM to tell Wine that you want this new drive to appear to be a CD. Say OK.


Now, when you wine OED/OED.EXE again, it should start up fine! Hooray, we're done! Except…

the OED Windows app, except that all the text is little squares rather than actual text, which looks like a font rendering error

…that's not good. The app runs, but it looks like it's having font issues. (In particular, you can select and copy the text, even though it looks like a bunch of little squares, and if you paste that text into somewhere else it's real text! So this is some sort of font display problem.)

Fortunately, the OED app does actually come with the fonts it needs. Unfortunately, it seems to unpack them to somewhere (C:\WINDOWS\SYSTEM)3 that Wine doesn't appear to actually look at. What we need to do is to install those font files so Linux knows about them. You could click them all to install them, but there's a quicker way; copy them, from where the installer puts them, into our own font folder.

To do this (a consolidated snippet follows the list):

  • First make a new folder to put them in: mkdir ~/.local/share/fonts/oed.
  • Then find out where the installer put the font files, as a real path on our Linux filesystem: winepath -u "C:/WINDOWS/SYSTEM". Let's say that that ends up being /home/you/.wine/dosdevices/c:/windows/system
  • Copy the TTF files from that folder (remembering to change the first path to the one that winepath output just now): cp /home/you/.wine/dosdevices/c:/windows/system/*.TTF ~/.local/share/fonts/oed
  • And tell the font system that we've added a bunch of new fonts: fc-cache
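Put together, and reusing the example winepath output from above (your real path will differ; the -p just covers the case where ~/.local/share/fonts doesn't exist yet), that's:

$ mkdir -p ~/.local/share/fonts/oed
$ winepath -u "C:/WINDOWS/SYSTEM"
/home/you/.wine/dosdevices/c:/windows/system
$ cp /home/you/.wine/dosdevices/c:/windows/system/*.TTF ~/.local/share/fonts/oed
$ fc-cache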

And now it all ought to work! Run wine OED/OED.EXE one last time…

the OED Windows app in all its glory

  1. and using 7zip is much easier than mounting the ISO file as a loopback thing
  2. There's a Microsoft Word macro that it offers to install; I didn't want that, and I have no idea whether it works
  3. which we can find out from OED/INSTALL.LOG
on May 08, 2024 10:18 PM

May 03, 2024

Many years ago (2012!) I was invited to be part of "The Pastry Box Project", which described itself thus:

Each year, The Pastry Box Project gathers 30 people who are each influential in their field and asks them to share thoughts regarding what they do. Those thoughts are then published every day throughout the year at a rate of one per day, starting January 1st and ending December 31st.

It was interesting. Sadly, it's dropped off the web (as has its curator, Alex Duloz, as far as I can tell), but thankfully the Wayback Machine comes to the rescue once again.1 I was quietly proud of some of the things I wrote there (and I was recently asked for a reference to a thing I said which the questioner couldn't find, which is what made me realise that the site's not around any more), so I thought I'd republish the stuff I wrote there, here, for ease of finding. This was all written in 2012, a dozen years ago at the time of writing, and the world has moved on in a few ways since then, but... I think I'd still stand by most of this stuff. The posts are still at archive.org and you can get to and read other people's posts from there too, some of which are really good and worth your time. But here are mine, so I don't lose them again.

Tuesday, 18 December 2012

My daughter’s got a smartphone, because, well, everyone has. It has GPS on it, because, well, every one does. What this means is that she will never understand the concept of being lost.

Think about that for a second. She won’t ever even know what it means to be lost.

Every argument I have in the pub now goes for about ten minutes before someone says, right, we’ve spent long enough arguing now, someone look up the correct answer on Wikipedia. My daughter won’t ever understand the concept of not having a bit of information available, of being confused about a matter of fact.

A while back, it was decreed that telephone directories are not subject to copyright, that a list of phone numbers is “information alone without a minimum of original creativity” and therefore held no right of ownership.

What instant access to information has provided us is a world where all the simple matters of fact are now yours; free for the asking. Putting data on the internet is not a skill; it is drudgery, a mechanical task for robots. Ask yourself: why do you buy technical books? It’s not for the information inside: there is no tech book anywhere which actually reveals something which isn’t on the web already. It’s about the voice; about the way it’s written; about how interesting it is. And that is a skill. Matters of fact are not interesting — they’re useful, right enough, but not interesting. Making those facts available to everyone frees up authors, creators, makers to do authorial creative things. You don’t have to spend all your time collating stuff any more: now you can be Leonardo da Vinci all the time. Be beautiful. Appreciate the people who do things well, rather than just those who manage to do things at all. Prefer those people who make you laugh, or make you think, or make you throw your laptop out of a window with annoyance: who give you a strong reaction to their writing, or their speaking, or their work. Because information wanting to be free is what creates a world of creators. Next time someone wants to build a wall around their little garden, ask yourself: is what you’re paying for, with your time or your money or your personal information, something creative and wonderful? Or are they just mechanically collating information? I hope to spend 2013 enjoying the work of people who do something more than that.

Wednesday, 31 October 2012

Not everyone who works with technology loves technology. No, really, it’s true! Most of the people out there building stuff with web tech don’t attend conferences, don’t talk about WebGL in the pub, don’t write a blog with CSS3 “experiments” in it, don’t like what they do. It’s a job: come in at 9, go home at 5, don’t think about HTML outside those hours. Apparently 90% of the stuff in the universe is “dark matter”: undetectable, doesn’t interact with other matter, can’t be seen even with a really big telescope. Our “dark matter developers”, who aren’t part of the community, who barely even know that the community exists… how are we to help them? You can write all the A List Apart articles you like but dark matter developers don’t read it. And so everyone’s intranet is horrid and Internet-Explorer-specific and so the IE team have to maintain backwards compatibility with that and that hurts the web. What can we do to reach this huge group of people? Everyone’s written a book about web technologies, and books help, but books are dying. We want to get the word out about all the amazing things that are now possible to everyone: do we know how? Do we even have to care? The theory is that this stuff will “trickle down”, but that doesn’t work for economics: I’m not sure it works for @-moz-keyframes either.

Monday, 8 October 2012

The web moves really fast. How many times have you googled for a tutorial on or an example of something and found that the results, written six months or a year or two years ago, no longer work? The syntax has changed, or there’s a better way now, or it never worked right to begin with. You’ll hear people bemoaning this: trying to stop the web moving so quickly in order that knowledge about it doesn’t go out of date. But that ship’s sailed. This is the world we’ve built: it moves fast, and we have to just hat up and deal with it. So, how? How can we make sure that old and wrong advice doesn’t get found? It’s a difficult question, and I don’t think anyone’s seriously trying to answer it. We should try and think of a way.

Tuesday, 18 September 2012

Software isn’t always a solution to problems. If you’re a developer, everything generally looks like a nail: a nail which is solved by making a new bit of code. I’ve got half-finished mobile apps done for tracking my running with GPS, for telling me when to switch between running and walking, and… I’m still fat, because I’m writing software instead of going running. One of the big ideas behind computers was to automate repetitive and boring tasks, certainly, which means that it should work like this: identify a thing that needs doing, do it for a while, think “hm, a computer could do this more easily”, write a bit of software to do it. However, there’s too much premature optimisation going on, so it actually looks like this: identify a thing that needs doing, think “hm, I’m sure a computer would be able to do this more easily”, write a bit of software to do it. See the difference? If the software never gets finished, then in the first approach the thing still gets done. Don’t always reach for the keyboard: sometimes it’s better to reach for Post-It notes, or your running shoes.

Saturday, 18 August 2012

Changing the world is within your grasp.

This is not necessarily a good thing.

If you go around and talk to normal people, it becomes clear that, weirdly, they don’t ever imagine how to get ten million dollars. They don’t think about new ways to redesign a saucepan or the buttons in their car. They don’t contemplate why sending a parcel is slow and how it could be a slicker process. They don’t think about ways to change the world.

I find it hard to talk to someone who doesn’t think like that.

To an engineer, the world is a toy box full of sub-optimized and feature-poor toys, as Scott Adams once put it. To a designer, the world is full of bad design. And to both, it is not only possible but at a high level obvious how to (a) fix it (b) for everyone (c) and make a few million out of doing so.

At first, this seems a blessing: you can see how the world could be better! And make it happen!

Then it’s a curse. Those normal people I mentioned? Short of winning the lottery or Great Uncle Brewster dying, there’s no possibility of becoming a multi-millionaire, and so they’re not thinking about it. Doors that have a handle on them but say “Push” are not a source of distress. Wrong kerning in signs is not like sandpaper on their nerves.

The curse of being able to change the world is… the frustration that you have so far failed to do so.

Perhaps there is a Zen thing here. Some people have managed it. Maybe you have. So the world is better, and that’s a good thing all by itself, right?

Friday, 27 July 2012

The best systems are built by people who can accept that no-one will ever know how hard it was to do, and who therefore don’t seek validation by explaining to everyone how hard it was to do.

Tuesday, 12 June 2012

The most poisonous idea in the world is when you’re told that something which achieved success through lots of hard work actually got there just because it was excellent.

Friday, 18 May 2012

Ever notice how the things you slave over and work crushingly hard on get less attention, sometimes, than the amusing things you threw together in a couple of evenings?

I can't decide whether this is a good thing or not.

Thursday, 5 April 2012

It's OK to not want to build websites for everybody and every browser. Making something which is super-dynamic in Chrome 18 and also works excellently in w3m is jolly hard work, and a lot of the time you might well be justified in thinking it's not worth it. If your site stats, or your belief, or your prediction of the market's direction, or your favourite pundit tell you that the best use of your time is to only support browsers with querySelector, or only support browsers with JavaScript, or only support WebKit, or only support iOS Safari, then that's a reasonable decision to make; don't let anyone else tell you what your relationship with your users and customers and clients is, because you know better than them.

Just don't confuse what you're doing with supporting "the web". State your assumptions up front. Own your decisions, and be prepared to back them up, for your project. If you're building something which doesn't work in IE6, that requires JavaScript, that requires mobile WebKit, that requires Opera Mobile, then you are letting some people down. That's OK; you've decided to do that. But your view's no more valid than theirs, for a project you didn't build. Make your decisions, and state what the axioms you worked from were, and then everyone else can judge whether what you care about is what they care about. Just don't push your view as being what everyone else should do, and we'll all be fine.

Sunday, 18 March 2012

Publish and be damned, said the Duke of Wellington; these days, in between starting wars in France and being sick of everyone repeating the jokes about his name from Blackadder, he’d probably say that we should publish or be damned. If you’re anything like me, you’ve got folders full of little experiments that you never got around to finishing or that didn’t pan out. Put ’em up somewhere. These things are useful.

Twitter, autobiographies, collections of letters from authors, all these have shown us that the minutiae can be as fascinating as carefully curated and sieved and measured writings, and who knows what you’ll inspire the next person to do from the germ of one of your ideas?

Monday, 27 February 2012

There's a lot to think about when you're building something on the web. Is it accessible? How do I handle translations of the text? Is the design OK on a 320px-wide screen? On a 2320px-wide screen? Does it work in IE8? In Android 4.0? In Opera Mini? Have I minimized the number of HTTP requests my page requires? Is my JavaScript minified? Are my images responsive? Is Google Analytics hooked up properly? AdSense? Am I handling Unicode text properly? Avoiding CSRF? XSS? Have I encoded my videos correctly? Crushed my pngs? Made a print stylesheet?

We've come a long way since:

<HEADER>
<TITLE>The World Wide Web project</TITLE>
<NEXTID N="55">
</HEADER>
<BODY>
<H1>World Wide Web</H1>The WorldWideWeb (W3) is a wide-area<A
NAME=0 HREF="WhatIs.html">
hypermedia</A> information retrieval
initiative aiming to give universal
access to a large universe of documents.

Look at http://html5boilerplate.com/—a base level page which helps you to cover some (nowhere near all) of the above list of things to care about (and the rest of the things you need to care about too, which are the other 90% of the list). A year in development, 900 sets of changes and evolutions from the initial version, seven separate files. That's not over-engineering; that's what you need to know to build things these days.

The important point is: one of the skills in our game is knowing what you don't need to do right now but still leaving the door open for you to do it later. If you become the next Facebook then you will have to care about all these things; initially you may not. You don't have to build them all on day one: that is over-engineering. But you, designer, developer, translator, evangelist, web person, do have to understand what they all mean. And you do have to be able to layer them on later without having to tear everything up and start again. Feel guilty that you're not addressing all this stuff in the first release if necessary, but you should feel a lot guiltier if you didn't think of some of it.

Wednesday, 18 January 2012

Don't be creative. Be a creator. No one ever looks back and wishes that they'd given the world less stuff.

  1. Also, the writing is all archived at Github!
on May 03, 2024 06:08 PM
The “Secure Rollback Prevention” entry in the UEFI BIOS configuration

The bottom line is that there is a new configuration called “AMD Secure Processor Rollback protection” on recent AMD systems, in addition to “Secure Rollback Prevention” (BIOS rollback protection). If it’s enabled by the vendor, you cannot downgrade the UEFI BIOS once you have installed a revision with security vulnerability fixes.

https://fwupd.github.io/libfwupdplugin/hsi.html#org.fwupd.hsi.Amd.RollbackProtection

This feature prevents an attacker from loading an older firmware onto the part after a security vulnerability has been fixed.
[…]
End users are not able to directly modify rollback protection, this is controlled by the manufacturer.

Previously I had installed revision 1.49 (R23ET73W), but it’s gone from Lenovo’s official page with the notice below. I’ve been annoyed by a symptom which is likely caused by the firmware, so I wanted to try multiple revisions for bisecting, and I also thought I should downgrade to the latest official revision, 1.40 (R23ET70W), since the withdrawal clearly indicates that there is something wrong with 1.49.

This BIOS version R23UJ73W is reported Lenovo cloud not working issue, hence it has been withdrawn from support site.

First, I turned off Secure Rollback Prevention and tried downgrading it with fwupdmgr as follows. However, the update failed to apply, with “Secure Flash Authentication Failed” shown on reboot.

$ fwupdmgr downgrade
0.	Cancel
1.	b0fb0282929536060857f3bd5f80b319233340fd (Battery)
2.	6fd62cb954242863ea4a184c560eebd729c76101 (Embedded Controller)
3.	0d5d05911800242bb1f35287012cdcbd9b381148 (Prometheus)
4.	3743975ad7f64f8d6575a9ae49fb3a8856fe186f (SKHynix HFS256GDE9X081N)
5.	d77c38c163257a2c2b0c0b921b185f481d9c1e0c (System Firmware)
6.	6df01b2df47b1b08190f1acac54486deb0b4c645 (TPM)
7.	362301da643102b9f38477387e2193e57abaa590 (UEFI dbx)
Choose device [0-7]: 5
0.	Cancel
1.	0.1.46
2.	0.1.41
3.	0.1.38
4.	0.1.36
5.	0.1.23
Choose release [0-5]: 

Next, I tried their ISO image r23uj70wd.iso, but no luck with another error.

Error

The system program file is not correct for this system.

Windows also failed to apply it, so I became convinced it was impossible. I didn’t have a clear idea why at that point, though, until I bumped into a handy command in fwupdmgr.

$ fwupdmgr security
Host Security ID: HSI:1! (v1.9.16)

HSI-1
✔ BIOS firmware updates:         Enabled
✔ Fused platform:                Locked
✔ Supported CPU:                 Valid
✔ TPM empty PCRs:                Valid
✔ TPM v2.0:                      Found
✔ UEFI bootservice variables:    Locked
✔ UEFI platform key:             Valid
✔ UEFI secure boot:              Enabled

HSI-2
✔ SPI write protection:          Enabled
✔ IOMMU:                         Enabled
✔ Platform debugging:            Locked
✔ TPM PCR0 reconstruction:       Valid
✘ BIOS rollback protection:      Disabled

HSI-3
✔ SPI replay protection:         Enabled
✔ CET Platform:                  Supported
✔ Pre-boot DMA protection:       Enabled
✔ Suspend-to-idle:               Enabled
✔ Suspend-to-ram:                Disabled

HSI-4
✔ Processor rollback protection: Enabled
✔ Encrypted RAM:                 Encrypted
✔ SMAP:                          Enabled

Runtime Suffix -!
✔ fwupd plugins:                 Untainted
✔ Linux kernel lockdown:         Enabled
✔ Linux kernel:                  Untainted
✘ CET OS Support:                Not supported
✘ Linux swap:                    Unencrypted

This system has HSI runtime issues.
 » https://fwupd.github.io/hsi.html#hsi-runtime-suffix

Host Security Events
  2024-05-01 15:06:29:  ✘ BIOS rollback protection changed: Enabled → Disabled

As you can see, BIOS rollback protection in the HSI-2 section is “Disabled” as intended, but Processor rollback protection in HSI-4 is “Enabled”. I found a commit suggesting that there was a system with this setting disabled, and that it could be enabled when “OS Optimized Defaults” is turned on.

https://github.com/fwupd/fwupd/commit/52d6c3cb78ab8ebfd432949995e5d4437569aaa6

Update documentation to indicate that loading “OS Optimized Defaults” may enable security processor rollback protection on Lenovo systems.

I hoped that Processor rollback protection might be disabled by turning off OS Optimized Defaults instead.

Tried OS Optimized Defaults turned off, but no luck
$ fwupdmgr security
Host Security ID: HSI:1! (v1.9.16)

...

✘ BIOS rollback protection:      Disabled

...

HSI-4
✔ Processor rollback protection: Enabled

...

Host Security Events
  2024-05-02 03:24:45:  ✘ Kernel lockdown disabled
  2024-05-02 03:24:45:  ✘ Secure Boot disabled
  2024-05-02 03:24:45:  ✘ Pre-boot DMA protection is disabled
  2024-05-02 03:24:45:  ✘ Encrypted RAM changed: Encrypted → Not supported

Some configurations were overridden, but Processor rollback protection stayed the same. So it’s confirmed: it really is impossible to downgrade firmware that contains the vulnerability fixes. I learned the hard way that there is a clear difference between “a vendor doesn’t support downgrading” and “it can’t be downgraded”, as the release notes state.

https://download.lenovo.com/pccbbs/mobiles/r23uj73wd.txt

CHANGES IN THIS RELEASE

Version 1.49 (UEFI BIOS) 1.32 (ECP)

[Important updates]

  • Notice that BIOS can’t be downgraded to older BIOS version after upgrade to r23uj73w(1.49).

[New functions or enhancements]

  • Enhancement to address security vulnerability, CVE-2023-5058,LEN-123535,LEN-128083,LEN-115697,LEN-123534,LEN-118373,LEN-119523,LEN-123536.
  • Change to permit fan rotation after fan error happen.

I have to wait for a new and better firmware.
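In the meantime, if you just want to keep an eye on these two attributes after future updates, filtering fwupdmgr's security report is enough; the two lines below are taken from the output shown earlier:

$ fwupdmgr security | grep -i rollback
✘ BIOS rollback protection:      Disabled
✔ Processor rollback protection: Enabled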

on May 03, 2024 03:20 AM

May 01, 2024

Thanks to a colleague who introduced me to Nim during last week’s SUSE Labs conference, I became a man with a dream, and after fiddling with compiler flags and obviously not reading documentation, I finally made it.

This is something that shouldn’t exist; it belongs on the list of ideas that should never have happened.

But it does. It’s a Perl interpreter embedded in Rust. Get over it.

Once cloned, you can run the following commands to see it in action:

  • cargo run --verbose -- hello.pm showtime
  • cargo run --verbose -- hello.pm get_quick_headers

How it works

There is a lot of autogenerated code, mainly for two things:

  • bindings.rs and wrapper.h; I made a lot of assumptions, and perlxsi.c may or may not be necessary in the future (see main::xs_init_rust), depending on how bad or terrible my C knowledge is by the time you’re reading this.
  • The xs_init_rust function is the one that does the magic, as far as my understanding goes, by hooking up boot_DynaLoader to DynaLoader in Perl via ffi.

With those two bits in place, and thanks to the magic of the bindgen crate, after some initialization I decided to use Perl_call_argv. Do note that the Perl_ prefix in this case comes from bindgen; I might later change the convention to ruperl or something, to avoid confusion between that and perl_parse or perl_alloc, which (if I understand correctly) are exposed directly by the ffi interface.

What I ended up doing, for now (or at least for this PoC), is passing the same list of arguments directly to Perl_call_argv, which will in turn take the third argument and pass it verbatim as the call_argv:

        Perl_call_argv(myperl, perl_sub, flags_ptr, perl_parse_args.as_mut_ptr());

Right now hello.pm defines two subroutines: one to open a file, write something, and print the time to stdout, and a second one that will query my blog and show the headers. This is only example code, but it's enough to demonstrate that the DynaLoader works, and that the embedding also works :)

It's alive!

I got most of this working by following the perlembed guide.

Why?

Why not?

I want to see if I can also embed Python in the same binary, so I can call native Perl from native Python, and see how I can fiddle all that into os-autoinst.

Where to find the code?

On github: https://github.com/foursixnine/ruperl or under https://crates.io/crates/ruperl
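Putting the pieces above together, trying it out end to end is just a clone plus the cargo invocations listed earlier (assuming, on my part, that a Rust toolchain and the Perl development headers are installed so that bindgen can do its thing):

$ git clone https://github.com/foursixnine/ruperl
$ cd ruperl
$ cargo run --verbose -- hello.pm showtime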

on May 01, 2024 12:00 AM

April 25, 2024

The Kubuntu Team is happy to announce that Kubuntu 24.04 has been released, featuring the ‘beautiful’ KDE Plasma 5.27: simple by default, powerful when needed.

Codenamed “Noble Numbat”, Kubuntu 24.04 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

Under the hood, there have been updates to many core packages, including a new 6.8-based kernel, KDE Frameworks 5.115, KDE Plasma 5.27 and KDE Gear 23.08.

Kubuntu 24.04 with Plasma 5.27.11

Kubuntu has seen many updates for other applications, both in our default install, and installable from the Ubuntu archive.

Haruna, Krita, KDevelop, Yakuake, and many many more applications are updated.

Applications for core day-to-day usage are included and updated, such as Firefox, and LibreOffice.

For a list of other application updates, and known bugs be sure to read our release notes.

Download Kubuntu 24.04, or learn how to upgrade from 23.10 or 22.04 LTS.

Note: For upgrades from 23.10, there may be a delay of a few hours to days between the official release announcements and the Ubuntu Release Team enabling upgrades.

on April 25, 2024 04:16 PM