It’s official: since the outbreak of the COVID-19 pandemic, cybercrime has increased by 600%. Ransomware attacks alone are estimated to cost $6 trillion in 2021, and in 2020 there were nearly 550,000 ransomware attacks per day. The question is: are your workloads secure enough? In this blog, we will discuss how to make your open source workloads more secure in seconds.
Run your mission-critical applications with Ubuntu Pro on Google Cloud
If your workloads are running on Ubuntu, congratulations! Ubuntu has been the most secure operating system since 2013, according to UK Government Communications Headquarters (GCHQ). This is thanks to Canonical’s teams, who strive to keep Ubuntu at the forefront of safety and reliability. Earlier this year, Canonical and Google launched Ubuntu Pro on Google Cloud, a premium image designed to deliver the most comprehensive security and compliance features on public clouds. Ubuntu Pro allows instant access to security patching, covering more than 30,000 open source applications for up to 10 years. It also comes with critical compliance features essential to running workloads in regulated environments.
Upgrading Ubuntu LTS workloads to Pro in seconds
Now you can run your mission-critical applications on this brand new Ubuntu Pro. But what about the workloads already running on Ubuntu LTS? How do you migrate them to new Ubuntu Pro virtual machines?
The good news is, you don’t have to. Google and Canonical have been working together to give you the most seamless experience while getting industry-leading protection. Today, you can upgrade your Ubuntu LTS to Ubuntu Pro with one command.
Suppose you have one VM running on Ubuntu 16.04 LTS. Here is how it works:
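The commands themselves are not shown in this excerpt; a plausible sketch using the gcloud CLI, where the instance name, zone, and license URI are all illustrative and should be replaced with your own values:

```shell
# Stop the instance so the boot disk's licenses can be changed
# (instance name and zone are examples).
gcloud compute instances stop my-vm --zone=us-central1-a

# Append the Ubuntu Pro license to the boot disk. The URI below is
# illustrative; use the published URI for your Ubuntu Pro version.
gcloud compute disks update my-vm \
    --zone=us-central1-a \
    --update-user-licenses="https://www.googleapis.com/compute/v1/projects/ubuntu-os-pro-cloud/global/licenses/ubuntu-pro-1604-lts"

# Boot it back up; the VM now runs as Ubuntu 16.04 Pro.
gcloud compute instances start my-vm --zone=us-central1-a
```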
And that is it! Now you have upgraded your Ubuntu 16.04 LTS to Ubuntu 16.04 Pro. Let’s see what it looks like now. When you SSH into this machine, input the following:
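The command was dropped from this excerpt; one likely candidate, assuming the ubuntu-advantage-tools client that ships with Ubuntu, is a status check:

```shell
# Show which Ubuntu Pro services (esm-infra, livepatch, ...) are enabled.
# On newer releases of the client the command is `pro status` instead.
sudo ua status
```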
Since you have upgraded your Ubuntu to Ubuntu Pro, this versatile machine will now apply kernel patches as soon as they become available and extend security coverage to the most important open source applications, such as Apache Kafka, NGINX, MongoDB, Redis and PostgreSQL.
A spell to rule them all
Now, if you just want all these great security features at once, here is the single magic spell you need to remember:
BOOT_DISK_NAME: the name of the boot disk to append the license to
ZONE: the zone containing the boot disk to append the license to
LICENSE_URI: the license URI for the version of Ubuntu Pro you are upgrading to. The following table shows the license URI for the supported versions of Ubuntu Pro:
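The license URI table is not reproduced in this excerpt, but with the three placeholders above filled in, the whole “spell” is a single disk update, sketched here with the gcloud CLI (flag name assumed from Google’s compute disks tooling):

```shell
# One command: append the Ubuntu Pro license to an existing boot disk.
gcloud compute disks update BOOT_DISK_NAME \
    --zone=ZONE \
    --update-user-licenses="LICENSE_URI"
```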
1st November 2021: Today, Canonical announced support with Microsoft for Microsoft SQL Server with Ubuntu Pro on Microsoft Azure.
Canonical has worked with Microsoft to bring a highly performant and fully supported solution for SQL Server to market, based around the Ubuntu Pro 20.04 LTS operating system. Customers on Microsoft Azure can launch fully supported instances of SQL Server 2017 or SQL Server 2019 – Web, Standard and Enterprise editions – on both Ubuntu Pro 18.04 LTS and Ubuntu Pro 20.04 LTS. The SQL Server on Ubuntu Pro Azure solution offers an extremely cost effective alternative for enterprise data management.
“Our customers need ways to run enterprise-grade, highly demanding and business-critical data workloads on Ubuntu. This need is fully addressed with Microsoft SQL Server on Ubuntu Pro and Azure. This solution is a logical extension of our continued collaboration with Microsoft,” said Alex Gallagher, VP of Cloud Alliances at Canonical.
SQL Server on Ubuntu Pro makes use of the XFS filesystem with Direct I/O and Forced Unit Access (FUA) for reliable synchronisation with underlying NVMe SSD storage media. Additionally, SQL Server takes advantage of persistent memory (PMEM) when this is available. SQL Server on Ubuntu Pro 20.04 LTS includes support for high availability scenarios through Corosync and Pacemaker with a specialised fencing agent for Azure.
With Ubuntu Pro, customers get up to 10 years of maintenance updates, and Ubuntu Pro includes officially certified components for FIPS and Common Criteria EAL2 configurations, supporting compliance scenarios like FedRAMP, HIPAA, PCI and ISO. With integrated hardening automation to apply and audit the CIS benchmarks, customers can readily enable industry-standard, security-hardened compute profiles. With Kernel Livepatch, Ubuntu Pro systems receive regular kernel updates immediately, without requiring a system reboot. Additionally, Ubuntu Pro adds 10 years of extended security coverage for a range of open source applications.
Customers receive support on the entire solution, including security updates and joint technical support from Canonical and Microsoft. Customers can access the supported virtual machine images via the Microsoft Azure Marketplace. SQL Server on Ubuntu Pro delivers customers an alternative, highly cost-effective and fully supported RDBMS option, ideal for high performance, highly transactional workloads. As a fully supported offer, the solution also offers a low-friction path for existing SQL Server users to benefit from adopting Ubuntu Pro.
About Canonical
Canonical is the publisher of Ubuntu, the OS for most public cloud workloads as well as the emerging categories of smart gateways, self-driving cars and advanced robots. Canonical provides enterprise security, support and services to commercial users of Ubuntu. Established in 2004, Canonical is a privately held company.
This month:
* Command & Conquer : LMMS
* How-To : Python, Latex and Clone To New PC
* Graphics : Inkscape
* Everyday Ubuntu
* Micro This Micro That
* Review : Ubuntu 21.10
* Review : System76 Galaga Pro Laptop
* Ubuntu Games : Death Trash
plus: News, The Daily Waddle, Q&A, and more.
OpenUK will be hosting a venue on 11 November with a day of events about sustainability in technology, emphasising why open tech is the most effective way to achieve it.
Sessions include an opening from former government minister Francis Maude, the launch of the OpenUK Consortium Data Centre Blueprint, Open Collaboration Opening Sustainability led by Red Hat, Opening Up the Energy Sector, and Building the Sustainable Open Future for the UK.
In the evening I’ll be hosting the OpenUK awards 2021, showcasing and recognising the best people and organisations for open tech in the UK.
Fourth episode of the Hacktoberfest series, with the very special guest Marcos Marado, who came to share his vast and rich experience from all the editions he has taken part in, as well as from the 2021 edition, now nearing its end.
You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
And you can get all of that for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you like.
If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.
Attribution and licences
This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.
The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).
Firefox made peace with Constantino, though not the other way around… Carrondo got annoyed with OBS, a new version of Ubuntu was released (Impish Indri, did you know?) and the global community continues to make its presence felt.
In my day job (in the IT world) we staged an online event in Oct 2021. As with past events like these, it makes the event a lot more fun if you have music in between quite technical talks, so folks can get up from their desks and dance while they grab a new cup of tea.
On Mixcloud it’s my first mix using a DJ controller - a very recent development for me. A lot of fun though. I enjoyed the whole event, but it was also sensory overload, as I was watching 3 laptops to e.g. catch cues when new speakers would come on or if there was audience feedback, so excuse these moments of distraction - along with the breaks! I’ll get a distraction-free mix out soon - promise! So without further ado, here’s the music from the event, as people asked for it. Enjoy!
jiony - Sincretismo
Notorious B.I.G. - Hypnotize (Benedikt Frey Edit)
okuma - Garnatxa
Daniel Hokum - Burn (Paul Traeumer’s Shuffled Remix)
Ocelot did it again! The French-speaking Ubuntu community is happy to present its splendid Impish Indri t-shirt. :) You can buy it before the end of October for €15 (+ shipping costs) and receive it at the end of November 2021. You can try to buy it later, but it will be more expensive and there will be no guarantee of stock.
We are pleased to announce that Plasma 5.23.1 is now available in our backports PPA for Kubuntu 21.10 (Impish Indri).
The release announcement detailing the new features and improvements in Plasma 5.23 can be found here.
To upgrade:
Add the following repository to your software sources list:
ppa:kubuntu-ppa/backports
or if it is already added, the updates should become available via your preferred update method.
The PPA can be added manually in the Konsole terminal with the command:
sudo add-apt-repository ppa:kubuntu-ppa/backports
and packages then updated with
sudo apt full-upgrade
IMPORTANT
Please note that more bugfix releases are scheduled by KDE for Plasma 5.23, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more rounds of stabilisation/bugfixes ‘baked in’ may find it advisable to stay with Plasma 5.22 as included in the original 21.10 (Impish Indri) release.
The Kubuntu Backports PPA for 21.10 also currently contains newer versions of KDE Gear (formerly Applications) and other KDE software. The PPA will also continue to receive updated versions of KDE packages other than Plasma, for example KDE Frameworks.
Issues with Plasma itself can be reported on the KDE bugtracker [1]. In the case of packaging or other issues, please provide feedback on our mailing list [2], IRC [3], and/or file a bug against our PPA packages [4].
Xubuntu 21.10 "Impish Indri" was released on October 14, 2021. Check out the release announcement and release notes. I've expanded on both below.
New Features
GNOME Disk Usage Analyzer
GNOME Disk Usage Analyzer (baobab) scans folders, devices, and remote locations to provide an in-depth report on disk usage. It can quickly identify large files and folders wasting disk space and enable users to act on them. A tree-like and graphical representation are used to display disk usage.
Disk Usage Analyzer makes it much easier to recover lost disk space.
GNOME Disks
GNOME Disks provides an easy way to inspect, format, partition, and configure disks. You can view SMART data, manage devices, benchmark physical disks, and image flash drives using GNOME Disks. Another benefit is that it can mount partitions on-demand or automatically.
GNOME Disks is an all-in-one solution for managing physical disks and partitions.
Rhythmbox
Rhythmbox is a music-playing application. It features a media library, podcast feeds, and live internet radio stations. It integrates with the Xfce PulseAudio Plugin in Xubuntu, controlling playback and granting easy access to recent playlists. Xubuntu ships with the Alternative Toolbar plugin enabled, making the application layout fit in with the rest of the desktop. Additionally, the Music key on multimedia keyboards will now launch Rhythmbox instead of Parole.
Rhythmbox is customized to feel at home in Xubuntu. If you prefer a more modern look, it is only a few clicks away.
Super Key Support
The Super (or Windows) key will now reveal the application menu, similar to Windows and other desktop environments. This is possible thanks to the inclusion of xcape. xcape is used to configure modifier keys to act as other keys when pressed. For Xubuntu, the left Super key is now mapped to trigger the Ctrl+Escape key combination used for the Whisker Menu. For a peek into the technical reason for this workaround, please see the upstream Xfce bug.
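As a rough illustration of that workaround, xcape's `-e` expression syntax can map a tap of the left Super key to the Whisker Menu shortcut (keysym names as in X11; the exact invocation Xubuntu ships may differ):

```shell
# When Super_L is pressed and released on its own, emit Ctrl+Escape,
# the key combination bound to the Whisker Menu.
xcape -e 'Super_L=Control_L|Escape'
```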
Today's @Xubuntu daily iso (2021-08-20) features xcape, enabling the Super key for showing the Whisker Menu.
The Super key now works exactly as you'd expect.
PipeWire
PipeWire is now included in Xubuntu and the other flavors. PipeWire is a project that improves audio and video handling in Linux. It is used alongside PulseAudio to significantly improve hardware support, particularly for Bluetooth audio devices. For regular usage, PipeWire quietly works in the background. Audio devices are still controlled through the Xfce PulseAudio Plugin and PulseAudio Volume Control (pavucontrol).
Pidgin Removal
Pidgin, “the universal chat client,” is no longer included in Xubuntu. Due to an increasing number of chat services moving to proprietary and restricted protocols, the overall usefulness of Pidgin has diminished significantly over the years. However, if you still use Pidgin, it can be installed from the repository.
Late Night Linux Extra episode 32 featured Gary Kramlich, the lead Pidgin maintainer. In this episode, Gary explained that while many of these services are no longer available within Pidgin by default, existing plugins enable support for those services. Unfortunately, many plugins change rapidly, making it impossible to keep them packaged and up-to-date in Ubuntu.
UX Updates
In continuing our keyboard shortcut clean-up, the long-obsoleted Super+{1,2,3,4} shortcuts were removed. These shortcuts go way back to when Xubuntu had a two-panel layout and launched the first four pinned applications. For a complete list of keyboard shortcuts, click here.
We also made a minor change to our Thunar defaults, updating the title bar to always display the full path of the current directory. This should make navigating and managing the filesystem easier with multiple open windows.
Go layers deep in your filesystem and never forget where you are with the full path displayed in Thunar at all times.
About the Xubuntu Versions
Xubuntu has three installable versions. Using the main ISO (2.0G), you can pick from the Normal or Minimal installation option, whereas Xubuntu Core (1.0G) will result in a much smaller installation size. Normal includes everything you need to be productive and have fun with Xubuntu. Meanwhile, Minimal and Core are designed to provide the bare essentials, enabling you to tailor Xubuntu to your needs.
When installing from the main ISO, you have an option to perform a "Normal" or "Minimal" installation.
Core and Minimal seem to have the same purpose, but Core has a few advantages. For one, the download size is much smaller and more accessible for those with limited connectivity options. Second, the install size is quite a bit smaller due to how the different versions work. Core installs only the minimal set of packages. Minimal first installs the Normal Xubuntu version and then removes the excess packages. Unfortunately, it’s impossible to reliably identify and remove all of the extra packages, so you end up with another 1.0G of bloat.
Save nearly 2.0G of disk space by opting for the Xubuntu Core version.
You can learn more about Xubuntu Core here or view the spreadsheet I put together with the package and memory differences here.
Wrapping Up
Xubuntu 21.10 features the work of numerous contributors from the Xfce, GNOME, MATE, Ubuntu, and Debian communities. If you'd like to contribute, check out the following links:
Next up, we have the 22.04 "Jammy Jellyfish" LTS cycle. The next six months will be focused primarily on bug fixes and other improvements, building a solid LTS foundation for the next three years. As it is an LTS, we'll be running a Wallpaper Contest again, so keep an eye on the Xubuntu website and Twitter for updates.
The Kubuntu Team is happy to announce that Kubuntu 21.10 has been released, featuring the ‘beautiful’ KDE Plasma 5.22: simple by default, powerful when needed.
Codenamed “Impish Indri”, Kubuntu 21.10 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.
The team has been hard at work through this cycle, introducing new features and fixing bugs.
Under the hood, there have been updates to many core packages, including a new 5.13-based kernel, KDE Frameworks 5.86, KDE Plasma 5.22 and KDE Gear 21.08.
Kubuntu has seen many updates for other applications, both in our default install, and installable from the Ubuntu archive.
Krita, KDevelop, Yakuake, and many, many more applications are updated.
Applications for core day-to-day usage are included and updated, such as Firefox, VLC and LibreOffice.
For a list of other application updates, and known bugs be sure to read our release notes.
Note: For upgrades from 21.04, there may be a delay of a few hours to days between the official release announcement and the Ubuntu Release Team enabling upgrades.
The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Xena on Ubuntu 21.10 (Impish Indri) and Ubuntu 20.04 LTS (Focal Fossa) via the Ubuntu Cloud Archive. Details of the Xena release can be found at: https://www.openstack.org/software/xena
To get access to the Ubuntu Xena packages:
Ubuntu 21.10
OpenStack Xena is available by default for installation on Ubuntu 21.10.
Ubuntu 20.04 LTS
The Ubuntu Cloud Archive for OpenStack Xena can be enabled on Ubuntu 20.04 by running the following command:
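The command itself is missing from this excerpt; the Ubuntu Cloud Archive is normally enabled through add-apt-repository, so it is presumably:

```shell
# Enable the Xena pocket of the Ubuntu Cloud Archive on 20.04 LTS,
# then refresh the package index.
sudo add-apt-repository cloud-archive:xena
sudo apt update
```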
The Xubuntu team is happy to announce the immediate release of Xubuntu 21.10.
Xubuntu 21.10, codenamed Impish Indri, is a regular release and will be supported for 9 months, until June 2022. If you need a stable environment with longer support time we recommend that you use Xubuntu 20.04 LTS instead.
The final release images are available as torrents and direct downloads from xubuntu.org/download/.
As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.
Xubuntu Core, our minimal ISO edition, is available to download from unit193.net/xubuntu/core/ [torrent]. Find out more about Xubuntu Core here.
We’d like to thank everybody who contributed to this release of Xubuntu!
Highlights and Known Issues
Highlights
New Software: Xubuntu now comes pre-installed with GNOME Disk Analyzer, GNOME Disk Utility, and Rhythmbox. Disk Analyzer and Disk Utility make it easier to monitor and manage your partitions. Rhythmbox enables music playback with a dedicated media library.
Pipewire: Pipewire is now included in Xubuntu, and is used in conjunction with PulseAudio to improve audio playback and hardware support in Linux.
Keyboard Shortcuts: The Super (Windows) key will now reveal the applications menu. Existing Super+ keyboard shortcuts are unaffected.
Known Issues
The shutdown prompt may not be displayed at the end of the installation. Instead you might just see a Xubuntu logo, a black screen with an underscore in the upper left hand corner, or just a black screen. Press Enter and the system will reboot into the installed environment. (LP: #1944519)
For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.
The main Ubuntu Release Notes cover many of the other packages we carry and more generic issues.
Support
For support with the release, navigate to Help & Support for a complete list of methods to get help.
Thanks to all the hard work from our contributors, Lubuntu 21.10 has been released. With the codename Impish Indri, Lubuntu 21.10 is the 21st release of Lubuntu, the seventh release of Lubuntu with LXQt as the default desktop environment. Support lifespan Lubuntu 21.10 will be supported for 9 months until July 2022. Our main focus […]
The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 21.10, code-named “Impish Indri”. This marks Ubuntu Studio’s 30th release. This release is a regular release, and as such it is supported for nine months until July 2022.
Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list of changes and known issues.
You can download Ubuntu Studio 21.10 from our download page.
Due to the change in desktop environment that started after the release of 20.04 LTS, direct upgrades from supported releases prior to 21.04 are not supported.
We have had anecdotal reports of successful upgrades from 20.04 LTS (Xfce desktop) to later releases (Plasma desktop), but this will remain at your own risk.
Instructions for upgrading are included in the release notes.
New This Release
This release includes Plasma 5.22.5, the full-featured desktop environment made by KDE. The theming uses the Materia theme, with icons from the Papirus icon set.
Audio
Studio Controls has seen further development as its own independent project and has been updated to version 2.2.7. This version has an all-new layout and features, including JACK over network and MIDI over network.
Ardour 6.9
Ardour has been updated to version 6.9 and includes a ton of bugfixes and enhancements. For more information, check out the official release announcement.
For those who would like to use the advanced audio processing power of JACK with OBS Studio: OBS Studio is JACK-aware!
More Updates
There are many more updates not covered here but are mentioned in the Release Notes. We highly recommend reading those release notes so you know what has been updated and know any known issues that you may encounter.
Get Involved!
A great way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!
Special Thanks
Huge special thanks for this release go to:
Len Ovens: Studio Controls, Ubuntu Studio Installer, Coding
Thomas Ward: Packaging, Ubuntu Core Developer for Ubuntu Studio
Eylul Dogruel: Artwork, Graphics Design, Website Lead
Ross Gammon: Upstream Debian Developer, Guidance, Testing
Sebastien Ramacher: Upstream Debian Developer
Dennis Braun: Debian Package Maintainer
Rik Mills: Kubuntu Council Member, help with Plasma desktop
Mauro Gaspari: Tutorials, Promotion, and Documentation, Testing
Brian Hechinger: Testing and bug reporting
Chris Erswell: Testing and bug reporting
Robert Van Den Berg: Testing and bug reporting, IRC Support
Krytarik Raido: IRC Moderator, Mailing List Moderator
Erich Eickmeyer: Project Leader, Packaging, Direction, Treasurer
The significant change in Ubuntu MATE 21.10 is the introduction of MATE Desktop
1.26.0 ✨ which was
18 months in the making. Thanks to the optimisations in MATE Desktop 1.26, Ubuntu
MATE 21.10 is faster and leaner 💪
A significant effort 😅 has been invested in fixing bugs 🐛 in MATE Desktop 1.26.0,
optimising performance ⚡ and plugging memory leaks. MATE Desktop is faster and
leaner as a result, and its underpinnings have been modernised and updated. This
last point mostly benefits developers working on MATE, but is important to
highlight to users, as it demonstrates that MATE Desktop is being maintained to ensure its longevity.
Here are some of the other quality of life 💌 improvements in MATE Desktop 1.26:
The Control Center features:
An improved Window Preferences dialog, presenting more comprehensive window behaviour and placement options.
Display preferences now has an option for discrete display scaling.
Power Manager has a new option to enable keyboard dimming.
Notifications now support hyperlinks.
Caja can format drives and has a new Bookmarks sidebar.
Caja Actions, which allows you to add arbitrary programs to be launched through the context menu, is now part of the Desktop.
Calculator now uses GNU MPFR/MPC for high precision, faster computation and additional functions.
Pluma has a new mini map instant overview, a grid background to turn Pluma into a writing pad and the preferences have been redesigned.
Atril scrolls much faster through large documents, and its memory footprint has been reduced.
Engrampa, the archive manager, now supports EPUB, ARC and encrypted RAR files.
Marco, the window manager:
Correctly restores minimised windows to their original position.
Thumbnail window previews support HiDPI.
Netspeed applet shows more information about your network interfaces.
While MATE Desktop is not completely ready for Wayland just yet, 1.26.0
represents a significant stepping stone towards that objective with most of the
MATE Desktop being able to run on a Wayland compositor. 👍
Ubuntu MATE Enhancements
Ubuntu MATE has tweaked 🔧 the default desktop configuration slightly:
Image Extrapolation and Interpolation is disabled by default in Eye of MATE to make image viewing faster and image quality sharper.
The Alt-Tab pop-up is now expanded to fit long window titles.
If you use the Mutiny layout, session loading is now faster.
Guest Session
Once in a while a friend, family member, or colleague may want to borrow your
computer 😱 The Guest Session provides a convenient way, with a high level of
security, to lend your computer to someone else. A guest session can be launched
either from the login screen or from within a regular session. If you are
currently logged in, click the icon at the far right of the menu bar and select
Guest Session. This will lock the screen for your own session and start the
guest session.
A guest cannot view the home folders of other users, and by default any saved
data or changed settings will be removed/reset at logout. It means that each
session starts with a fresh environment, unaffected by what previous guests did.
RedShift
RedShift makes a return, after being temporarily removed in Ubuntu MATE 21.04.
Raspberry Pi images
We will be refreshing our Ubuntu MATE images for Raspberry Pi in the coming
weeks.
Major Applications
Accompanying MATE Desktop 1.26.0 and Linux 5.13 are Firefox 93.0,
Celluloid 0.20, and LibreOffice 7.2.1.2.
See the Ubuntu 21.10 Release Notes
for details of all the changes and improvements that Ubuntu MATE benefits from.
Download Ubuntu MATE 21.10
This new release will be first available for PC/Mac users.
You can upgrade to Ubuntu MATE 21.10 from Ubuntu MATE 21.04. Ensure that you
have all updates installed for your current version of Ubuntu MATE before you
upgrade.
Open the “Software & Updates” from the Control Center.
Select the 3rd Tab called “Updates”.
Set the “Notify me of a new Ubuntu version” drop down menu to “For any new version”.
Press Alt+F2 and type in update-manager -c -d into the command box.
Update Manager should open up and tell you: New distribution release ‘XX.XX’ is available.
If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
Click “Upgrade” and follow the on-screen instructions.
There are no offline upgrade options for Ubuntu MATE. Please ensure you have
network connectivity to one of the official mirrors or to a locally accessible
mirror and follow the instructions above.
Is there anything you can help with or want to be involved in? Maybe you just
want to discuss your experiences or ask the maintainers some questions. Please
come and talk to us.
This was not wholly unexpected, and at the same time it was completely unexpected. I should probably explain that.
She was ninety, which is a fair age for anyone, but her mother (my great-grandmother) lived to be even older. What’s different is that nobody knew she was ninety, other than us. Most of her friends were in their mid-seventies. Now, you might be thinking, LOL, old, but this is like you in your mid-forties hanging out with someone who’s 30, or you in your late twenties hanging out with someone who’s 13, or you at eighteen hanging out with someone who’s still learning what letters are. Gaps get narrower as we get older, but the thing that most surprised me was that all her friends were themselves surprised at the age she was. She can’t have been that age, they said when we told them, and buried in there is a subtle compliment: she was like us, and we’re so much younger, and when we’re that much older we won’t be like her.
No. No, you won’t be like my grandma.
I don’t want to talk much about the last few weeks. We, my mum and me, we flew to Ireland in the middle of the night, we sorted out her house and her garden and her affairs and her funeral and her friends and her family, and we came home. All I want to say about it is that, and all I want to say about her is probably best said in the eulogy I wrote and spoke for her death, and I don’t want to say it again.
But (and this is where people in my family should tune out) I thought I’d talk about the website I made for her. Because of course I made a website. You know how some people throw themselves into work to dull the pain when something terrible happens to the people they love? I’m assuming that if you were a metalworker in 1950 and you wanted to handle your grief that a bunch of people got a bunch of metal stuff that you wouldn’t ordinarily have made. Well, I am no metalworker; I build the open web, and I perform, on conference stages or for public perception. So I made a website for my grandma; something that will maybe live on beyond her and maybe say what we thought about her.
Firstly I should say: it’s at kryogenix.org/nell because her name was Nell and I made it. But neither of those things are really true. Her name was Ellen, and what I did was write down what we all said and what we all did to say goodbye. I wanted to capture the words we said while they were still fresh in my memory, but more while how I felt was fresh in my memory. Because in time the cuts will become barely noticeable scars and I’ll be able to think of her not being here without stopping myself crying, and I don’t want to forget. I don’t want to lose it amongst memories of laughter and houses and lights. So I wrote down what we all said right now while I can still feel the hurt of it like fingernails, so maybe I won’t let it fade away.
I want to write some things about the web, but that’s not for this post. This post is to say: goodbye, Grandma.
Goodbye, Grandma. I tried to make a thing that would make people think of you when they looked at it. I wanted people to think of memories of you when they read it. So I made a thing of memories of you, and I spoke about memories of you, and maybe people who knew you will remember you and people who didn’t know you will learn about you from what we all said.
I have a re-purposed AMD64 laptop motherboard, ready to become an experimental Ubuntu Core server.
It's in fine condition. You can see that it boots an Ubuntu LiveUSB's "Try Ubuntu" environment just fine. Attached to the motherboard is a new 60GB SSD for testing. The real server will use a 1TB HDD.
But Ubuntu Core doesn't install on bare metal from a Live USB. It's still easy, though.
1. Boot a "Try Ubuntu" Environment on the target system.
Test your network connection. The picture shows a wireless connection. This particular laptop has a wireless chip that is recognized out-of-the box, so I didn't need to get out the long network cable.
Test that your storage device works. You can see in the picture that Gnome Disks can see the storage device.
2. Terminal: sudo fdisk -l. Locate the storage device that you want to install Ubuntu Core onto.
The entire storage device will be erased.
My storage device is at /dev/sda today. It might be different next boot. Yours might be different.
3. Download the Ubuntu Core image. My file was called ubuntu-core-20-amd64.img.xz. Note that the download is a .img.xz file, not a .iso file.
Your browser saves it to your Downloads directory, of course.
4. Write Ubuntu Core to the storage device.
Warning: This command will erase your entire storage device. If there is anything valuable on your storage device, then you have skipped too many steps!
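As a sketch, assuming the image landed in ~/Downloads and the target device is the /dev/sda found above (adjust both before running):

```shell
cd ~/Downloads
# Decompress on the fly and write straight to the raw device.
# WARNING: of= must point at the disk you intend to erase!
xzcat ubuntu-core-20-amd64.img.xz | sudo dd of=/dev/sda bs=32M status=progress conv=fsync
```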
When prompted by the "Try Ubuntu" environment, remove the LiveUSB so you are booting from your newly-written storage device.
Be patient. My first boot into Ubuntu Core led to a black screen for nearly a minute before the system acknowledged that it had actually been working the entire time.
After 3-4 minutes of non-interactive setup alternating between blank screens and scrolling setup output, Ubuntu Core finally asked me two questions: Which network to connect to, and my Ubuntu SSO e-mail address.
Finally, the system rebooted again. This time it didn't ask any questions - just displayed the new Ubuntu Core system's IP address.
In summary -- your feedback matters! There are hundreds of engineers and designers working for *you* to continue making Ubuntu amazing!
Along with the switch from Unity to GNOME, we’re also reviewing some of the desktop applications we package and ship in Ubuntu. We’re looking to crowdsource input on your favorite Linux applications across a broad set of classic desktop functionality.
We invite you to contribute by listing the applications you find most useful on Linux, in order of preference. To help us parse your input, please copy and paste the following bullets with your preferred apps in Linux desktop environments. You’re welcome to suggest multiple apps; please just list them in priority order (e.g. Web Browser: Firefox, Chrome, Chromium). If some of your functionality has moved entirely to the web, please note that too (e.g. Email Client: Gmail web, Office Suite: Office 365 web). If the software isn’t free/open source, please note that (e.g. Music Player: Spotify client, non-free). If I’ve missed a category, please add it in the same format. And if your favorites aren’t packaged for Ubuntu yet, please let us know, as we’re creating hundreds of new snap packages for Ubuntu desktop applications and we’re keen to learn what key snaps we’re missing.
Web Browser: ???
Email Client: ???
Terminal: ???
IDE: ???
File manager: ???
Basic Text Editor: ???
IRC/Messaging Client: ???
PDF Reader: ???
Office Suite: ???
Calendar: ???
Video Player: ???
Music Player: ???
Photo Viewer: ???
Screen recording: ???
In the interest of opening this survey as widely as possible, we’ve cross-posted this thread to HackerNews, Reddit, and Slashdot. We very much look forward to another friendly, energetic, collaborative discussion.
Thanks to the Ubuntu Community Fund, the Xubuntu web server has been funded for another two years. Elizabeth announced the news on Twitter early in September. If you want to sponsor Xubuntu and the other flavors, the community fund is the way to go. Other options are available on the Xubuntu website.
On September 30, Pasi uploaded the new wallpaper for Xubuntu 21.10. This cycle's wallpaper features overlapping teal geometric shapes, continuing the tradition of calm and clean backdrops. The updated xubuntu-artwork package has been submitted and is currently awaiting acceptance into the 21.10 archive.
Featuring teal geometric shapes, the Xubuntu 21.10 wallpaper gives off a relaxing vibe.
Package Updates
In September, the Xubuntu packageset saw only a few updates to Xfce and Xubuntu components.
Greybird-dark was updated with a much smoother gradient for non-CSD window titlebars, better aligning with the CSD version.
In other package-related (semi-Xubuntu-related) news, Ayatana indicators have replaced the original libappindicator and libindicator in Ubuntu, with the originals demoted from main to universe. libappindicator and libindicator have been removed entirely from Debian.
Xfce 4.17 PPA Update
Xfce 4.18 is still very early in its development, with Xfce 4.17 as its development series. Our Debian package manager, Unit 193, has started publishing Xfce 4.17 builds to the Xubuntu QA Experimental PPA for testing and development. There's not much to see here yet, but if you're curious (and don't mind the occasional breakage), check it out! If you prefer not to install bleeding-edge packages on your system, you can also use the XFCE Test docker image to try out the latest changes.
Xfce 4.17 packages are now available for 21.10 (featured), 21.04, and 20.04.
Upcoming Dates
If you're following along with the Release Schedule, you know that Xubuntu 21.10 "Impish Indri" is just around the corner! The BETA release notes are up on the Xubuntu Wiki. More than ever, please take the time to test Xubuntu to help us catch some last-minute bugs.
This week we’ve been watching The Matrix, giving up Facebook and buying a new car. We make predictions for the next 14 years, bring you some command line love and go over your feedback to conclude this, the last episode of Ubuntu Podcast ever!
A colleague recently talked me into buying one of these nifty HDMI to USB video capture dongles, which lets me try out my ARM boards attached to my desktop without the need for a separate monitor. Your video output just ends up in a window on your desktop … quite a feature for just 11€.
The device shows up on Linux as a new /dev/video* device when you plug it in, it also registers in pulseaudio as an audio source. To display the captured output a simple call to mplayer is sufficient, like:
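For example, with the dongle sitting at /dev/video2 (the node it got on my machine; yours may differ):

```shell
mplayer tv:// -tv driver=v4l2:device=/dev/video2:width=1280:height=720
```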
Now, you might have other video devices (i.e. a webcam) attached to your machine and it will not always be /dev/video2 … so we want a bit of auto-detection to determine the device …
Re-plugging the device while running dmesg -w shows the following:
usb 1-11: new high-speed USB device number 13 using xhci_hcd
usb 1-11: New USB device found, idVendor=534d, idProduct=2109, bcdDevice=21.00
usb 1-11: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 1-11: Product: USB Video
usb 1-11: Manufacturer: MACROSILICON
uvcvideo: Found UVC 1.00 device USB Video (534d:2109)
hid-generic 0003:534D:2109.000C: hiddev2,hidraw7: USB HID v1.10 Device [MACROSILICON USB Video] on usb-0000:00:14.0-11/input4
So the vendor id for our device is 534d … this should help us find the correct device in Linux's sysfs … let's write a little helper:
VIDEODEV=$(for file in /sys/bus/usb/devices/*/idVendor; do
if grep -q 534d $file 2>/dev/null; then
ls $(echo $file| sed 's/idVendor/*/')/video4linux;
fi;
done | sort |head -1)
Running this snippet in a terminal and echoing $VIDEODEV now prints “video2”; this is something we can work with … let's put both of the above together into one script:
#! /bin/sh
VIDEODEV=$(for file in /sys/bus/usb/devices/*/idVendor; do
if grep -q 534d $file 2>/dev/null; then
ls $(echo $file| sed 's/idVendor/*/')/video4linux;
fi;
done | sort |head -1)
mplayer -ao pulse tv:// -tv driver=v4l2:device=/dev/$VIDEODEV:width=1280:height=720
Making the script executable with chmod +x run.sh (I have called it run.sh) and executing it as ./run.sh now pops up a 720p window showing the screen of my attached Raspberry Pi.
Video works now; let's take a look at how we can get the audio output too. First we need to find the correct name for the pulseaudio source, again based on the vendor id:
AUDIODEV=$(pactl list sources | egrep 'Name:|device.vendor.id.' | grep -B1 534d | head -1 | sed 's/^.*Name: //')
Running the above and then echoing $AUDIODEV returns alsa_input.usb-MACROSILICON_USB_Video-02.analog-stereo so this is our pulse source we want to capture and play back to the default audio output, this can be easily done with a pipe between two pacat commands, one for record (-r) and one for playback (-p) like below:
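The pipe is a single line, reading from the dongle's source and playing straight back to the default output (it relies on $AUDIODEV from the detection snippet above):

```shell
pacat -r --device="$AUDIODEV" --latency-msec=1 | pacat -p --latency-msec=1
```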
Playing a video with audio on my Pi, with the ./run.sh script running in one terminal and the pacat pipe in a second one, now gives me both video and audio output … To not have to use two terminals, we should merge the pacat command, the auto-detection and the mplayer command into one script. Since pacat and mplayer both block, we need to fiddle a bit: put pacat into the background (by adding a & to the end of the command) and tell our shell to kill all subprocesses (even backgrounded ones) started by our script when we stop it, using a trap command.
So let's merge everything into one script; it should then look like below:
#! /bin/sh
pid=$$
terminate() {
pkill -9 -P "$pid"
}
trap terminate 1 2 3 15 0 # note: SIGKILL (9) cannot be trapped, so it is omitted
VIDEODEV=$(for file in /sys/bus/usb/devices/*/idVendor; do
if grep -q 534d $file 2>/dev/null; then
ls $(echo $file| sed 's/idVendor/*/')/video4linux;
fi;
done | sort |head -1)
AUDIODEV=$(pactl list sources | egrep 'Name:|device.vendor.id.' | grep -B1 534d | head -1 | sed 's/^.*Name: //')
pacat -r --device="$AUDIODEV" --latency-msec=1 | pacat -p --latency-msec=1 &
mplayer -ao pulse tv:// -tv driver=v4l2:device=/dev/$VIDEODEV:width=1280:height=720
And this is it, executing the script now plays back video and audio from the dongle …
Collecting all the above info to create that shell script took me the better part of a Sunday afternoon, and I figured that everyone who buys such a device might hit the same pain. So why not package it up in a distro-agnostic way, so that everyone on Linux can simply use my script and does not have to do all the hackery themselves? Snaps are an easy way to do this, and they are really quickly packaged as well, so let's do it!
First of all we need the snapcraft tool to quickly and easily create a snap, and multipass as the build environment:
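Both are available as snaps on any system that already has snapd:

```shell
sudo snap install snapcraft --classic
sudo snap install multipass
```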
Now let's create a workdir, copy our script into place and let snapcraft init create a template file as a boilerplate:
$ mkdir hdmi-usb-dongle
$ cd hdmi-usb-dongle
$ cp ../run.sh .
$ snapcraft init
Created snap/snapcraft.yaml.
Go to https://docs.snapcraft.io/the-snapcraft-format/8337 for more information about the snapcraft.yaml format.
$
We’ll edit the name in snap/snapcraft.yaml, change core18 to core20 (since we really want to be on the latest base), adjust description and summary, and switch grade: to stable and confinement: to strict. Now that we have a proper skeleton, let's take a look at the parts: section, which tells snapcraft how to build the snap and what should be put into it. We just want to copy our script into place and make sure that mplayer and pacat are available to it. To copy a script we can use the dump plugin that snapcraft provides; to make sure the two applications our script uses get included, there is the stage-packages: property. The parts: definition should look like:
parts:
copy-runscript: # give it any name you like here
plugin: dump
source: . # our run.sh lives in the top level of the source tree
organize:
run.sh: usr/bin/run # tell snapcraft to put run.sh into a PATH that snaps do know about
stage-packages:
- mplayer
- pulseaudio-utils
Now we can just call snapcraft while inside the hdmi-usb-dongle dir:
$ snapcraft
Launching a VM.
Launched: snapcraft-my-dongle
[...]
Priming copy-runscript
+ snapcraftctl prime
This part is missing libraries that cannot be satisfied with any available stage-packages known to snapcraft:
- libGLU.so.1
- libglut.so.3
These dependencies can be satisfied via additional parts or content sharing. Consider validating configured filesets if this dependency was built.
Snapping |
Snapped hdmi-usb-dongle_0.1_amd64.snap
OOPS! It seems we are missing some libraries and snapcraft tells us about this (they are apparently needed by mplayer) … let's find where these libs live and add the correct packages to our stage-packages: entry. We'll install apt-file for this, which allows reverse searches in deb packages:
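On Ubuntu, the lookup session looks roughly like this; the search results shown are what I got on my release, so treat them as an assumption and check yours:

```shell
sudo apt install apt-file
sudo apt-file update
# Reverse-search which deb packages ship the missing libraries:
apt-file search libGLU.so.1    # -> libglu1-mesa
apt-file search libglut.so.3   # -> freeglut3
```

With that, libglu1-mesa and freeglut3 go into the stage-packages: list.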
If we now just call snapcraft again, it will re-build the snap for us and the warning about the missing libraries will be gone …
So now we do have a snap containing all the bits we need, the run.sh script, mplayer and pacat (from the pulseaudio-utils package). We also have made sure that mplayer finds the libs it needs to run, now we just need to tell snapcraft how we want to execute our script. To do this we need to add an apps: section to our snapcraft.yaml:
apps:
hdmi-usb-dongle:
extensions: [gnome-3-38]
command: usr/bin/run
plugs:
- audio-playback # for the use of "pacat -p" and "pactl list sources"
- audio-record # for the use of "pacat -r"
- camera # to allow read access to /dev/videoX
- hardware-observe # to allow scanning sysfs for the VendorId
To save us from having to fiddle with any desktop integration, there are the desktop extensions (you can see which extensions exist with the snapcraft list-extensions command). Since we picked base: core20 at the beginning when editing the template file, we will use the gnome-3-38 extension with our snap. Our app should execute our script from the place we put it in with the organize: statement before, so our command: entry points to usr/bin/run, and to allow the different functions of our script we add a bunch of snap plugs that I have explained inline above. Now our snapcraft.yaml looks like below:
name: hdmi-usb-dongle
base: core20
version: '0.1'
summary: A script to use a HDMI to USB dongle
description: |
This snap allows to easily use a HDMI to USB dongle on a desktop
grade: stable
confinement: strict
apps:
hdmi-usb-dongle:
extensions: [gnome-3-38]
command: usr/bin/run
plugs:
- audio-playback
- audio-record
- camera
- hardware-observe
parts:
copy-runscript:
plugin: dump
source: .
organize:
run.sh: usr/bin/run
stage-packages:
- mplayer
- pulseaudio-utils
- libglu1-mesa
- freeglut3
And this is it … running snapcraft again will now create a snap with an executable script inside. You can now install this snap (because it is a local snap, you need the --dangerous option), connect the interface plugs and run the app (note that audio-playback automatically connects on desktops, so you do not explicitly need to connect it) …
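Spelled out, that is (the .snap file name comes from the snapcraft output earlier; audio-playback is skipped because it auto-connects on desktops):

```shell
sudo snap install --dangerous hdmi-usb-dongle_0.1_amd64.snap
sudo snap connect hdmi-usb-dongle:camera
sudo snap connect hdmi-usb-dongle:hardware-observe
sudo snap connect hdmi-usb-dongle:audio-record
hdmi-usb-dongle
```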
When you now run hdmi-usb-dongle you should see something like below (if you have a HDMI cable connected with a running device you will indeed not see the test pattern):
This is great, everything runs fine, but if we run this on a desktop an “Unknown” icon shows up in the panel … it is also annoying to have to start our app from a terminal all the time, so let's turn our snapped shell script into a desktop app by simply adding a .desktop file and an icon:
$ pwd
/home/ogra/hdmi-usb-dongle
$ mkdir snap/gui
We’ll create the desktop file inside the snap/gui folder that we just created, with the following content:
[Desktop Entry]
Type=Application
Name=HDMI to USB Dongle
Icon=${SNAP}/meta/gui/icon.png
Exec=hdmi-usb-dongle
Terminal=false
Categories=Utility;
Note that the Exec= line just uses our command from the apps: section in our snapcraft.yaml. Now find or create some .png icon; 256×256 is a good size. I tend to use flaticon.com to find something that fits (do not forget to attribute the author if you use downloaded icons; the description: field in your snapcraft.yaml is a good place for this). Copy that icon.png into snap/gui …
Re-build your snap once again, install it with the --dangerous option and you should now find it in your application overview or menu (if you do not use GNOME). Your snapped shell script is done, congratulations !
You could now just upload it to snapcraft.io to allow others to use it … and here we are back to the reason for this blog post. As I wrote at the beginning, it took me a bit of time to figure out all the commands that went into the script. I’m crazy enough to think it might be useful for others, even though this USB dongle is pretty exotic hardware, so I expected it would probably find one or two other users for whom it might be useful, and I created https://snapcraft.io/hdmi-usb-dongle
For snap publishers, the snapcraft.io page offers the neat feature of actually showing your number of users. I created this snap about 6 weeks ago; let's see how many people actually installed it in this period:
Oh, it seems I was wrong: there are actually 95 (!) people out there that I could help by packaging my script as a snap!! While the majority of users will indeed be on Ubuntu, given that snaps are a default package tool there, even among these 95 users there is a good bunch of non-Ubuntu systems (what the heck is “boss 7”??):
So if you have any useful scripts lying on your disk, even for exotic tasks, why not share them with others? As you can see from my example, even scripts that handle exotic hardware quickly find a lot of users around the world and across different distros when you offer them in the snap store. And do not forget: snap packages can be services and GUI apps as well as CLI apps; there are no limits on what you can package as a snap!
Friday 1 October is the testing day for Plasma 25th Anniversary Edition.
Please show up in our Plasma Matrix room (accessible on Libera IRC as #plasma) and download one or more of the rolling distros that ship the beta. See: Distros with Plasma beta.
The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 21.10, codenamed “Impish Indri”.
While this beta is reasonably free of any showstopper DVD build or installer bugs, you may find some bugs within. This image is, however, reasonably representative of what you will find when Ubuntu Studio 21.10 is released on October 21, 2021.
Please note: Due to the change in desktop environment, directly upgrading to Ubuntu Studio 21.10 from 20.04 LTS is not supported and will not be supported. However, upgrades from Ubuntu Studio 21.04 will be supported. See the Release Notes for more information.
Full updated information is available in the Release Notes.
New Features
Ubuntu Studio 21.10 includes the new KDE Plasma 5.22 desktop environment. This is a beautiful and functional upgrade to previous versions, and we believe you will like it.
Studio Controls is upgraded to 2.2.3 and includes a frontend to qnetjack, which allows sending JACK sources to and from the local network.
OBS Studio is upgraded to version 27 and works with Wayland sessions. While Wayland is not currently the default, it is available as unsupported and experimental.
We now use the Icon-Only Task Manager by default. You can change this by right-clicking on the taskbar/top panel and selecting “Show Alternatives…”.
There are many other improvements, too numerous to list here. We encourage you to take a look around the freely-downloadable ISO image.
This week we’ve been buying iPhones and playing with floppy disk emulators. We bring you news from the Ubuntu community and our favourite stories from past episodes of Ubuntu Podcast.
So you just broke that PR you’ve been working on for months?
One day, you find yourself force pushing over your already existing pull request/branch because, like me, you like to reuse funny names:
git fetch origin
git checkout tellmewhy #already exists and has a pull request still open, but you didn't know
git reset --hard origin/master
# hack hack hack
git commit $files
git push -f nameofmyremote
Panic!
Here’s when you realize that you’ve done something wrong, very very wrong, because GitHub will throw the message:
Error creating pull request: Unprocessable Entity (HTTP 422)
A pull request already exists for foursixnine:tellmewhy.
So, you have already broken your old PR with a completely unrelated change.
Don’t panic
If you happen to know the previous commit id, you can always pick it up again (go to github.com/pulls, for instance, and look for the PR with the branch),
AND, AND, AAANDDDD, ABSOLUTELY ANDDDD, you haven’t run git gc.
In my case:
@foursixnine foursixnine force-pushed the tellmewhy branch from 9e86f3a to **9714c93** 2 months ago
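If the old commit id isn't visible on GitHub, your local reflog usually still has it (again, provided git gc hasn't run); the branch name here is of course the hypothetical one from this story:

```shell
git reflog show tellmewhy   # lists every tip this branch has pointed at
```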
All that's left to do is:
git checkout $commit
# even better:
git checkout tellmewhy           # old branch, with new commits that overwrote the old ones
git checkout -b tellmewhyagain   # your new copy, open your PR from here
git push origin tellmewhyagain   # now all is "safe" in the cloud
git checkout tellmewhy           # let's bring the dead back to life
git reset --hard 9714c93         # the commit id from the force-push above
git push -f origin tellmewhy
And this is it, you’ve brought the dead back to life
When Inkscape was started, it was a loose coalition of folks that met on the Internet. We weren’t really focused on things like governance, the governance was mostly who was an admin on SourceForge (it was better back then). We got some donated server time for a website and we had a few monetary donations that Bryce handled mostly with his personal banking. Probably one of our most valuable assets, our domain, was registered to and paid for by Mentalguy himself.
Realizing that wasn’t going to last forever we started to look into ways to become a legal entity as well as a great graphics program. We decided to join the (then much smaller) Software Freedom Conservancy which has allowed us to take donations as a non-profit and connected us to legal and other services to ensure that all the details are taken care of behind the scenes. As part of joining The Conservancy we setup a project charter, and we needed some governance to go along with that. This is where we officially established what we call “The Inkscape Board” and The Conservancy calls the Project Leadership Committee. We needed a way to elect that board, for which we turned to the AUTHORS file in the Inkscape source code repository.
Today it is clear that the AUTHORS file doesn’t represent all the contributors to Inkscape. It hasn’t for a long time and realistically didn’t when we established it. But it was easy. What makes Inkscape great isn’t that it is a bunch of programmers in the corner doing programmer stuff, but that it is a collaboration between people with a variety of skill sets bringing those perspectives together to make something they couldn’t build themselves.
Who got left out? We chose a method that had a vocational bias, it preferred people who are inclined to and enjoy computer programming. As a result translators, designers, technical writers, article authors, moderators, and others were left out of our governance. And because of societal trends we picked up both a racial and gender bias in our governance. Our board has never been anything other than a group of white men.
We are now taking specific actions to correct this in the Inkscape charter and starting to officially recognize the contributions that have been slighted by this oversight.
Our core process for recognizing contributors has always been about peer review, with a rule we’ve called the “two patch rule.” It means that with two meaningful patches that are peer-reviewed and committed, you’re allowed commit rights to the repository and are added to the AUTHORS file. We want to keep this same spirit as we start to recognize a wider range of contributions, so we’re looking to make it the “two peers rule.” Here we’ll add someone to the list of contributors if two peers who are contributors say the individual has made significant contributions. Outside of the charter, we expect each group of contributors will make a list of what they consider to be a significant contribution so that potential contributors know what to expect. For instance, for developers it will likely remain patches.
We’re also taking the opportunity to build a process for contributors who move on to other projects. Life happens, interests change, and that’s a natural cycle of projects. But our old process which focused more on copyright of the code didn’t allow for contributors to be marked as retired. We will start to track who voted in elections (board members, charter changes, about screens, etc.) and contributors who fail to vote in two consecutive elections will be marked as retired. A retired contributor can return to active status by simply going through the “two peers rule.”
As a founder it pains me to think of all the contributions that have gone unrecognized. Sure there were “thank yous” and beers at sprints, but that’s not enough. I hope this new era for Inkscape will see these contributions recognized and amplified so that Inkscape can continue to grow. The need for Free Software has only grown throughout Inkscape’s lifetime and we need to keep up!
Debian 11 (codename Bullseye) was recently released. This was the smoothest upgrade I've experienced in some 20 years as a Debian user. In my haste, I completely forgot to first upgrade dpkg and apt, doing a straight dist-upgrade. Nonetheless, everything worked out of the box. No unresolved dependency cycles. Via my last-mile Gigabit connection, it took about 5 minutes to upgrade and reboot. Congratulations to everyone who made this possible!
Since the upgrade, only a handful of bugs were found. I filed bug reports. Over these past few days, maintainers have started responding. In one particular case, my report exposed a CVE caused by code copy-pasted between two similar packages. The source package fixed their code to something more secure a few years ago, while the destination package missed it. The situation has been brought to the attention of Debian's security team and should be fixed over the next few days.
Afterthoughts
Having recently experienced hard-disk problems on my main desktop, upgrading to Bullseye made me revisit a few issues. One of these was the possibility of transitioning to BTRFS. The last time I investigated the possibility was back when Ubuntu briefly switched their default filesystem to BTRFS. Back then, my feeling was that BTRFS wasn't ready for mainstream. For instance, the utility to convert an EXT2/3/4 partition to BTRFS corrupted the end of the partition. No thanks. However, in recent years, many large-scale online services have migrated to BTRFS and seem to be extremely happy with the result. Additionally, Linux kernel 5 added useful features such as background defragmentation. This got me pondering whether now would be a good time to migrate to BTRFS. Sadly it seems that the stock kernel shipping with Bullseye doesn't have any of these advanced features enabled in its configuration. Oh well.
Geode
The only point that has become problematic is my Geode hosts. For one thing, upstream Rust maintainers have decided to ignore the fact that i686 is a specification and arbitrarily added compiler flags for more recent x86-32 CPUs to their i686 target. While Debian Rust maintainers have purposely downgraded the target, rustc still produces binaries that the Geode LX (essentially an i686 without PAE) cannot process. This affects fairly basic packages such as librsvg, which breaks SVG image support for a number of dependencies. Additionally, there have been persistent problems with systemd crashing on my Geode hosts whenever daemon-reload is issued. Then, a few days ago, problems started occurring with C++ binaries, because GCC-11 upstream enabled flags for more recent CPUs in their default i686 target. While I realize that SSE and similar recent CPU features produce better binaries, I cannot help but feel that treating CPU targets as anything other than a specification is a mistake. i686 is a specification. It is not a generic equivalent to x86-32.
The Debian Janitor is an automated
system that commits fixes for (minor) issues in Debian packages that can be
fixed by software. It gradually started proposing merges in early
December. The first set of changes sent out ran lintian-brush on sid packages maintained in
Git. This post is part of a series about the progress of the Janitor.
As covered in my post from last week, the Janitor now regularly tries to
import new upstream git snapshots or upstream releases into packages in Sid.
Moving parts
There are about 30,000 packages in sid, and it usually takes a couple of weeks
for the janitor to cycle through all of them. Generally speaking, there are up
to three moving targets for each package:
- The packaging repository: vcswatch regularly scans this for changes and notifies the janitor when a repository has changed. For salsa repositories it is notified instantly through a web hook.
- The upstream release tarballs: the QA watch service regularly polls these, and the janitor scans for changes in the UDD tables with watch data (used for fresh-releases).
- The upstream repository: there is no service in Debian that watches this at the moment (used for fresh-snapshots).
When the janitor notices that one of these three targets has changed, it
prioritizes processing of a package - this means that a push to a packaging
repository on salsa usually leads to a build being kicked off within 10
minutes. New upstream releases are usually noticed by QA watch within a day or
so and then lead to a build. New commits in upstream repositories, however, don’t get noticed automatically today.
Note that there are no guarantees; the scheduler tries to be clever and not
e.g. rebuild the same package over and over again if it’s constantly changing
and takes a long time to build.
Packages without priority are processed with a scoring system that takes into
account perceived value (based on e.g. popcon), cost (based on wall-time
duration of previous builds) and likelihood of success (whether recent builds
were successful, and how frequently the repositories involved change).
Webhooks for upstream repositories
At the moment there is no service in Debian (yet - perhaps this is something
that vcswatch or a sibling service could also do?) that scans upstream
repositories for changes.
However, if you maintain an upstream package, you can use a webhook to notify
the janitor that commits have been made to your repository, and it will create
a new package in fresh-snapshots. Webhooks from the
following hosting site software are currently supported:
You can simply use the URL https://janitor.debian.net/ as the target for hooks. There is no need to specify a secret, and the hook can use either a JSON or a form-encoded payload.
The endpoint should tell you whether it understood a webhook request, and
whether it took any action. It’s fine to submit webhooks for repositories that
the janitor does not (yet) know about.
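For a quick manual smoke test of the endpoint, you can POST a hand-crafted payload with curl; note that the JSON body below is a made-up minimal example, not the exact schema any hosting site actually sends:

```shell
curl -X POST -H 'Content-Type: application/json' \
  -d '{"repository": {"url": "https://github.com/example/project.git"}}' \
  https://janitor.debian.net/
```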
GitHub
For GitHub, you can do so in the Webhooks section of the Settings tab. Fill the form as shown below and click on Add webhook:
GitLab
On GitLab instances, you can find the Webhooks tab under the Settings menu for each repository (under the gear symbol). Fill the form in as shown below and click Add Webhook:
Launchpad
For Launchpad, go to the repository (for Git) web view and click Manage Webhooks. From there, you can add a new webhook; fill the form in as shown below and click Add Webhook:
They asked that we keep confidential the exact details of what was discussed and asked, which I think is reasonable, but I did put together a slide deck to summarise my thoughts which I presented to them, and you can certainly see that. It’s at kryogenix.org/code/cma-apple and shows everything that I presented to the CMA along with my detailed notes on what it all means.
Bruce had a similar slide deck, and you can read his slides on iOS’s browser monopoly and progressive web apps. Bruce has also summarised our other colleague’s presentation, which is what we led off with. The discussion that we then went into was really interesting; they asked some very sensible questions, and showed every sign of properly understanding the problem already and wanting to understand it better. This was good: honestly, I was a bit worried that we might be trying to explain the difference between a browser and a rendering engine to a bunch of retired colonel types who find technology to be baffling and perhaps a little unmanly, and this was emphatically not the case; I found the committee engaging and knowledgeable, and this is encouraging.
In the last few weeks we’ve seen quite a few different governments and regulatory authorities begin to take a stand against tech companies generally and Apple’s control over your devices more specifically. These are baby steps — video and music apps are now permitted to add a link to their own website, saints preserve us, after the Japan Fair Trade Commission’s investigation; developers are now allowed to send emails to their own users which mention payments, which is being hailed as “flexibility” although it doesn’t allow app devs to tell their users about other payment options in the app itself, and there are still court cases and regulatory investigations going on all around the world. Still, the tide may be changing here.
What I would like is that I can give users the best experience on the web, on the best mobile hardware. That best mobile hardware is Apple’s, but at the moment if I want to choose Apple hardware I have to choose a sub-par web experience. Nobody can fix this other than Apple, and there are a bunch of approaches that they could take — they could make Safari be a best-in-class experience for the web, or they could allow other people to collaborate on making the browser best-in-class, or they could stop blocking other browsers from their hardware. People have lots of opinions about which of these, or what else, could and should be done about this; I think pretty much everyone thinks that something should be done about it, though. Even if your goal is to slow the web down and to think that it shouldn’t compete with native apps, there’s no real reason why flexbox and grid and transforms should be worse in Safari, right? Anyway, go and read the talk for more detail on all that. And I’m interested in what you think. Do please hit me up on Twitter about this, or anything else; what do you think should be done, and how?
Last week I sat down with the UK Competition and Markets Authority to talk about browser choice on Apple devices, and whether the claims that limiting choice is good for security and privacy actually hold up. Here's the presentation I gave. https://t.co/7i35fwdnrM
I mostly worked on different things, I guess. But mostly on packaging keylime and some Google Agents upload(s) and SRU(s). Also did a lot of reviewing, and so on.
I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from next month onward, as I’ve been doing before. :D
Debian (E)LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.
And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).
This was my twenty-third month as a Debian LTS and eleventh month as a Debian ELTS paid contributor.
I was assigned 23.75 hours for LTS and 40.00 hours for ELTS and worked on the following things:
(however, I only worked for 23.75h on ELTS work, carrying the rest over to next month)
Noticed that there’s a fallout of CVE-2021-3185, where an update was issued for gst-plugins-bad1.0, however, not for gst-plugins-bad0.10.
Thanks to Sylvain’s script, this came up and I prepped an update for that.
Started to work on libjdom1-java’s regression.
Other (E)LTS Work:
Front-desk duty from 26-07 until 01-08 and from 30-08 until 05-09 for both LTS and ELTS.
Mark CVE-2021-39240/haproxy as not-affected for stretch and jessie.
Mark CVE-2021-39241/haproxy as not-affected for stretch and jessie.
Mark CVE-2021-39242/haproxy as not-affected for stretch and jessie.
Mark CVE-2021-33582/cyrus-imapd as no-dsa for stretch.
Mark CVE-2020-18771/exiv2 as no-dsa for exiv2 for stretch.
Mark CVE-2020-18899/exiv2 as no-dsa for exiv2 for stretch.
Mark CVE-2021-38171/ffmpeg as postponed for stretch.
Mark CVE-2021-40330/git as no-dsa for stretch and jessie.
Mark CVE-2020-19481/gpac as ignored for stretch.
Mark CVE-2021-40491/inetutils as no-dsa for stretch.
Mark CVE-2021-36370/mc as no-dsa for stretch and jessie.
Mark CVE-2021-35368/modsecurity-crs as no-dsa for stretch.
Mark CVE-2021-23434/node-object-path as end-of-life for stretch.
Mark CVE-2021-32610/php-pear as no-dsa for stretch.
Mark CVE-2017-9525/systemd-cron as no-dsa for stretch.
Mark CVE-2021-37701/node-tar as end-of-life for stretch.
Mark CVE-2021-37712/node-tar as end-of-life in stretch.
Mark CVE-2021-3750/qemu as postponed for jessie.
Mark CVE-2021-27511/prototypejs as postponed for jessie.
Mark CVE-2021-23437/pillow as postponed for stretch and jessie.
Auto EOL’ed gpac, cacti, openscad, cgal, cyrus-imapd-2.4, libsolv, mosquitto, atomicparsley, gtkpod, node-tar, libapache2-mod-auth-openidc, neutron, inetutils and linux for jessie.
Drop cpio from ela-needed; open issues don’t warrant an ELA.
Attended monthly Debian LTS meeting.
Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
I first started using Debian sometime in the mid 90s and started contributing as a developer and package maintainer more than two decades ago. My very first scholarly publication, collaborative work led by Martin Michlmayr that I did when I was still an undergrad at Hampshire College, was about quality and the reliance on individuals in Debian. To this day, many of my closest friends are people I first met through Debian. I met many of them at Debian’s annual conference DebConf.
Given my strong connections to Debian, I find it somewhat surprising that although all of my academic research has focused on peer production, free culture, and free software, I haven’t actually published any Debian related research since that first paper with Martin in 2003!
So it felt like coming full circle when, several days ago, I was able to sit in the virtual DebConf audience and watch two of my graduate student advisees—Kaylea Champion and Wm Salt Hale—present their research about Debian at DebConf21.
Salt presented his masters thesis work, which tried to understand the social dynamics behind organizational resilience among free software projects. Kaylea presented her work on a new technique she developed to identify at-risk software packages that are lower quality than we might hope given their popularity (you can read more about Kaylea’s project in our blog post from earlier this year).
And if you’re interested in joining us—perhaps to do more research on FLOSS and/or Debian and/or a graduate degree of your own?—please be in touch with me directly!
Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 20.04.3 LTS has been released! What is Lubuntu? Lubuntu is an official Ubuntu flavor which uses the Lightweight Qt Desktop Environment (LXQt). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock-solid Ubuntu […]
The Debian Janitor is an automated
system that commits fixes for (minor) issues in Debian packages that can be
fixed by software. It gradually started proposing merges in early
December. The first set of changes sent out ran lintian-brush on sid packages maintained in
Git. This post is part of a series about the progress of the Janitor.
Linux distributions like Debian fulfill an important function in the FOSS ecosystem - they are system integrators that take existing free and open source software projects and adapt them where necessary to work well together. They also make it possible for users to install more software in an easy and consistent way and with some degree of quality control and review.
One of the consequences of this model is that the distribution package often lags behind upstream releases. This is especially true for distributions that have tighter integration and standardization (such as Debian), and often new upstream code is only imported irregularly because it is a manual process - both updating the package, but also making sure that it still works together well with the rest of the system.
The process of importing a new upstream used to be (well, back when I started working on
Debian packages) fairly manual and something like this:
Go to the upstream’s homepage, find the tarball and signature and verify the tarball
Make modifications so the tarball matches Debian’s format
Diff the original and new upstream tarballs and figure out whether changes
are reasonable and which require packaging changes
Update the packaging, changelog, build and manually test the package
Upload
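In terms of today’s tooling, those manual steps roughly map onto a command sequence like the following sketch (the version number is illustrative, and the exact invocations vary by workflow):

```shell
# Fetch and verify the upstream tarball listed in debian/watch
uscan --verbose

# Import the new tarball onto the packaging branch (git-buildpackage workflow)
gbp import-orig --uscan

# Record the new upstream version in debian/changelog
dch -v 1.2.3-1 "New upstream release."

# Build the package locally and test it
sbuild

# Upload once everything looks good
dput
```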
Ecosystem Improvements
However, there have been developments over the last decade that make it easier to import new upstream releases into Debian packages.
Uscan and debian QA watch
Uscan and
debian/watch have been around for a
while and make it possible to find upstream tarballs.
A debian watch file usually looks something like this:
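For instance, for a hypothetical project whose releases are tagged on GitHub:

```
version=4
https://github.com/example/example/tags .*/v?(\d\S+)\.tar\.gz
```

The first line declares the watch file format version; the second gives a page to poll and a regular expression that matches release tarball links and captures the upstream version number.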
The QA watch service regularly polls
all watch locations in the archive and makes the information available, so it’s
possible to know which packages have changed without downloading each one of them.
Git
Git is fairly ubiquitous nowadays, and most upstream projects and packages in Debian use it. There are still exceptions that do not use any version control system or that use a different control system, but they are becoming increasingly rare. [1]
debian/upstream/metadata
DEP-12 specifies a file format with metadata about the upstream project that a package was based on. Particularly relevant for our case is that it has fields for the location of the upstream version control repository.
debian/upstream/metadata files look something like this:
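For example, for a hypothetical project hosted on GitHub (the project name here is illustrative):

```
---
Repository: https://github.com/example/example.git
Repository-Browse: https://github.com/example/example
```

The Repository field is the one the Janitor cares about: it points at the upstream version control repository itself, while Repository-Browse points at a human-readable web view.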
While DEP-12 is still a draft, it has already been widely adopted - there are about 10000 packages in Debian that ship a debian/upstream/metadata file with Repository information.
Autopkgtest
The Autopkgtest
standard and associated tooling provide a way to run a defined set of tests
against an installed package. This makes it possible to verify that a package
is working correctly as part of the system as a whole. ci.debian.net regularly runs these tests against Debian packages to
detect regressions.
Vcs-Git headers
The Vcs-Git headers in debian/control are the equivalent of the Repository field in debian/upstream/metadata, but for the packaging repositories (as opposed to the upstream ones).
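For a (hypothetical) package maintained on Salsa, the relevant stanza in debian/control might look like:

```
Vcs-Git: https://salsa.debian.org/debian/example.git
Vcs-Browser: https://salsa.debian.org/debian/example
```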
They’ve been around for a while and are widely adopted, as can be seen from zack’s stats:
The vcswatch service regularly polls packaging repositories to see whether they have changed, which makes it a lot easier to consume this information in a usable way.
Debhelper adoption
Over the last couple of years, Debian has slowly been converging on a single
build tool - debhelper’s dh interface.
Being able to rely on a single build tool makes it easier to write code to
update packaging when upstream changes require it.
Debhelper DWIM
Debhelper (and its helpers) increasingly can figure out how to do the Right
Thing in many cases without being explicitly configured. This makes packaging
less effort, but also means that it’s less likely that importing a new upstream
version will require updates to the packaging.
With all of these improvements in place, it actually becomes feasible in a lot
of situations to update a Debian package to a new upstream version
automatically. Of course, this requires that all of this information is
available, so it won’t work for all packages. In some cases, the packaging for
the older upstream version might not apply to the newer upstream version.
The Janitor has attempted to import a new upstream Git snapshot and a new
upstream release for every package in the archive where a debian/watch file or
debian/upstream/metadata file are present.
These are the steps it uses:
Find new upstream version
If release, use debian/watch - or maybe tagged in upstream repository
If snapshot, use debian/upstream/metadata’s Repository field
If neither is available, use guess-upstream-metadata from upstream-ontologist to guess the upstream Repository
Merge upstream version into packaging repository, possibly importing tarballs using pristine-tar
Update the changelog file to mention the new upstream version
Run some checks to ensure there are no unintentional changes, e.g.:
Scan diff between old and new for surprising license changes
Today, abort if there are any - in the future, maybe update debian/copyright
Check for obvious compatibility breaks - e.g. sonames changing
Attempt to update the packaging to reflect upstream changes
Refresh patches
Attempt to build the package with deb-fix-build, to deal with any missing dependencies
Run the autopkgtests with deb-fix-build to deal with missing dependencies, and abort if any tests fail
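The fallback logic for finding a new upstream version in the first step can be sketched as follows; the function and argument names are illustrative, not the Janitor's actual API:

```python
def find_upstream_source(mode, has_watch_file, metadata_repository, guessed_repository):
    """Pick where to look for a new upstream version.

    mode: "release" (import an upstream release) or "snapshot"
    (import the latest upstream Git commit).
    """
    # For releases, debian/watch is the preferred source of truth.
    if mode == "release" and has_watch_file:
        return "debian/watch"
    # For snapshots (or releases without a watch file), use the
    # Repository field from debian/upstream/metadata if present.
    if metadata_repository:
        return metadata_repository
    # Last resort: a repository guessed by guess-upstream-metadata
    # from upstream-ontologist.
    if guessed_repository:
        return guessed_repository
    # Nothing to go on; this package cannot be processed.
    return None
```

This mirrors the priority order described above: watch file first for releases, then explicit metadata, then a guess.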
Results
When run over all packages in unstable (sid), this process works for a surprising number of them.
Fresh Releases
For fresh-releases (aka imports of upstream releases), processing all packages maintained in Git for which QA watch reports new releases (about 11,000):
That means about 2300 packages updated, and about 4000 unchanged.
Fresh Snapshots
For fresh-snapshots (aka imports of latest Git commit from upstream), processing all packages maintained in Git (about 26,000):
Or 5100 packages updated and 2100 for which there was nothing to do, i.e. no upstream commits since the last Debian upload.
As can be seen, this works for a surprising fraction of packages. It’s possible to get the numbers up even higher, by both improving the tooling, the autopkgtests and the metadata that is provided by packages.
Using these packages
All the packages that have been built can be accessed from the Janitor APT repository. More information can be found at https://janitor.debian.net/fresh, but in short - run:
echo deb "[arch=amd64 signed-by=/usr/share/keyrings/debian-janitor-archive-keyring.gpg]" \
    https://janitor.debian.net/ fresh-snapshots main | sudo tee /etc/apt/sources.list.d/fresh-snapshots.list
echo deb "[arch=amd64 signed-by=/usr/share/keyrings/debian-janitor-archive-keyring.gpg]" \
    https://janitor.debian.net/ fresh-releases main | sudo tee /etc/apt/sources.list.d/fresh-releases.list
sudo curl -o /usr/share/keyrings/debian-janitor-archive-keyring.gpg https://janitor.debian.net/pgp_keys
apt update
And then you can install packages from the fresh-snapshots (upstream git snapshots) or fresh-releases suites on a case-by-case basis by running something like:
apt install -t fresh-snapshots r-cran-roxygen2
Most packages are updated based on information provided by vcswatch and qa watch, but it’s also possible for upstream repositories to call a web hook to trigger a refresh of a package.
These packages were built against unstable, but should in almost all cases also work for testing.
Caveats
Of course, since these packages are built automatically without human supervision it’s likely that some of them will have bugs in them that would otherwise have been caught by the maintainer.