April 21, 2021

Not Disappeared

Stephen Michael Kellat

It may seem as if I have disappeared over the past little while. That should not be regarded as disengagement; there have simply been things coming up that require attention. I may be posting to Twitter, but, as Frank Luntz says, Twitter Is Not Real Life.

I do intend to participate in whatever release festivities I can this week. I know Thursday will be tied up with me being in the classroom getting training to be a poll worker for the primary/special election on May 4th. Paying attention to the trainer will be kinda important after the increased scrutiny my county’s elections unit has received from Ohio’s Secretary of State.

Tags: Stay-Alive

on April 21, 2021 08:03 PM

What is indaba?

An indaba is a conference or gathering to discuss matters of importance, originating from the Xhosa and Zulu languages. Following this theme, we are excited to be hosting our first Desktop Indaba this Friday. Everybody is welcome to take part by asking questions for our AMA (Ask Me Anything) part of the session. Learn more below!


AMA to celebrate 21.04

It’s nearly that time of year again! 21.04 Hirsute Hippo will be released tomorrow and we are excited to show you the new features on the way. To celebrate this interim release, we will be hosting a livestream indaba. Members of the Ubuntu Desktop development team and a special guest from the community will be telling us more about the release and hosting a live Q&A.

When: 3-4pm UTC on Friday 23rd April

Where: ubuntuonair.com

This will take place the day after the official 21.04 release. We are excited to announce that Frederik Feichtmeier from the Yaru team will be participating. Yaru is the default theme of Ubuntu, backed by the community. Our speakers will be taking questions from the community via live chat, so get involved! You can post questions in advance here or tune in and ask them live at ubuntuonair.com.

If this event is successful, we aim to host indabas more regularly, with varying topics and different faces from the team as well as the community. Watch this space!

The release of Ubuntu Server 21.04 is also coming! To learn more about new features, register for the webinar on May 26th 2021.

Photo by Samuele Giglio on Unsplash

on April 21, 2021 04:37 PM

Since 2009, Juju has been enabling administrators to seamlessly deploy, integrate and operate complex applications across multiple cloud platforms. Juju has evolved significantly over time, but a testament to its original design is the fact that the approach Juju takes to operating workloads hasn’t fundamentally changed; Juju still provides fine grained control over workloads by placing operators right next to applications on any platform. This is exemplified in our most recent changes to how Charmed Operators behave on Kubernetes.

In recent release candidates of Juju 2.9 (rc7/rc8/rc9/rc10), we’ve done a lot of work to ensure the juju bootstrap process on Kubernetes is as smooth and as universal as possible – meaning it should be easier than ever to bootstrap a Juju controller on a Bring-your-own-Kubernetes!

But don’t take our word for it, deploy yourself some killer apps on a Kubernetes of your choice…

Get Bootstrapped

To get started, you just need:

  • The latest version of Juju from the 2.9/candidate channel
  • Access to an existing Kubernetes cluster (any will do!)
  • A few minutes!

Get started by installing or updating Juju:

# If you're installing from scratch
$ sudo snap install juju --classic --channel=2.9/candidate
# If you're updating an existing Juju install
$ sudo snap refresh juju --channel=2.9/candidate

Now, confirm you have access to a Kubernetes cluster, and bootstrap it! Your KUBECONFIG will be picked up automatically by the bootstrap process, provided you use the same name in the bootstrap command as the name of the kubectl context!

# Check we've got access to a cluster context
$ kubectl config get-contexts
CURRENT   NAME                       CLUSTER              AUTHINFO                              NAMESPACE
          microk8s                   microk8s-cluster     admin
*         super-cool-cluster-admin   super-cool-cluster   clusterAdmin_juju-test_juju-cluster
# Bootstrap the controller on the cluster
$ juju bootstrap super-cool-cluster-admin

Get Charming

When that command returns successfully, you’ll be ready to deploy a range of awesome charms (and charm bundles). Take your pick from Charmhub, or check out some of our favourites:

Mattermost

Mattermost is a flexible, open source messaging platform that enables secure team collaboration. Check it out on Charmhub, or if you can’t wait:

# Create a Juju model
$ juju add-model mattermost
# First, deploy PostgreSQL on Kubernetes
$ juju deploy cs:~postgresql-charmers/postgresql-k8s postgresql
# Now deploy Mattermost
$ juju deploy cs:~mattermost-charmers/mattermost --config juju-external-hostname=mattermost.test
# Seamlessly integrate Mattermost and PostgreSQL 🚀
$ juju add-relation mattermost postgresql:db
# Expose the service so you can hit it in your browser
$ juju expose mattermost
# Confirm the deployment was successful:
$ juju status
Model       Controller  Cloud/Region        Version  SLA          Timestamp
mattermost  micro       microk8s/localhost  2.9-rc9  unsupported  13:31:09+01:00
App         Version            Status  Scale  Charm           Store       Channel  Rev  OS          Address        Message
mattermost  mattermost:5.32.1  active      1  mattermost      charmstore  stable    20  kubernetes  10.152.183.54
postgresql  pgcharm:edge       active      1  postgresql-k8s  charmstore  stable    10  kubernetes
Unit           Workload  Agent  Address       Ports     Message
mattermost/0*  active    idle   10.1.215.212  8065/TCP
postgresql/0*  active    idle   10.1.215.211  5432/TCP  Pod configured

Using the example above, you should now be able to get Mattermost on http://10.152.183.54:8065. Your mileage may vary depending on your individual cluster networking setup!

Charmed Kubeflow

If you’re feeling more adventurous, Charmed Kubeflow wraps the 30+ apps that make up Kubeflow with rock-solid ops code. Charmed Kubeflow integrates these charms to provide the best Kubeflow experience, from deployment to day-2 operations.

Extra goodies for Charming Ninjas

We’re also building the foundations of a better future for Juju + Kubernetes. Check out the Future of Charmed Operators on Kubernetes post for more details and try your hand at building a fancy new charm that implements the sidecar pattern! You can check out the developer docs here!

Thank you!

We look forward to hearing your feedback and making Juju even more awesome! You can send us feedback, or get help in a few different ways:

on April 21, 2021 12:22 PM

My article on how to set up many WordPress websites on a single server, with each website inside its own LXD container, was recently published on Linode. By doing so, you can get greater density on your VPS and more value for money.

Apart from that, there is great educational value in going through the whole process and doing the setup manually. Once you have done it by hand, it is easier to graduate to using orchestration software.

The article was in the publishing queue for quite a while, and it shows: it still references Ubuntu 18.04. It works great nonetheless, and 18.04 should be lighter than Ubuntu 20.04.

Here is the article,

https://www.linode.com/docs/guides/how-to-set-up-multiple-wordpress-sites-with-lxd-containers/

on April 21, 2021 11:50 AM

April 19, 2021

Welcome to the Ubuntu Weekly Newsletter, Issue 679 for the week of April 11 – 17, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on April 19, 2021 10:36 PM

April 15, 2021

Ep 138 – Radiotransmissor

Podcast Ubuntu Portugal

The pace of Diogo’s streams is not slowing down, Canonical is about to lose another notable figure, but Keychron behaved well and Canonical, together with Collabora, offered a few more little gifts to the community.

You know the drill: listen, subscribe and share!

  • https://www.twitch.tv/podcastubuntuportugal
  • https://cdimages.ubuntu.com/
  • http://iso.qa.ubuntu.com/
  • https://formbuilder.online/
  • https://www.limesurvey.org/pt
  • https://apps.nextcloud.com/apps/forms
  • https://twitter.com/popey/status/1380139900108963848?s=19
  • https://ubuntu.com/blog/canonical-collabora-nextcloud-deliver-work-from-home-solution-to-raspberry-pi-and-enterprise-arm
  • https://keychronwireless.referralcandy.com/3P2MKM7
  • https://www.humblebundle.com/books/machine-learning-zero-to-hero-manning-publications-books?parner=PUP
  • https://www.humblebundle.com/books/ultimate-python-bookshelf-packt-books?partner=PUP
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3
  • https://youtube.com/PodcastUbuntuPortugal

Support

You can support the podcast by using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you want.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License (https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on April 15, 2021 09:45 PM

S14E06 – Flap Bake Signal

Ubuntu Podcast from the UK LoCo

This week we have been deploying bitwarden_rs and getting the Stream Deck to work well on Ubuntu. We discuss how much we really use desktop environments, bring you some GUI love and go over all your wonderful feedback.

It’s Season 14 Episode 06 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on April 15, 2021 04:00 PM

The French-speaking Ubuntu community really likes making Ubuntu t-shirts. Every six months, release after release, they make new ones. Here is the Hirsute Hippo. You can buy it before the 26th of April for €15 (+ shipping costs) and receive it at the end of May 2021. You can buy it later, but it will be more expensive and there will be no guarantee of stock.

The designer, an Ubuntu-fr member, is Ocelot. Thank you, Ocelot!

on April 15, 2021 02:10 PM

April 12, 2021

Welcome to the Ubuntu Weekly Newsletter, Issue 678 for the week of April 4 – 10, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on April 12, 2021 10:37 PM

One year ago I joined GitLab as a Solution Architect. In this blog post I do not want to focus on my role, my daily work, or anything pandemic related, and there will not be a huge focus on all-remote working either. Instead, I want to focus on my personal experiences of the work culture: things I certainly did not think about before I joined GitLab (or any other company before).

Before joining GitLab I worked for four German companies. As a German with Sri Lankan Tamil heritage I was always a minority at work. Most of the time it wasn’t an issue. At least that’s what I thought. All of those previous companies were mostly white and male, with very few (or even no) non-male colleagues, especially in technical and leading roles. Nowadays, I realize what a huge difference a globally distributed company makes, with people from different countries, cultures, backgrounds and genders.

There were sooo many small things which make a difference and which opened my eyes.

People are pronouncing my name correctly

Some of you might (hopefully) think:

Wait that was an issue!?

Yes, yes it was. And it was super annoying. Working in a globally distributed company means that the default behavior is: people ask how to correctly (!) pronounce the full (!) name. It’s a simple question, and it directly shows respect, even if you struggle to pronounce it correctly the first time. My name is transcribed from Tamil to English, so the average colleague simply tries to pronounce it in English, and it’s perfect, and that includes the German GitLab colleagues. In previous jobs there were a lot of colleagues who didn’t ask, and I was “the person with the complicated name”, “you know who”, or some even called me “Sushiwahn”. One former colleague referred to me on a customer phone call as “the other colleague”. That was not cool. If you wonder how to pronounce my name: I uploaded a recording on my profile website at sujeevan.vijayakumaran.com. I should’ve done that way earlier.

The meaning/origin of my name

I never really cared about the meaning of my name. So many people have asked me if my name has a meaning or what the origin was. I didn’t know, and I also didn’t really care. My mum always simply told me “Your name has style”. My teammate Sri one day randomly dropped a message in our team channel:

If you break down your name into the root words, it basically translates to “Good Life (Sujeevan) Prince Of Victory (Vijayakumaran)”.

That blew my mind 🤯.

#BlackLivesMatter and #StopAsianHate

So many terrible things happened in the world over the last year. When these two movements emerged, they were a big topic in the company, even with messages in our #company-fyi channel, which is normally used for company-related announcements. While #BlackLivesMatter was covered in the German media, #StopAsianHate was hardly covered there at all.

Around the time of #BlackLivesMatter my manager asked in our team meeting how, or if, it affected us, even though we in our EMEA team are far away from the US. I had the chance to share stories from my past that wouldn’t have happened to the average white person in a white country. This never happened at any other company I worked for before. When the Berlin truck attack on a Christmas market happened (back in 2016) it was a big topic at lunchtime with colleagues. When a racist shot German migrants in Hanau in February 2020 it was not really a topic at work. In one of the attacks Germans were the victims; in the other it was migrants. Both shootings happened before I joined GitLab. When there was a shooting in Vienna in November 2020, colleagues immediately jumped into the local Vienna channel and asked if everyone was okay. See the difference!

Languages

We have a #lang-de channel for German (the language, not the country!) related content. There are obviously many other channels for other languages. What surprised me? There were way more people outside of Germany, without any German background, learning or trying to learn German than I expected. It’s a small thing, but it’s cool! There were many discussions about word meanings and how to handle the German language. Personally, it got me thinking about whether I should pick up learning French again.

Meanings of Emojis

There are a lot of emojis, especially in Slack. At the beginning it was somewhat overwhelming, but I got used to it. One thing which confused me right after I joined was the usage of the 🙏🏻 emoji. Interestingly, the name of the emoji is simply “folded hands”. But what is the meaning of it? When I first saw it used I was somewhat confused. For me as a Hindu it clearly means “praying”. The second meaning which comes to my mind is its use as a form of greeting (see Namaste). However, there are many colleagues who use it for some sort of “thanks”, or even “sorry”. Emojis have different meanings in different cultures!

Different People – Different Mindsets

Since GitLab is an all-remote company, our informal communication happens in Slack channels and in coffee chats over Zoom calls. In my first weeks I scheduled a lot of coffee chats to get to know my teammates and some colleagues in other teams. The most useful bot (for me) in Slack is the Donut Bot, which randomly connects two people every two weeks. I don’t have to take care of randomly selecting people from different teams and departments, and honestly I would most likely be somewhat biased if I cherry-picked people to schedule coffee chats with.

So every two weeks I get matched with some “random” person. This lowers the bar to talking to someone from another department where I (shamefully) thought: “Oh, that role sounds boring to me.” But if it sounds boring to me, that’s the first sign that I should talk to them. Without the Donut Bot I would most likely not have talked to someone from the Legal department, just to give one small example. And there were also a lot of engineers who had never really talked to someone from Sales, which is the part of the company I belong to. Even though we do not need to talk about work-related stuff, I generally learn something new by the time I leave the conversation.

However, the more interesting part is getting to know all the different people in different countries and continents, with different cultures. There are many colleagues who left their home country and live somewhere else. The majority of these people are either in the group “I moved because of my (previous) work” or “I moved because of my partner”. The most surprising sentence came from a Canadian colleague, though:

I’m thinking of relocating to Germany for a couple of years since it’s easily possible with GitLab. All my friends here are migrants and I really want to experience how it is to learn a new language and live in another country.

That was by far the most interesting reason I have heard so far! Besides that, my favorite question for people who moved away from their home country is what they miss, and what they would miss if they moved back. This also leads to some fascinating stories. Most of them are related to food, some to specific medicine, and some reasons are even “I like the $specific_mentality over here, which I would miss”.

I left out the more obvious parts of a globally distributed team, like getting to know what life is like in the not-so-rich countries of the world. Also, I finally understood the difference between the average German and the average Silicon Valley person: the latter is way more open to a visionary goal, while the average German wants to keep their safe job for a long time (yes, even in IT).

Mental Health Awareness

We have a lot of content related to mental health which I still need to check out. It’s a super important topic on so many different levels. At all my previous employers this was not a topic at all; I might even say it is generally a taboo topic. One thing which I definitely did not expect was the Family and Friends Day, introduced in May 2020, shortly after I joined the company, in response to the COVID lockdowns; it has happened nearly every month since then. On that day (nearly) the whole company has a day off to spend time with their family and friends. My German friends’ reaction to that was something like:

Wait, didn’t you join a hyper-growth startup-ish company? That doesn’t sound like the late-stage capitalism I would have expected!

In addition to that, there’s also a #mental-health-aware Slack channel where everyone can talk about their problems. I was really surprised to see so many team members share their problems and what they are currently struggling with. I couldn’t have imagined that people would share very personal stories within the company, including their experience of getting help from a therapist.

As someone who is somewhat of an introvert and struggles to talk to a lot of people in big groups in real life, this past year (and a few more months) has been relatively easy to handle in this regard, as I have only met four team members in person so far. However, the first in-person company event is coming up, and I’m pretty confident that getting in touch with a lot of mostly unknown people will be easier than at other companies I’ve worked for so far.

Things which I totally expected

There are still things which I expected to work as intended. Here’s a short list:

  • All Remote and working async works pretty damn good and I really don’t want to go back to an office
  • Spending company money is easy and definitely not a hassle
  • Not having to justify how and when I exactly work is a huge relief
  • Not being forced to request paid time off is an unfamiliar feeling at the beginning, but I got used to it pretty quickly
  • Working with people with a vision who can additionally identify with the company is great
  • No real barriers between teams and departments
  • Values matter

For me personally, GitLab has set the bar pretty high for companies I might work for in the future. That’s good and bad at the same time ;-). If you want to read another “1 year at GitLab” story, I can highly recommend the blog post by dnsmichi from a month ago.

on April 12, 2021 07:15 PM

April 08, 2021

Ep 137 – Cura

Podcast Ubuntu Portugal

In an exercise of re-evaluating whether or not to rename this podcast to “Streaming, red wine, beer and other stuff”, or keep it as it is, we took a tour of the mundane matters plaguing your favourite hosts.

You know the drill: listen, subscribe and share!

  • https://keychronwireless.referralcandy.com/3P2MKM7
  • https://www.humblebundle.com/books/machine-learning-zero-to-hero-manning-publications-books?parner=PUP
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3
  • https://youtube.com/PodcastUbuntuPortugal

Support

You can support the podcast by using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you want.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License (https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on April 08, 2021 09:45 PM

S14E05 – Newspaper Scoop Carrots

Ubuntu Podcast from the UK LoCo

This week we’ve been spring cleaning and being silly on Twitter. We round up the news from the Ubuntu community and discuss our favourite stories from the tech news.

It’s Season 14 Episode 05 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on April 08, 2021 03:00 PM

April 06, 2021

Full Circle Weekly News #204

Full Circle Magazine


Please welcome new host, Moss Bliss.

DigiKam 7.2 released
https://www.digikam.org/news/2021-03-22-7.2.0_release_announcement/

4MLinux 36.0 released
https://4mlinux-releases.blogspot.com/2021/03/4mlinux-360-stable-released.html

Malicious changes detected in the PHP project Git repository
https://news-web.php.net/php.internals/113838

New version of Cygwin 3.2.0, the GNU environment for Windows
https://www.mail-archive.com/cygwin-announce@cygwin.com/msg09612.html

SeaMonkey 2.53.7 Released
https://www.seamonkey-project.org/news#2021-03-30

Nitrux 1.3.9 with NX Desktop is Released
https://nxos.org/changelog/changelog-nitrux-1-3-9/

Parrot 4.11 Released with Security Checker Toolkit
https://parrotsec.org/blog/parrot-4.11-release-notes/

Systemd 248 system manager released
https://lists.freedesktop.org/archives/systemd-devel/2021-March/046289.html

GIMP 2.10.24 released
https://www.gimp.org/news/2021/03/29/gimp-2-10-24-released/

Deepin 20.2 ready for download
https://www.deepin.org/en/2021/03/31/deepin-20-2-beautiful-and-wonderful/

Installer added to Arch Linux installation images
https://archlinux.org/news/installation-medium-with-installer/

Ubuntu 21.04 beta released
https://ubuntu.com//blog/announcing-ubuntu-on-windows-community-preview-wsl-2


Credits:
Full Circle Magazine
@fullcirclemag
Host: @bardictriad
Bumper: Canonical
Theme Music: From The Dust - Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/
on April 06, 2021 05:02 PM

Reuse Licensing Helper

Harald Sitter

It’s boring but important! Stay with me! Please! 😘

For the past couple of years Andreas Cord-Landwehr has done excellent work on moving KDE in a more structured licensing direction. Free software licensing is an often overlooked topic that is collectively understood to be important, but also incredibly annoying, bureaucratic, and complex. We all like to ignore it more than we should.

If you are working on KDE software you really should check out KDE’s licenses howto and maybe also glance over the comprehensive policy. In particular when you start a new repo!

I’d like to shine some light on a simple but incredibly useful tool: reuse. reuse helps you check licensing compliance with a few very easy commands.

Say you start a new project. You create your prototype source, maybe add a readme – after a while it’s good enough to make public and maybe propose for inclusion as mature KDE software by going through KDE Review. You submit it for review and if you are particularly unlucky you’ll have me come around the corner and lament how your beautiful piece of software isn’t completely free software because some files lack any sort of licensing information. Alas!

See, you had better use reuse…

pip3 install --user reuse

reuse lint: lints the source and tells you which files aren’t licensed

reuse download --all: downloads the complete license files needed for compliance based on the licenses used in your source (unfortunately you’ll still need to manually create the KDEeV variants)
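
In practice, compliance mostly comes down to every file carrying machine-readable copyright and licensing tags. As a small sketch (name and license chosen purely for illustration), a source file header of roughly this shape is the kind of thing reuse lint checks for:

// SPDX-FileCopyrightText: 2021 Jane Doe <jane.doe@example.com>
// SPDX-License-Identifier: GPL-2.0-or-later

// ... rest of the source file ...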

If you are unsure how to license a given file, consult the licensing guide or the policy or send a mail to one of the devel mailing lists. There’s help a plenty.

Now that you know about the reuse tool there’s even less reason to start projects without 100% compliance so I can shut up about it 🙂

on April 06, 2021 01:03 PM

April 05, 2021

Previously: v5.8

Linux v5.9 was released in October, 2020. Here’s my summary of various security things that I found interesting:

seccomp user_notif file descriptor injection
Sargun Dhillon added the ability for SECCOMP_RET_USER_NOTIF filters to inject file descriptors into the target process using SECCOMP_IOCTL_NOTIF_ADDFD. This lets container managers fully emulate syscalls like open() and connect(), where an actual file descriptor is expected to be available after a successful syscall. In the process I fixed a couple bugs and refactored the file descriptor receiving code.

zero-initialize stack variables with Clang
When Alexander Potapenko landed support for Clang’s automatic variable initialization, it did so with a byte pattern designed to really stand out in kernel crashes. Now he’s added support for doing zero initialization via CONFIG_INIT_STACK_ALL_ZERO, which besides actually being faster, has a few behavior benefits as well. “Unlike pattern initialization, which has a higher chance of triggering existing bugs, zero initialization provides safe defaults for strings, pointers, indexes, and sizes.” Like the pattern initialization, this feature stops entire classes of uninitialized stack variable flaws.

common syscall entry/exit routines
Thomas Gleixner created architecture-independent code to do syscall entry/exit, since much of the kernel’s work during a syscall entry and exit is the same. There was no need to repeat this in each architecture, and having it implemented separately meant bugs (or features) might only get fixed (or implemented) in a handful of architectures. It means that features like seccomp become much easier to build since it wouldn’t need per-architecture implementations any more. Presently only x86 has switched over to the common routines.

SLAB kfree() hardening
To reach CONFIG_SLAB_FREELIST_HARDENED feature-parity with the SLUB heap allocator, I added naive double-free detection and the ability to detect cross-cache freeing in the SLAB allocator. This should keep a class of type-confusion bugs from biting kernels using SLAB. (Most distro kernels use SLUB, but some smaller devices prefer the slightly more compact SLAB, so this hardening is mostly aimed at those systems.)

new CAP_CHECKPOINT_RESTORE capability
Adrian Reber added the new CAP_CHECKPOINT_RESTORE capability, splitting this functionality off of CAP_SYS_ADMIN. The needs for the kernel to correctly checkpoint and restore a process (e.g. used to move processes between containers) continues to grow, and it became clear that the security implications were lower than those of CAP_SYS_ADMIN yet distinct from other capabilities. Using this capability is now the preferred method for doing things like changing /proc/self/exe.

debugfs boot-time visibility restriction
Peter Enderborg added the debugfs boot parameter to control the visibility of the kernel’s debug filesystem. The contents of debugfs continue to be a common area of sensitive information being exposed to attackers. While this was effectively possible by unsetting CONFIG_DEBUG_FS, that wasn’t a great approach for system builders needing a single set of kernel configs (e.g. a distro kernel), so now it can be disabled at boot time.

more seccomp architecture support
Michael Karcher implemented the SuperH seccomp hooks, Guo Ren implemented the C-SKY seccomp hooks, and Max Filippov implemented the xtensa seccomp hooks. Each of these included the ever-important updates to the seccomp regression testing suite in the kernel selftests.

stack protector support for RISC-V
Guo Ren implemented -fstack-protector (and -fstack-protector-strong) support for RISC-V. This is the initial global-canary support while the patches to GCC to support per-task canaries are being finished (similar to the per-task canaries done for arm64). This will mean nearly all stack frame write overflows are no longer useful to attackers on this architecture. It’s nice to see this finally land for RISC-V, which is quickly approaching architecture feature parity with the other major architectures in the kernel.

new tasklet API
Romain Perier and Allen Pais introduced a new tasklet API to make its use safer. Much like the timer_list refactoring work done earlier, the tasklet API is also a potential source of simple function-pointer-and-first-argument controlled exploits via linear heap overwrites. It’s a smaller attack surface since it’s used much less in the kernel, but it is the same weak design, making it a sensible thing to replace. While the use of the tasklet API is considered deprecated (replaced by threaded IRQs), it’s not always a simple mechanical refactoring, so the old API still needs refactoring (since that CAN be done mechanically in most cases).

x86 FSGSBASE implementation
Sasha Levin, Andy Lutomirski, Chang S. Bae, Andi Kleen, Tony Luck, Thomas Gleixner, and others landed the long-awaited FSGSBASE series. This provides task switching performance improvements while keeping the kernel safe from modules accidentally (or maliciously) trying to use the features directly (which exposed an unprivileged direct kernel access hole).

filter x86 MSR writes
While it’s been long understood that writing to CPU Model-Specific Registers (MSRs) from userspace was a bad idea, it has been left enabled for things like MSR_IA32_ENERGY_PERF_BIAS. Boris Petkov has decided enough is enough and has now enabled logging and kernel tainting (TAINT_CPU_OUT_OF_SPEC) by default and a way to disable MSR writes at runtime. (However, since this is controlled by a normal module parameter and the root user can just turn writes back on, I continue to recommend that people build with CONFIG_X86_MSR=n.) The expectation is that userspace MSR writes will be entirely removed in future kernels.

uninitialized_var() macro removed
I made treewide changes to remove the uninitialized_var() macro, which had been used to silence compiler warnings. The rationale for this macro was weak to begin with (“the compiler is reporting an uninitialized variable that is clearly initialized”) since it was mainly papering over compiler bugs. However, it creates a much more fragile situation in the kernel since now such uses can actually disable automatic stack variable initialization, as well as mask legitimate “unused variable” warnings. The proper solution is to just initialize variables the compiler warns about.
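
As a rough sketch of the pattern (simplified, not verbatim kernel code), the macro boiled down to a self-assignment that hushes the compiler without actually initializing anything:

/* Simplified sketch of the old idiom. */
#define uninitialized_var(x) x = x

int lookup(const int *table, int key)
{
	int uninitialized_var(value);	/* expands to: int value = value; */

	if (key >= 0)
		value = table[key];

	return value;	/* if key < 0, value really is uninitialized */
}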

function pointer cast removals
Oscar Carter has started removing function pointer casts from the kernel, in an effort to allow the kernel to build with -Wcast-function-type. The future use of Control Flow Integrity checking (which does validation of function prototypes matching between the caller and the target) tends not to work well with function casts, so it’d be nice to get rid of these before CFI lands.
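
As a hedged illustration (hypothetical names, not taken from the kernel), this is the kind of cast that -Wcast-function-type complains about and that defeats prototype checking:

#include <stdio.h>

/* A callback API that expects handlers of type void (*)(void *). */
typedef void (*callback_t)(void *data);

/* A handler written against a narrower prototype. */
static void print_int(int *value)
{
	printf("%d\n", *value);
}

int main(void)
{
	/* The cast silences the warning, but caller and callee prototypes
	 * no longer match -- exactly the mismatch CFI rejects at the
	 * indirect call site. */
	callback_t cb = (callback_t)print_int;

	(void)cb;	/* invoking cb() here is what CFI would flag */
	return 0;
}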

flexible array conversions
As part of Gustavo A. R. Silva’s on-going work to replace zero-length and one-element arrays with flexible arrays, he has documented the details of the flexible array conversions, and the various helpers to be used in kernel code. Every commit gets the kernel closer to building with -Warray-bounds, which catches a lot of potential buffer overflows at compile time.
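
As a before/after sketch (struct and field names invented for illustration), a typical conversion replaces the old zero-length (or one-element) trailing array with a C99 flexible array member:

#include <stdlib.h>

/* Before: the old idiom, invisible to array bounds checking. */
struct msg_old {
	size_t len;
	unsigned char data[0];
};

/* After: a C99 flexible array member that -Warray-bounds can reason about. */
struct msg_new {
	size_t len;
	unsigned char data[];
};

static struct msg_new *msg_alloc(size_t len)
{
	/* Allocation is the same either way: header plus payload. */
	struct msg_new *m = malloc(sizeof(*m) + len);

	if (m)
		m->len = len;
	return m;
}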

That’s it for now! Please let me know if you think anything else needs some attention. Next up is Linux v5.10.

© 2021, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.

on April 05, 2021 11:24 PM

April 02, 2021

KDE Plasma desktop 5.21 on Kubuntu 21.04

The beta of Hirsute Hippo (to become 21.04 in April) has now been released, and is available for download.

This milestone features images for Kubuntu and other Ubuntu flavours.

Pre-releases of the Hirsute Hippo are not recommended for:

  • Anyone needing a stable system
  • Regular users who are not aware of pre-release issues
  • Anyone in a production environment with data or workflows that need to be reliable

They are, however, recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Kubuntu, KDE, and Qt developers
  • Other Ubuntu flavour developers

The Beta includes some software updates that are ready for broader testing. However, it is an early set of images, so you should expect some bugs.

We STRONGLY advise testers to read the Kubuntu 21.04 Beta release notes before installing, and in particular the section on ‘Known issues’.

Kubuntu is taking part in #UbuntuTestingWeek from 1st to 7th of April, details of which can be found in our Kubuntu 21.04 Testing Week blog post, and in general for all flavours on the Ubuntu Discourse announcement.

You can also find more information about the entire 21.04 release (base, kernel, graphics etc) in the main Ubuntu Beta release notes and announcement.

on April 02, 2021 06:02 PM

Catching Up Driving

Stephen Michael Kellat

Usually in the realm of Ubuntu when it comes to a “daily driver” we are often talking about computers. Over the past couple days my 2005 Subaru Forester decided to fail on me. Harsh climate, roads that are not well maintained in an economically disadvantaged area, and more helped bring about the end of being able to drive that nice station wagon.

I don’t really ask for much in a car. I don’t really need much in a car. When it comes to the “entertainment system” I end up listening to AM radio for outlets like WJR, WTAM, CKLW, KDKA, and CFZM. On the FM side I have been listening to WERG quite a bit. In the midst of all that I probably forgot to mention the local station WWOW. A simple radio for me goes quite a long way.

In the new vehicle there has been the option for using Android Auto. This is a new thing to me. I’ve only ever had the opportunity to drive a vehicle equipped with such tech this week.

Android Auto is certainly different and something I will need to get used to. Fortunately we live in a time of change. I’m still trying to wrap my head around the idea of having VLC available to me on the car dashboard.

This is definitely not the way I expected to start the fourth month of 2021 but this has been a year of surprises. I’ve got some ISO testing to get back to if I can manage to avoid other non-computer things breaking…

Tags: Automobiles

on April 02, 2021 04:16 AM

April 01, 2021

The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 21.04, codenamed Hirsute Hippo.

While this beta is reasonably free of any showstopper DVD build or installer bugs, you may find some bugs within. This image is, however, reasonably representative of what you will find when Ubuntu Studio 21.04 is released on April 22, 2021.

Please note: Due to the change in desktop environment, directly upgrading to Ubuntu Studio 21.04 from 20.04 LTS is not supported and will not be supported.  However, upgrades from Ubuntu Studio 20.10 will be supported. See the Release Notes for more information.

Images can be obtained from this link: https://cdimage.ubuntu.com/ubuntustudio/releases/21.04/beta/

Full updated information is available in the Release Notes.

New Features

Ubuntu Studio 21.04 includes the new KDE Plasma 5.21 desktop environment. This is a beautiful and functional upgrade to previous versions, and we believe you will like it.

Agordejo, a refined GUI frontend to New Session Manager, is now included by default. This uses the standardized session manager calls throughout the Linux Audio community to work with various audio tools.

Studio Controls is upgraded to 2.1.4 and includes a host of improvements and bug fixes.

BSEQuencer, Bshapr, Bslizr, and BChoppr are included as new plugins, among others.

QJackCtl has been upgraded to 0.9.1, and is a huge improvement. However, we still maintain that Jack should be started with Studio Controls for its features, but QJackCtl is a good patchbay and Jack system monitor.

There are many other improvements, too numerous to list here. We encourage you to take a look around the freely-downloadable ISO image.

Known Issues

Official Ubuntu Studio release notes can be found at https://wiki.ubuntu.com/HirsuteHippo/Beta/UbuntuStudio

Further known issues, mostly pertaining to the desktop environment, can be found at https://wiki.ubuntu.com/HirsuteHippo/ReleaseNotes/Kubuntu

Additionally, the main Ubuntu release notes contain more generic issues: https://wiki.ubuntu.com/HirsuteHippo/ReleaseNotes

Please Test!

If you have some time, we’d love for you to join us in testing. Testing begins…. NOW!

on April 01, 2021 09:58 PM
We are pleased to announce that the beta images for Lubuntu 21.04 have been released! While we have reached the bugfix-only stage of our development cycle, these images are not meant to be used in a production system. We highly recommend joining our development group or our forum to let us know about any issues. Ubuntu Testing Week Ubuntu, […]
on April 01, 2021 09:53 PM

March 31, 2021

Google Pixel phones support what they call “Motion Photo”, which is essentially a photo with a short video clip attached to it. They are quite nice since they bring the moment alive, especially as the capturing of the video starts a moment before the shutter button is pressed. Most viewing programs simply show them as static JPEG photos, but there is more to the files.

I’d really love proper Shotwell support for these file formats, so I posted a longish explanation with many of the details in this blog post to a ticket there too. Examples of the newer format are linked there too.

Info posted to Shotwell ticket

There are actually two different formats, an old one that is already obsolete, and a newer current format. The older ones are those that your Pixel phone recorded as ”MVIMG_[datetime].jpg", and they have the following meta-data:

Xmp.GCamera.MicroVideo                          XmpText     1  1
Xmp.GCamera.MicroVideoVersion                   XmpText     1  1
Xmp.GCamera.MicroVideoOffset                    XmpText     7  4022143
Xmp.GCamera.MicroVideoPresentationTimestampUs   XmpText     7  1331607

The offset is actually from the end of the file, so one needs to calculate accordingly. But it is exact otherwise, so one can simply extract the video using that meta-data information:

#!/bin/bash
#
# Extracts the microvideo from a MVIMG_*.jpg file

# The offset is from the ending of the file, so calculate accordingly
offset=$(exiv2 -p X "$1" | grep MicroVideoOffset | sed 's/.*\"\(.*\)"/\1/')
filesize=$(du --apparent-size --block=1 "$1" | sed 's/^\([0-9]*\).*/\1/')
extractposition=$(expr $filesize - $offset)
echo offset: $offset
echo filesize: $filesize
echo extractposition=$extractposition
dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg $1).mp4"

The newer format is recorded in filenames called ”PXL_[datetime].MP.jpg”, and they have a _lot_ of additional metadata:

Xmp.GCamera.MotionPhoto                                    XmpText     1  1
Xmp.GCamera.MotionPhotoVersion                             XmpText     1  1
Xmp.GCamera.MotionPhotoPresentationTimestampUs             XmpText     6  233320
Xmp.xmpNote.HasExtendedXMP                                 XmpText    32  E1F7505D2DD64EA6948D2047449F0FFA
Xmp.Container.Directory                                    XmpText     0  type="Seq"
Xmp.Container.Directory[1]                                 XmpText     0  type="Struct"
Xmp.Container.Directory[1]/Container:Item                  XmpText     0  type="Struct"
Xmp.Container.Directory[1]/Container:Item/Item:Mime        XmpText    10  image/jpeg
Xmp.Container.Directory[1]/Container:Item/Item:Semantic    XmpText     7  Primary
Xmp.Container.Directory[1]/Container:Item/Item:Length      XmpText     1  0
Xmp.Container.Directory[1]/Container:Item/Item:Padding     XmpText     1  0
Xmp.Container.Directory[2]                                 XmpText     0  type="Struct"
Xmp.Container.Directory[2]/Container:Item                  XmpText     0  type="Struct"
Xmp.Container.Directory[2]/Container:Item/Item:Mime        XmpText     9  video/mp4
Xmp.Container.Directory[2]/Container:Item/Item:Semantic    XmpText    11  MotionPhoto
Xmp.Container.Directory[2]/Container:Item/Item:Length      XmpText     7  1679555
Xmp.Container.Directory[2]/Container:Item/Item:Padding     XmpText     1  0

Sounds like fun and lots of information. However, I didn’t see why the “Length” in the first item is 0, and I didn’t see how to use the latter Length info. But I can use the mp4 headers to extract it:

#!/bin/bash
#
# Extracts the motion part of a MotionPhoto file PXL_*.MP.jpg

extractposition=$(grep --binary --byte-offset --only-matching --text \
-P "\x00\x00\x00\x18\x66\x74\x79\x70\x6d\x70\x34\x32" $1 | sed 's/^\([0-9]*\).*/\1/')

dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg $1).mp4"

UPDATE: I wrote most of this blog post earlier. Now that I am actually getting to publishing it a week later, I see the obvious, i.e. the “Length” is again simply the offset from the end of the file, so one could take the same, less brute-force approach as for MVIMG. I’ll leave the above as is, however, for the ❤️ of binary grepping.

(cross-posted to my other blog)

on March 31, 2021 11:06 AM

On a little bit of a tangent from my typical security posting, I thought I’d include some of my “making” efforts.

Due to working from home for an extended period of time, I wanted to improve my video-conferencing setup somewhat. I have my back to the windows, so the lighting is pretty bad, and I wanted to get some lights. I didn’t want to spend big money, so I got this set of Neewer USB-powered lights. It came with tripod bases, monopod-style stands, and ball heads to mount the lights.

The lights work well and are a great value for the money, but the stands are not as great. The tripods are sufficiently light that they’re easy to knock over, and they take more desk space than I’d really like. I have a lot of stuff on my desk and appreciate desk real estate, so I go to great lengths to minimize permanent fixtures on the desk. I have my monitors on monitor arms, my desk lamp on a mount, etc. I really wanted to minimize the space used by these lights.

I looked for an option to clamp to the desk and support the existing monopods with the light. I found a couple of options on Amazon, but they either weren’t ideal, or I was going to end up spending as much on the clamps as I did on the lamps. I wanted to see if I could come up with an alternative.

I have a 3D Printer, so almost every real-world problem looks like a use case for 3D printing, and this was no exception. I wasn’t sure if a 3D-printed clamp would have the strength and capability to support the lights, and didn’t think the printer could make threads small enough to fit into the base of the lamp monopods (which accept a 1/4x20 thread, just like used on cameras and other photography equipment).

I decided to see if I could incorporate a metal thread into a 3D printed part in some way. There are threaded inserts you can implant into a 3D print, but I was concerned about the strength of that connection, and would still need a threaded adapter to connect the two (since both ends would now be a “female” connector). Instead, I realized I could incorporate a 1/4x20 bolt into the print. I settled on 3/8” length so it wouldn’t stick too far through the print and a hex head so it wouldn’t rotate in the print, making screwing/unscrewing the item easier.

I designed a basic clamp shape with a 2” opening for the desk, and then used this excellent thread library to make a large screw in the device to clamp it to the desk from the bottom. I put an inset for the hex head in the top and a hole for the screw to fit through. When I printed my first test, I was pretty concerned that things wouldn’t fit or would break at the slightest torquing.

Clamp Sideview

Much to my own surprise, it just worked! The screw threads on the clamp side were a little bit tight at first, but they work quite well, and certainly don’t come undone over time. I’ve now had my light mounted on one of these clamps for a few months and no problems, but I would definitely not recommend a 3D printed clamp for something heavy or very valuable. (If I’m going to hold up a several thousand dollar camera, I’m going to mount it on proper mounts.)

Clamp On Table



Note on printing: If you want to 3D print this yourself, lay the clamp on its side on the print bed. Not only do you avoid needing support, you ensure that the layer lines run along the “spine” of the clamp, rather than having stress separate the layers.

Clamp Model

on March 31, 2021 07:00 AM

March 30, 2021

A C for-loop Gotcha

Colin King

The C infinite for-loop gotcha is one of the less frequent issues I find with static analysis, but I feel it is worth documenting because it is obscure but easy to do.

Consider the following C example:
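
A minimal sketch of the pattern, with an 8 bit loop counter and a 32 bit upper limit:

#include <stdint.h>
#include <stdio.h>

void print_all(uint32_t n)
{
	uint8_t i;	/* loop counter is only 8 bits wide */

	for (i = 0; i < n; i++)
		printf("%d\n", i);	/* i wraps at 255, so this never terminates if n >= 256 */
}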

Since i is an 8 bit integer, it will wrap around to zero when it reaches the maximum 8 bit value of 255, and so we end up with an infinite loop if the upper limit of the loop n is 256 or more.

The fix is simple: always ensure the loop counter is at least as wide as the type of the maximum limit of the loop. In this example, variable i should be a uint32_t.

I've seen this occur in the Linux kernel a few times. Sometimes it is because the loop counter is being passed into a function call that expects a specific type such as a u8 or u16. On other occasions I've seen a u16 (or short) integer being used, presumably because it was expected to produce faster code; however, 32 bit integers are most commonly just as fast as (or sometimes faster than) 16 bit integers for this kind of operation.

on March 30, 2021 11:37 AM

Diamond Rio PMP300

Alan Pope

My loft is a treasure trove of old crap. For some reason I keep a bunch of aged useless junk up there. That includes the very first MP3 player I owned. Behold, the Diamond Rio PMP 300. Well, the box, in all its ’90s artwork glory. Here’s the player. It’s powered by a single AA battery for somewhere around 8 hours of playback. It’s got 32MB (yes, MegaBytes) of on-board storage.
on March 30, 2021 11:00 AM

March 29, 2021

I wrote this blog post with Kaylea Champion and a version of this post was originally posted on the Community Data Science Collective blog.

Critical software we all rely on can silently crumble away beneath us. Unfortunately, we often don’t find out software infrastructure is in poor condition until it is too late. Over the last year or so, I have been supporting Kaylea Champion on a project my group announced earlier to measure software underproduction—a term we use to describe software that is low in quality but high in importance.

Underproduction reflects an important type of risk in widely used free/libre open source software (FLOSS) because participants often choose their own projects and tasks. Because FLOSS contributors work as volunteers and choose what they work on, important projects aren’t always the ones to which FLOSS developers devote the most attention. Even when developers want to work on important projects, relative neglect among important projects is often difficult for FLOSS contributors to see.

Given all this, what can we do to detect problems in FLOSS infrastructure before major failures occur? Kaylea Champion and I recently published a paper laying out our new method for measuring underproduction at the IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER) 2021 that we believe provides one important answer to this question.

Conceptual diagram showing how our conception of underproduction relates to quality and importance of software: the x-axis shows relative importance, the y-axis relative quality. The top left of the graph is overproduction (high quality, low importance), the diagonal is alignment (quality and importance are approximately the same), and the lower right is underproduction (high importance, low quality), the area of potential risk.

In the paper, we describe a general approach for detecting “underproduced” software infrastructure that consists of five steps: (1) identifying a body of digital infrastructure (like a code repository); (2) identifying a measure of quality (like the time to takes to fix bugs); (3) identifying a measure of importance (like install base); (4) specifying a hypothesized relationship linking quality and importance if quality and importance are in perfect alignment; and (5) quantifying deviation from this theoretical baseline to find relative underproduction.

To show how our method works in practice, we applied the technique to an important collection of FLOSS infrastructure: 21,902 packages in the Debian GNU/Linux distribution. Although there are many ways to measure quality, we used a measure of how quickly Debian maintainers have historically dealt with 461,656 bugs that have been filed over the last three decades. To measure importance, we used data from Debian’s Popularity Contest opt-in survey. After some statistical machinations that are documented in our paper, the result was an estimate of relative underproduction for the 21,902 packages in Debian we looked at.

One of our key findings is that underproduction is very common in Debian. By our estimates, at least 4,327 packages in Debian are underproduced. As you can see in the list of the “most underproduced” packages—again, as estimated using just one more measure—many of the most at risk packages are associated with the desktop and windowing environments where there are many users but also many extremely tricky integration-related bugs.

These 30 packages have the highest level of underproduction in Debian according to our analysis, shown as a series of boxplots.

We hope these results are useful to folks at Debian and the Debian QA team. We also hope that the basic method we’ve laid out is something that others will build off in other contexts and apply to other software repositories.

In addition to the paper itself and the video of the conference presentation on YouTube by Kaylea, we’ve put all our code and data in an archival repository on Harvard Dataverse, and we’d love to work with others interested in applying our approach to other software ecosystems.


For more details, check out the full paper which is available as a freely accessible preprint.

This project was supported by the Ford/Sloan Digital Infrastructure Initiative. Wm Salt Hale of the Community Data Science Collective and Debian Developers Paul Wise and Don Armstrong provided valuable assistance in accessing and interpreting Debian bug data. René Just generously provided insight and feedback on the manuscript.

Paper Citation: Kaylea Champion and Benjamin Mako Hill. 2021. “Underproduction: An Approach for Measuring Risk in Open Source Software.” In Proceedings of the IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER 2021). IEEE.

Contact Kaylea Champion (kaylea@uw.edu) with any questions or if you are interested in following up.

on March 29, 2021 11:56 PM

Kubuntu 21.04 Testing Week

Kubuntu General News


We’re delighted to announce that we’re participating in another ‘Ubuntu Testing Week’ from April 1st to April 7th with other flavours in the Ubuntu family. On April 1st, the beta version of Kubuntu 21.04 ‘Hirsute Hippo’ will be released after freezing all new changes to its features, user interface, and documentation. Between April 1st and the final release on April 22nd, all efforts by the Kubuntu team and community should be focused on ISO testing, reporting bugs, fixing bugs, and translations right up to final release.

On social media, please use the #UbuntuTestingWeek hashtag if you write about your testing or want to spread the word about the event to your followers. Testers can visit the ISO tracker and read bug reporting tutorials.

You can test without changing your system by running it in a VM (Virtual Machine) with software like VMware Player or VirtualBox (installable via apt). Or run Hirsute from USB, SD card, or DVD to test on your hardware.

There are a variety of ways that you can help test the release, including trying out the various live session and installation test cases from the ISO tracker. If you find a bug, you’ll need a Launchpad account to file it against the package the app is bundled in, which you can find by asking around on the IRC/Telegram/Matrix Kubuntu channels or the user mail list.

Chat live in IRC (Freenode) #ubuntu-quality (or #kubuntu-devel if it cannot be reproduced on other flavours) or Telegram: Ubuntu Testers.

The easiest and fastest way to file a bug is from the command line (Konsole): ubuntu-bug $packagename, for example `ubuntu-bug ubiquity`.

It is important to file the bug within the testing environment so that the necessary results are properly provided to the bug-tracker. All you need to provide to the ISO tracker is the bug number.

If the bug is found in the installer, file it against `ubiquity`, or against the `linux` package if your hardware isn’t working. We encourage those that are willing to install it, either in a VM or on physical hardware; it requires at least 15GB of hard drive space. If you can use it for a few days, more bugs can be discovered and reported.

Please test apps that you regularly use, so you can identify bugs and regressions that should be reported, especially as the recently released Plasma 5.21.3 is bundled in this release. New ISO files are built every day, and you should always test with the most up-to-date ISO. It is easier and faster to update an existing daily ISO file on Linux with the command below. Run in the terminal or konsole from within the folder with the ISO file:

$ zsync http://cdimage.ubuntu.com/kubuntu/daily-live/current/hirsute-desktop-amd64.iso.zsync

We look forward to you joining us to make Kubuntu 21.04 an even bigger success, and hope that you will also test out the other Ubuntu flavours.

on March 29, 2021 07:43 PM

March 28, 2021

Multiplying integers in C is easy.  It is also easy to get it wrong.  A common issue found using static analysis on the Linux kernel is the integer overflow before widening gotcha.

Consider the following code that takes two unsigned 32 bit integers, multiplies them together and returns the unsigned 64 bit result.
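
A minimal sketch of that pattern (the function name is illustrative; a, b and ret match the discussion below):

#include <stdint.h>

static uint64_t mul32(uint32_t a, uint32_t b)
{
	uint64_t ret;

	/* a * b is evaluated in 32 bit arithmetic; the high bits are lost here */
	ret = a * b;
	return ret;
}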

The multiplication is performed using unsigned 32 bit arithmetic and the unsigned 32 bit result is widened to an unsigned 64 bit value when assigned to ret. A way to fix this is to explicitly cast a to a uint64_t before the multiplication to ensure an unsigned 64 bit multiplication is performed.
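
The corrected version might look like this (same illustrative names):

#include <stdint.h>

static uint64_t mul32(uint32_t a, uint32_t b)
{
	uint64_t ret;

	/* casting a first widens the operands, so the multiply is done in 64 bits */
	ret = (uint64_t)a * b;
	return ret;
}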

Fortunately static analysis finds these issues.  Unfortunately it is a bug that keeps on occurring in new code.
on March 28, 2021 10:33 PM

March 26, 2021

I served as a director and as a voting member of the Free Software Foundation for more than a decade. I left both positions over the last 18 months and currently have no formal authority in the organization.

So although it is now just my personal opinion, I will publicly add my voice to the chorus of people who are expressing their strong opposition to Richard Stallman’s return to leadership in the FSF and to his continued leadership in the free software movement. The current situation makes me unbelievably sad.

I stuck around the FSF for a long time (maybe too long) and worked hard (I regret I didn’t accomplish more) to try and make the FSF better because I believe that it is important to have voices advocating for social justice inside our movement’s most important institutions. I believe this is especially true when one is unhappy with the existing state of affairs. I am frustrated and sad that I concluded that I could no longer be part of any process of organizational growth and transformation at FSF.

I have nothing but compassion, empathy, and gratitude for those who are still at the FSF—especially the staff—who are continuing to work silently toward making the FSF better under intense public pressure. I still hope that the FSF will emerge from this as a better organization.

on March 26, 2021 07:24 PM

Full Circle Magazine #167

Full Circle Magazine

This month:
* Command & Conquer : LMMS
* How-To : Python, Latex [NEW!] and Fritzing
* Graphics : Inkscape
* Linux Loopback : My Story
* Everyday Ubuntu : RetroComputing CoCo Nuts Pt2
* Micro This Micro That [NEW!]
* Review : Entroware Ares
* Book Review: Learn Linux Quickly
* Ubuntu Games : Nebuchadnezzar
plus: News, My Story, The Daily Waddle, Q&A, and more.

Get it while it’s hot: https://fullcirclemagazine.org/issue-167/

Telegram group: https://t.me/joinchat/PujkVH1HopRKvfd3

on March 26, 2021 01:46 PM

March 25, 2021

“Hirsute Hippo” is the project code-name for what will become Ubuntu 21.04 when it releases on April 22nd 2021. On April 1st, the Beta of Ubuntu Hirsute will be released, but we’re no fools! This is a great time to do some testing! So, starting on April 1st, we’re doing another Ubuntu Testing Week. As always, everyone is welcome to test Ubuntu at any point in the year. But during the beta is a good time to focus on testing.
on March 25, 2021 10:00 AM

We’re delighted to announce that we’re participating in another ‘Ubuntu Testing Week’ from April 1st to April 7th with other flavours in the Ubuntu family. On April 1st, the beta version of Xubuntu 21.04 ‘Hirsute Hippo’ will be released after halting all new changes to its features, user interface and documentation. Between April 1st and the final release on April 22nd, all efforts by the Xubuntu team and community should be focused on ISO testing, reporting bugs, fixing bugs, and translations.

It has been a year since we last did a collaboration with other Ubuntu flavors for an Ubuntu Testing Week, which was done for Xubuntu 20.04 LTS. That event was a major success, as a large volume of testers participated and it was announced on various Linux news sites and podcasts. Alan Pope (aka Popey) from Canonical, Rick Timmis from the Kubuntu team, and Bill from the Ubuntu MATE team helped spread the word about the previous event in this clip from Big Daddy Linux Live (BDLL), covering how the event came about, its goals, and some pointers on how to test. You won’t want to miss being part of the event this year! Read on to learn how.

During the testing week you can download the daily ISO image and try it out, though you are welcome to start from today. You can test without changing your system by running it in a VM (Virtual Machine) with software like VMware Player, VirtualBox (apt-install) and Gnome Boxes (apt-install), or you may run it from a USB, SD Card, or DVD to test if your hardware works correctly. You can use software like Etcher and Gnome Disks (apt-install) to copy the ISO to a USB Drive or SD Card, while apps like Brasero (apt-install) and Xfburn (apt-install) can be used to burn it to DVD. We encourage those that are willing to install it, either in a VM or on physical hardware (it requires at least 15GB of hard disk space), and use it for a few days, as more bugs can be discovered and reported this way.

There are a variety of ways that you can help test the release, including trying out the various live session and installation test cases from the ISO tracker, which take less than 30 minutes to complete (example 1, example 2, example 3 below). If you find a bug, you’ll need a Launchpad account to file it against the package the app is bundled in, which you can find by watching this Easy Bug Reporting By Example video. If the bug is found in the installer, you can file it against ubiquity, or you can file it against the linux package, if your hardware isn’t working.

Please test apps that you regularly use, so you can identify bugs and regressions that should be reported, especially as the recently released Xfce 4.16 is bundled in this release. You can learn about what else is new in this release in the Release Notes. New ISO files are built every day, and you should always test with the most up-to-date ISO. It is easier and faster to update an existing daily ISO file on Linux with the command below (you’ll need to run it in the terminal from within the folder with the ISO file).

$ zsync http://cdimage.ubuntu.com/xubuntu/daily-live/current/hirsute-desktop-amd64.iso.zsync

We look forward to you joining us to make Xubuntu 21.04 an even bigger success, and hope that you will also test out the other Ubuntu flavours. The success of the previous event was mentioned by the former Ubuntu Desktop Lead Martin Wimpress (aka Wimpy) in the Ubuntu Podcast Season 13 Episode 03 at 20:21 where he said,

“… It is definitely paying dividends. In the nicest way possible, they made members of the Desktop Team cry today. We had our weekly team meeting where we go through all the bug reports to triage them and usually there are some, and there were pages of them and we didn’t get through them all. So we are scheduling another bug triage meeting later this week in order to pick up where we left off from. But this is great because we are actually getting decent bug reports that we can work with and [take] action [on] and improve what will be the final release in 3 weeks time. So for all the tears that were shed, it was definitely a worthwhile endeavor because these are bugs that other people would encounter when they install 20.04 for the first time. So thank you everyone that was involved in that effort. It was much appreciated.”

You are welcome to chat with us live in our dedicated Telegram groups (Ubuntu Testers, Xubuntu Development) or IRC channel (#ubuntu-quality on Freenode). In order to assist you in your testing efforts, we encourage you to also read our Quality Assurance (QA) guide and new testers wiki. We look forward to your contributions, your live chatting, and hopefully your participation in future testing sessions. Follow the #UbuntuTestingWeek hashtag on Twitter and Facebook for the latest news. Happy bug hunting and don’t forget to spread the word!

on March 25, 2021 08:00 AM

March 23, 2021

Trying to share a GTK application window with https://meet.google.com/ might not work if you run under a Wayland session.

A workaround is to run a GTK application under XWayland:

GDK_BACKEND=x11 gedit

Now gedit can be shared within google meet.

If you want to share a gnome-terminal window, things are a bit more complicated. gnome-terminal has a client/server architecture so gnome-terminal-server needs to run with the x11 backend.

mkdir -p ~/.config/systemd/user/gnome-terminal-server.service.d/
cat <<EOF > ~/.config/systemd/user/gnome-terminal-server.service.d/override.conf
[Service]
Environment=GDK_BACKEND=x11
EOF
systemctl --user daemon-reload
systemctl --user restart gnome-terminal-server

The last command will kill all your terminals!

Now you can share your gnome-terminal windows again!

on March 23, 2021 02:48 PM

March 22, 2021

KDE Gear is the new name for the bundle of apps (and libraries and plugins) from projects that want the release faff taken off their hands. It was once called just KDE, then KDE SC, then KDE Applications, then the unbranded release service, and now we’re branding it again as KDE Gear.

We’re working on an announcement now for 21.04 so if you have a project being released as part of KDE Gear send us your new features on this merge request.

on March 22, 2021 11:59 AM

March 18, 2021

I used 2 of the variants supported by mmdebstrap to illustrate the different small build options.

Essential

Uncompressed tarball size 94M

For when you don't even want to have apt.

base-files
base-passwd
bash
bsdutils
coreutils
dash
debconf
debianutils
diffutils
dpkg
findutils
gcc-10-base:amd64
grep
init-system-helpers
libacl1:amd64
libattr1:amd64
libaudit-common
libaudit1:amd64
libblkid1:amd64
libbz2-1.0:amd64
libc-bin
libc6:amd64
libcap-ng0:amd64
libcom-err2:amd64
libcrypt1:amd64
libdb5.3:amd64
libdebconfclient0:amd64
libgcc-s1:amd64
libgcrypt20:amd64
libgmp10:amd64
libgpg-error0:amd64
libgssapi-krb5-2:amd64
libk5crypto3:amd64
libkeyutils1:amd64
libkrb5-3:amd64
libkrb5support0:amd64
liblz4-1:amd64
liblzma5:amd64
libmount1:amd64
libnsl2:amd64
libpam-modules:amd64
libpam-modules-bin
libpam-runtime
libpam0g:amd64
libpcre2-8-0:amd64
libpcre3:amd64
libselinux1:amd64
libsmartcols1:amd64
libssl1.1:amd64
libsystemd0:amd64
libtinfo6:amd64
libtirpc-common
libtirpc3:amd64
libudev1:amd64
libuuid1:amd64
zlib1g:amd64

Added in minbase

Uncompressed tarball size 123M

adduser
apt
debian-archive-keyring
e2fsprogs
gcc-9-base:amd64
gpgv
libapt-pkg6.0:amd64
libext2fs2:amd64
libffi7:amd64
libgnutls30:amd64
libhogweed6:amd64
libidn2-0:amd64
libnettle8:amd64
libp11-kit0:amd64
libseccomp2:amd64
libsemanage-common
libsemanage1:amd64
libxxhash0:amd64
logsave
mount
passwd
tzdata

Added in default variant

Uncompressed tarball size 188M

Theoretically all Priority: Important packages.

This is where items start to get a bit redundant IMHO. Mostly because I prefer the built-in systemd options as opposed to ifupdown, rsyslog/logrotate and cron.

apt-utils
cpio
cron
debconf-i18n
dmidecode
dmsetup
fdisk
ifupdown
init
iproute2
iputils-ping
isc-dhcp-client
isc-dhcp-common
kmod
less
libapparmor1:amd64
libargon2-1:amd64
libbpf0:amd64
libbsd0:amd64
libcap2:amd64
libcap2-bin
libcryptsetup12:amd64
libdevmapper1.02.1:amd64
libdns-export1110
libedit2:amd64
libelf1:amd64
libestr0:amd64
libfastjson4:amd64
libfdisk1:amd64
libip4tc2:amd64
libisc-export1105:amd64
libjansson4:amd64
libjson-c5:amd64
libkmod2:amd64
liblocale-gettext-perl
liblognorm5:amd64
libmd0:amd64
libmnl0:amd64
libncurses6:amd64
libncursesw6:amd64
libnewt0.52:amd64
libnftables1:amd64
libnftnl11:amd64
libpopt0:amd64
libprocps8:amd64
libreadline8:amd64
libslang2:amd64
libtext-charwidth-perl
libtext-iconv-perl
libtext-wrapi18n-perl
libxtables12:amd64
logrotate
nano
netbase
nftables
procps
readline-common
rsyslog
sensible-utils
systemd
systemd-sysv
systemd-timesyncd
tasksel
tasksel-data
udev
vim-common
vim-tiny
whiptail
xxd
on March 18, 2021 07:16 AM

March 17, 2021

Introduction

If you work with data that needs more storage performance than a typical laptop can provide, creating a software RAID can be an easy and inexpensive solution. With the rise of data science and machine learning, processing large data sets using Pandas, NumPy, Spark or even SQLite can benefit drastically from fast and resilient storage.

RAID (Redundant Array of Inexpensive Disks) is useful for achieving high performance and recovery if a disk fails. There are many types of configurations including hardware, pretend hardware (raid actually done in software), and software. RAID10 (mirrored and striped) provides high performance and redundancy against a single disk failure. RAID10 writes a copy of the data to two disks, and then in the 4-disk setup described below, creates a volume that spans a second set of two disks. This allows for reading and writing using all four disks, and potentially allows for up to two disk failures without losing data (if we are lucky). In the worst-case RAID10 can always suffer a single disk failure without losing any data.

Another popular approach is RAID5, which splits data across devices using an Exclusive OR (XOR) and has a parity block. RAID5 setups are common but have some pitfalls including compounding failures during rebuilds and checksum overhead. RAID5 configurations will lose data if two disks fail.

Rationale

Development using medium sized data can get expensive quickly using cloud providers. Using Amazon Web Services Elastic Block Store (AWS EBS), a comparable setup would run $160/mo, not including the additional charges of EC2 instances, network transit etc. AWS does have additional enterprise / data center capabilities priced in, however for many developer workflows a local setup is often more efficient and performant.

There are many choices and constraints to take into consideration with both architectures. AWS only includes 125MB/s throughput by default in the gp3 tier for durable storage, which is considerably slower than a local commodity RAID setup, and incurs additional latency on each IOP. On the other hand Samsung rates the 860 EVO 1TB for a 600 TBW lifecycle (5 year warranty), which in practice most workloads will never come close to using, but is an engineering decision to take into account.

Saving $1420 in the first year alone ($1920/yr each additional year) allows for more powerful compute and graphics components, not including additional cloud pricing premiums that would be incurred.

Hardware

TOTAL: ~$500

Configuration

Identify the target disks to be used for the RAID using blkid or fdisk -l:

sudo fdisk -l
Disk /dev/sdb: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 860 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sde: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 860 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 860 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdd: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 860 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Create partitions for each disk:

sudo parted -a optimal /dev/sd{b,c,d,e} --script mklabel gpt
sudo parted -a optimal /dev/sd{b,c,d,e} --script mkpart primary ext4 0% 100%

Create a RAID10 software raid:

sudo apt install mdadm
sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd{b,c,d,e}1
sudo watch -n 10 mdadm --detail /dev/md0

Once md0 has finished syncing, setup a filesystem:

sudo mkfs.ext4 /dev/md0

Add /etc/mdadm/mdadm.conf and /etc/fstab entries, which instruct the system on how to assemble the raid and mount the filesystem:

sudo mkdir /mnt/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
echo "/dev/md0     /mnt/md0     ext4 defaults 0 0" | sudo tee -a /etc/fstab

In order to allow the volume to be used prior to the root filesystem being available, update the initial ramdisk to include the module and configuration:

sudo update-initramfs -u

SSD TRIM Support

By default Ubuntu 20.04 has periodic fstrim which discards blocks of data no longer in use, ensuring optimal performance:

# systemctl status fstrim.timer
● fstrim.timer - Discard unused blocks once a week
     Loaded: loaded (/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Wed 2021-03-17 11:30:39 MDT; 4h 1min ago
    Trigger: Mon 2021-03-22 00:00:00 MDT; 4 days left
   Triggers: ● fstrim.service
       Docs: man:fstrim

Mar 17 11:30:39 dxdt systemd[1]: Started Discard unused blocks once a week.

Performance

In this particular configuration we now have 1.8T of capacity, and even with budget disks get decent performance (388.88MB/sec using hdparm -tT):

# sudo hdparm -tT /dev/md0
/dev/md0:
 Timing cached reads:   38396 MB in  1.99 seconds = 19316.93 MB/sec
 Timing buffered disk reads: 1168 MB in  3.00 seconds = 388.88 MB/sec

# df -h /mnt/md0
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        1.8T   77M  1.7T   1% /mnt/md0

hdparm is not a precise benchmark. After running tests using bonnie++, md0 delivered 179MB/s writes and 392MB/s reads for the sequential tests.

The limiting factor is the Marvell 9215 PCIE 2.0 card, which has a maximum throughput of 380-450MB/s. Using higher end SATA controllers will likely yield improved results.

Conclusion

After creating the partition entries, assembling the raid, syncing across disks, creating a filesystem and updating /etc to reflect the changes, the RAID will now persist across restarts. Ubuntu will take care of periodically checking for unused blocks, and mdadm can be used to check the health of the raid device. This setup will likely exceed most commodity cloud storage performance offerings with lower latency, higher throughput and IOPS.

on March 17, 2021 08:07 PM

The last few weeks were really tough time-wise due to a whole bunch of personal things requiring attention. At least that is cooling down now. In the meantime, here are last month’s uploads (16 days late, yikes!). Hope everyone is doing well out there.

2021-02-02: Upload package python-strictyaml (1.1.1-2) to Debian unstable (Initial source-only upload).

2021-02-03: Upload package bundlewrap (4.4.2-1) to Debian unstable.

2021-02-03: Upload package python-aniso8601 (8.1.1-1) to Debian unstable.

2021-02-15: Upload package desktop-base (11.0.1-1) to Debian unstable.

2021-02-15: Upload package rootskel-gtk (11.0.1) to Debian unstable.

2021-02-16: Upload package btfs (2.24-1) to Debian unstable.

2021-02-16: Upload package gnome-shell-extension-multi-monitors (23-1) to Debian unstable.

2021-02-16: Upload package xabacus (8.3.4-1) to Debian unstable.

2021-02-19: Upload package bundlewrap (4.5.0-1) to Debian unstable.

2021-02-19: Upload package calamares (3.2.36-1) to Debian unstable.

2021-02-19: Upload package calamares (3.2.36-1~bpo10+1) to Debian buster-backports.

2021-02-19: Upload package bundlewrap (4.5.1-1) to Debian unstable.

on March 17, 2021 09:13 AM

March 16, 2021

https://blogs.imf.org/2021/03/15/rising-market-power-a-threat-to-the-recovery/

on March 16, 2021 06:15 AM
Stable Release Updates on Xubuntu

From the moment an Ubuntu release (and flavors) reaches Final Freeze until the release is end-of-life (EOL), updates are released following the "stable release update" procedure, or SRU. This process is documented on the Ubuntu Wiki. However, it can be intimidating for new and long-time contributors and also confusing for users. I'd like to explain this process from a Xubuntu perspective.

We currently have two packages going through the SRU procedure for Xubuntu 20.04 and 20.10. After you've read this article, consider checking them out and helping with verification.

Stable Release Update Procedure

  1. Identify the "why"
  2. Create or update bug reports
  3. Package and upload the fixes for each release
  4. Wait for the SRU team to review and accept the upload
  5. SRU Verification
  6. Wait for the SRU team to release the package to the -updates pocket

Identify the "why"

A common misconception about stable Ubuntu releases is that all bug fixes and new releases (small and large) will arrive via an update. There is not an automatic process for these updates to land in a stable release. Further, any fixes and new features have to be documented and tested according to the procedure above. As you might imagine, this can be a massive time sink.

Stable release updates specific to Xubuntu will generally be limited to high-impact bugs:

  • Security vulnerabilities
  • Severe regressions
  • Bugs that directly cause a loss of user data
  • External changes that cause the current version to no longer work

If a fix meets these requirements, it is eligible to be delivered as a Stable Release Update. Lower impact bugs will generally not be considered for SRU, but may be fixed in the -backports pocket or the Xubuntu QA Staging PPA.

Create or update bug reports

If the bugs being fixed have already been reported on Launchpad, they will need to be updated with the SRU template. Other bugs and new features present in an upload should have new bug reports opened. Due to the time-based nature of landing a fix and following the SRU process, it can be a significant setback to have to start over, so more information is better than less. When in doubt, fill it out.

The standard SRU template typically includes the following sections: Impact, Test Plan, Where problems could occur (formerly Regression Potential), and Other Info. These sections help others who may not be completely familiar with the software or the bug to test it, and demonstrate that the upload and its regression potential have been sufficiently considered.

Once the bug reports are ready, the packages can be prepared and uploaded.

Package and upload the fixes for each release

The first packaging-related step for a Stable Release Update is ensuring the fix is already released on the current development release. This may not always be applicable, but applies more often than not. Once the fix is verified in the development release, packages can be prepared for each affected stable release.

The packages are prepared with each bug fix linked in the Debian changelog. This will automatically update the affected bugs with status updates and tags and progress the bug status from In Progress to Fix Committed, and later to Fix Released. The packages are then uploaded to the -proposed pocket and Xubuntu SRU Staging PPA.
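
As a rough illustration, a changelog entry for a hypothetical SRU upload targeting Ubuntu 20.10 might reference its bug like this; the LP: #NNNNNN reference is what drives the automatic bug updates (package name, version, bug number and uploader below are all made up):

xfce4-example-plugin (1.2.3-0ubuntu0.20.10.1) groovy; urgency=medium

  * New upstream bugfix release (LP: #1234567)

 -- Some Uploader <uploader@example.com>  Mon, 15 Mar 2021 12:00:00 +0000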

Wait for the SRU team to review and accept the upload

The Ubuntu SRU team is a small and busy one! Sometimes, it may take a few days for the team to review and accept the upload to the -proposed pocket. Once the package has been accepted, this starts the 7-day clock. Even after verification, a package upload must wait a minimum of 7 days before it is accepted into the -updates pocket.

SRU verification

SRU verification can be completed by any Launchpad user, and is recommended for getting a fix released to the -updates pocket sooner. Following the test cases documented on each bug report...

  • If the fix is sufficient and there's no regression, replace the verification-needed-release tag with verification-done-release. Add a comment describing the steps taken to verify the package.
  • If the fix is insufficient or there are regressions introduced, replace the verification-needed-release tag with verification-failed-release. Add a comment describing steps taken and issues encountered.

An update is considered verified if it has at least two positive and no negative testimonials. If you're affected by a bug that is going through SRU verification, please join in and provide your feedback on the fix.

Wait for the SRU team to release the package to the -updates pocket

Once an uploaded package has met the 7-day minimum wait and has been verified, it is eligible to be reviewed and released by the SRU team. Again, this can sometimes take more than 7 days, but if it seems to be running unusually long, add a comment (but please don't spam us) on the bug report to check in.

Finally, updates are phased, which means that a released package update may not show up for all users for a few days. Assuming there are no significant regressions that cause the update to be halted, the update should be available soon. Please be patient and respect that this process is designed to keep your computer running smoothly.

Wrapping Up

While not an exciting topic, I hope this helped to provide some insight into the inner workings of a stable Xubuntu release. Let me know if you have any questions or if I got something wrong. If you're working on another Ubuntu flavor or derivative, what's the post-development release process look like for your team?

on March 16, 2021 12:33 AM

March 14, 2021

If you are already building Snap packages, I guess you know what an “interface” is in the Snapcraft world. However, it seems the terminology may not be very clear for someone (like my developer friends and colleagues) who doesn’t know anything about Snap. So I just wanted to write this one out so that it becomes clear for people coming from an Android app development background. I like to think that “permissions” in Android and “interfaces” in the Snap world are pretty much interchangeable. I have done a fair amount of Android app development (professionally and privately) in the past six years and have been involved in the Snapcraft ecosystem almost since its inception.

Before going into further details I think it’d make sense to clear some of the Snapcraft terminologies as well.

  • Snapcraft is the name of the wider project, which involves the build tools, the public Store, and the daemon that runs on your computer to manage and update snap packages. However, snapcraft is also the name of the command line tool that is used to actually build snap packages. Confusing? Yeah, a bit!
  • Snapd is the software and daemon that runs on your computer to install/remove/update snap packages, either from the Snap Store or from a locally built one. The CLI tool is called snap.

Android App permissions

In Android, if an app wants to access geolocation, it has to “request” the permission from the OS, which then pops up a dialog for the user. In that dialog the user may choose to grant the requesting app the permission to use geolocation, or deny it.

It is also pertinent to mention that some permissions, like INTERNET, only need to be added to the AndroidManifest.xml file and the user is not asked whether the app should be allowed to access the internet.

Snap Permissions

In classic Linux packaging like deb and rpm, installation is mostly the extraction of the relevant software into the rootfs, and some scripts get run (as root!) during the installation process. After that the software has unrestricted access to the system: it can access different hardware devices, read the whole filesystem and even change it.

However, things are quite different for a Snap package. A snap package’s build configuration is a simple YAML file that defines what system resources the snap is expected to access, like the network, a USB camera, OpenGL or the sound server. Specifically, that gets defined under the plugs stanza, similar to how permissions are defined in AndroidManifest.xml.
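
As a rough illustration (the app name and file layout here are hypothetical, but network, camera, opengl and audio-playback are real snapd interface names), the relevant part of a snapcraft.yaml could look like this:

apps:
  my-app:
    command: bin/my-app
    plugs:
      - network          # pre-granted on install, much like INTERNET on Android
      - camera           # sensitive: needs a manual "snap connect" or a store-approved auto-connection
      - opengl           # access GPU acceleration through the graphics stack
      - audio-playback   # play sound through the system's sound server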

Just like the INTERNET permission in Android, there are multiple such interfaces in the snap world that are pre-granted, or “connected” as you would call it.

For interfaces that are deemed sensitive for auto-connection, there is a process by which an app developer can request the snap store admins to grant their snap the permissions to automatically connect a specific interface on installation. That process is documented here. The developers mostly need to justify why their App needs permission to access a certain system resource.

One thing that is currently missing is a way for an application to ask the system to prompt the user to grant permission for an interface. I think something like that could help circumvent the need for asking the Snap Store admins. Hopefully we can have that feature some day as well.

That mostly concludes this article. We have been using Snap packages for building a commercial product and have been using them on a Yocto-based system. I will be writing quite a bit more in the coming days and months about that journey.

on March 14, 2021 10:24 PM

March 12, 2021

Net Matroyshka was one of our “1337” tagged challenges for the 2021 BSidesSF CTF. This indicated it was particularly hard, and our players can probably confirm that.

If you haven’t played our CTF in the past, you might not be familiar with the Matryoshka name. (Yep, I misspelled Matryoshka this year and didn’t catch it before we launched.) It refers to the nesting Matryoshka dolls, and we’ve been doing a series of challenges where they contain layers to be solved, often by different encodings, formats, etc. This year, it was layers of PCAPs for some network forensics challenges.

The description from the scoreboard was simple:

We heard you like PCAPs, so we put a PCAP inside your PCAP.

You were provided with a file 8.zip, which yielded 8.pcap when unzipped.

Layer 8: HTTP

Looking at 8.pcap in Wireshark, we see a bunch of small HTTP packets and several HTTP connections. If you look at the HTTP request statistics, we see several connections, including the BSidesSF website, my website, and a request to a private IP for a file named 7.zip.

HTTP Requests

Guessing that we’ll need 7.zip, you can use Wireshark to extract the HTTP object (the contents). (File > Export Objects > HTTP) Extracting 7.zip, you discover that it requires a password. If you return to the connection in Wireshark and look at the TCP connection with Follow TCP Stream, you’ll see the full HTTP Request/Response. In the response, there’s a header that says X-Zip-Password: goodluck,havefun. Using the password goodluck,havefun, we’re able to extract 7.pcap.

Layer 7: FTP

If you open 7.pcap in Wireshark, you’ll discover an FTP connection. The entirety of the FTP control connection is:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
220 (vsFTPd 3.0.3)
USER anonymous
331 Please specify the password.
PASS thisisnottheflag
230 Login successful.
SYST
215 UNIX Type: L8
TYPE I
200 Switching to Binary mode.
PORT 10,128,0,2,226,169
200 PORT command successful. Consider using PASV.
RETR 6.zip
150 Opening BINARY mode data connection for 6.zip (38384 bytes).
226 Transfer complete.
QUIT
221 Goodbye.

Unsurprisingly, we see that a file named 6.zip was transferred. If you go to the FTP-DATA protocol stream and use Follow TCP Stream, you can hit Save As (in Raw mode) and get 6.zip. Unzipping 6.zip, you get 6.pcap. (I’m starting to see a pattern here!)

Layer 6: Rsync

(Side note: this level turned out to be much harder than I really intended. rsyncd is not as well documented as I’d thought.)

Opening 6.pcap, you find a single rsyncd connection. You’ll note the @RSYNCD magic and the version of 31.0. I ended up using the rsync source code to understand the traffic along with a known sample connection to confirm my understanding.

I started by looking at receive_data. If you follow it down, you see that it calls a function called recv_token. Following recv_token, we see it calls simple_recv_token if compression is not enabled.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
static int32 simple_recv_token(int f, char **data)
{
	static int32 residue;
	static char *buf;
	int32 n;

	if (!buf)
		buf = new_array(char, CHUNK_SIZE);

	if (residue == 0) {
		int32 i = read_int(f);
		if (i <= 0)
			return i;
		residue = i;
	}

	*data = buf;
	n = MIN(CHUNK_SIZE,residue);
	residue -= n;
	read_buf(f,buf,n);
	return n;
}

This function reads a serialized integer off the socket (read_int), then attempts to read up to either CHUNK_SIZE (which is 32k) or the integer bytes. This is a pretty common pattern: send a length encoded in a fixed format, followed by that many bytes of data. Most of the time, I would expect the length to be in “network byte order” (big-endian), but for some reason, rsyncd uses little-endian. I’m guessing this wasn’t originally specified and implementations were on x86. (It also makes the code ever so slightly more efficient on x86.)

So we know now how files are transferred, but it turns out there’s a bunch of metadata before the file transfer. I didn’t want to deal with decoding that. I decided to look for the zip file signature as a start, then back up 4 bytes to read the chunk length. I wasn’t 100% sure this would work, so I set up an rsync server with a known file to test against, and it did. I used scapy to extract the packet contents and then Python’s struct module to extract information.

Returning to the challenge’s 6.pcap, I was able to apply this technique and discovered that it was transferred in 2 chunks: the first was 32768 bytes (32k), which is the maximum CHUNK_SIZE used by rsync, then the 2nd was 3881 bytes.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
pcap = scapy.rdpcap('6.pcap')
sess = pcap.sessions()['TCP 10.128.0.3:873 > 10.128.0.2:57536']
# Get application-layer bytes
raw = b''.join(p.load for p in sess.getlayer(scapy.Raw))
# Find start of zip
pk_start = raw.index(b'PK')
# get length of first chunk
chunk_len = struct.unpack('<I', raw[pk_start-4:pk_start])[0]
zip_bytes = raw[raw.index(b'PK'):]
first = zip_bytes[:chunk_len]
left = zip_bytes[chunk_len:]
# get length of second chunk
chunk_len = struct.unpack('<I', left[:4])[0]
first += left[4:4+chunk_len]
open('5.zip', 'wb').write(first)

Using this code gave our 5.zip, which contains, of course, 5.pcap.

A lot of people seemed to attempt to blindly carve the Zip file out of the PCAP stream, using binwalk or other tools. Often, they reported that the file was corrupted, even noting the 4 extra bytes. This was probably from the warning received from unzip:

1
warning [5E.zip]:  4 extra bytes at beginning or within zipfile

Alternatively, attempting to open the resulting 5.pcap with Wireshark gave an error claiming corruption.

Wireshark Error

Both of these were caused by the inclusion of the 4 byte length of the 2nd chunk in the data stream. Failing to recognize that it was part of the rsync metadata led players astray into believing the Zip file or PCAP were corrupt, but it was the packet carving technique that led to this.

Layer 5: TFTP

Opening 5.pcap in Wireshark, we find a single TFTP session. TFTP is a UDP protocol, but we don’t appear to have any missing or out-of-order packets here. Looking at the TFTP request, we see that there’s a read request for 4.zip, and that the “Type” is netascii:

TFTP Request

If we use Wireshark to extract 4.zip by using the File > Export Objects > TFTP menu option, then try to unzip the resulting file, we’ll be told it’s corrupt.

1
2
3
4
5
6
7
8
% unzip -l 4.zip
Archive:  4.zip
warning [4.zip]:  256 extra bytes at beginning or within zipfile
  (attempting to process anyway)
error [4.zip]:  start of central directory not found;
  zipfile corrupt.
  (please check that you have transferred or created the zipfile in the
  appropriate BINARY mode and that you have compiled UnZip properly)

It turns out that Wireshark does not decode the netascii decoding in the course of the transfer, so we need to do that after. According to Wikipedia:

Netascii is a modified form of ASCII, defined in RFC 764. It consists of an 8-bit extension of the 7-bit ASCII character space from 0x20 to 0x7F (the printable characters and the space) and eight of the control characters. The allowed control characters include the null (0x00), the line feed (LF, 0x0A), and the carriage return (CR, 0x0D). Netascii also requires that the end of line marker on a host be translated to the character pair CR LF for transmission, and that any CR must be followed by either a LF or the null.

To do the decoding we must substitute a CRLF (\r\n) pair with a plain newline (\n), and a CRNUL (\r\0) with a plain carriage return (\r). This can be done with the following python code:

1
data.replace(b'\x0d\x0a', b'\x0a').replace(b'\x0d\x00', b'\x0d')

Note that the order is important: if you reverse the replacements, you could cause corruption. If we apply this to the 4.zip we got out of Wireshark, we can then extract the zip file.

1
2
3
data = open('4.zip', 'rb').read()
data = data.replace(b'\x0d\x0a', b'\x0a').replace(b'\x0d\x00', b'\x0d')
open('4.zip', 'wb').write(data)

Unzipping the decoded 4.zip, we get 4.pcap. We’ve now made it through half the layers! (Unless, of course, the filenames are misleading…)

Layer 4: SMB

Opening 4.pcap in Wireshark, we find a bunch of SMB traffic. Fortunately, encryption is not enabled, or we’d be in a world of trouble. This level is pretty straightforward, as Wireshark has an Export Objects feature for us. (File > Export Objects > SMB). We can directly export 3.zip, and unzipping it, we’re straight on to 3.pcap.

Wireshark SMB

Layer 3: Git Smart Protocol

After we open 3.pcap, we find traffic for the “Git Smart Protocol”. You might be used to seeing Git traffic going over either HTTP or SSH, but it turns out Git has its own protocol for data transfer.

The good news is that, unlike rsync, the protocol is well documented. The bad news is that it is more complex to extract data.

This data is also transmitted in chunks, but unlike rsync, the lengths are encoded in 4 hexadecimal characters (so 16 bits only). The data contained in the repository is transmitted as a Git packfile, which is separately described and specified.

At first, I just sought the start of a packfile (PACK), and looked for the 4 hex characters before for length, but there was a byte in between. It turns out git also multiplexes data in order to pass the pack data and status updates at the same time, so the format actually becomes:

  • 4 hex characters, length (note: includes the length itself!)
  • 1 octet, identifying the ‘sideband’ (channel) in use
  • data

So we need to find the start of the packfile, back up 5 bytes, then start decoding to get the whole packfile. (Again, this is a hack to avoid decoding the whole protocol.) Each time, we read the length, the sideband number, then the data. If the sideband number is 1, we concatenate this to get the raw packfile data.
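
A rough Python sketch of that demultiplexing loop (assuming raw already holds the server side of the TCP stream, extracted the same way as in the rsync layer) might look like this:

def carve_packfile(raw):
    # Start 5 bytes before the 'PACK' magic: 4 hex length digits + 1 sideband byte.
    i = raw.index(b'PACK') - 5
    pack = b''
    while i + 4 <= len(raw):
        length = int(raw[i:i+4], 16)   # pkt-line length, which includes these 4 bytes
        if length == 0:                # '0000' flush packet marks the end
            break
        if raw[i + 4] == 1:            # sideband 1 carries the packfile data
            pack += raw[i + 5:i + length]
        i += length
    return pack

open('3.pack', 'wb').write(carve_packfile(raw))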

Once we have the packfile, we need to decode it and extract the objects from the git repository. Since every layer has been a zipfile, I reason we can extract a zipfile here as well, so I’ll hunt for objects in the packfile that are also zipfiles.

I wrote a script in python to do this (in order to have an automated solution), but you can even do this directly with git.

  1. Create an empty git repository.
  2. In the git repository, run git unpack-objects < PACKFILE
  3. Run git cat-file --batch-all-objects --batch-check to find information about all objects known to git. Only one is a blob, which is what git uses to refer to a chunk of actual data.
  4. Run git cat-file -p BLOBID to cat the contents of the blob (the raw zipfile).

For example:

1
2
3
4
5
6
7
8
9
10
11
12
% git init
Initialized empty Git repository in /ctf/3tmp/.git/
% git unpack-objects < ../3.pack
Unpacking objects: 100% (3/3), 24.23 KiB | 24.23 MiB/s, done.
% git cat-file --batch-all-objects --batch-check
4067275272fa8d87b431329240f99e98c8c84887 blob 24633
7695bd963881302327d1ca5ff1fc4c4f04f342a2 tree 33
9f3d8f7b17525ec77c3bcf00ce2a4b305d47c6c9 commit 223
% git cat-file -p 4067275272fa8d87b431329240f99e98c8c84887 > tmp.zip
% unzip tmp.zip
Archive:  tmp.zip
  inflating: 2.pcap

So, we have 2.pcap, and we’re off to the next level!

Layer 2: dnscat2

Upon opening 2.pcap in Wireshark, we’ll notice a large quantity of DNS traffic right off the bat. Using Wireshark’s DNS statistics, we see that it’s mostly larger record types: TXT, MX, and CNAME.

DNS Statistics

The first few queries we see are for dnscat2.c2.challenges.bsidessf.net. Looking up dnscat2 we find that it’s a DNS tunneling protocol written by fellow BSidesSF CTF organizer @iagox86. The good news is that it’s well-documented: both the transport protocol and the command protocol.

Looking at the command protocol, we see that the file data is sent in one contiguous block, so if we can reconstruct the transport protocol, we can just carve out the zipfile we expect at the next layer.

To reconstruct the transport protocol, we must take each DNS response and decode it. Only 3 types of DNS records are being used: TXT, MX, and CNAME. For TXT records, the entire response will be hex-encoded data. For the MX and CNAME records, the response will be formatted like a valid DNS name by appending the domain of the C2 server, so it will be <hexstring>.c2.challenges.bsidessf.net. The hexstring may be split into multiple labels to fit the DNS limits on 63 bytes per label.

The simple way to handle all this is to delete . and .c2.challenges.bsidessf.net from all the responses, so we just have the hex data left. Then, in each response, it begins with the following:

  • 2 octets: packet_id
  • 1 octet: message_type
  • 2 octets: session_id
  • 2 octets: seq number
  • 2 octets: ack number

This is followed by the actual data. If packets were out of order, repeated, or dropped, we might need to deal with this, but I can work around it by just dropping the first 9 octets from each message. I once again turned to scapy to solve this problem:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
def decode_bytes(b):
    return bytes.fromhex(b.replace(b'.c2.challenges.bsidessf.net.', b'').replace(b'.', b'').decode('ascii'))

def c2_pkt(pkt):
    if pkt.haslayer(scapy.DNSRR):
        if isinstance(pkt[scapy.DNSRR].rdata, list):
            return decode_bytes(b''.join(pkt[scapy.DNSRR].rdata))
        return decode_bytes(pkt[scapy.DNSRR].rdata)
    if pkt.haslayer(scapy.DNSRRMX):
        return decode_bytes(pkt[scapy.DNSRRMX].exchange)


pcap = scapy.rdpcap('2.pcap')
pkts = [p for p in pcap
        if p.haslayer(scapy.UDP) and
            p.haslayer(scapy.DNS) and
            p[scapy.DNSQR].qname != b'dnscat2.c2.challenges.bsidessf.net.']
c2_data = [c2_pkt(p) for p in pkts]
c2_data = [p[9:] for p in c2_data if p is not None]
data_stream = b''.join(c2_data)
cut_data = data_stream[data_stream.index(b'PK\x03\x04'):]
open('1.zip', 'wb').write(cut_data)

This gets us 1.zip, which contains 1.pcap, as we expect. Getting close now!

Layer 1: Telnet

This should be the hardest layer by the tradition of Matryoshka. It turns out that I went a little easy here. If we load 1.pcap into Wireshark, we see a single telnet connection.

Telnet Session

There’s no obvious flag, and the login password appears to be thisisnottheflag, but there’s also a command to cat a bunch of data to a flag.txt file:

1
echo -e "\x43\x54\x46\x7b\x62\x61\x62\x79\x5f\x77\x69\x72\x65\x73\x68\x61\x72\x6b\x5f\x64\x6f\x6f\x5f\x64\x6f\x6f\x5f\x64\x6f\x6f\x5f\x62\x61\x62\x79\x5f\x77\x69\x72\x65\x73\x68\x61\x72\x6b\x7d" > flag.txt

If we run this command ourselves, we’re rewarded:

1
2
% echo -e "\x43\x54\x46\x7b\x62\x61\x62\x79\x5f\x77\x69\x72\x65\x73\x68\x61\x72\x6b\x5f\x64\x6f\x6f\x5f\x64\x6f\x6f\x5f\x64\x6f\x6f\x5f\x62\x61\x62\x79\x5f\x77\x69\x72\x65\x73\x68\x61\x72\x6b\x7d"
CTF{baby_wireshark_doo_doo_doo_baby_wireshark}

Conclusion

You can see the automated solution script and all the individual layers in our open-source challenge release. Hopefully you found this challenge fun, educational, and/or challenging. I promise no files were corrupt when they were transferred, it just turns out that not all protocols are so straightforward.

on March 12, 2021 08:00 AM

March 06, 2021

Why PyCharm with WSL2?

PyCharm is a professional grade Python IDE that includes a broad range of capabilities such as auto-completion, debugging and integrated database tooling. Many developers recommend the JetBrains suite for a variety of reasons; however, when working with multiple companies and teams, they may not have a choice of underlying operating system.

Using PyCharm with built-in remote capabilities allows for the same workflow (PyCharm IDE, Linux backend) across platforms including Windows / WSL2, MacOS / Docker, or any desktop and a remote Linux virtual machine in the cloud.

PyCharm and WSL2

Configuring WSL2

When setting up WSL2, check your BIOS settings and make sure that virtualization is enabled, along with VT-d for optimal performance.

For Windows users on the Windows Insider program, open a command prompt with administrator privileges and run wsl --install.

If you are running the stable version of Windows 10, run the following in an administrator command prompt:

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

Reboot and then download the Linux Kernel Update package from Microsoft.

Configure wsl to use version 2:

wsl --set-default-version 2

Once complete visit the Microsoft Store and download Ubuntu 20.04. At this point you should be able to open a terminal (if you don’t have one yet checkout microsoft/terminal) and see an Ubuntu 20.04 command prompt.

WSL2 Ubuntu 20.04 Setup

Download and Configure PyCharm

WSL2 is now installed, and we are ready to download PyCharm from jetbrains.com and run through the installer. Once complete, start PyCharm and click Settings -> Project -> Python Interpreter -> Gear Icon -> Add. For the interpreter type select WSL, and for the path use /usr/bin/python3. At this point I prefer to swap Ubuntu’s default python with miniconda, but I will save that for a later post.

While in the settings panel you can also switch the integrated terminal to use the WSL2 instance. Select Settings -> Tools -> Terminal and under Shell path replace cmd.exe with wsl.exe.

Conclusion

Following the steps outlined above you can now develop fully featured Python applications, with all the advanced IDE features of PyCharm, using Linux but without having to format the system or having to pick a particular set of hardware components. Found a problem or have suggestions on a better method? Let me know!

on March 06, 2021 06:20 PM