February 18, 2020

Ceph is a compelling open-source alternative to proprietary software-defined storage solutions from traditional vendors, with a vibrant community collaborating on the technology. Ubuntu was an early supporter of Ceph and its community. That support continues today, as Canonical maintains premier member status and serves on the governing board of the Ceph Foundation.

With many global enterprises and telco operators running Ceph on Ubuntu, organisations are able to combine block and object storage at scale while tapping into the economic and upstream benefits of open source.

Why use Ceph?

Ceph is unique because it makes data available in multiple ways: as a POSIX-compliant filesystem through CephFS, as block storage volumes via the RBD driver, and as object storage compatible with both the S3 and Swift protocols via the RADOS gateway.

A common use case for Ceph is to provide block and object storage to OpenStack clouds, via Cinder and as a Swift replacement. Kubernetes has similarly adopted Ceph as a popular backend for persistent volumes (PVs) via a Container Storage Interface (CSI) plugin.

Even stand-alone, Ceph is a compelling open-source alternative to closed-source, proprietary storage solutions, as it reduces the operational costs organisations commonly accrue from storage licensing, upgrades and vendor lock-in.

How Ceph works

Ceph storage architecture diagram

Ceph stores data in pools, which users or other services access to obtain block, file or object storage. A Ceph pool backs each of these mechanisms, and characteristics such as replication, data placement, ownership and access rights are expressed on a per-pool basis.
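As an illustration of per-pool configuration, the number of placement groups a pool needs is commonly estimated from the OSD count and the replica count. A minimal sketch of that arithmetic follows; the 100-PGs-per-OSD target is the widely cited upstream rule of thumb, not a figure from this article:

```python
def suggested_pg_count(num_osds: int, replicas: int, target_pgs_per_osd: int = 100) -> int:
    """Round (num_osds * target_pgs_per_osd) / replicas up to the next
    power of two, per the commonly cited Ceph placement-group heuristic."""
    raw = (num_osds * target_pgs_per_osd) / replicas
    power = 1
    while power < raw:
        power *= 2
    return power

# e.g. 12 OSDs with 3-way replication: 400 rounds up to 512 PGs
print(suggested_pg_count(12, 3))
```

Modern Ceph releases can also autotune this value, so treat the calculation as a starting point rather than a fixed answer.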

The Ceph Monitors (MONs) are responsible for maintaining the cluster state, and manage the location of data using the CRUSH map. They operate as a cluster with quorum-based HA, while data is stored and retrieved via Object Storage Devices (OSDs).

There is a 1:1 mapping between a storage device and a running OSD daemon process. OSDs make heavy use of the CPU and RAM of the host they run on, which is why it is important to carefully balance the number of OSDs against the number of CPU cores and the amount of memory when architecting a Ceph cluster. This is especially true when aiming for a hyper-converged architecture (for example, with OpenStack or Kubernetes).

Using LXD as a container hypervisor helps to properly enforce resource limitations on most running processes on a given node. LXD is used extensively to provide the best economics in Canonical’s Charmed OpenStack distribution by isolating the Ceph MONs. Containerising the Ceph OSDs is currently not recommended.

Ceph storage mechanisms

Accessing each data pool equates to choosing the access mechanism. For example, one pool may store block volumes, while another provides the backend for object or file storage. In the case of volumes, the host seeking to mount a volume needs to load the RBD kernel module, after which Ceph volumes can be mounted just as local volumes would be.

Object buckets are generally not mounted – client-side applications can use overlay filesystems to simulate a ‘drive’, but no actual volume is mounted. Instead, the RADOS gateway enables access to object buckets: RADOSGW provides a REST API for accessing objects using the S3 or Swift protocols. Filesystems are created and formatted using CephFS, after which they are exported in a similar fashion to NFS mounts and made available to local networks.
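To give a flavour of what "S3-compatible REST API" means in practice, the sketch below shows the AWS Signature Version 4 key-derivation chain that S3 clients use to sign requests, which S3-compatible gateways such as RADOSGW also accept. The credentials, region and string-to-sign are hypothetical placeholders; a real client would normally use an S3 library such as boto3 rather than signing by hand:

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def signing_key(secret_key: str, date_stamp: str, region: str) -> bytes:
    # SigV4 derivation chain: secret -> date -> region -> service -> "aws4_request"
    k = _hmac(("AWS4" + secret_key).encode(), date_stamp)
    k = _hmac(k, region)
    k = _hmac(k, "s3")
    return _hmac(k, "aws4_request")

def sign(secret_key: str, date_stamp: str, region: str, string_to_sign: str) -> str:
    # The resulting hex digest goes into the request's Authorization header.
    key = signing_key(secret_key, date_stamp, region)
    return hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()

# Hypothetical values, for illustration only.
sig = sign("demo-secret", "20200218", "default", "AWS4-HMAC-SHA256\n...")
print(len(sig))  # a 64-character hex signature
```

Because RADOSGW implements this same protocol, existing S3 tooling works against a Ceph cluster unchanged, with only the endpoint and credentials swapped.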

Volume and object store use cases have been in production at scale for quite some time. Using Ceph to combine volume and object store provides many benefits to operators. Aside from the obvious support for multiple storage use cases, it also allows for the best density when properly architected and scaled.

Ceph storage support with Canonical

Canonical provides Ceph support as part of Ubuntu Advantage for Infrastructure, with Standard and Advanced SLAs corresponding to business-hours and 24×7 support respectively. Each covered node includes support for up to 48TB of raw storage in a Ceph cluster.

This coverage derives from our reference hardware recommendation for OpenStack and Kubernetes in a hyper-converged architecture, which targets an optimal price per TB while preserving the best performance across compute and network in an on-premise cloud. Where a deployment’s node-to-TB ratio does not match this recommendation and exceeds the included limit, Canonical offers per-TB pricing for the additional capacity, accommodating our scale-out storage customers.
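To make the node-to-capacity arithmetic concrete, here is a small back-of-the-envelope calculation. The 48TB-per-node figure comes from the support terms above; 3× replication is an assumed (though common) pool configuration, and real clusters lose further capacity to overheads:

```python
def covered_raw_tb(nodes: int, tb_per_node: int = 48) -> int:
    """Raw Ceph capacity covered by per-node support at 48TB per node."""
    return nodes * tb_per_node

def usable_tb(raw_tb: float, replicas: int = 3) -> float:
    """Approximate usable capacity of a replicated pool, ignoring overheads."""
    return raw_tb / replicas

# e.g. a 10-node cluster: 480TB raw covered, ~160TB usable at 3x replication
raw = covered_raw_tb(10)
print(raw, usable_tb(raw))
```

A deployment whose raw capacity per node exceeds 48TB would fall into the per-TB pricing described above.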

Ceph is available on Ubuntu in the main repository, and as such, users receive free security updates for up to five years on an LTS version. An additional five years of paid, commercial support beyond the standard support cycle is available through UA Infrastructure.

Discover how organisations benefit from Canonical’s Ceph support with these case studies:

Learn more about Canonical’s Ceph storage solution and get more details on our Ceph support offering.

on February 18, 2020 11:58 AM

The Web and Design team at Canonical looks after most of our main websites, the brand, our Vanilla CSS framework and several of our products with web front-ends.  Here are some of the highlights of our completed work over our last two-week iteration.

Web & Brand squad

Our Web Squad develops and maintains most of Canonical’s promotional websites.


We have rolled out instances of tutorials to ubuntu.com and three of our product websites.

The tutorials are now maintained in the relevant Discourse for each site and pulled into the websites. This allows people to contribute to existing tutorials and to help create new ones.

Anbox cloud

We released the anbox-cloud.io site last iteration. This iteration we have been working on the logged-in pages, which let potential customers see the technology in action.

18.04.4 point release

To support the new ‘point’ release, we updated ubuntu.com’s download pages with links to the new images, as well as updated the checksums used to verify each download.  We also updated the China and Japan websites.

You can learn more about the release on the wiki.

Raspberry PI download page refresh

With the 18.04.4 release, we updated the Raspberry Pi download page to make it easier to find the right version of Ubuntu to use.  We also moved the instructions to the thank-you page, with more details on verifying checksums, how to flash your microSD card and more.


We built two new takeovers this iteration.

One for the webinar ‘Artificial Intelligence at the edge in a 5G world’.

And another for the ‘A decision maker’s guide to Kubernetes data centre deployments’ webinar.

Ubuntu Masters

We released an updated version of the Masters’ Conference page, featuring full keynote presentations from three well-respected speakers.

The start of a new kernel section

We are starting to roll out a new section of the website that explains a bit about the Ubuntu kernel.  So far we have two pages, but a couple more will follow soon.

See the kernel section ›

Supporting icons and illustrations

Web icons worked on this iteration

A new batch of illustrations was created for various web pages.

We helped the Desktop Team to create some new icons for the upcoming theme

Supporting marketing

We completed stand design exploration for the upcoming Kubecon event

And helped Marketing generate a number of translated PDF documents

Video style exploration

Worked on developing animation styles for upcoming videos


  • maas.io is now running on a focal container and other sites will be following soon
  • buy.ubuntu.com, built using Shopify, has been updated for the February rollout of the new version of their checkout


The MAAS squad develops the UI for the MAAS project.

Machine listing rebuild

The machine listing rebuild in React is going great! We are slightly ahead of schedule thanks to the hard work of Huw, Kit, and Caleb. In the last iteration, we completed in-table actions, the pools tab, and nearly all of the additional hardware options.


As we are adding LXD as a KVM host to MAAS in 2.8, there are a number of small changes we will introduce in the next release. They will showcase the new data we have and make it easier for users to understand that they are looking at a virtual machine that lives on a host managed by MAAS.

2.7 release

The 2.7 release is coming closer and closer. We did face a number of very unexpected bugs in the UI, which we fixed quickly. Currently we have not had any bugs on our end with the last release candidate, so hopefully this will be the last time we write about 2.7!

Specifying the shared table component

Work is continuing on a comprehensive spec for our cross-product modular table React component.  Among the features specced out this iteration:

  • Column visibility control
  • Column resizing
  • Column reordering
  • Pinned row
  • Row selection
  • Responsive reordering of content
  • Single and double row
  • Dynamic summaries per table / per group of rows


The JAAS squad develops the UI for the JAAS store and Juju GUI projects.

Report on the user testing

The team worked on the final report and analysis of the user testing done during the Product sprint in Cape Town. This report generated a conversation within the team to help define the next changes in UX, functionality and UI.

JAAS dashboard filtering

We have been working on a table filtering component which dynamically displays filterable options. The UI of the drop-down is now functional and clearly displays the values users selected to filter the list of models in the table.

Another form of filtering has been developed in the model details page. When users click on a particular application the data is filtered to related items (units, machines, relations):


The team implemented a new ‘tutorials’ page on jaas.ai, to collect the relevant tutorials previously published on Ubuntu.com:


Evolution of the CharmHub store homepage

In the previous iteration, the homepage content was focused on publishing charms and testimonials from users. We have now updated the design to be more store focused, showing the wide selection of charms available. The user can also filter these charms using the list of categories and publishers on the left-hand side of the page, or via the filter and sort dropdown menu.


The Vanilla squad design and maintain the design system and Vanilla framework library. They ensure a consistent style across Canonical sites.

Documentation search

We’ve been working on replacing the current simple pages filtering with full documentation search for Vanilla. This should make it easier to find relevant content in the docs.

A new version of the docs site is going to be deployed live together with Vanilla 2.7.0 early next week.

Dark theme for contextual menu

We’ve worked on some updates and improvements to the contextual menu component. We added support for a dark theme to it and fixed some bugs around positioning the menu.


Smaller maintenance tasks that we managed to finish this iteration include fixing the placement of the search input in IE11 and adding support for numbers in heading components’ variants (p-heading--1, p-heading--2, etc.) alongside the existing ones (p-heading--one, --two, etc.)

Vanilla React components

The team looked at existing tables, considering functionality and interactions, and worked on the creation of a single modular table across products. The modularity of the table takes into consideration the requirements and use cases at the product/project level.

The aim of this work is to create reusable code and consistent experience and interactions within our suite of products.

You can follow the development of the table on our Discourse posts:



The long-awaited Vanillised SSO has been released to production. Please file a bug if you stumble on any issues.


The Snapcraft team works closely with the Snap Store team to develop and maintain the Snap Store site.

Manually triggering new builds

We’ve been continuing the work to migrate build.snapcraft.io functionality into snapcraft.io this iteration. We’d already done the work to connect a GitHub repo to a snap, but this iteration we’ve been making sure that connection is useful. The work required to build a snap manually is done; we’re just tidying up the UX. This is a large milestone in the migration, as it allows us to prepare an initial version to deploy to production for feedback, which takes us to the other big build-related feature we’ve worked on…

Build messaging flow

Getting the migration finished is only part of the story. The other part is to inform new and established users on the changes, why they’re happening, and what they need to do (if anything).

The strategy is based on a blog post as the main source of communication with other formats being used to reach out and link back. An email will be sent as a notification of the changes, and a forum thread will help us gather user feedback on the new flow for future improvements. There will also be messages and links to the blog post from different relevant places on both build.snapcraft.io and snapcraft.io.


Team blog posts:

on February 18, 2020 10:58 AM

February 17, 2020

A certain amount of kerfuffle over the last couple of days in the half of the Birmingham tech scene that I tend to inhabit, over an article in Business Live about Birmingham Tech Week, a new organisation in the city which ran a pretty successful set of events at the back end of last year.1

I think what got people’s backs up was the following from BTW organiser Yiannis Maos, quoted in the article:

I saw an opportunity borne out of the frustration that Birmingham didn’t really have a tech scene, or at least not one that collaborated very much.

You see, it doesn’t appear that the Tech Week team did much in the way of actually trying to find out whether there was a tech scene before declaring that there probably wasn’t one. If they had then they’d have probably discovered the Birmingham.io calendar which contains all the stuff that’s going on, and can be subscribed to via Google. They’d probably have spoken to the existing language-specific meetups in the city before possibly doing their own instead of rather than in conjunction with. They’d have probably discovered the Brum tech Slack which has 800-odd people in it, or2 CovHack or HackTheMidlands or FusionMeetup or devopsdays or CodeYourFuture_ or yougotthisconf or Tech Wednesday or Django Girls or OWASP or Open Code or any one of a ton of other things that are going on every week.

Birmingham, as anyone who’s decided to be here knows, is a bit special. A person involved in tech in Birmingham is pretty likely to be able to get a similar job in London, and yet they haven’t done so. Why is that? Because Brum’s different. Things are less frantic, here, is why. We’re all in this together. London may have kings and queens: we’re the city of a thousand different trades, all on the same level, all working hand in hand. All collaborating. It’s a grass roots thing, you see. Nobody’s in charge. The calendar mentioned above is open source exactly so that there’s not one person in charge of it and anyone else can pick it up and run with it if we disappear, so the work that’s already gone into it isn’t wasted.

Yiannis goes on to say “I guess we weren’t really banging the drum about some of the successes Birmingham had seen in regards to tech.” And this is correct. Or, more accurately, I don’t personally know whether it’s correct, but I entirely believe it. I’m personally mostly interested in the tech scene in the city being good for people in the city, not about exhibiting it to others… but that doesn’t mean that that shouldn’t be done. Silicon Canal already do some of that, but having more of it can’t be bad. We all want more stuff to happen, there just doesn’t need to be one thing which attempts to subsume any of the others. Birmingham Tech Week’s a great idea. I’d love to see it happen again, and it’s great that Yiannis has taken a lead on this; five thousand people showing up can’t be wrong.

And, to be clear, this is not an attempt to rag on them. I don’t know Yiannis myself, but I’ve been told by people whose opinions I value and who do know him that he’s not intending to be a kingmaker; that what he’s looking to do is to elevate what’s already going on, and add more to it. That’s fantastic. They’ve contacted people I know and trust to ask for opinions and thoughts. I spoke to them when they set up their own events listing and asked people to contribute to theirs specifically and I said, hey, you know there already is one of those, right? If you use that (as Silicon Canal do) and ask people to contribute to that, then we all win, because everyone uses it as the single source and we don’t have fifteen incomplete calendars. And they said, hey, we didn’t know that, soz, but we’ll certainly do that from now on, and indeed they have done so, recommending to event organisers that they add their stuff to the existing calendar, and that’s brilliant. That’s collaboration.

I think of the tech scene in my city like a night out dancing. You go out for the evening to have a dance, to have a laugh. Show up on your own or with a partner or with a group, and then all get out there on the floor together and throw some shapes; be there for one minute or the whole night, nobody minds. And nobody’s directing it. Nobody wins a dance. If someone tries to tell everyone how to dance and when to dance and where to dance… then it stops being fun.

And so there’s a certain amount of resistance, on my side of the fence, to kingmakers. To people who look at the scene, all working together happily, and then say: you people need organising for your own good, because there needs to be someone in charge here. There needs to be hierarchy, otherwise how will journalists know who to ask for opinions? It’s difficult to understand an organisation which doesn’t have any organisation. W. L. Gore and Patagonia and Valve are companies that work a similar way, without direct hierarchy, in a way that the management theorist Frédéric Laloux calls a “teal organisation” and others call “open allocation”, and they baffle people the world over too; half the managers and consultants in the world look at them and say, but that can’t work, if you don’t have bosses, nobody will do anything. But it works for them. And it seems to me to be a peculiarly Brum approach to things. If we were in this for the fame and the glory we’d have gone down to London where everyone’s terribly serious and in a rush all the time. Everyone works with everyone else; BrumPHP talks about BrumJS, Fusion talks about School of Code; one meetup directs people to others that they’ll find interesting; if the devopsdays team want a speaker about JavaScript they’ll ping BrumJS to ask about who’d be good. That’s collaboration. Everyone does their bit, and tries to elevate everyone else at the same time.

So I really hope that the newspaper article was a misquote; that the journalist involved could have looked more into what’s going on in the city and then written something about all of that, too. It’s certainly easy to just report on one thing that’s going on, but exactly what makes the Birmingham tech scene different from others is that it’s rich and deep and there isn’t one convenient person who knows all of it. I’d love to see Birmingham journalism talking more about the Birmingham scene. Let’s hope there’s more of that.

  1. That link describes the 2019 Birmingham tech week at time of writing in February 2020. I do not know whether they’ll keep the 2019 schedule around (and I hope they do and don’t just overwrite it).
  2. to quote Jim
on February 17, 2020 10:44 PM

Welcome to the Ubuntu Weekly Newsletter, Issue 618 for the week of February 9 – 15, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on February 17, 2020 09:53 PM

With February 13th passing, it would appear there are only 3 Malaysian patents left:

  • MY 128994 (possible expiration of 30 Mar 2022)

  • MY 141626-A (possible expiration of 31 May 2025)

  • MY-163465-A (possible expiration of 15 Sep 2032)

These two just expired:

  • MY 118734-A - Exp. Jan 31, 2020

  • PH 1-1995-50216 - Exp. Feb 13, 2020

I am very much not a patent lawyer, but my reading indicates that the 3 remaining patents are really all the same expired US patent, US5565923A, with varying grant dates causing them to expire far in the future.

I've started a detailed tracker for those who want more details.

on February 17, 2020 12:00 AM

February 15, 2020

Getting a new phone

Stuart Langridge

So, I’m getting a new phone. Here’s an insight into my decision-making processes.

I have, repeatedly and irritatedly, complained that phones now are too big. My beloved Sony Xperia Z5 Compact is the right size for a phone, in my opinion. I always, always use my phone one-handed; I never ever hold it in one hand and touch the screen with the other. It is a mystery to me why this, which was the normal way of using a phone for years, has been reclassified as a thing that nobody wants any more, but c’est la vie, I suppose. Anyway, said beloved Z5C finally threw a seven the other day and decided that it wouldn’t do wifi any more. Or, more accurately, that it was fine doing wifi when within about two feet of the router, and not otherwise.

That’s not ideal, I thought.

I mean, it’s five years old. So I probably did OK out of it. And the battery life is shocking now. So I’ve been vaguely thinking about getting something new for a while. I’m not sure that the wifi thing is sensibly repairable, and I read a forum post about a chap who took his Z5C apart to replace the (not-user-replaceable) battery (a process which involves heating it up to break the glue behind the glass, and a bunch of other stuff that there’s no way I’d be able to do without breaking the phone and possibly burning down the building) and while doing so managed to snap a tiny bit of metal which then broke the wifi antenna and made it exhibit the problems I’m seeing. So that’s probably what happened; it got jolted or something. No chance of me fixing that; if I have to solder anything, I’ll screw it up. This is the universe telling me to get a new phone, I thought.

Consequently, it’s off to GSM Arena’s phone finder. I don’t actually have much in the way of requirements for a phone. My original list of needs was:

  • not too big (for what exactly qualifies as not too big, read on)
  • not too expensive (flagship phones are now a thousand pounds! even non-flagship ones are £500! I don’t have a monkey just lying around)
  • NFC (for Google Pay, which I use all the time)
  • a headphone jack (because wireless headphones are pointless and expensive and worse in every conceivable way other than “the cable doesn’t get tangled up”, and I don’t like them and don’t want to buy any)
  • made in the last couple of years, since if I’m spending money on a phone I want it to last a while; there’s not a lot of point replacing my 2015 Z5C with a phone of similar vintage
  • and I like pretty things. Design is important to me. Beauty, vitality, and openness are all important.

This is not much of a strenuous list of requirements, to be honest. Or so I thought. I did some searches, and quickly established that I’d have to get something bigger than the Z5C; there isn’t anything at all that size, these days. So I wandered into town with the intention of picking up some actual phones to get a sense of what was too big.

This was harder than it looks, because basically every phone shop now bolts all their phones to the table so they can’t be picked up. All you can do is jab at the screen like a bloody caveman. This is pretty goddamn annoying when the point of the test is to see what a phone feels like in the hand. The O2 shop had a bunch of plastic models of phones and even those had a massive thing glued on the back with a retracting wire cable in it, as if I’m going to steal a non-functional plastic box, O2, for god’s sake, stop treating your customers like criminals, this is almost as bad as hotels giving you those crap two-part hangers like I’m going to spend £150 on a night in the Premier Inn and then nick the hangers, what’s wrong with you… but it did at least let me establish that the absolute outside maximum size for a phone that I’m able to tolerate is the Samsung Galaxy S10e. Anything bigger than that is just too big; I can’t reach the top of the screen without using my other hand.

A search on gsmarena for phones less than 143mm in height, with NFC and 3.5mm jack, from 2018 onwards lists three phones. The S10e as mentioned, the Sony Xperia XA2, and the Sharp Aquos R2 Compact. Now, I quite like the look of the Aquos (despite all the reviews saying “it’s got two notches! not even the Nazis did that!”) but as far as I can tell it just was flat never made available in the UK at all; getting hold of one is hard. And the S10e, while it seems OK, is a Samsung (which I’m not fond of) and more importantly is £450. This left me looking at the Xperia XA2, which was a possibility — it’s sort of a grand-nephew of my Z5C. Reviews weren’t very encouraging, but I figured… this might be OK.

Andrew Hutchings pointed out on Twitter (because of course I was bitching about this whole situation on Twitter) that there are USB-to-headphone-jack adaptors. Now, I knew this — my daughter uses one to plug her headphones into her iPhone — but for some reason I hadn’t properly considered that option; I’d just assumed that no headphone jack = stupid wireless headphones. An adaptor wouldn’t be that big a deal; my headphones just get thrown in my coat pocket (I have cheapish in-ear headphones, not posh cans that go over the ear and need a bag to carry them around in) and so I’d just leave the adaptor attached to them at all times. That wouldn’t be so bad.

Taking the headphone jack requirement away from the search added two more options (and a bunch of smartwatches, unhelpfully): the Sony Xperia XZ2 Compact and the Nokia 8 Sirocco. I liked the sound of the Nokia, but… not for very good reasons. My favourite ever phone from an industrial design perspective was the Nokia N9, which I loved with a passion that was all-encompassing. I like the Nokia brand; it says classy and well-thought-out and well-integrated and thoughtful and elegant. And “Sirocco” is a cool name; I like things with names. I hate that phones are just called a code number, now. So “Sirocco” is much cooler than “S10e”. None of these are good reasons, particularly the ones that revolve around my nostalgia for the Nokia brand considering that it’s been bought by some other company now. And the Sirocco only got fairly average reviews.

Ah, but then I read the reviews. And all the things that reviewers didn’t like, I either didn’t care about or, more worryingly yet, I completely disagreed with. “As the phone has a 16:9 screen rather than the now more popular 18:9 (or even 19:9) style, it already seems dated” says techradar, for which you can just sod off. Why, why would I want a phone a foot long? That’s the opposite of what I want! So reviews that complain that it’s not tall enough (which was a lot of them) got discounted. Complaints that it’s using an older chipset than some of its contemporaries don’t bother me; it’s quite a bit newer than my current phone, after all. Apparently the camera isn’t perfect, about which I don’t care; it’s got a camera, so I’m good. And they all agreed on two things: it’s Android One, meaning that it’s stock Android and will get updates (which I like, since my Z5C is stuck on Android 7 (!)), and that it’s pretty. I like pretty.

The price tag was off-putting, though. £475 on Amazon. That’s rather too much; I’d have to save up for that, and as noted I have a phone with no wifi, so this problem needs solving sooner rather than later. I don’t mind second-hand, though, so I checked eBay and it was still £250 there, which is on the very utmost outer edge of what I can just drop on a purchase and I’d have to be really convinced of it. I don’t like buying things on the knock-knock, and I am in a 12-month can’t-leave SIM-only contract with Three, so the idea of getting an “upgrade” from my carrier was a no-no even if I wanted to, which I don’t (the SIM-only thing gives me unlimited texts and calls and 6GB of data per month for nine quid. Not forty nine. Nine. I don’t want to lose that).

And then I checked CeX. And CeX had it in stock, online, class A, for £175.

What? A hundred and seventy-five quid for a phone which elsewhere is nearly five hundred?

So I bought it. And now it’s not in stock online any more, so I assume I have the only one they had. This means you can’t do the same. Sorry about that.

It’s due to arrive early next week (which is the problem with buying on a Saturday). I’ll let you know how it goes. I’m rather looking forward to it.

on February 15, 2020 02:09 PM

February 13, 2020

Episode 77 – Passwords, for the thirty-eighth time! Diogo is back from FOSDEM, Tiago keeps trying to optimise how he manages his passwords, and the Earth keeps revolving around the Sun… You know the drill: listen, comment and share!

  • https://keepassxc.org
  • https://addons.mozilla.org/en-US/firefox/addon/keepassxc-browser/
  • https://www.f-droid.org/en/packages/com.kunzisoft.keepass.libre/
  • https://pool.xmr.pt/
  • https://snapcraft.io/cvescan
  • https://pixels.camp/
  • https://fosdem.org


This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–gmail.com.

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay however much you want.

If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, and is licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on February 13, 2020 10:45 PM

Recently, Loongson made some Pi 2K boards available to Debian developers, and Aron Xu was kind enough to bring me one to FOSDEM earlier this month. It’s a MIPS64-based board with 2GB RAM, 2 gigabit ethernet cards, an m.2 (SATA) disk slot and a whole bunch more i/o. More details about the board itself are available on the Debian wiki; here is a quick board tour from there:

On my previous blog post I still had the protective wrapping on the acrylic case. Here it is all peeled off and polished after Holger pointed that out to me on IRC. I’ll admit I kind of liked the earthy feel that the protective covers had, but this is nice too.

The reason why I wanted this board is that I don’t have access to any MIPS64 hardware whatsoever, and it can be really useful for getting Calamares to run properly on MIPS64 on Debian. Calamares itself builds fine on this platform, but calamares-settings-debian will only work on amd64 and i386 right now (where it will either install grub-efi or grub-pc depending on which mode you booted; otherwise it will crash during installation). I already have lots of plans for the Bullseye release cycle (and even for Calamares specifically), so I’m not sure if I’ll get there, but I’d like to get support for mips64 and arm64 into calamares-settings-debian for the bullseye release. I think it’s mostly just a case of detecting the platforms properly and installing/configuring the right bootloaders. Hopefully it’s that simple.

In the meantime, I decided to get to know this machine a bit better. I’m curious how it could be useful to me otherwise. All its expansion ports definitely seem interesting. First I plugged it into my power meter to check what power consumption looks like. According to this, it typically uses between 7.5W and 9W, and about 8.5W on average.

I initially tried it out on an old Sun monitor that I salvaged from a recycling heap. It wasn’t working anymore, but my anonymous friend replaced its power supply and its CFL backlight with an LED backlight; now it’s a really nice 4:3 monitor for my vintage computers. On a side note, if you’re into electronics, follow his YouTube channel where you can see him repair things. Unfortunately the board doesn’t like this screen by default (just a black screen when Xorg starts). I didn’t check whether it was just an Xorg configuration issue or a hardware limitation, but I moved it to an old 720p TV that I usually use for my mini collection and it displayed fine there. I thought I’d mention it in case someone tries this board and wonders why they just see a black screen after it boots.

I was curious whether these Ethernet ports could realistically do anything more than 100mbps (sometimes they sit on a bus that maxes out way before gigabit does), so I installed iperf3 and gave it a shot. The test went through two switches that carry some existing traffic, but the ~85MB/s I got on my first run completely satisfied me that these ports are plenty fast enough.
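As an aside, the back-of-the-envelope arithmetic behind that conclusion (converting the observed ~85MB/s into megabits per second, the unit Ethernet links are rated in) looks like this; it’s a trivial sketch of the conversion, not anything iperf3 itself reports:

```python
# Convert a transfer rate measured in megabytes per second into megabits
# per second. 1 byte = 8 bits, so 1 MB/s = 8 Mbps.

def mbytes_to_mbits(mb_per_s: float) -> float:
    return mb_per_s * 8.0

observed_mbps = mbytes_to_mbits(85)   # 680.0 Mbps
print(observed_mbps > 100)            # well past Fast Ethernet (100 Mbps)
print(observed_mbps / 1000)           # fraction of the gigabit line rate
```

So ~85MB/s is roughly 680 Mbps, nearly seven times what a 100mbps-limited bus could deliver, which is why the result is convincing even with other traffic on the switches.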

Since I first saw the board, I was curious about the PCIe slot. I attached an older NVidia card (one that still runs fine with the free Nouveau driver), attached some external power to the card, and booted it all up…

The card powers on and the fan enthusiastically spins up, but sadly the card is not detected on the Loongson board. I think you need some PC BIOS equivalent stuff to poke the card in the right places so that it boots up properly.

Disk performance is great, as can be expected with the SSD it has on board. It’s significantly better than the extremely slow flash you typically get on development boards.

I was starting to get curious about whether Calamares would run on this. So I went ahead and installed it along with calamares-settings-debian. I wasn’t even sure it would start up, but lo and behold, it did. This is quite possibly the first time Calamares has ever started up on a MIPS64 machine. It started up in Chinese since I haven’t changed the language settings yet in Xfce.

I was curious whether Calamares would start up on the framebuffer. Linux framebuffer support can be really flaky on platforms with weird/incomplete Linux drivers. I ran ‘calamares -platform linuxfb’ from a virtual terminal and it just worked.

This is all very promising and makes me a lot more eager to get it all working properly and to get a nice image generated that you can use Calamares with to install Debian on a MIPS64 board. Unfortunately, at least for now, this board still needs its own kernel, so it would need its own unique installation image. Hopefully all the special bits will make it into the mainline Linux kernel before too long. Graphics performance wasn’t good, but I noticed that they have some drivers on GitHub that I haven’t tried yet; that’s an experiment for another evening.


  • Price: A few people asked about the price, so I asked Aron if he can share some pricing information. I got this one for free; it’s an unreleased demo model. At least two models that are based on this might be released: a smaller board with fewer pinouts for about €100, and the current demo version at about $200 (CNY 1399), so the final version might cost somewhere in that ballpark too. These aren’t any kind of final prices, and I don’t represent Loongson in any capacity, but at least this should give you some idea of what it would cost.
  • More boards: Not all Debian Developers who requested a board have received one; Aron said that more boards should become available by March/April.
on February 13, 2020 08:29 PM

The Ubuntu team is pleased to announce the release of Ubuntu 18.04.4 LTS (Long-Term Support) for its Desktop, Server, and Cloud products, as well as other flavours of Ubuntu with long-term support.

Like previous LTS series, 18.04.4 includes hardware enablement stacks for use on newer hardware. This support is offered on all architectures and is installed by default when using one of the desktop images.

Ubuntu Server defaults to installing the GA kernel; however you may select the HWE kernel from the installer bootloader.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 18.04 LTS.

Kubuntu 18.04.4 LTS, Ubuntu Budgie 18.04.4 LTS, Ubuntu MATE 18.04.4 LTS, Lubuntu 18.04.4 LTS, Ubuntu Kylin 18.04.4 LTS, and Xubuntu 18.04.4 LTS are also now available. More details can be found in their individual release notes:


Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, and Ubuntu Base. All the remaining flavours will be supported for 3 years.

To get Ubuntu 18.04.4

In order to download Ubuntu 18.04.4, visit:


Users of Ubuntu 16.04 will be offered an automatic upgrade to 18.04.4 via Update Manager. For further information about upgrading, see:


As always, upgrades to the latest version of Ubuntu are entirely free of charge.

We recommend that all users read the 18.04.4 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:


If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

#ubuntu on irc.freenode.net

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:


About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:


More Information

You can learn more about Ubuntu and about this release on our website listed below:


To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:


Originally posted to the ubuntu-announce mailing list on Wed Feb 12 18:06:17 UTC 2020 by Łukasz ‘sil2100’ Zemczak, on behalf of the Ubuntu Release Team

on February 13, 2020 02:07 AM

February 12, 2020

Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 18.04.4 LTS has been released! What is Lubuntu? Lubuntu is an official Ubuntu flavor which uses the Lightweight X11 Desktop Environment (LXDE). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock solid […]
on February 12, 2020 09:42 PM

Making A Service Launch

Stephen Michael Kellat

I hinted previously that I would launch this and that there were ironic reasons for doing so. It is best that I explain myself. Explanations generally make things clearer.

There was a push for Erie Looking Productions to return to producing at least general programming. Now that I am no longer a federal civil servant, the restrictions that held me back from doing so for six years are gone. The problem is that the current studio space suffered physical damage. The audio equipment and the recording computer are fine and secured, but the physical space is not usable. We don’t have a fallback space at this time, nor do we have the economic means to procure one. Repairs to the physical space were supposed to be done this month, but I have not heard much about the status of that lately.

I am also dealing with some long-term recovery from some surgery that was done. While I can talk to people, it can sound slightly garbled at times. The last check-in at the doctor’s office didn’t show me breaking any speed records, so I am stuck with “slow and steady” for the time being. That possibly won’t be fixed properly until some point in April.

While I know the folks behind the Ubuntu Podcast are planning to return to air shortly, I will instead be taking a different path. The current hotness appears to be launching your own newsletter, such as this technology one. Since podcasting is not feasible at the moment, reformatting content into a strictly textual form seems like the simplest way forward for now.

I could operate an announce-only mailman list on a minimal Ubuntu 19.10 droplet on Digital Ocean. However, my current economic circumstances have instead pushed me over to trying to utilize tinyletter.com instead. To quote the 13th & 21st US Secretary of Defense, Donald Rumsfeld, in an apt manner: “As you know, you go to war with the army you have, not the army you might want or wish to have at a later time.”

The newsletter is entitled “The Interim Edgewood Stratagem”. Release frequency should be once weekly; I haven’t settled on a firm day yet, but it would likely be out on a Wednesday, Thursday, or Friday each week. We’ll see how it develops. I am pretty sure we are not initially going to dive into unknown unknowns if we can help it.

Twice a month there is the chance that the text of the sermon I present on Sunday morning at the nursing home may appear that week. The first release will in fact be the text of my sermon from Sunday. That is planned to go out later in the day on February 12th.

The first regular essay would go out during the week of February 16th. In that essay I want to talk about a bit of an unresolved mystery case involving a Pacific island nation suffering an Internet blackout. Style-wise this wouldn’t be Darknet Diaries, but rather something a bit different. After the Pacific jaunt would hopefully come an essay on the changing business regulation climate in the US, which may not help folks in the open source world who “hang their own shingle” to work as independent contractors. Once through that, we would see where stories go. I do know I do not want to talk about the presidential primary campaigns if at all possible, as nothing productive comes out of worrying about them while other craziness is in play.

You can sign up at the subscribe page for free. Support is welcomed though not required through avenues such as Liberapay. If you just want to drop in a quarter via PayPal or maybe just two cents that is doable as well. The shopping wishlist for replacement gear also still exists as I watch equipment fail and otherwise decay.

This is an adventure that I am nervous about starting. Once upon a time I was a working journalist who was routinely published in print, no less. The media landscape isn’t the same these days but this will likely feel like getting back on a bicycle to ride again after a long time away.

To quote a commercial that aired quite a bit a few years ago: “Let me tell you a story…”

on February 12, 2020 03:46 AM

February 10, 2020

Most Linux based distributions come pre-installed with DBus, which is a language-independent means of IPC on such systems. DBus is great and has been extensively used for a long time. It is, however, written largely to be used on a single client computer, where apps running locally are able to talk to each other. It could be used over TCP, but it may not be suitable for that, for reasons I state below.

In modern times, and especially with the advent of smartphones, many new app communication paradigms have appeared. With IoT being the new cool kid in town, it’s becoming more and more a requirement that different apps running on a premises be able to “talk” to each other. The DBus daemon can be accessed over TCP, but a client running in a web browser cannot talk to it because browsers no longer provide direct access to the TCP socket, so writing a DBus client library isn’t possible. For Android and iOS, talking to a DBus daemon running on a PC would need new implementations.

Much of the above effort could be reduced if we used a more general purpose protocol that supports PubSub and RPCs, is secure (supports end-to-end encryption), is cross-platform and has an ever-increasing ecosystem of client libraries. The WAMP protocol is one such protocol: it can run over WebSocket, allowing “free” browser support, and it also runs over RawSocket (custom framing atop TCP). In principle, WAMP could run on any bi-directional and reliable transport, so the future prospects of the protocol look quite good.
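To make the two messaging patterns concrete, here is a toy, in-process sketch of what a WAMP router does conceptually: it fans published events out to subscribers, and routes calls to whichever peer registered a procedure. This is not a real WAMP implementation (no network, serialization, sessions or auth), and the `ToyRouter` name is invented purely for illustration:

```python
# Toy in-process router illustrating the two patterns a WAMP router
# unifies: publish/subscribe and routed RPC. Purely illustrative — a
# real router (e.g. Crossbar) does this over WebSocket/RawSocket with
# sessions, serialization and authentication.

class ToyRouter:
    def __init__(self):
        self._subs = {}    # topic URI  -> list of subscriber callbacks
        self._procs = {}   # procedure URI -> the registered callable

    # --- PubSub: every subscriber receives each published event ---
    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, *args):
        for cb in self._subs.get(topic, []):
            cb(*args)

    # --- RPC: calls are routed to whoever registered the procedure ---
    def register(self, uri, func):
        self._procs[uri] = func

    def call(self, uri, *args):
        return self._procs[uri](*args)

router = ToyRouter()

# A subscriber collects temperature events published by any peer.
events = []
router.subscribe("sensor.temperature", events.append)
router.publish("sensor.temperature", 21.5)

# A callee registers a procedure; a caller invokes it through the router.
router.register("math.add", lambda a, b: a + b)
result = router.call("math.add", 2, 3)
```

The point of the pattern is that publisher and subscriber (or caller and callee) never hold references to each other; they only know the router and agreed-upon URIs, which is what lets clients in different languages and on different devices interoperate.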

To that end, I have been working on a pet project for the last couple of months, called DeskConn. It uses Crossbar as the WAMP router (equivalent: the DBus daemon) and couples it with an authentication scheme and service discovery using Python zeroconf, allowing the daemon running on the desktop/RPi to be discoverable by clients on the local network (WiFi, LAN or other interfaces).

With the network layer figured out, writing applications on top of it is pretty straightforward and can be done with very little code. I’ll come up with some example code in different programming languages in a later blog post. For the curious, the umbrella deskconn project has quite a few sub-projects to be run in different environments: https://github.com/deskconn/

Note: I am a Core developer at Crossbar.io GmbH, the company that funds the development of Crossbar (the router) and a few WAMP client library implementations in Java, Python, JS/Node and C++, under the Autobahn project. I am the maintainer of autobahn-java and autobahn-js. DeskConn is a personal project that I have been working on in my free time.

A wider list of implementations, mostly done by the community, can be seen here: https://crossbar.io/about/Supported-Languages

on February 10, 2020 05:31 PM

February 07, 2020

I came back from FOSDEM on Tuesday but got busy with my day job at Crossbar.io. Finally, today, when I got around to writing something, I found my blogspot-based web page really uncomfortable to navigate and manage, so I spent the last few hours moving my blog over to WordPress. I also had to update the Planet Ubuntu bzr repository for my new blog to show up on Planet Ubuntu.

Having been part of the Ubuntu community, I have had the chance to travel to different software events, mostly Ubuntu specific. While at Canonical, my travel was for the Ubuntu Developer Summit and for internal Canonical sprints. After the Canonical layoffs in 2017, I didn’t really travel much for conferences, though last year, while visiting Crossbar.io GmbH’s HQ in Erlangen, Germany, I used that opportunity to plan my trip so that it coincided with UbuCon Europe in Sintra. That was a great event and I got to meet really great people; the social part of that event was on par with, or even better than, the talks/workshops.

So when FOSDEM’s dates were announced, I was yet again excited to travel to a community event. Since it’s known as the biggest FOSS conference in Europe, and lots of super-intelligent people from the wider open-source community attend it every year, I knew I had to be there. To that end I applied for the Ubuntu community donation fund and, guess what, I got the nod. The rest is just details.

Talks were great

I attended lots of great talks (lightning talks as well). One of the great, “must watch” talks was from James Bottomley of IBM, titled “The Selfish Contributor Explained”. According to him, to unleash the true potential of an employee, companies should make an effort to figure out what interests their employees; if a developer is working on something they enjoy, they will likely go out of their way to make things work better.

Looking to the future, something that affects us all is how the web will transform in the coming years; on that topic, Daniel Stenberg (curl’s creator) gave an informative talk about HTTP/3 and the problems it solves. Of course, much of the “heavy lifting” was done by the new underlying transport, QUIC (thanks, Google, for the earlier work).

Behold HTTP/3 is coming

I gave a talk

DeskConn is a project that I have been working on in my free time for a bit, and I wanted to introduce it to a wider audience, hence I gave a brief talk on what could potentially be done with it. The DeskConn project enables network-based IPC, allowing different apps, written in different languages, to communicate with each other; since the technology is based around WebSocket/WAMP/Zeroconf, a client could be written in any programming language that has a WAMP library.

For simplicity’s sake: it’s a technology that could enable the creation of projects like KDE Connect, but one that runs on all platforms: Windows, macOS and Linux.

My talk about the DeskConn project

Met old colleagues and friends

FOSDEM gave me the opportunity to meet lots of great people that I truly admire in the Ubuntu community, people I hadn’t seen or talked to for more than 3 years.

I met quite a few people from the Ubuntu desktop team, and it was refreshing to hear how hard they are working on making Ubuntu 20.04 a success. Olivier Tilloy and I had a short discussion about the browser maintenance he does to ensure we have the latest and greatest versions of our two favourite browsers (Firefox and Chromium). Jibel told me about the ZFS installation feature work that he and Didier have been doing; I hope we’ll be able to use that technology in “production” soon.

from left to right: Martin Pitt (from Red Hat), Iain Lane, Jean-Baptiste Lallement and I


My first FOSDEM was a great learning experience, navigating around the ULB is also a challenge of sorts but it was all worth it. I’d definitely go back to a FOSDEM given the chance, maybe next year 😉

on February 07, 2020 10:24 PM

On 1-2 February I attended FOSDEM. This is only the second time I’ve attended this annual event in Brussels, and it’s just about as crazy as it was last year with over 8000 attendees and 835 talk/BoF/etc sessions.

I did all the typical FOSDEM stuff: visiting booths, attending a few talks and BoFs, catching up with a few people, meeting some new ones I’d only known on IRC, signing a few GPG keys and consuming a whole lot of club mate, fries, chocolate and waffles.

Below follow some random bits that I happen to remember or took photos of, in no particular order of importance. I wish I had got some photos with some of the cool people I so seldom see; next time I attend an event like this, I’ll pay more attention to that.

Axiom Free hardware camera

Axiom Beta camera profile, from the Apertus website

I attended a talk about the Axiom free video camera along with a few members of the DebConf video team. It’s completely free hardware, professional grade and overall very impressive. Here’s a YouTube link to some sample footage taken with an earlier model of the camera which they displayed at FOSDEM.

All the nice things above come at a cost: kits currently range from around £4000 to £6000, and then you have to assemble it yourself at a component level. It’s not likely that we’ll ever use the main Axiom camera for DebConf; it’s really better suited for projects creating film-grade content, or for academic purposes where you might want to capture something very specific at a high colour depth or frame rate (it can do up to 300fps).

Fortunately, there’s a project by some talented developers to create a smaller version of the camera (the Axiom Micro), which may be a lot more appropriate for DebConf. We have a lot of problems with the current hardware stack, like needing to source a whole bunch of local PCs and shipping a whole lot of hardware to new and interesting countries every year, each with their own unique logistical problems. Using free camera hardware could go a long way towards reducing the sheer amount of hardware needed for videoing a DebConf, without having to lose any functionality. It will probably be quite some time before this could even be considered a viable option, but it’s nice to get a sneak peek of what might be possible in the not-so-distant future.

Godot Game Engine

It’s been about 20 years since I wrote my last game, and I’ve been toying with the idea of writing a new and very niche adventure game for the last two years. In the last few months, I’ve been shopping around for game engines. I came very close to just settling on making it a Flask app, playable in a web browser. That might give me an excuse to play with Brython too, which is a Python interpreter that runs in JavaScript.

In recent months, I’ve been reading more and more about Godot (at FOSDEM I learned that upstream pronounces this as “guh-dough”, but they’re fine with you pronouncing it any way you want).

What I like about it:

  • Very flexible game engine for both 2D and 3D graphics
  • Fully free software with no gotchas
  • Widely cross-platform (compiles for Windows, Linux, macOS, Android, WebAssembly and more)
  • Nice built-in scripting language called GDScript (syntax similar to Python); you can also program in a lot of existing languages like C++, Rust, C#, Python, etc.
  • It has a fully integrated development environment in which you can create your games
  • Its IDE and runtime engines are already packaged in Debian, even in stable (Debian 10)

I could go on, but at this point I’m personally sold on these core points already. Here is a nice video comparing some of its pros and cons; its biggest downside is that its 3D performance isn’t that great compared to the other major 3D game engines, but a lot of work is going into ironing out a few kinks in that area.

I liked visiting their booth and finding out more about the system. Unfortunately I seem to have lost my photo of their booth, but fortunately I bought this really cool t-shirt. I hope to find some time to properly dive into Godot soon.

On the last day of FOSDEM, news broke that they received an Epic MegaGrant that they had applied for. That’s quite some news, since they are in some ways a competitor to Epic’s own game engine, but I think Epic did the right thing, and it makes sense for them to have a good relationship with an open source engine such as Godot. Anyway, read more about Godot on their website.

Librem 5 BoF

When the crowdfunding campaign kicked off for the Librem 5 (a fully free phone created by Purism), it didn’t take me long to decide to order one. That was quite some time ago, and it seems I may actually receive my long-awaited device in late March / early April. There was a Librem 5 BoF at FOSDEM, so of course I went.

It was nice to finally hold one of these devices and see what it’s really like. Many on-line reviews complained that it’s bulky. It is indeed bulky compared to a sleek modern phone, but it’s not nearly as thick as something like the Nokia N800/N900, which was probably the closest thing to this phone that existed before. It’s really pleasant to hold and will fit comfortably in cargo pants. Unfortunately it did feel a bit warm (not quite hot, but warm enough that it would distract me if it got that warm in my pocket). They hope to improve power management drastically by the end of the year. One of the biggest challenges is that all the major components are discrete, separate chips. This is done by design and makes the phone more modular, but it does make power saving a bigger challenge since coordinating timing and wake-ups between these discrete chips takes a lot more finesse than if they were integrated in a single chip.

I asked about their roadmap and whether they’ll start working on the next version of the phone now that this phone is pretty much done on a hardware level. Fortunately for owners of this model, their focus will remain on the current Librem 5 and to optimize it, so we shouldn’t expect any new models or major development on a new generation of Librem 5 hardware for the next 2 years or so.

I just played with the device for about two minutes. The Gnome apps are fast and responsive and just feel natural on this form factor. I’m looking forward again to having a pocket computer that can run Debian. I hope they improve the terminal app a bit, I felt that it lacked some buttons. On Android I use JuiceSSH which makes it really easy to navigate typical terminal apps like mc, htop and irssi.

I asked whether they’re thinking of creating any accessories for this phone, since a keyboard case would be nice. They said that they’re certainly looking into it and will be releasing some limited accessories for the phone, but couldn’t elaborate on exactly what those would be yet.

Loongson development board

Loongson made some Loongson 2K Pi boards available to Debian Developers. I requested one and thanks to the two DDs who co-ordinated to bring it in from China I got my one at FOSDEM. It’s a MIPS64 based CPU with a Vivante GC1000 Series GPU. I got the board with a 16GB SSD, a nifty mountboard and a 5V/3A power adaptor. It’s got dual gigabit ethernet ports, space for a wifi card, lots of GPIO pins and even a PCIe slot.

I’ve never tried a proper Debian system on a MIPS platform before. Unfortunately you need a special kernel to boot this, but that will hopefully change in the future. A standard Debian mips64el userland should run fine on it, I’ll try it out over the weekend. I couldn’t help myself from assembling the kit before finishing this blog post. Even if it ends up not being very useful for Debian development, I’ll certainly be able to find a use for it with all the extensibility it has. I’ll do an update blog post and/or video about it when I’ve played with it a bit.

Gentoo prefixes

At the Gentoo stand, I learned about Gentoo prefixes. These allow you to set up a directory on an existing Linux (or in some cases, even another) system where you can emerge packages from Gentoo ports. You can then just add this prefix to your path to easily run applications from it. This is especially useful for people who don’t have root access on their computer but need access to some additional software, or who want a newer (or older) version than what’s available in their system release.

I’m kind of surprised that there’s been so little talk about enabling some kind of user-mode APT in Debian, where you could get something similar to a Gentoo prefix and install packages from the archives just for your user. I guess the interest is currently too low and the amount of work too much. When I find some time to properly play with Gentoo prefixes, I’ll look into how much work it would be to package its scripts for Debian; someone might find it useful.

Perl6 / Raku

At the Perl booth I bought a book on getting started with Perl6 (Raku). It’s a thin book and was just €5, so I thought I might as well give it a shot. I made it through a quarter of the book by the end of FOSDEM, and it seems I have since misplaced it. I found quite a few aspects of the language interesting, but it seems that it’s still way too easy to write modern Perl/Raku that would later be considered hard to read. I’ll at least finish the book if it emerges again.

ReactOS running on real hardware

I met some actual ReactOS developers after being interested in the system for a long time. I try it out in a virtual machine whenever they make a new release, to track its progress. I’ve tried it on some physical machines too, but real hardware support is still very sketchy for ReactOS. It turns out that old Dell laptops can actually run ReactOS; the developers say these are pretty much their reference machines and they’re cheap to buy online. Unfortunately the graphics still only run in VESA mode on this hardware, so there’s no hardware acceleration. I got ReactOS installed on an old netbook once, but when I installed the Intel graphics drivers for Windows it would just blue-screen. I asked them why graphics drivers are so difficult; they said that graphics drivers tend to be written in odd ways, poking the system in all kinds of weird places where it shouldn’t, or where you wouldn’t expect it to.

Pinebook at KDE Booth

At the KDE booth I saw a Pinebook for the first time. I was quite surprised at both how good the build quality is (you could almost mistake it for a Dell Latitude variant) and how fast KDE runs on it. It made me want one, but I’m trying to avoid the temptation of buying more hardware that I probably won’t even use all that much.

I also ended up circling the KDE booth a lot on Sunday in search of Adriaan de Groot, the lead maintainer behind Calamares. He was looking for me too but FOSDEM is just too darn big.


The Debian booth was busy as always, I tried to get a good shot of it but it was not easy.


I came across these young people doing a great job of running a Debian-Edu booth, patiently explaining what Debian-Edu is to people who come by to visit the booth. I have no idea who they are and asked Holger if he knows them, he says they seem to know who he is but he doesn’t know them. They got very engaged in conversations with booth visitors so I decided not to bother them too much at that point and come back to chat later, but FOSDEM gets really busy so I didn’t get the chance.

Debian-Edu pamphlets

LPI Booth

I meant to say hi to Jon “maddog” Hall too, but syncing our available free time proved to be tough. I saw him here explaining when he got this t-shirt and why it was significant, I couldn’t really hear so well from the back since the hall was quite noisy at that point. I’ll ask him about that when I talk to him again on our next video interview.


Back in the 90’s I had high hopes for BeOS (which was bought by Palm Inc. as it rose in popularity), now reborn as a free software operating system known as Haiku. I got a Haiku CD that I’ll try on an old laptop that should run it well.


The Nuspell developers ran a booth and were at the MiniDebCamp before FOSDEM too. They are writing a standalone spell checker that can be integrated into other applications, so that this work doesn’t have to be re-invented for every app. They are looking for people to help write bindings for Python, Golang and Java. I’m just passing on the message: if you want to help, get in touch with them.

More random stuff

I noticed this “JavaScript on MS-DOS?” t-shirt, which caught my attention. After some quick searching (I didn’t try the QR code) I found this Adafruit blog entry, which led me to its GitHub repository. I like to play with old computers and I’m kind of curious how this will run on an actual DOS machine (something that I even have!).

Unrelated, but I also came across JS-DOS 6.22, a DOS emulator written in JavaScript. So I guess you could run JavaScript in a DOS emulator running on JavaScript.

I didn’t have time to ask any questions at the Automotive Grade Linux booth, but they had an extensive setup that seemed to emulate the electronics on a Suzuki dashboard.

“GMail is eating email” poster spotted at FOSDEM.
The 0AD game running on various devices.
GNOME Demos and t-shirts
Who uses FreeBSD?
VLC developers are somewhat easy to spot.
Oh what a coincidence! A distro release during FOSDEM!


My photos above mostly cover only part of the K building during the FOSDEM event; a large number of videos covering the talks will be released soon. The status of the videos is available on this year’s FOSDEM page.

UPDATE: Videos are now live at https://fosdem.org/2020/schedule/events/

on February 07, 2020 12:42 PM

February 06, 2020

Episode 76 – PUP went to FOSDEM, again! We couldn’t miss FOSDEM, the second-best free software event, right after Ubucon Europe in Sintra… You know the drill: listen, comment and share!

  • https://fosdem.org


This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–gmail.com.

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of it for 15 dollars, or different parts depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want.

If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

The intro music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on February 06, 2020 10:45 PM

February 05, 2020

KUserFeedback 1.0.0

Jonathan Riddell

KUserFeedback is a framework for collecting user feedback for applications via telemetry and surveys.

The library comes with an accompanying control and result UI tool.


Signed by Jonathan Riddell <jr@jriddell.org> 2D1D5B0588357787DE9EE225EC94D18F7F05997E

KUserFeedback as it will be used in Plasma 5.18 LTS


on February 05, 2020 02:21 PM

Early February Miscellany

Stephen Michael Kellat

In no particular order:

  • After fussing with it enough I was able to move the website for Erie Looking Productions over to a different provider. Eventually there will be an SSL certificate once that actually generates within the next day or so. The transition had a few too many moving parts to it which resulted in a bit of breakage. Fortunately the website wasn’t down too long.

  • I got word back that the almost-novella story I submitted for a contest didn’t make it to the list for judges to consider. It is a pretty big contest. The question now is what to do with the story. It is long enough that if I utilize the novel class in LuaLaTeX with appropriate font choices and set my paper size wisely, I could possibly make a print offering somewhere like Lulu and release it as an independent pocket book, as well as make an ebook offering on Leanpub. Since the story was originally written step by step in a gitit wiki, I also ended up using the markdown package found on the Comprehensive TeX Archive Network, together with some bash scripting and coreutils usage, to easily shift the raw text into submission format using the science fiction manuscripts class. Yes, it happened to be a dirty hack that I’m not ready to stick on Launchpad anywhere, but it worked nicely. No, pandoc was not used in this scenario.

  • So far none of the packages I have installed on my Focal Fossa machine have significantly broken on me. This is good. I have been using the machine for day to day use.

  • I’ve disappeared from IRC again as the droplet on Digital Ocean that had my ZNC bouncer had to be turned off. I’ll figure something out eventually and make a return when resources permit.

  • There may be a need for me to start a newsletter on tinyletter to try to get familiar with the platform and be able to evaluate it. I cannot actually engage in podcasting right now for some ironic reasons. Since I cannot be on a microphone, and beyond being a writer or post-production editor, it seems maintaining a newsletter would be an interesting side trip for now. We’ll see if I go ahead and launch that project. Watch this space for details…

on February 05, 2020 03:12 AM

February 03, 2020

You are creating LXD containers and you enter a container (lxc shell, lxc exec or lxc console) in order to view or modify its files. But can you access the filesystem of the container from the host?

If you use the LXD snap package, LXD mounts the filesystem of each container in a subdirectory under /var/snap/lxd/common/lxd/storage-pools/lxd/containers/. If you run the following, you will see a list of your containers; each container can be found in a subdirectory named after the container.

$ sudo -i
# ls -l /var/snap/lxd/common/lxd/storage-pools/lxd/containers/

But the container directories are empty

Most likely (unless you use a dir storage pool) the container subdirectories are empty. The container is running, yet its subdirectory is empty?

This happens because of the way LXD uses Linux namespaces. You need to enter the mount namespace of the LXD service in order to view the container files from the host. Here is how it’s done. With -t we specify the target, the process ID of the LXD service. With -m we specify that we want to enter the mount namespace of that process.

$ sudo nsenter -t $(cat /var/snap/lxd/common/lxd.pid) -m
[sudo] password for myusername:

In another terminal, let’s launch a container. You will be investigating this container.

$ lxc launch ubuntu:18.04 mycontainer
Creating mycontainer
Starting mycontainer

Now, let’s look at the files of this container. There’s a backup.yaml, which is similar to the output of the command lxc config show mycontainer --expanded but has additional keys for pool, volume and snapshots. This file is important if you lose your LXD database. The metadata.yaml file, together with the templates/ directory, describes how the container was parameterized. In an Ubuntu container the defaults are used, except for the networking in templates/cloud-init-network.tpl, which sets up a minimal default configuration for eth0 to obtain a DHCP lease from the network. And last is rootfs/, which is the location of the filesystem of the container.

# cd /var/snap/lxd/common/lxd/storage-pools/lxd/containers/
# ls -l mycontainer/
total 6
-r--------  1 root    root    2952 Feb  3 17:07 backup.yaml
-rw-r--r--  1 root    root    1050 Jan 29 23:55 metadata.yaml
drwxr-xr-x 22 1000000 1000000   22 Jan 29 23:19 rootfs
drwxr-xr-x  2 root    root       7 Jan 29 23:55 templates

The rootfs/ directory has UID/GID 100000/100000. The files inside the root filesystem of the container have IDs that are shifted by 100000 from the typical range 0-65534. That is, the files inside the container have IDs that range from 100000 to 165534. The root account in the container has real UID 100000 but appears as 0 inside the container. Here is the listing of the root directory of the container, as seen from the host.

# ls -l /var/snap/lxd/common/lxd/storage-pools/lxd/containers/mycontainer/rootfs/
total 41
drwxr-xr-x  2 1000000 1000000 172 Jan 29 23:17 bin
drwxr-xr-x  2 1000000 1000000   2 Jan 29 23:19 boot
drwxr-xr-x  4 1000000 1000000  15 Jan 29 23:17 dev
drwxr-xr-x 88 1000000 1000000 176 Feb  3 17:07 etc
drwxr-xr-x  3 1000000 1000000   3 Feb  3 17:07 home
drwxr-xr-x 20 1000000 1000000  23 Jan 29 23:16 lib
drwxr-xr-x  2 1000000 1000000   3 Jan 29 23:15 lib64
drwxr-xr-x  2 1000000 1000000   2 Jan 29 23:15 media
drwxr-xr-x  2 1000000 1000000   2 Jan 29 23:15 mnt
drwxr-xr-x  2 1000000 1000000   2 Jan 29 23:15 opt
drwxr-xr-x  2 1000000 1000000   2 Apr 24  2018 proc
drwx------  3 1000000 1000000   5 Feb  3 17:07 root
drwxr-xr-x  4 1000000 1000000   4 Jan 29 23:19 run
drwxr-xr-x  2 1000000 1000000 221 Jan 29 23:17 sbin
drwxr-xr-x  2 1000000 1000000   3 Feb  3 17:07 snap
drwxr-xr-x  2 1000000 1000000   2 Jan 29 23:15 srv
drwxr-xr-x  2 1000000 1000000   2 Apr 24  2018 sys
drwxrwxrwt  8 1000000 1000000   8 Feb  3 17:08 tmp
drwxr-xr-x 10 1000000 1000000  10 Jan 29 23:15 usr
drwxr-xr-x 13 1000000 1000000  15 Jan 29 23:17 var

If we create a file in the container’s rootfs from the host, how will it look from within the container? Let’s try.

root@mycomputer:/var/snap/lxd/common/lxd/storage-pools/lxd/containers/mycontainer/rootfs# touch mytest.txt

Then, from the container, we run the following. A file whose UID and GID are invalid for the container appears with UID nobody and GID nogroup. So if you notice too many files owned by nobody in a container, there is a chance that something went wrong with the IDs and it requires investigation.

$ lxc shell mycontainer
mesg: ttyname failed: No such device
root@mycontainer:~# ls -l /mytest.txt 
-rw-r--r-- 1 nobody nogroup 0 Feb  3 15:32 /mytest.txt


Error: I did nsenter but still cannot see any files?

If you rebooted your LXD computer and the container is set not to autostart after boot, then LXD optimizes here and does not mount the container’s rootfs. You can either start the container (so LXD performs the mount for you), or mount it manually.

To manually mount a container called mycontainer2 (this example uses a ZFS storage pool named lxd), you would run the following

# mount -t zfs lxd/containers/mycontainer2 /var/snap/lxd/common/lxd/storage-pools/lxd/containers/mycontainer2 


We have seen how to enter the mount namespace of the LXD service, have a look at the files of the containers, and also perform this mount manually, if needed.

on February 03, 2020 03:54 PM

February 01, 2020

As we begin getting closer to the next release date of Ubuntu Studio 20.04 LTS, now is a great time to show what the best of the Ubuntu Studio Community has to offer! We know that many of our users are graphic artists and photographers and we would like to... Continue reading
on February 01, 2020 08:45 PM

January 31, 2020

Full Circle Magazine #153

Full Circle Magazine

This month:
* Command & Conquer
* How-To : Python, Test Linux in VirtualBox, and Darktable
* Graphics : Inkscape
* Graphics : Krita for Old Photos
* Linux Loopback: Project Trident and other BSD options
* Everyday Ubuntu
* Interview : FuryBSD Developer
* Review : mtPaint
* Ubuntu Games : Stygian
plus: News, My Opinion, The Daily Waddle, Q&A, and more.

Get it while it’s hot!


on January 31, 2020 08:40 PM

January 29, 2020

I want to have a head start to make sure Ubuntu 20.04 works flawlessly by the time it releases. I am however making some bold moves, like setting up ZFS on Root during install, which is currently marked as EXPERIMENTAL. TODO zfs on root screenshot

After the installation is done, we can check that the installer creates two pools:

$ zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
bpool  1,88G   178M  1,70G        -         -     0%     9%  1.
on January 29, 2020 04:29 PM

January 28, 2020

My first interaction with the Ubuntu community was in March of 2005 when I put Ubuntu on an old Dell laptop and signed up for the Ubuntu Forums. This was just a few years into my tech career and I was mostly a Linux hobbyist, with a handful of junior systems administrator jobs on the side to do things like racking servers and installing Debian (with CDs!). Many of you with me on this journey have seen my role grow in the Ubuntu community with Debian packaging, local involvement with events and non-profits, participation in the Ubuntu Developer Summits, membership in the Ubuntu Community Council, and work on several Ubuntu books, from technical consultation to becoming an author on The Official Ubuntu Book.

These days I’ve taken my 15+ years of Linux Systems Administration and open source experience down a slightly different path: Working on Linux on the mainframe (IBM Z). The mainframe wasn’t on my radar a year ago, but as I got familiar with the technical aspects, the modernization efforts to incorporate DevOps principles, and the burgeoning open source efforts, I became fascinated with the platform.

As a result, I joined IBM last year to share my discoveries with the broader systems administration and developer communities. Ubuntu itself got on board with this mainframe journey with official support for the architecture (s390x) in Ubuntu 16.04, and today there’s a whole blog that gets into the technical details of features specific to Ubuntu on the mainframe: Ubuntu on Big Iron

I’m excited to share that I’ll be joining the author of the Ubuntu on Big Iron blog, Frank Heimes, live on February 6th for a webinar titled How to protect your data, applications, cryptography and OS – 100% of the time. I’ll be doing an introduction to the IBM Z architecture (including cool hardware pictures!) and general security topics around Linux on Z and LinuxONE.

I’ll then hand the reins over to Frank to get into the details of the work Canonical has done to take advantage of hardware cryptography functions and secure everything from network ports to the software itself with automatic security updates.

What I find most interesting about all of this work is how much open source is woven in. You’re not using proprietary tooling on the Linux level for things like encryption. As you’ll see from the webinar, on a low level Linux on Z uses dm-crypt and in-kernel crypto algorithms. At the user level, TLS/SSL is all implemented with OpenSSL and libcrypto. Even the libica crypto library is open source.

You can sign up for the webinar here, and you’ll have the option to watch it live or as an on-demand replay: How to protect your data, applications, cryptography and OS – 100% of the time and read the blog post from the Ubuntu blog here. We’re aiming to make this technical and fun, so I hope you’ll join us!

on January 28, 2020 06:26 PM

January 27, 2020

It has been a while since the last AppStream-related post (or any post for that matter) on this blog, but of course development didn’t stand still all this time. Quite the opposite – it was just me writing less about it, which actually is a problem as some of the new features are much less visible. People don’t seem to re-read the specification constantly for some reason 😉. As a consequence, we have pretty good adoption of features I blogged about (like fonts support), but much of the new stuff is still not widely used. Also, I had to make a promise to several people to blog about the new changes more often, and I am definitely planning to do so. So, expect posts about AppStream stuff a bit more often now.

What actually was AppStream again? The AppStream Freedesktop Specification describes two XML metadata formats to describe software components: One for software developers to describe their software, and one for distributors and software repositories to describe (possibly curated) collections of software. The format written by upstream projects is called Metainfo and encompasses any data installed in /usr/share/metainfo/, while the distribution format is just called Collection Metadata. A reference implementation of the format and related features written in C/GLib exists as well as Qt bindings for it, so the data can be easily accessed by projects which need it.

The software metadata contains a unique ID for the respective software so it can be identified across software repositories. For example, the VLC media player is known by the ID org.videolan.vlc in every software repository, no matter whether it’s the package archives of Debian, Fedora, Ubuntu or a Flatpak repository. The metadata also contains translatable names, summaries, descriptions, release information etc. as well as a type for the software. In general, any information about a software component that is in some form relevant to displaying it in software centers is or can be present in AppStream. The newest revisions of the specification also provide a lot of technical data for systems to make the right choices on behalf of the user; e.g. Fwupd uses AppStream data to describe compatible devices for a certain firmware, and the mediatype information in AppStream metadata can be used to more easily install applications for an unknown filetype. Information AppStream does not contain is data the software bundling systems are responsible for. So mechanistic data about how to build a software component or how exactly to install it is out of scope.

So, now let’s finally get to the new AppStream features since last time I talked about it – which was almost two years ago, so quite a lot of stuff has accumulated!

Specification Changes/Additions

Web Application component type

(Since v0.11.7) A new component type web-application has been introduced to describe web applications. A web application can for example be GMail, YouTube, Twitter, etc., launched by the browser in a special mode with less chrome. Fundamentally, though, it is a simple web link. Therefore, web apps need a launchable tag of type url to specify the URL used to launch them. Refer to the specification for details. Here is a (shortened) example metainfo file for the Riot Matrix client web app:

<component type="web-application">
  <summary>A glossy Matrix collaboration client for the web</summary>
  <description>
    <p>Communicate with your team[...]</p>
  </description>
  <icon type="stock">im.riot.webapp</icon>
  <url type="homepage">https://riot.im/</url>
  <launchable type="url">https://riot.im/app</launchable>
</component>

Repository component type

(Since v0.12.1) The repository component type describes a repository of downloadable content (usually other software) to be added to the system. Once a component of this type is installed, the user has access to the new content. In case the repository contains proprietary software, this component type pairs well with the agreements section.

This component type can be used to provide easy installation of e.g. trusted Debian or Fedora repositories, but also can be used for other downloadable content. Refer to the specification entry for more information.
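A minimal metainfo sketch for a repository component might look like the fragment below. The ID, name and URL are hypothetical, and the full set of supported tags is described in the specification entry:

<component type="repository">
  <id>org.example.StableRepo</id>
  <name>Example Stable Repository</name>
  <summary>Stable releases of the Example software suite</summary>
  <url type="homepage">https://repo.example.com/</url>
</component>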

Operating System component type

(Since v0.12.5) It makes sense for the operating system itself to be represented in the AppStream metadata catalog. Information about it can be used by software centers to display information about the current OS release and also to notify about possible system upgrades. It also serves as a component to which the software center can attribute package updates that do not have AppStream metadata. The operating-system component type was designed for this, and you can find more information about it in the specification documentation.
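For illustration, an operating-system component could look roughly like this (the release dates here are only illustrative):

<component type="operating-system">
  <id>org.debian.debian</id>
  <name>Debian GNU/Linux</name>
  <summary>The universal operating system</summary>
  <releases>
    <release version="10" date="2019-07-06" date_eol="2024-07-01"/>
  </releases>
</component>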

Icon Theme component type

(Since v0.12.8) While styles, themes, desktop widgets etc. are already covered in AppStream via the addon component type, as they are specific to the toolkit and desktop environment, there is one exception: icon themes are described by a Freedesktop specification and (usually) work independently of the desktop environment. Because of that, and on request of desktop environment developers, a new icon-theme component type was introduced to describe icon themes specifically. From the data I see in the wild, and in Debian specifically, this component type appears to be very underutilized. So if you are an icon theme developer, consider adding a metainfo file to make the theme show up in software centers! You can find a full description of this component type in the specification.
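A metainfo file for an icon theme can be as small as this sketch (the ID and names are made up):

<component type="icon-theme">
  <id>org.example.FlatIcons</id>
  <name>Flat Icons</name>
  <summary>A flat, colorful icon theme</summary>
</component>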

Runtime component type

(Since v0.12.10) A runtime is mainly known in the context of Flatpak bundles, but it actually is a more universal concept. A runtime describes a defined collection of software components used to run other applications. To represent runtimes in the software catalog, the new runtime component type was introduced in the specification, but it has been used by Flatpak for a while already as a nonstandard extension.
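As a sketch, a runtime component might be described like this (using the well-known Freedesktop Flatpak runtime ID as an example; the exact tags used upstream may differ):

<component type="runtime">
  <id>org.freedesktop.Platform</id>
  <name>Freedesktop Platform</name>
  <summary>Shared libraries and services used to run desktop applications</summary>
</component>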

Release types

(Since v0.12.0) Not all software releases are created equal. Some may be for general use, others may be development releases on the way to becoming an actual final release. In order to reflect that, AppStream introduced a type property to the release tag in a releases block, which can be set to either stable or development. Software centers can then decide to hide or show development releases.
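For example, a releases block could mark a pre-release explicitly (the version numbers and dates here are made up):

<releases>
  <release type="development" version="1.9.80" date="2020-01-15"/>
  <release type="stable" version="1.8.0" date="2019-11-20"/>
</releases>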

End-of-life date for releases

(Since v0.12.5) Some software releases have an end-of-life date from which onward they will no longer be supported by the developers. This is especially true for Linux distributions, which are described in an operating-system component. To define an end-of-life date, a release in AppStream can now have a date_eol property using the same syntax as the date property, but defining the date when the release will no longer be supported (refer to the releases tag definition).
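For example, a release with a known support window might be described like this (version and dates are illustrative):

<releases>
  <release version="20.04" date="2020-04-23" date_eol="2025-04-23"/>
</releases>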

Details URL for releases

(Since v0.12.5) The release descriptions are short, text-only summaries of a release, usually only consisting of a few bullet points. They are intended to give users a fast, quick-to-read overview of a new release that can be displayed directly in the software updater. But sometimes you want more than that. Maybe you are an application like Blender or Krita and have prepared an extensive website with an in-depth overview, images and videos describing the new release. For these cases, AppStream now permits a url tag in a release tag, pointing to a website that contains more information about a particular release.
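A sketch of what this could look like (the version and URL are made up; the specification documents the exact url type to use inside a release):

<release version="2.4.0" date="2019-09-12">
  <url type="details">https://example.org/releases/2.4.0.html</url>
</release>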

Release artifacts

(Since v0.12.6) AppStream limited release descriptions to their version numbers and release notes for a while, without linking the actual released artifacts. This was intentional, as any information how to get or install software should come from the bundling/packaging system that Collection Metadata was generated for.

But the AppStream metadata has outgrown this more narrowly defined purpose and has since been used for a lot more things, like generating HTML download pages for software, making it the canonical source for all the software metadata in some projects. From Richard Hughes’ awesome Fwupd project also came the need to link to firmware binaries from an AppStream metadata file, as the LVFS/Fwupd use AppStream metadata exclusively to provide metadata for firmware. Therefore, the specification was extended with an artifacts tag for releases, to link to the actual release binaries and tarballs. This replaced the previous makeshift “release location” tag.

Release artifacts always have to link to releases directly, so the releases can be acquired by machines immediately and without human intervention. A release can have a type of source or binary, indicating whether a source tarball or binary artifact is linked. Each binary release can also have an associated platform triplet for Linux systems, an identifier for firmware, or any other identifier for a platform. Furthermore, we permit sha256 and blake2 checksums for the release artifacts, as well as specifying sizes. Take a look at the example below, or read the specification for details.

  <release version="1.2" date="2014-04-12" urgency="high">
    [...]
    <artifacts>
      <artifact type="binary" platform="x86_64-linux-gnu">
        <location>https://example.com/mytarball.bin.tar.xz</location>
        <checksum type="blake2">852ed4aff45e1a9437fe4774b8997e4edfd31b7db2e79b8866832c4ba0ac1ebb7ca96cd7f95da92d8299da8b2b96ba480f661c614efd1069cf13a35191a8ebf1</checksum>
        <size type="download">12345678</size>
        <size type="installed">42424242</size>
      </artifact>
      <artifact type="source">
        <location>https://example.com/mytarball.tar.xz</location>
        [...]
      </artifact>
    </artifacts>
  </release>

Issue listings for releases

(Since v0.12.9) Software releases often fix issues, sometimes security-relevant ones that have a CVE ID. AppStream provides a machine-readable way to figure out which components on your system are currently vulnerable to which CVE-registered issues. Additionally, a release tag can also simply contain references to any normal resolved bugs, via bugtracker URLs. Refer to the specification for details. Example for the issues tag in AppStream Metainfo files:

  <issues>
    <issue url="https://example.com/bugzilla/12345">bz#12345</issue>
    <issue type="cve">CVE-2019-123456</issue>
  </issues>

Requires and Recommends relations

(Since v0.12.0) Sometimes software has certain requirements that are only satisfied by some systems, and sometimes it recommends specific things about the system it will run on in order to run at full performance.

I was against adding relations to AppStream for quite a while, as doing so would add a more “functional” dimension to it, impacting how and when software is installed, as opposed to being only descriptive and not essential to be read in order to install software correctly. However, AppStream has pretty much outgrown its initial narrow scope and adding relation information to Metainfo files was a natural step to take. For Fwupd it was an essential step, as Fwupd firmware might have certain hard requirements on the system in order to be installed properly. And AppStream requirements and recommendations go way beyond what regular package dependencies could do in Linux distributions so far.

Requirements and recommendations can be on other software components via their id, on a modalias, specific kernel version, existing firmware version or for making system memory recommendations. See the specification for details on how to use this. Example:

  <id version="1.0" compare="ge">org.example.MySoftware</id>
  <kernel version="5.6" compare="ge">Linux</kernel>
  <memory>2048</memory> <!-- recommend at least 2GiB of memory -->

This means that AppStream currently supports provides, suggests, recommends and requires relations to refer to other software components or system specifications.


Agreements

(Since v0.12.1) The new agreement section in AppStream Metainfo files was added to make it easier for software to be compliant with the EU GDPR. It has since been expanded to be used for EULAs as well, which was a request coming (to no surprise) from people having to deal with corporate and proprietary software components. An agreement consists of individual sections with headers and descriptive texts and should – depending on the type – be shown to the user upon installation or first use of a software component. It can also be very useful in case the software component is a firmware or driver (which often is proprietary – and companies really love their legal documents and EULAs).

Contact URL type

(Since v0.12.4) The contact URL type can be used to simply set a link back to the developer of the software component. This may be a URL to a contact form, their website or even a mailto: link. See the specification for all URL types AppStream supports.
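For example, such a link could be added to a metainfo file like this (the URL is of course made up):

<url type="contact">https://example.com/contact</url>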

Videos as software screenshots

(Since v0.12.8) This one was quite long in the making – the feature request for videos as screenshots had been filed in early 2018. I was a bit wary about adding video, as that lets you run into codec and container hell, as well as requiring software centers to support video and potentially requiring the appstream-generator to get into video transcoding, which I really wanted to avoid. Alternatively, we would have had to make AppStream add support for multiple, likely proprietary, video hosting platforms, which certainly would have been a bad idea on every level. Additionally, I didn’t want people to add really long introductory videos to their applications.

Ultimately, the problem was solved by simplification and reduction: People can add a video as “screenshot” to their software components, as long as it isn’t the first screenshot in the list. We only permit the vp9 and av1 codecs and the webm and matroska container formats. Developers should expect the audio of their videos to be muted, but if audio is present, the opus codec must be used. Videos will be size-limited, for example Debian imposes a 14MiB limit on video filesize. The appstream-generator will check for all of these requirements and reject a video in case it doesn’t pass one of the checks. This should make implementing videos in software centers easy, and also provide the safety guarantees and flexibility we want.

So far we have not seen many videos used for application screenshots. As always, check the specification for details on videos in AppStream. Example use in a screenshots tag:

 <screenshots>
  <screenshot type="default">
    <image type="source" width="1600" height="900">https://example.com/foobar/screenshot-1.png</image>
  </screenshot>
  <screenshot>
    <video codec="av1" width="1600" height="900">https://example.com/foobar/screencast.mkv</video>
  </screenshot>
 </screenshots>

Emphasis and code markup in descriptions

(Since v0.12.8) It has long been requested to have a little more expressive markup in descriptions in AppStream, at least more than just lists and paragraphs. That did not happen for a while, as it would be a breaking change for all existing AppStream parsers. Additionally, I didn’t want AppStream descriptions to become long, general-purpose “how to use this software” documents. They are intended to give a quick overview of the software, not comprehensive information. However, ultimately we decided to add support for at least two more elements for formatting text: inline code elements as well as em emphases. There may be more to come, but that’s it for now. This change was made about half a year ago, and people are currently advised to use the new styling tags sparingly, as otherwise their software descriptions may look odd when parsed with older AppStream implementation versions.
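A description paragraph using the two new styling tags might look like this:

<description>
  <p>Run the tool with the <code>--verbose</code> flag to get <em>much</em> more detailed output.</p>
</description>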

Remove-component merge mode

(Since v0.12.4) This addition is specified for the Collection Metadata only, as it affects curation. Since AppStream metadata is in one big pool for Linux distributions, and distributions like Debian freeze their repositories, it sometimes is required to merge metadata from different sources on the client system instead of generating it in the right format on the server. This can also be used for curation by vendors of software centers. In order to edit preexisting metadata, special merge components are created. These can permit appending data, replacing data etc. in existing components in the metadata pool. The one thing that was missing was a mode that permitted the complete removal of a component. This was added via a special remove-component merge mode. This mode can be used to pull metadata from a software center’s catalog immediately even if the original metadata was frozen in place in a package repository. This can be very useful in case an inappropriate software component is found in the repository of a Linux distribution post-release. Refer to the specification for details.
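As a rough sketch, such a merge component in Collection Metadata could look like the fragment below. The component ID is hypothetical, and the surrounding collection wrapper is omitted; consult the specification for the exact form:

<component type="desktop-application" merge="remove-component">
  <id>org.example.BadApp</id>
</component>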

Custom metadata

(Since v0.12.1) The AppStream specification is extensive, but it cannot fit every single special use case. Sometimes requests come up that can’t be generalized easily, and occasionally it is useful to prototype a feature first to see whether it is actually used before adding it to the specification properly. For that purpose, the custom tag exists. The tag defines a simple key-value structure that people can use to inject arbitrary metadata into an AppStream metainfo file. The libappstream library will read this tag by default, providing easy access to the underlying data. Thereby, the data can easily be used by custom applications designed to parse it. It is important to note that the appstream-generator tool will by default strip the custom data from files unless it has been whitelisted explicitly. That way, the creator of a metadata collection for a (package) repository has some control over what data ends up in the resulting Collection Metadata file. See the specification for more details on this tag.
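For example, a metainfo file could carry a project-specific key-value pair like this (the key name is made up; projects are free to choose their own):

<custom>
  <value key="MyProject::update-channel">testing</value>
</custom>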

Miscellaneous additions

(Since v0.12.9) In addition to JPEG and PNG, WebP images are now permitted for screenshots in Metainfo files. These images will – like every image – still be converted to PNG by the tool generating Collection Metadata for a repository though.

(Since v0.12.10) The specification now contains a new name_variant_suffix tag, which is a translatable string that software lists may append to the name of a component in case there are multiple components with the same name. This is intended to be primarily used for firmware in Fwupd, where firmware may have the same name but actually be slightly different (e.g. region-specific). In these cases, the additional name suffix is shown to make it easier to distinguish the different components in case multiple are present.
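A sketch of how this might look in a firmware component (the names are invented for illustration):

```xml
<name>Example Device Firmware</name>
<name_variant_suffix>Region EU</name_variant_suffix>
```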

(Since v0.12.10) AppStream has a URI format to install applications directly from webpages via the appstream: scheme. This URI scheme now permits alternative IDs for the same component, in case it switched its ID in the past. Take a look at the specification for details about the URI format.

(Since v0.12.10) AppStream now supports version 1.1 of the Open Age Rating Service (OARS), so applications (especially games) can voluntarily age-rate themselves. AppStream does not replace parental guidance here, and all data is purely informational.

Library & Implementation Changes

Of course, besides changes to the specification, the reference implementation also received a lot of improvements. There are too many to list them all, but a few are noteworthy to mention here.

No more automatic desktop-entry file loading

(Since v0.12.3) By default, libappstream was loading information from local .desktop files into the metadata pool of installed applications. This was done to ensure installed apps were represented in software centers, so they could be uninstalled. It caused much more pain than it was worth though, with metadata appearing two to three times in software centers because people didn’t set the X-AppStream-Ignore=true tag in their desktop-entry files. Also, the generated data was pretty bad. So, newer versions of AppStream will only load data of installed software that doesn’t have an equivalent in the repository metadata if it ships a metainfo file. One more good reason to ship a metainfo file!

Software centers can override this default behavior change by setting the AS_POOL_FLAG_READ_DESKTOP_FILES flag for AsPool instances (which many already did anyway).

LMDB caches and other caching improvements

(Since v0.12.7) One of the biggest pain points in adding new AppStream features was always adjusting the (de)serialization of the new markup: AppStream exists as a YAML version for Debian-based distributions for Collection Metadata, an XML version based on the Metainfo format as default, and a GVariant binary serialization for on-disk caching. The latter was used to drastically reduce memory consumption and increase speed of software centers: Instead of loading all languages, only the one we currently needed was loaded. The expensive icon-finding logic, building of the token cache for searches and other operations were performed and the result was saved as a binary cache on-disk, so it was instantly ready when the software center was loaded next.

Adjusting three serialization formats was laborious and a very boring task. At one point I benchmarked the (de)serialization performance of the different formats and found that the XML reading/writing was actually massively outperforming that of the GVariant cache. Since the XML parser had received much more attention, that was only natural (but there were also other issues with GVariant deserializing large dictionary structures).

Ultimately, I removed the GVariant serialization and replaced it with a memory-mapped XML-based cache that reuses 99.9% of the existing XML serialization code. The cache uses LMDB, a small embeddable key-value store. This makes maintaining AppStream much easier, and we are using the same well-tested codepaths for caching now that we also use for normal XML reading/writing. With this change, AppStream also uses even less memory, as we only keep the software components in memory that the software center currently displays. Everything that isn’t directly needed also isn’t in memory. But if we do need the data, it can be pulled from the memory-mapped store very quickly.

While refactoring the caching code, I also decided to give people using libappstream in their own projects a lot more control over the caching behavior. Previously, libappstream was magically handling the cache behind the back of the application that was using it, guessing which behavior was best for the given usecase. But actually, the application using libappstream knows best how caching should be handled, especially when it creates more than one AsPool instance to hold and search metadata. Therefore, libappstream will still pick the best defaults it can, but give the application that uses it all control it needs, down to where to place a cache file, to permit more efficient and more explicit management of caches.

Validator improvements

(Since v0.12.8) The AppStream metadata validator, run via appstreamcli validate <file>, is the tool each Metainfo file should be run through to ensure it conforms to the AppStream specification and to get useful hints for improving the metadata quality. It knows four issue severities:

  • Pedantic issues are hidden by default (show them with the --pedantic flag) and affect upcoming features or “nice to have” things that are completely nonessential.
  • Info issues are not directly a problem, but hints to improve the metadata and get better overall data. Things the specification recommends but doesn’t mandate also fall into this category.
  • Warnings will result in degraded metadata but don’t make the file invalid in its entirety. Yet, they are severe enough that we fail the validation. Examples are a vanishing screenshot at a URL (most of the data is still valid, but the result may not look as intended), invalid email addresses or invalid tag properties: they all reduce the amount of metadata systems have available. So the metadata should definitely be warning-free in order to be valid.
  • Errors are outright violations of the specification that will likely result in the data being ignored in its entirety or large chunks of it being invalid. Malformed XML or invalid SPDX license expressions fall into this group.

Previously, the validator would always print very long explanations for every issue it found, giving detailed information on each one. While this was nice when there were few issues, it produced very noisy output and made it harder to quickly spot the actual error. So the whole validator output was changed to be based on issue tags, a concept also known from other lint tools such as Debian’s Lintian: each error has its own tag string identifying it. By default, we only show the tag string, the line of the issue, the severity and the component name it affects, as well as a short excerpt of the invalid value (where applicable to the issue). If people do want detailed information, they can get it by passing --explain to the validation command. This solution has many advantages:

  • It makes the output concise and easy to read by humans and is mostly already self-explanatory
  • Machines can parse the tags easily and identify which issue was emitted, which is very helpful for AppStream’s own testsuite but also for any tool wanting to parse the output
  • We can now have translators translate the explanatory texts

Initially, I didn’t want to have the validator return translated output, as that may be less helpful and harder to search the web for. But now, with the untranslated issue tags and much longer and better explanatory texts, it makes sense to trust the translators to translate the technical explanations well.

Of course, this change broke any tool that was parsing the old output. People had also long requested that appstreamcli return machine-readable validator output, so they could integrate it better with preexisting CI pipelines and issue-reporting software. Therefore, the tool can now return structured, machine-readable output in the YAML format if you pass --format=yaml to it. That output is guaranteed to be stable and can be parsed by any CI machinery a project already has running. If needed, other output formats could be added in the future, but for now YAML is the only one and people generally seem to be happy with it.
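Putting the flags mentioned above together, a typical validation session might look like this (the Metainfo file name is hypothetical):

```shell
# Default: concise, tag-based output
appstreamcli validate org.example.App.metainfo.xml

# Long explanations, including pedantic issues
appstreamcli validate --pedantic --explain org.example.App.metainfo.xml

# Machine-readable output for CI pipelines
appstreamcli validate --format=yaml org.example.App.metainfo.xml
```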

Create desktop-entry files from Metainfo

(Since v0.12.9) As you may have noticed, an AppStream Metainfo file contains some information that a desktop-entry file also contains. Yet, the two file formats serve very different purposes: a desktop file is basically launch instructions for an application, with some information about how it is displayed, while a Metainfo file is mostly display information with few to no launch instructions. Admittedly, there is quite a bit of overlap, which may make it useful for some projects to simply generate a desktop-entry file from a Metainfo file. This may not work for all projects, most notably ones where multiple desktop-entry files exist for just one AppStream component. But for the simplest and most common of cases, a direct mapping between Metainfo and desktop-entry file, this option is viable.

The appstreamcli tool permits this now, using the appstreamcli make-desktop-file subcommand. It just needs a Metainfo file as its first parameter and a desktop-entry output file as its second parameter. If the desktop-entry file already exists, it will be extended with the new data from the Metainfo file. For the Exec field in a desktop-entry file, appstreamcli will read the first binary entry in a provides tag, or use an explicitly provided line passed via the --exec parameter.
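As a sketch (file names invented), the invocation described above could look like:

```shell
# Generate or extend a desktop-entry file from a Metainfo file
appstreamcli make-desktop-file org.example.App.metainfo.xml org.example.App.desktop

# Explicitly set the Exec line instead of using the first provided binary
appstreamcli make-desktop-file --exec "exampleapp %u" \
    org.example.App.metainfo.xml org.example.App.desktop
```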

Please take a look at the appstreamcli(1) manual page for more information on how to use this useful feature.

Convert NEWS files to Metainfo and vice versa

(Since v0.12.9) Writing the XML for release entries in Metainfo files can sometimes be a bit tedious. To make this easier and to integrate better with existing workflows, two new subcommands for appstreamcli are now available: news-to-metainfo and metainfo-to-news. They permit converting a NEWS textfile to Metainfo XML and vice versa, and can be integrated with an application’s build process. Take a look at AppStream itself on how it uses that feature.

In addition to generating the NEWS output or reading it, there is also a second YAML-based option available. Since YAML is a structured format, more of the features of AppStream release metadata are available in the format, such as marking development releases as such. You can use the --format flag to switch the output (or input) format to YAML.
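A sketch of how these subcommands might be wired into a build process; the file names and parameter order here are assumptions, so check the manual page before relying on them:

```shell
# Convert a plain-text NEWS file into release entries in a Metainfo file
appstreamcli news-to-metainfo NEWS org.example.App.metainfo.xml

# Or go the other way, generating a NEWS file from existing release metadata
appstreamcli metainfo-to-news org.example.App.metainfo.xml NEWS

# Use the structured YAML variant instead of plain text
appstreamcli news-to-metainfo --format=yaml NEWS.yml org.example.App.metainfo.xml
```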

Please take a look at the appstreamcli(1) manual page for a bit more information on how to use this feature in your project.

Support for recent SPDX syntax

(Since v0.12.10) This has been a pain point for quite a while: SPDX is a project supported by the Linux Foundation to (mainly) provide a unified syntax to identify licenses for Open Source projects. They changed the license syntax twice in incompatible ways though, and AppStream had already implemented a previous version, so we could not simply jump to the latest version without still supporting the old one.

With the latest release of AppStream though, the software should transparently convert between the different version identifiers and also support the most recent SPDX license expressions, including the WITH operator for license exceptions. Please report any issues if you see them!
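For example, a Metainfo license tag using a modern SPDX expression with the WITH operator might look like this (the license choice is illustrative):

```xml
<project_license>GPL-2.0-or-later WITH Classpath-exception-2.0</project_license>
```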

Future Plans?

First of all, congratulations for reading this far into the blog post! I hope you liked the new features! In case you skipped here, welcome to one of the most interesting sections of this blog post! 😉

So, what is next for AppStream? The 1.0 release, of course! The project is certainly mature enough to warrant that, and originally I wanted to get the 1.0 release out of the door this February, but that date no longer looks realistic. But what does “1.0” actually mean for AppStream? Well, here is a list of the intended changes:

  • Removal of almost all deprecated parts of the specification. Some things will remain supported forever though: For example the desktop component type is technically deprecated for desktop-application but is so widely used that we will support it forever. Things like the old application node will certainly go though, and so will the /usr/share/appdata path as metainfo location, the appcategory node that nobody uses anymore and all other legacy cruft. I will be mindful about this though: If a feature still has a lot of users, it will stay supported, potentially forever. I am closely monitoring what is used mainly via the information available via the Debian archive. As a general rule of thumb though: A file for which appstreamcli validate passes today is guaranteed to work and be fine with AppStream 1.0 as well.
  • Removal of all deprecated API in libappstream. If your application still uses API that is flagged as deprecated, consider migrating to the supported functions and you should be good to go! There are a few bigger refactorings planned for some of the API around releases and data serialization, but in general I don’t expect this to be hard to port.
  • The 1.0 specification will be covered by an extended stability promise. When a feature is deprecated, there will be no risk that it is removed or becomes unsupported (so the removal of deprecated stuff in the specification should only happen once). What is in the 1.0 specification will quite likely be supported forever.

So, what is holding up the 1.0 release besides the API cleanup work? Well, there are a few more points I want to resolve before releasing the 1.0 release:

  • Resolve hosting release information at a remote location, not in the Metainfo file (#240): This will be a disruptive change that will need API adjustments in libappstream for sure, and certainly will – if it happens – need the 1.0 release. Fetching release data from remote locations as opposed to having it installed with software makes a lot of sense, and I either want to have this implemented and specified properly for the 1.0 release, or have it explicitly dismissed.
  • Mobile friendliness / controls metadata (#192 & #55): We need some way to identify applications as “works well on mobile”. I also work for a company called Purism which happens to make a Linux-based smartphone, so this is obviously important for us. But it also is very relevant for users and other Linux mobile projects. The main issue here is to define what “mobile” actually means and what information makes sense to have in the Metainfo file to be future-proof. At the moment, I think we should definitely have data on supported input controls for a GUI application (touch vs mouse), but for this the discussion is still not done.
  • Resolving addon component type complexity (lots of issue reports): At the moment, an addon component can be created to extend an existing application by $whatever thing. This can be a plugin, a theme, a wallpaper, extra content, etc., all lumped together in the addon supergroup of components. This makes it difficult for applications and software centers to group addons into useful categories – a plugin is functionally very different from a theme. Therefore I intend to possibly allow components to name “addon classes” they support and that addons can sort themselves into, allowing easy grouping and sorting of addons. This would of course add extra complexity, so this feature will either go into the 1.0 release, or be rejected.
  • Zero pending feature requests for the specification: Any remaining open feature request for the specification itself in AppStream’s issue tracker should either be accepted & implemented, or explicitly deferred or rejected.

I am not sure yet when the todo list will be completed, but I am certain that the 1.0 release of AppStream will happen this year, most likely before summer. Any input, especially from users of the format, is highly appreciated.

Thanks a lot to everyone who contributed or is contributing to the AppStream implementation or specification, you are great! Also, thanks to you, the reader, for using AppStream in your project 😉. I definitely will give a bit more frequent and certainly shorter updates on the project’s progress from now on. Enjoy your rich software metadata, firmware updates and screenshot videos meanwhile! 😀

on January 27, 2020 02:48 PM

January 24, 2020

Traditionally, LXD is used to create system containers, light-weight virtual machines that use Linux Container features and not hardware virtualization.

However, starting from LXD 3.19, it is possible to create virtual machines as well. That is, now with LXD you can create both system containers and virtual machines.

In the following we see how to set up LXD for virtual machines, then start a virtual machine and use it. Finally, we go through some troubleshooting.

How to set up LXD for virtual machines

Launching LXD virtual machines requires some preparation. We need to pass some information to the virtual machine so that we can then be able to connect to it as soon as it boots up. We pass the necessary information to the virtual machine using a LXD profile, through cloud-init.

Creating a LXD profile for virtual machines

Here is such a profile. There is a cloud-init configuration that essentially has all the information that is passed to the virtual machine. Then, there is a config device that makes a disk device available to the virtual machine, from which it can set up the VM-specific LXD agent.

config:
  user.user-data: |
    #cloud-config
    ssh_pwauth: yes
    users:
      - name: ubuntu
        passwd: "$6$iBF0eT1/6UPE2u$V66Rk2BMkR09pHTzW2F.4GHYp3Mb8eu81Sy9srZf5sVzHRNpHP99JhdXEVeN0nvjxXVmoA6lcVEhOOqWEd3Wm0"
        lock_passwd: false
        groups: lxd
        shell: /bin/bash
        sudo: ALL=(ALL) NOPASSWD:ALL
description: LXD profile for virtual machines
devices:
  config:
    source: cloud-init:config
    type: disk
name: vm

This profile

  • Enables password authentication in SSH (ssh_pwauth: yes)
  • Adds a non-root user ubuntu with password ubuntu. See Troubleshooting below on how to change this.
  • The password is not in a locked state.
  • The user account belongs to the lxd group, in case we want to run LXD inside the LXD virtual machine.
  • The shell is /bin/bash.
  • Can sudo to all without requiring a password.
  • Some extra configuration will be passed to the virtual machine through an ISO image named config.iso. Once you get a shell in the virtual machine, you can install the rest of the support by mounting this ISO image and running the installer.

We now need to create a profile with the above content. Here is how we do this. You first create an empty profile called vm. Then, you run the cat | lxc profile edit vm command, which allows you to paste the above profile configuration; finally hit Control+D to have it saved. Alternatively, you can run lxc profile edit vm and paste the text into the editor. The profile was adapted from the LXD 3.19 announcement page.

$ lxc profile create vm
$ cat | lxc profile edit vm
config:
  user.user-data: |
    #cloud-config
    ssh_pwauth: yes
    users:
      - name: ubuntu
        passwd: "$6$iBF0eT1/6UPE2u$V66Rk2BMkR09pHTzW2F.4GHYp3Mb8eu81Sy9srZf5sVzHRNpHP99JhdXEVeN0nvjxXVmoA6lcVEhOOqWEd3Wm0"
        lock_passwd: false
        groups: lxd
        shell: /bin/bash
        sudo: ALL=(ALL) NOPASSWD:ALL
description: LXD profile for virtual machines
devices:
  config:
    source: cloud-init:config
    type: disk
name: vm

$ lxc profile show vm

We have created the profile with the virtual machine-specific configuration. We now have the pieces in place to launch a LXD virtual machine.

Launching a LXD virtual machine

We launch a LXD virtual machine with the following command. It is the standard lxc launch command, with the addition of the --vm option to create a virtual machine (instead of a system container). We specify the default profile (whichever base configuration you use in your LXD installation) and on top of that we add our VM-specific configuration with --profile vm. Depending on your computer’s specifications, it takes a few seconds to create the virtual machine, and then less than 10 seconds for the VM to boot up and receive an IP address from your network.

$ lxc launch ubuntu:18.04 vm1 --vm --profile default --profile vm
Creating vm1
Starting vm1
$ lxc list vm1
| NAME |  STATE  | IPV4 | IPV6 |      TYPE       | SNAPSHOTS |
| vm1  | RUNNING |      |      | VIRTUAL-MACHINE | 0         |
$ lxc list vm1
| NAME |  STATE  |        IPV4        | IPV6 |      TYPE       | SNAPSHOTS |
| vm1  | RUNNING | (eth0) |      | VIRTUAL-MACHINE | 0         |

We have enabled password authentication for SSH, which means that we can connect to the VM straight away with the following command.

$ ssh ubuntu@
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-74-generic x86_64)

* Documentation:  https://help.ubuntu.com
* Management:     https://landscape.canonical.com
* Support:        https://ubuntu.com/advantage 

System information as of Fri Jan 24 09:22:19 UTC 2020 
 System load:  0.03              Processes:             100
 Usage of /:   10.9% of 8.68GB   Users logged in:       0
 Memory usage: 15%               IP address for enp3s5:
 Swap usage:   0%

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.


Using the console in a LXD VM

LXD has the lxc console command to give you a console to a running system container and virtual machine. You can use the console to view the boot messages as they appear, and also log in using a username and password. In the LXD profile we set up a password primarily to be able to connect through the lxc console. Let’s get a shell through the console.

$ lxc console vm1
To detach from the console, press: Ctrl+a q                      [NOTE: Press Enter at this point]

Ubuntu 18.04.3 LTS vm1 ttyS0

vm1 login: ubuntu
Password: **********
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-74-generic x86_64)

* Documentation:  https://help.ubuntu.com
* Management:     https://landscape.canonical.com
* Support:        https://ubuntu.com/advantage 

System information as of Fri Jan 24 09:22:19 UTC 2020 
 System load:  0.03              Processes:             100
 Usage of /:   10.9% of 8.68GB   Users logged in:       0
 Memory usage: 15%               IP address for enp3s5:
 Swap usage:   0%

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.


To exit from the console, logout from the shell first, then press Ctrl+A q.

ubuntu@vm1:~$ logout

Ubuntu 18.04.3 LTS vm1 ttyS0

vm1 login:                                               [Press Ctrl+A q]

Bonus tip: When you launch a LXD VM, you can run lxc console vm1 straight away to view the boot-up messages of the Linux kernel in the VM as they appear.

Setting up the LXD agent inside the VM

In any VM environment the VM is separated from the host. For usability purposes, we often add a service in the VM that makes it easier to access the VM’s resources from the host. This service is available in the config device that was made available to the VM through cloud-init. At some point in the future, the LXD virtual machine images will be adapted so that they automatically set up the configuration from the config device. But for now, we do this manually by setting up the LXD agent service. First, get a shell into the virtual machine either through SSH or lxc console. We become root and mount the config device, where we can see its exact files. We run ./install.sh to make the LXD agent service run automatically in the VM. Finally, we reboot the VM so that the changes take effect.

ubuntu@vm1:~$ sudo -i
root@vm1:~# mount -t 9p config /mnt/
root@vm1:~# cd /mnt/
root@vm1:/mnt# ls -l
total 6390
-r-------- 1 999 root      745 Jan 24 09:18 agent.crt
-r-------- 1 999 root      288 Jan 24 09:18 agent.key
dr-x------ 2 999 root        5 Jan 24 09:18 cloud-init
-rwx------ 1 999 root      595 Jan 24 09:18 install.sh
-r-x------ 1 999 root 11495360 Jan 24 09:18 lxd-agent
-r-------- 1 999 root      713 Jan 24 09:18 server.crt
dr-x------ 2 999 root        4 Jan 24 09:18 systemd
root@vm1:/mnt# ./install.sh 
Created symlink /etc/systemd/system/multi-user.target.wants/lxd-agent.service → /lib/systemd/system/lxd-agent.service.
Created symlink /etc/systemd/system/multi-user.target.wants/lxd-agent-9p.service → /lib/systemd/system/lxd-agent-9p.service.

LXD agent has been installed, reboot to confirm setup.
To start it now, unmount this filesystem and run: systemctl start lxd-agent-9p lxd-agent
root@vm1:/mnt# reboot

Now the LXD Agent service is running in the VM. We are ready to use the LXD VM just like a LXD system container.

Using a LXD virtual machine

By installing the LXD agent inside the LXD VM, we can run the usual LXD commands such as lxc exec, lxc file, etc. Here is how to get a shell, either using the built-in alias lxc shell, or lxc exec to get a shell with the non-root account of the Ubuntu container images (from the repository ubuntu:).

$ lxc shell vm1
root@vm1:~# logout
$ lxc exec vm1 -- sudo --user ubuntu --login

We can transfer files between the host and the LXD virtual machine. We create a file mytest.txt on the host. We push that file to the virtual machine vm1. The destination of the push is vm1/home/ubuntu/, where vm1 is the name of the virtual machine (or system container). It is a bit weird that we do not use : to separate the name from the path, as in SSH and elsewhere. The reason is that : is used to specify a remote LXD server, so it cannot also be used to separate the name from the path. We then perform a recursive pull of the ubuntu home directory and place it in /tmp. Finally, we have a look at the retrieved directory.

$ echo "This is a test" > mytest.txt
$ lxc file push mytest.txt vm1/home/ubuntu/
$ lxc file pull --recursive vm1/home/ubuntu/ /tmp/
$ ls -ld /tmp/ubuntu/
drwxr-xr-x 4 myusername myusername 4096 Jan  28 01:00 /tmp/ubuntu/

We can view the lxc info of the virtual machine.

$ lxc info vm1 
Name: vm1
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/01/27 20:20 UTC
Status: Stopped
Type: virtual-machine
Profiles: default, vm

Other functionality that is available to system containers should also become available to virtual machines in the following months.


Troubleshooting

Error: unknown flag: --vm

You will get this error message when you try to launch a virtual machine while your version of LXD is 3.18 or lower. VM support was added in LXD 3.19, so you need LXD 3.19 or newer.

Error: Failed to connect to lxd-agent

You have launched a LXD VM and are trying to connect to it using lxc exec to get a shell (or run other commands). The LXD VM needs a service running inside it that receives the lxc exec commands. This service has not been installed into the LXD VM yet, or for some reason it is not running.

Error: The LXD VM does not get automatically an IP address

The LXD virtual machine should be able to get an IP address from LXD’s dnsmasq without issues.

macvlan works as well, but the IP address would not show up in lxc list vm1 until you set up the LXD agent.

$ lxc list vm1
| NAME |  STATE  |         IPV4         | IPV6 |      TYPE       | SNAPSHOTS |
| vm1  | RUNNING | (enp3s5) |      | VIRTUAL-MACHINE | 0         |

I created a LXD VM and did not have to do any preparation at all!

When you lxc launch or lxc init with the aim to create a LXD VM, you need to remember to pass the --vm option in order to create a virtual machine instead of a container. To verify whether your newly created machine is a system container or a virtual machine, run lxc list and it will show you the type under the Type column.

How do I change the VM password in the LXD profile?

You can generate a new password hash using the following command. We are not required to use echo -n in this case because mkpasswd will take care of the newline for us. We use the SHA-512 method, because this has been the default password hashing algorithm since Ubuntu 16.04.

$  echo "mynewpassword" | mkpasswd --method=SHA-512 --stdin

Then, run lxc profile edit vm and replace the old password field with your new one.
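If mkpasswd (from the whois package) is not available, a SHA-512 crypt hash can also be generated with OpenSSL 1.1.1 or newer; this is an alternative to the workflow above, not part of it:

```shell
# -6 selects the SHA-512 crypt scheme; -stdin reads the password from stdin
echo "mynewpassword" | openssl passwd -6 -stdin
```

Either tool prints a hash starting with $6$, which is what goes into the passwd field of the profile.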

How do I set my public key instead of a password?

Instead of passwd, use ssh-authorized-keys. See the cloud-init example on ssh-authorized-keys.
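A sketch of the relevant cloud-init fragment; the key material is a placeholder, and note that current cloud-init documentation spells the field ssh_authorized_keys with underscores:

```yaml
users:
  - name: ubuntu
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3Nza... user@host
```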


In LXD 3.19 there is initial support for virtual machines. As new versions of LXD are being developed, more features from system containers will get implemented for virtual machines as well. In April 2020 we will be getting LXD 4.0, a long-term support release (supported for five to ten years). There is ongoing work to add as much virtual machine functionality as possible before the LXD 4.0 feature freeze. If you are affected, it makes sense to follow closely the development of virtual machine support in LXD towards the LXD 4.0 feature freeze.

on January 24, 2020 10:08 AM

January 23, 2020

A Debian LTS logo Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, 208.00 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

Though December was as quiet as expected due to the holiday season, the usual amount of security updates was still released by our contributors.
We currently have 59 LTS sponsors each month sponsoring 219h. Still, as always we are welcoming new LTS sponsors!

The security tracker currently lists 34 packages with a known CVE and the dla-needed.txt file has 33 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


on January 23, 2020 06:19 PM
Our favorite Disco Dingo, Ubuntu Studio 19.04, has reached end-of-life and will no longer receive any updates. If you have not yet upgraded, please do so now or forever lose the ability to upgrade! Ubuntu Studio 20.04 LTS is scheduled for April of 2020. The transition from 19.10 to 20.04...
on January 23, 2020 12:00 AM

January 17, 2020

Are you using Kubuntu 19.10 Eoan Ermine, our current Stable release? Or are you already running our development builds of the upcoming 20.04 LTS Focal Fossa?

We currently have Plasma 5.17.90 (Plasma 5.18 Beta) available in our Beta PPA for Kubuntu 19.10.

The 5.18 beta is also available in the main Ubuntu archive for the 20.04 development release, and can be found on our daily ISO images.

This is a Beta Plasma release, so testers should be aware that bugs and issues may exist.

If you are prepared to test, then…..

For 19.10 add the PPA and then upgrade

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt update && sudo apt full-upgrade -y

Then reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.
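For reference, reverting the beta packages might look like this (ppa-purge comes from the ppa-purge package; only run this if you added the beta PPA above):

```shell
sudo apt install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/beta
```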

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on IRC [1], Telegram [2] or mailing lists [3].
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the release announcement and changelog.

[Test Case]

* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.16 or 5.17?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.

* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing. e.g. “clock combobox instead of tri-state checkbox for 12/24 hour display.”
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing involves some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.


Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

[1] – irc://irc.freenode.net/kubuntu-devel
[2] – https://t.me/kubuntu_support
[3] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

on January 17, 2020 09:48 AM

January 15, 2020

KUserFeedback is a framework for collecting user feedback for applications via telemetry and surveys.

The library comes with an accompanying control and result UI tool.


Signed by Jonathan Riddell <jr@jriddell.org> 2D1D5B0588357787DE9EE225EC94D18F7F05997E

KUserFeedback as it will be used in Plasma 5.18 LTS

on January 15, 2020 04:15 PM

Some time ago, there was a thread on debian-devel where we discussed how to make Qt packages work on hardware that supports OpenGL ES, but not the desktop OpenGL.

My first proposal was to switch to OpenGL ES by default on ARM64, as that is the main affected architecture. After a lengthy discussion, it was decided to ship two versions of Qt packages instead, to support more (OpenGL variant, architecture) configurations.

So now I am announcing that we finally have the versions of Qt GUI and Qt Quick libraries that are built against OpenGL ES, and the release team helped us to rebuild the archive for compatibility with them. These packages are not co-installable together with the regular (desktop OpenGL) Qt packages, as they provide the same set of shared libraries. So most packages now have an alternative dependency like libqt5gui5 (>= 5.x) | libqt5gui5-gles (>= 5.x). Packages get such a dependency automatically if they are using ${shlibs:Depends}.
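For illustration, such an alternative dependency shows up in a generated binary package's control stanza along these lines (the package name and version numbers here are placeholders, not taken from a real package):

```
Package: some-qt-application
Depends: libqt5core5a (>= 5.11.3), libqt5gui5 (>= 5.11.3) | libqt5gui5-gles (>= 5.11.3)
```

apt will satisfy the alternative with whichever of the two libqt5gui5 variants is installed.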

These Qt packages will be mostly needed by ARM64 users, however they may also be useful on other architectures. Note that armel and armhf are not affected, because on those architectures Qt was built against OpenGL ES from the very beginning. So far there are no plans to make two versions of Qt there, however we are open to bug reports.

To try that on your system (running Bullseye or Sid), just run this command:

# apt install libqt5gui5-gles libqt5quick5-gles

The other Qt submodule packages do not need a second variant, because they do not use any OpenGL API directly. Most of the Qt applications are installable with these packages. At the moment, Plasma is not installable because plasma-desktop FTBFS, but that will be fixed sooner or later.

One major missing thing is PyQt5. It is linking against some Qt helper functions that only exist for desktop OpenGL build, so we will probably need to build a special version of PyQt5 for OpenGL ES.

If you want to use any OpenGL ES specific API in your package, build it against qtbase5-gles-dev package instead of qtbase5-dev. There is no qtdeclarative5-gles-dev so far, however if you need it, please let us know.

In case you have any questions, please feel free to file a bug against one of the new packages, or contact us at the pkg-kde-talk mailing list.

on January 15, 2020 02:55 PM

January 12, 2020

Kubuntu 19.04 reaches end of life

Kubuntu General News

Kubuntu 19.04 Disco Dingo was released on April 18, 2019 with 9 months support. As of January 23, 2020, 19.04 reaches ‘end of life’. No more package updates will be accepted to 19.04, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The official end of life announcement for Ubuntu as a whole can be found here [1].

Kubuntu 19.10 Eoan Ermine continues to be supported, receiving security and high-impact bugfix updates until July 2020.

Users of 19.04 can follow the Kubuntu 19.04 to 19.10 Upgrade [2] instructions.

Should for some reason your upgrade be delayed, and you find that the 19.04 repositories have been archived to old-releases.ubuntu.com, instructions to perform an EOL upgrade can be found on the Ubuntu wiki [3].

Thank you for using Kubuntu 19.04 Disco Dingo.

The Kubuntu team.

[1] – https://lists.ubuntu.com/archives/ubuntu-announce/2020-January/000252.html
[2] – https://help.ubuntu.com/community/EoanUpgrades/Kubuntu
[3] – https://help.ubuntu.com/community/EOLUpgrades

on January 12, 2020 11:23 PM
Lubuntu 19.04 (Disco Dingo) will reach End of Life on Thursday, January 23, 2020. This means that after that date there will be no further security updates or bugfixes released. We highly recommend that you update to 19.10 as soon as possible if you are still running 19.04. After January 23rd, the only supported releases […]
on January 12, 2020 06:50 PM

January 11, 2020

Fernando Lanero, Paco Molinero and Marcos Costales analyse privacy on the network of networks: the Internet. We also interview Paco Molinero, lead of the Ubuntu Spanish translators group on Launchpad (translation URL).

Listen to us on:

on January 11, 2020 03:56 PM

January 06, 2020

Amazon recently announced the AWS IAM Access Analyzer, a useful tool to help discover if you have granted unintended access to specific types of resources in your AWS account.

At the moment, an Access Analyzer needs to be created in each region of each account where you want to run it.

Since this manual requirement can be a lot of work, it is a common complaint from customers. Amazon listens to customer feedback, and since we currently have to specify a “type” of “ACCOUNT”, I expect at some point Amazon will make it easier to run Access Analyzer across all regions, and maybe across all accounts in an AWS Organization. Until then…

This article shows how I created an AWS IAM Access Analyzer in all regions of all accounts in my AWS Organization using the aws-cli.


To make this easy, I use the bash helper functions that I defined in last week’s blog post here:

Running AWS CLI Commands Across All Accounts In An AWS Organization

Please read the blog post to see what assumptions I make about the AWS Organization and account setup. You may need to tweak things if your setup differs from mine.

Here is my GitHub repo that makes it more convenient for me to install the bash functions. If your AWS account structure matches mine sufficiently, it might work for you, too:


IAM Access Analyzer In All Regions Of Single Account

To start, let’s show how to create an IAM Access Analyzer in all regions of a single account.

Here’s a simple command to get all the regions in the current AWS account:

aws ec2 describe-regions \
  --output text \
  --query 'Regions[][RegionName]'

This command creates an IAM Access Analyzer in a specific region. We’ll tack on a UUID because that’s what Amazon does, though I suspect it’s not really necessary.

uuid=$(uuid -v4 -FSIV || echo "1") # may need to install "uuid" command
analyzer="accessanalyzer-$uuid"    # analyzer name derived from the UUID
aws accessanalyzer create-analyzer \
   --region "$region" \
   --analyzer-name "$analyzer" \
   --type ACCOUNT

By default, there is a limit of a single IAM Access Analyzer per account region. The fact that this is a “default limit” implies that it may be increased by request, but for this guide, we’ll just not create an IAM Access Analyzer if one already exists.

This command lists the name of any IAM Access Analyzers that might already have been created in a region:

aws accessanalyzer list-analyzers \
  --region "$region" \
  --output text \
  --query 'analyzers[][name]'

We can put the above together, iterating over the regions, checking to see if an IAM Access Analyzer already exists, and creating one if it doesn’t:

regions=$(aws ec2 describe-regions \
  --output text \
  --query 'Regions[][RegionName]' |
  sort)

for region in $regions; do
  analyzer=$(aws accessanalyzer list-analyzers \
    --region "$region" \
    --output text \
    --query 'analyzers[][name]')
  if [ -n "$analyzer" ]; then
    echo "$region: EXISTING: $analyzer"
  else
    uuid=$(uuid -v4 -FSIV || echo "1") # may need to install "uuid" command
    analyzer="accessanalyzer-$uuid"
    echo "$region: CREATING: $analyzer"
    aws accessanalyzer create-analyzer \
       --region "$region" \
       --analyzer-name "$analyzer" \
       --type ACCOUNT \
       > /dev/null # only show errors
  fi
done

Creating IAM Access Analyzers In All Regions Of All Accounts

Now let’s prepare to run the above in multiple accounts using the aws-cli-multi-account-sessions bash helper functions from last week’s article:

git clone git@github.com:alestic/aws-cli-multi-account-sessions.git
source aws-cli-multi-account-sessions/functions.sh

Specify the values for source_profile and mfa_serial from your aws-cli config file. You can leave the mfa_serial empty if you aren’t using MFA:

source_profile=default # The "source_profile" in your aws-cli config
mfa_serial=arn:aws:iam::YOUR_ACCOUNTID:mfa/YOUR_USER # Your "mfa_serial", or empty


Specify the role you can assume in all accounts:

role="admin" # Yours might be called "OrganizationAccountAccessRole"

Get a list of all accounts in the AWS Organization, and a list of all regions:

accounts=$(aws organizations list-accounts \
             --output text \
             --query 'Accounts[].[JoinedTimestamp,Status,Id,Email,Name]' |
           grep ACTIVE |
           sort |
           cut -f3) # just the ids

regions=$(aws ec2 describe-regions \
            --output text \
            --query 'Regions[][RegionName]' |
          sort)

Run this once to create temporary session credentials with MFA:

aws-session-init $source_profile $mfa_serial

Iterate through AWS accounts, running the necessary AWS CLI commands to create an IAM Access Analyzer in each account/role and each region:

for account in $accounts; do
  echo "Visiting account: $account"
  aws-session-set $account $role || continue

  for region in $regions; do
    # Run the aws-cli commands using the assume role credentials
    analyzers=$(aws-session-run \
                  aws accessanalyzer list-analyzers \
                    --region "$region" \
                    --output text \
                    --query 'analyzers[][name]')
    if [ -n "$analyzers" ]; then
      echo "$account/$region: EXISTING: $analyzers"
    else
      uuid=$(uuid -v4 -FSIV || echo "1")
      analyzer="accessanalyzer-$uuid"
      echo "$account/$region: CREATING: $analyzer"
      aws-session-run \
        aws accessanalyzer create-analyzer \
          --region "$region" \
          --analyzer-name "$analyzer" \
          --type ACCOUNT \
          > /dev/null # only show errors
    fi
  done
done

Clear out bash variables holding temporary AWS credentials:

aws-session-cleanup


In a bit, you can go to the AWS IAM Console and view what the Access Analyzers found.

Yep, you have to look at the Access Analyzer findings in each account and each region. Wouldn’t it be nice if we had some way to collect all this centrally? I think so, too, so I’m looking into what can be done there. Thoughts welcome in the comments below or on Twitter.


The following deletes all IAM Access Analyzers in all regions in the current account. You don’t need to do this if you want to leave the IAM Access Analyzers running, especially since there is no additional cost for keeping them.


source_profile=[as above]
mfa_serial=[as above]
role=[as above]

accounts=$(aws organizations list-accounts \
             --output text \
             --query 'Accounts[].[JoinedTimestamp,Status,Id,Email,Name]' |
           grep ACTIVE |
           sort |
           cut -f3) # just the ids

regions=$(aws ec2 describe-regions \
            --profile "$source_profile" \
            --output text \
            --query 'Regions[][RegionName]' |
          sort)

aws-session-init $source_profile $mfa_serial

for account in $accounts; do
  echo "Visiting account: $account"
  aws-session-set $account $role || continue

  for region in $regions; do
    # Run the aws-cli commands using the assume role credentials
    analyzers=$(aws-session-run \
                  aws accessanalyzer list-analyzers \
                    --region "$region" \
                    --output text \
                    --query 'analyzers[][name]')
    for analyzer in $analyzers; do
      echo "$account/$region: DELETING: $analyzer"
      aws-session-run \
        aws accessanalyzer delete-analyzer \
          --region "$region" \
          --analyzer-name "$analyzer"
    done
  done
done


Original article and comments: https://alestic.com/2020/01/aws-iam-access-analyzer/

on January 06, 2020 08:01 AM

January 01, 2020

Catfish 1.4.12 Released

Welcome to 2020! Let's ring in the new year with a brand new Catfish release.

What's New

Wayland Support

Catfish 1.4.12 adds support for running on Wayland. Before now, there were some X-specific dependencies related to handling display sizes. These have now been resolved, and Catfish should run smoothly and consistently everywhere.

Catfish 1.4.12 on Wayland on Ubuntu 19.10

Dialog Improvements

All dialogs now utilize client-side decorations (CSD) and are modal. The main window will continue to respect the window layout setting introduced in the 1.4.10 release.

I also applied a number of fixes to the new Preferences and Search Index dialogs, so they should behave more consistently and work well with keyboard navigation.

Release Process Updates

I've improved the release process to make it easier for maintainers and to ensure builds are free of temporary files. This helps ensure a faster delivery to package maintainers, and therefore to distributions.

Translation Updates

Albanian, Catalan, Chinese (China), Chinese (Taiwan), Czech, Danish, Dutch, French, Galician, German, Italian, Japanese, Norwegian Bokmål, Russian, Serbian, Spanish, Turkish


Source tarball

$ md5sum catfish-1.4.12.tar.bz2 

$ sha1sum catfish-1.4.12.tar.bz2 

$ sha256sum catfish-1.4.12.tar.bz2 

Catfish 1.4.12 will be included in Xubuntu 20.04 "Focal Fossa", available in April.

on January 01, 2020 09:57 PM

December 30, 2019

by generating a temporary IAM STS session with MFA then assuming cross-account IAM roles

I recently had the need to run some AWS commands across all AWS accounts in my AWS Organization. This was a bit more difficult to accomplish cleanly than I had assumed it might be, so I present the steps here for me to find when I search the Internet for it in the future.

You are also welcome to try out this approach, though if your account structure doesn’t match mine, it might require some tweaking.

Assumptions And Background

(Almost) all of my AWS accounts are in a single AWS Organization. This allows me to ask the Organization for the list of account ids.

I have a role named “admin” in each of my AWS accounts. It has a lot of power to do things. The default cross-account admin role name for accounts created in AWS Organizations is “OrganizationAccountAccessRole”.

I start with an IAM principal (IAM user or IAM role) that the aws-cli can access through a “source profile”. This principal has the power to assume the “admin” role in other AWS accounts. In fact, that principal has almost no other permissions.

I require MFA whenever a cross-account IAM role is assumed.

You can read about how I set up AWS accounts here, including the above configuration:

Creating AWS Accounts From The Command Line With AWS Organizations

I use and love the aws-cli and bash. You should, too, especially if you want to use the instructions in this guide.

I jump through some hoops in this article to make sure that AWS credentials never appear in command lines, in the shell history, or in files, and are not passed as environment variables to processes that don’t need them (no export).


For convenience, we can define some bash functions that will improve clarity when we want to run commands in AWS accounts. These freely use bash variables to pass information between functions.

The aws-session-init function obtains temporary session credentials using MFA (optional). These are used to generate temporary assume-role credentials for each account without having to re-enter an MFA token for each account. This function accepts an optional source profile name and MFA serial number. It is run once.

aws-session-init() {
  # Sets: source_access_key_id source_secret_access_key source_session_token
  local source_profile=${1:-${AWS_SESSION_SOURCE_PROFILE:?source profile must be specified}}
  local mfa_serial=${2:-$AWS_SESSION_MFA_SERIAL}
  local token_code=
  local mfa_options=
  if [ -n "$mfa_serial" ]; then
    read -s -p "Enter MFA code for $mfa_serial: " token_code
    mfa_options="--serial-number $mfa_serial --token-code $token_code"
  fi
  read -r source_access_key_id \
          source_secret_access_key \
          source_session_token \
    <<<$(aws sts get-session-token \
           --profile $source_profile \
           $mfa_options \
           --output text \
           --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]')
  test -n "$source_access_key_id" && return 0 || return 1
}

The aws-session-set function obtains temporary assume-role credentials for the specified AWS account and IAM role. This is run once for each account before commands are run in that account.

aws-session-set() {
  # Sets: aws_access_key_id aws_secret_access_key aws_session_token
  local account=$1
  local role=${2:-$AWS_SESSION_ROLE}
  local name=${3:-aws-session-access}
  read -r aws_access_key_id \
          aws_secret_access_key \
          aws_session_token \
    <<<$(AWS_ACCESS_KEY_ID=$source_access_key_id \
         AWS_SECRET_ACCESS_KEY=$source_secret_access_key \
         AWS_SESSION_TOKEN=$source_session_token \
         aws sts assume-role \
           --role-arn arn:aws:iam::$account:role/$role \
           --role-session-name "$name" \
           --output text \
           --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]')
  test -n "$aws_access_key_id" && return 0 || return 1
}

The aws-session-run function runs a provided command, passing in AWS credentials in environment variables for that process to use. Use this function to prefix each command that needs to run in the currently set AWS account/role.

aws-session-run() {
  AWS_ACCESS_KEY_ID=$aws_access_key_id \
  AWS_SECRET_ACCESS_KEY=$aws_secret_access_key \
  AWS_SESSION_TOKEN=$aws_session_token \
  "$@"
}

The aws-session-cleanup function should be run once at the end, to make sure that no AWS credentials are left lying around in bash variables.

aws-session-cleanup() {
  unset source_access_key_id source_secret_access_key source_session_token
  unset    aws_access_key_id    aws_secret_access_key    aws_session_token
}

Running aws-cli Commands In Multiple AWS Accounts

After you have defined the above bash functions in your current shell, here’s an example for how to use them to run aws-cli commands across AWS accounts.

As mentioned in the assumptions, I have a role named “admin” in each account. If your role names are less consistent, you’ll need to do extra work to automate commands.

role="admin" # Yours might be called "OrganizationAccountAccessRole"

This command gets all of the account ids in the AWS Organization. You can use whatever accounts and roles you wish, as long as you are allowed to assume-role into them from the source profile.

accounts=$(aws organizations list-accounts \
             --output text \
             --query 'Accounts[].[JoinedTimestamp,Status,Id,Email,Name]' |
           grep ACTIVE |
           sort |
           cut -f3) # just the ids
echo "$accounts"
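The grep/sort/cut pipeline above simply filters for ACTIVE accounts and keeps the id column (the third tab-separated field). With canned input in place of the live AWS call, it behaves like this:

```shell
# Simulated `aws organizations list-accounts` text output:
# JoinedTimestamp, Status, Id (tab-separated; Email/Name columns omitted here)
printf '2019-01-02T00:00:00Z\tACTIVE\t111111111111\n2019-01-01T00:00:00Z\tSUSPENDED\t222222222222\n' |
  grep ACTIVE |
  sort |
  cut -f3    # prints: 111111111111
```

The SUSPENDED account is dropped, and only the active account's id survives.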

Run the initialization function, specifying the aws-cli source profile for assuming roles, and the MFA device serial number or ARN. These are the same values as you would use for source_profile and mfa_serial in the aws-cli config file for a profile that assumes an IAM role. Your “source_profile” is probably “default”. If you don’t use MFA for assuming a cross-account IAM role, then you may leave MFA serial empty.

source_profile=default # The "source_profile" in your aws-cli config
mfa_serial=arn:aws:iam::YOUR_ACCOUNTID:mfa/YOUR_USER # Your "mfa_serial"

aws-session-init $source_profile $mfa_serial
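For reference, these values correspond to entries in an ~/.aws/config along these lines (the account ids and profile names here are placeholders; only source_profile and mfa_serial are consumed by the helper functions):

```
# ~/.aws/config (illustrative values only)
[default]
region = us-east-1

[profile account1-admin]
role_arn = arn:aws:iam::111111111111:role/admin
source_profile = default
mfa_serial = arn:aws:iam::999999999999:mfa/YOUR_USER
```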

Now, let’s iterate through the AWS accounts, running simple AWS CLI commands in each account. This example will output each AWS account id followed by the list of S3 buckets in that account.

for account in $accounts; do
  # Set up temporary assume-role credentials for an account/role
  # Skip to next account if there was an error.
  aws-session-set $account $role || continue

  # Sample command 1: Get the current account id (should match)
  this_account=$(aws-session-run \
                   aws sts get-caller-identity \
                     --output text \
                     --query 'Account')
  echo "Account: $account ($this_account)"

  # Sample command 2: List the S3 buckets in the account
  aws-session-run aws s3 ls
done

Wrap up by clearing out the bash variables holding temporary credentials.

aws-session-cleanup


Note: The credentials used by this approach are all temporary and use the default expiration. If any expire before you complete your tasks, you may need to adjust some of the commands and limits in your accounts.


Thanks to my role model, Jennine Townsend, the above code uses a special bash syntax to set the AWS environment variables for the aws-cli commands without an export, which would have made the sensitive environment variables available to other commands we might need to run. I guess nothing makes you as (justifiably) paranoid as deep sysadmin experience.
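That syntax (a VAR=value prefix on a command) places the variable only in the environment of that one command's process; the surrounding shell never has it set. A minimal illustration, unrelated to AWS (assuming MYVAR is not already set):

```shell
# The child process sees MYVAR; the parent shell does not.
MYVAR=secret sh -c 'echo "child: ${MYVAR:-unset}"'   # prints: child: secret
echo "parent: ${MYVAR:-unset}"                       # prints: parent: unset
```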

Jennine also wrote code that demonstrates the same approach of STS get-session-token with MFA followed by STS assume-role for multiple roles, but I never quite understood what she was trying to explain to me until I tried to accomplish the same result. Now I see the light.

GitHub Repo

For my convenience, I’ve added the above functions into a GitHub repo, so I can easily add them to my $HOME/.bashrc and use them in my regular work.


Perhaps you may find it convenient as well. The README provides instructions for how I set it up, but again, your environment may need tailoring.

Original article and comments: https://alestic.com/2019/12/aws-cli-across-organization-accounts/

on December 30, 2019 09:00 AM

December 29, 2019

Full Circle Weekly News #160

Full Circle Magazine

Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust

on December 29, 2019 12:30 PM

December 28, 2019


Rhonda D'Vine

I was musing about writing about this publicly. For the first time in all these years of writing pretty personal stuff about my feelings (my way of becoming more honest with myself and a more authentic person), I wasn't sure whether letting you in on this is a good idea.

You see, people have used information from my personal blog in the past, and tried to use it against me. Needless to say they failed with it, and it only showed their true face. So why does it feel different this time?

Thing is, I'm in the midst of my second puberty, and the hormones are kicking in in complete hardcore mode. And it doesn't help at all that there is trans antagonist crap from the past and also from the present popping up left and right at a pace and a concentrated amount that is hard to swallow on its own without the puberty.

Yes, I used to be able to take those things with a much more stable state. But every. Single. One. Of. These. Issues is draining all the energy out of myself. And even though I'm aware that I'm not the only one trying to fix all of those, even though for some spots I'm the only one doing the work, it's easier said than done that I don't have to fix the world, when the areas involved mean the world to me. Are areas that support me in so many ways. Are places that I need. And on top of that, the hormones are multiplying the energy drain of those.

So ... I know it's not that common. I know you are not used to a grown-up person going through puberty. But for god's sake. Don't make it harder than it has to be. I know it's hard to deal with a 46-year-old teenager, so to say; I'm just trying to survive in this world of systematic oppression of trans people.

It would be nice to go for a week without having to cry your eyes out because another hostile event happened that directly affects your existence. The existence of trans lives aren't a matter of different opinions or different points of view, so don't treat it like that, if you want me to believe that you are a person able of empathy and basic respect.

Sidenote: Finishing to write this at this year's #36c3 is quite interesting because of the conference title: Resource Exhaustion. Oh the irony.


on December 28, 2019 10:22 PM

My year on HackerOne

Riccardo Padovani

Last year, totally by chance, I found a security issue in Facebook - I reported it, and it was fixed quite fast. In 2018, I also found a security issue in Gitlab, so I signed up to HackerOne and reported it as well. That first experience with Gitlab was far from ideal, but after that first report I started reporting more, and Gitlab has improved its program a lot.


Since June 2019, when I opened my first report of the year, I have reported 27 security vulnerabilities: 4 have been marked as duplicates, 3 as informative, 2 as not applicable, 9 have been resolved, and 9 are currently confirmed with fixes in progress. All 27 vulnerabilities were reported to Gitlab.

Especially in October and November I had a lot of fun testing the implementation of ElasticSearch over Gitlab. Two of the issues I have found on this topic have already been disclosed:

Why just Gitlab?

I have an amazing daily job as Solutions Architect at Nextbit that I love. I am not interested in becoming a full-time security researcher, but I am having fun dedicating some hours every month to looking for security vulnerabilities.

However, since I don’t want it to be a job, I focus on a product I know very well, also because sometimes I contribute to it and I use it daily.

I also tried to target some programs I didn't know anything about, but I got bored quite fast: to find interesting vulnerabilities you need to spend quite some time learning how the system works and how to exploit it.

Last but not least, Gitlab nowadays manages its HackerOne program in a very cool way: they are very responsive, kind, and I like that they are very transparent! You can read a lot about how their security team works in their handbook.

Can you teach me?

Since I have shared a lot of the disclosed reports on Twitter, some people came and asked me to teach them how to start in the bug bounties world. Unfortunately, I don't have any useful suggestions: I haven't studied any specific resource, and all the issues I reported this year come from a deep knowledge of Gitlab, and from what I know thanks to my daily job. There are definitely more interesting people to follow on Twitter; just check some common hashtags, such as #TogetherWeHitHarder.

Gitlab’s Contest

I am writing this blog post from my new keyboard: a custom-made WASD VP3, generously donated by Gitlab after I won a contest for their first year of public program on HackerOne. I won the best written report category, and it was a complete surprise; I am not a native English speaker, 5 years ago my English was a monstrosity (if you want to have some fun, just go read my old blog posts), and still to this day I think it is quite poor, as you can read here.

Indeed, if you have any suggestions on how to improve this text, please write me!

custom keyboard

Congratulations to Gitlab for their first year on HackerOne, and keep up the good work! Your program rocks, and in the last months you improved a lot!

HackerOne Clear

HackerOne started a new, invitation-only program called HackerOne Clear, where they vet all researchers. I was invited and thought about accepting the invitation. However, the scope of the data that has to be shared to be vetted is definitely too wide, and to be honest I am surprised so many people accepted. HackerOne doesn't perform the check itself but delegates it to a 3rd party, and this 3rd-party company asks for a lot of things.

I totally understand the need of background checks, and I’d be more than happy to provide my criminal record. It wouldn’t be the first time I am vetted, and I am quite sure it wouldn’t be the last.

More than the criminal record, I am puzzled by these requirements:

  • Financial history, including credit history, bankruptcy and financial judgments;
  • Employment or volunteering history, including fiduciary or directorship responsibilities;
  • Gap activities, including travel;
  • Health information, including drug tests;
  • Identity, including identifying numbers and identity documents;

Not only is the scope definitely too wide, but all these data will be stored and processed outside the EU! Personal information will be stored in the United States, Canada and Ireland. Personal information will be processed in the United States, Canada, the United Kingdom, India and the Philippines.

As a European citizen who wants to protect his privacy, I cannot accept such conditions. I've written to HackerOne asking why they need such a wide scope of data, and they replied that since it's their partner that actually collects the information, there is nothing they can do. I really hope HackerOne will require less data in the future, preserving the privacy of their researchers.


In these days I've thought a lot about what I want to do in my future regarding bug bounties, and in 2020 I will continue as I've done in the last months: assessing Gitlab, dedicating no more than a few hours a month. I don't feel ready to step up my game at the moment. I have a lot of other interests I want to pursue in 2020 (travelling, learning German, improving my cooking skills), so I will not prioritize bug bounties for the time being.

That’s all for today, and also for 2019! It has been a lot of fun, and I wish you all a great 2020! For any comment, feedback, or criticism, write to me on Twitter (@rpadovani93) or drop an email at riccardo@rpadovani.com.



  • 29th December 2019: added a paragraph about having asked HackerOne for more information on why they need such a wide scope of personal data.
on December 28, 2019 07:00 PM

December 24, 2019

In May 2019, my research group was invited to give short remarks on the impact of Janet Fulk and Peter Monge at the International Communication Association’s annual meeting as part of a session called “Igniting a TON (Technology, Organizing, and Networks) of Insights: Recognizing the Contributions of Janet Fulk and Peter Monge in Shaping the Future of Communication Research.”

Youtube: Mako Hill @ Janet Fulk and Peter Monge Celebration at ICA 2019

I gave a five-minute talk on Janet and Peter’s impact to the work of the Community Data Science Collective by unpacking some of the cryptic acronyms on the CDSC-UW lab’s whiteboard as well as explaining that our group has a home in the academic field of communication, in no small part, because of the pioneering scholarship of Janet and Peter. You can view the talk in WebM or on Youtube.

[This blog post was first published on the Community Data Science Collective blog.]

on December 24, 2019 05:04 PM

People logging in to Ubuntu systems via SSH or on the virtual terminals are familiar with the Message Of The Day greeter which contains useful URLs and important system information including the number of updates that need to be installed manually.

However, when starting an Ubuntu container or an Ubuntu terminal on WSL, you enter a shell directly, which is far less welcoming and also hides whether there are software updates waiting to be installed:

user@host:~$ lxc shell bionic-container

To make containers and the WSL shell friendlier to new users and more informative to experts it would be nice to show MOTD there, too, and this is exactly what the show-motd package does. The message is printed only once every day in the first started interactive shell to provide up-to-date information without becoming annoying. The package is now present in Ubuntu 19.10 and WSL users already get it installed when running apt upgrade.
Please give it a try and tell us what you think!

Bug reports and feature requests are welcome, and if the package proves to be useful it will be backported to current LTS releases!

on December 24, 2019 12:27 AM