March 23, 2018

Over the past 5 years there has been a steady increase in the number of kernel bug fix commits that use the "Fixes" tag.  Kernel developers use this annotation on a commit to reference the older commit that originally introduced the bug, which is obviously very useful for bug tracking purposes. What is interesting is how steadily developers have taken up this annotation.

With the 4.15 release, 1859 of the 16223 commits (11.5%) were tagged as "Fixes", so that's a fair amount of work going into bug fixing.  I suspect there are more commits that are bug fixes but aren't using the "Fixes" tag, so it's hard to tell for certain how many commits are fixes without doing deeper analysis.  Probably over time this tag will be widely adopted for all bug fixes, the trend line will level out, and we will have a better idea of the proportion of commits per release that are devoted purely to fixing issues.  Let's see how this looks in another 5 years' time; I'll keep you posted!
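If you fancy reproducing this kind of count yourself, something along these lines run against a kernel git tree should get you in the right ballpark (a rough sketch, not necessarily how the figures above were generated):

# total non-merge commits between two releases
git log --oneline --no-merges v4.14..v4.15 | wc -l

# non-merge commits whose message carries a "Fixes:" tag
git log --oneline --no-merges --grep='^Fixes:' v4.14..v4.15 | wc -l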
on March 23, 2018 12:26 AM

March 22, 2018

Coding and Gardening

Valorie Zimmerman

Warning: metaphors ahead! May be inappropriate or stretched.

Reading through student proposals for Google Summer of Code yesterday, I took a break from sitting in front of a keyboard to get some gardening done. We've had a few windstorms since I last raked, and with spring beginning, a few weeds have been popping up as well.

One of the issues I've been reminding almost every student about is unit testing. The other is documentation. These are practices which are seen as not fun, not creative.

Raking isn't seen as fun or creative either! Nor is hunting and digging the wily dandelion. But I rake away the dead branches and fir cones, and snag those dandelions because later in the season, my healthy vegetables and beautiful flowers not only flourish without weeds, but look better without litter around them. In addition, we chop up the branches and cones, and use that as mulch, which saves water and keeps down weeds. The dandelions go into the compost pile and rot into richer soil to help transplants be healthy. In other words, the work I do now pays off in the future.

The same is true of writing unit tests, commenting your code, and keeping good notes for user documentation as well! These are habits to build, not onerous tasks to be put off for tomorrow. Your unit tests will serve you well as long as your code runs anywhere. The same is true of your commented code. And finally if you code is user-facing, user documentation is what lets people use it!

So students, please remember to put those necessary bits into your proposal. This along with good communication with your mentor and the entire team are absolutely crucial for a successful project, so bake these into your plans.
on March 22, 2018 11:42 PM
I wrote before about how to update superseded ISOs using zsync, and it's time to do that again, now that 16.04 LTS has its latest point release, .4.

So the new command needed, after cd /path/to/iso is

cp kubuntu-16.04{.3,.4}-desktop-i386.iso && zsync

The magic I didn't fully understand was the {.3,.4} part. Now I get that it is shell brace expansion: cp sees both filenames, so it copies the file ending in .3 to a new file whose name ends in .4, which zsync then brings up to date.
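Spelled out in full, the pattern is roughly the following (the .zsync URL is only a placeholder here; the real one comes from the release page):

# what the brace expansion expands to: copy the .3 image to a .4 filename
cp kubuntu-16.04.3-desktop-i386.iso kubuntu-16.04.4-desktop-i386.iso
# zsync then fetches only the blocks that differ (placeholder URL)
zsync http://<mirror>/kubuntu-16.04.4-desktop-i386.iso.zsync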

I also wanted to point out that zsync is invaluable for testing, because Ubuntu spins daily ISOs. For instance, on the qatracker pages, such as the most recent one for testing the above point releases, there are a number of small CD icons. When you click on one, you are led to a small page with, for instance, the following links to get xenial-desktop-amd64.iso:

  • RSYNC: rsync -tzhhP rsync://
  • ZSYNC: zsync
  • GPG signature
  • MD5 checksum

The http link will download via your browser to your ~/Downloads folder unless you have set that otherwise. Fine if you want your testing ISO to be there. If instead you run zsync with

cd ~/Downloads && zsync

on the command line, you will see a remarkable difference in how long the download takes the second and subsequent times. Rsync does roughly the same thing. For these you do not need the "copy" cp step, since the daily ISO keeps the same filename from day to day.
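Because the daily filename doesn't change, zsync finds the previous download automatically; you can also point it at any existing seed file explicitly with -i. A rough sketch (the URL is again a placeholder):

cd ~/Downloads
# re-use the already-downloaded image as the seed for today's daily
zsync -i xenial-desktop-amd64.iso http://<cdimage>/xenial-desktop-amd64.iso.zsync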

Get familiar with zsync and use it more. It will save you time and make you more productive.

(originally posted a couple of weeks ago, but to my genealogy blog by mistake)
on March 22, 2018 11:25 PM

History of containers

Serge Hallyn

I’ve researched these dates several times now over the years, in preparation for several talks. So I’m posting it here for my own future reference.

There are some…  “interesting” claims and posts out there about the history of containers.   In fact, the order appears to be:

  1.  The Virtuozzo product development (or at least the idea) started in 1999. Initial release was in 2002.
  2.  FreeBSD Jails were described in a SANE paper in 2000. They were first available in the 4.0 FreeBSD release in March 2000. (I implemented jails as a Linux LSM in 2004, but this was – rightly – deemed an abuse of the LSM infrastructure). I’m not sure when development started on them (which would be the fairer comparison to what I listed as #1). I do know they were not started by Daniel in 1996 🙂 despite some online claims which I won’t link here.
  3.  The first public open source implementation for linux was linux-vserver, announced on the linux-kernel mailing list in 2001.

After this we have the following sequence of events:

  1. Columbia University had a project called ZAP which placed processes into “Process Domains (PODS)” which are like containers. This paper is from 2002. This likely means PODS were in development for some time before that, but they also were never considered for “real” upstream use.
  2. According to Wikipedia, Solaris Zones were first released in 2004.
  3. Meiosys had a container-like feature which was specifically geared toward being able to checkpoint and restart applications. Theirs worked on Linux, AIX, and Solaris. (I believe WPARs were based on this work)
  4. In 2005 I was asked to work on getting this functionality upstream. Luckily I had a great team. After reviewing the viability of upstreaming any of the existing implementations, we decided we’d have to start from scratch. Our first straw-man proposal was here. This really was intended as a straw man to start discussion, as we knew there would be great ideas in response to this, including this one by Eric. And of course there was also some resistance.

From here it would seem appropriate to credit some of the awesome people who did much of the implementation of the various features. I keep starting lists of names, but if I list some, then I’ll miss some and feel bad. So I won’t. If you want to know about a specific feature, you can look at the git log and lkml archives.

on March 22, 2018 09:09 PM

S11E03 – The Three Musketeers - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

This week we’ve been tracking flight VS 105 on Flight Aware and are joined by Michael Tunnell. We discuss some news and bring you the latest from the Ubuntu Community.

It’s Season 11 Episode 03 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Michael Tunnell are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send us your comments and suggestions, Tweet us, or comment on our Facebook page, Google+ page, or sub-Reddit.

on March 22, 2018 03:00 PM

March 21, 2018

Today, gksu was removed from Debian unstable. It was already removed 2 months ago from Debian Testing (which will eventually be released as Debian 10 “Buster”).

It’s not been decided yet if gksu will be removed from Ubuntu 18.04 LTS. There is one blocker bug there.


on March 21, 2018 05:29 PM
Recently I got inspired (paranoid?) by my boss, who cares a lot about software security. Previously, I used almost the same password on all the websites I visited, and I had them synced to Google's servers (I was a Chrome user at the time). Once I started taking software security seriously, I knew the biggest mistake I was making was having a single password everywhere, so I went a step further: I set randomly generated passwords on all my online accounts and stored them in a keystore.
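The post doesn't say which keystore was used, but as one possible example, pass (the standard Unix password manager) makes this workflow a one-liner per account:

# generate and store a random 24-character password for an account
pass generate example.com/login 24
# print it again when needed
pass show example.com/login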

I then enabled 2FA on some important services (Gmail, GitHub, Twitter, DO) and adopted a policy of never logging in to my browser's sync features. Doing that, I realized that the browser is just a commodity: it doesn't matter which browser I use, as long as it actually works and I can log into my online accounts.

I am pretty sure there are many things that I could still improve around my computing patterns, which I will over time.

Motto: software security over convenience.
on March 21, 2018 08:57 AM

March 20, 2018

A few weeks ago, I proposed a GSoC project on the topic of Firefox and Thunderbird plugins for Free Software Habits.

At first glance, this topic may seem innocent and mundane. After all, we all know what habits are, don't we? There are already plugins that help people avoid visiting Facebook too many times in one day, so what difference will another one make?

Yet the success of companies like Facebook and those that prey on their users, like Cambridge Analytica (who are facing the prospect of a search warrant today), is down to habits: in other words, the things that users do over and over again without consciously thinking about it. That is exactly why this plugin is relevant.

Many students have expressed interest and I'm keen to find out if any other people may want to act as co-mentors (more information or email me).

One Facebook whistleblower recently spoke about his abhorrence of the dopamine-driven feedback loops that keep users under a spell.

The game changer

Can we use the transparency of free software to help users re-wire those feedback loops for the benefit of themselves and society at large? In other words, instead of letting their minds be hacked by Facebook and Cambridge Analytica, can we give users the power to hack themselves?

In his book The Power of Habit, Charles Duhigg lays bare the psychology and neuroscience behind habits. While reading the book, I frequently came across concepts that appeared immediately relevant to the habits of software engineers and also the field of computer security, even though neither of these topics is discussed in the book.

where is my cookie?

Most significantly, Duhigg finishes with an appendix on how to identify and re-wire your habits and he has made it available online. In other words, a quickstart guide to hack yourself: could Duhigg's formula help the proposed plugin succeed where others have failed?

If you could change one habit, you could change your life

The book starts with examples of people who changed a single habit and completely reinvented themselves. For example, an overweight alcoholic and smoker who became a super-fit marathon runner. In each case, they show how the person changed a single keystone habit and everything else fell into place. Wouldn't you like to have that power in your own life?

Wouldn't it be even better to share that opportunity with your friends and family?

One of the challenges we face in developing and promoting free software is that every day, with every new cloud service, the average person in the street, including our friends, families and co-workers, is ingesting habits carefully engineered for the benefit of somebody else. Do you feel that asking your friends and co-workers not to engage you in these services has become a game of whack-a-mole?

Providing a simple and concise solution, such as a plugin, can help people to find their keystone habits and then help them change them without stress or criticism. Many people want to do the right thing: if it can be made easier for them, with the right messages, at the right time, delivered in a positive manner, people feel good about taking back control. For example, if somebody has spent 15 minutes creating a Doodle poll and sending the link to 50 people, is there any easy way to communicate your concerns about Doodle? If a plugin could highlight an alternative before they invest their time in Doodle, won't they feel better?

If you would like to provide feedback or even help this project go ahead, you can subscribe here and post feedback to the thread or just email me.

cat plays whack-a-mole

on March 20, 2018 12:15 PM

Following the GStreamer 1.14 release and the new round of gtk-rs releases, there are also new releases for the GStreamer Rust bindings (0.11) and the plugin writing infrastructure (0.2).

Thanks also to all the contributors for making these releases happen and adding lots of valuable changes and API additions.

GStreamer Rust Bindings

The main changes in the Rust bindings were the update to GStreamer 1.14 (which brings in quite some new API, like GstPromise), a couple of API additions (GstBufferPool specifically) and the addition of the GstRtspServer and GstPbutils crates. The former allows writing a full RTSP server in a couple of lines of code (with lots of potential for customizations), the latter provides access to the GstDiscoverer helper object that allows inspecting files and streams for their container format, codecs, tags and all kinds of other metadata.

The GstPbutils crate will also get other features added in the near future, like encoding profile bindings to allow using the encodebin GStreamer element (a helper element for automatically selecting/configuring encoders and muxers) from Rust.

But the biggest change in my opinion is some refactoring that was done to the Event, Message and Query APIs. Previously you would have to use a view on a newly created query to be able to use the type-specific functions on it:

let mut q = gst::Query::new_position(gst::Format::Time);
let pos = if pipeline.query(q.get_mut().unwrap()) {
    match q.view() {
        QueryView::Position(ref p) => Some(p.get_result()),
        _ => None,
    }
} else {
    None
};

Now you can directly use the type-specific functions on a newly created query:

let mut q = gst::Query::new_position(gst::Format::Time);
let pos = if pipeline.query(&mut q) {
    Some(q.get_result())
} else {
    None
};

In addition, the views can now dereference directly to the event/message/query itself and provide access to their API, which simplifies some code even more.

Plugin Writing Infrastructure

While the plugin writing infrastructure did not see that many changes apart from a couple of bugfixes and updating to the new versions of everything else, this does not mean that development on it stalled. Quite the opposite. The existing code works very well already and there was just no need for adding anything new for the projects I and others did on top of it; most of the required API additions were in the GStreamer bindings.

So the status here is the same as last time, get started writing GStreamer plugins in Rust. It works well!

on March 20, 2018 11:42 AM

March 19, 2018

A new release for your Ubuntu Phone powered by UBports!

Why? Because we have a dream \o/

uNav 0.75


  • Migrated to Openrouteservice.
  • Car | Walk | Bicycle routes.
  • New default map, powered by Carto.
  • Fixed units when reviewing the route steps.

Install/update it from the Open Store.
on March 19, 2018 08:16 PM

A frequent response I receive when talking to prospective mentors: "I'm not a Debian Developer yet".

As student applications have started coming in, now is the time for prospective mentors to introduce themselves on the debian-outreach list if they would like to help with any of the listed projects or any topics that have been proposed spontaneously by students without any mentor.

It doesn't matter if you are a Debian Developer or not. Furthermore, mentoring in a program like GSoC or Outreachy is a form of volunteering that is recognized just as highly as packaging or any other development activity.

When an existing developer writes an email advocating your application to become a developer yourself, they can refer to your contribution as a mentor. Many other processes, such as requests for DebConf bursaries, also ask for a list of your contributions and you can mention your mentoring experience there.

With the student deadline on 27 March, it is really important to understand the capacity of the mentoring team over the next 10 days so we can decide how many projects can realistically be supported. Please ask on the debian-outreach list if you have any questions about getting involved.

on March 19, 2018 08:10 AM

March 18, 2018

On this occasion, Francisco Molinero, Francisco Javier Teruelo and Marcos Costales chat about the following topics:

  • What is Creative Commons and why should we use it?
  • Ubuntu on servers.

Episode 5 of the second season

The podcast is available to listen to on:
on March 18, 2018 01:00 PM

March 16, 2018

In a terminal run:
$ sudo snap install android-studio --classic
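To check the install and launch it afterwards (a quick sanity check; the launcher command matches the snap name):
$ snap list android-studio
$ snap run android-studio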
on March 16, 2018 08:59 PM

On the Monday of the Project Teams Gathering (PTG) in Dublin, a now somewhat familiar group of developers and operators got together to discuss upgrades – specifically fast forward upgrades, but discussion over the day drifted into rolling upgrades and how to minimize downtime in supporting components as well. This discussion has been a regular feature over the last 18 months at PTGs, Forums and Ops Meetups.

Fast Forward Upgrades?

So what is a fast forward upgrade? A fast forward upgrade takes an OpenStack deployment through multiple OpenStack releases without the requirement to run agents/daemons at each upgrade step; it does not allow you to skip an OpenStack release – the process allows you to just not run a release as you pass through it. This enables operators using older OpenStack releases to catch up with the latest OpenStack release in as short an amount of time as possible, accepting the compromise that the cloud control plane is down during the upgrade process.

This is somewhat adjunct to a rolling upgrade, where access to the control plane of the cloud is maintained during the upgrade process by upgrading units of a specific service individually, and leveraging database migration approaches such as expand/migrate/contract (EMC) to provide as seamless an upgrade process as possible for an OpenStack cloud. In common with fast forward upgrades, releases cannot be skipped.

Both upgrade approaches specifically aim to not disrupt the data plane of the cloud – instances, networking and storage – however this may be unavoidable if components such as Open vSwitch and the Linux kernel need to be upgraded as part of the upgrade process.

Deployment Project Updates

The TripleO team have been working towards fast forward upgrades during the Queens cycle and have a ‘pretty well defined model’ for what they’re aiming for with their upgrade process. They still have some challenges around ordering to minimize downtime specifically around Linux and OVS upgrades.

The OpenStack Ansible team gave an update – they have a concept of ‘leap upgrades’ which is similar to fast-forward upgrades – this work appears to lag behind the main upgrade path for OSA, which is a rolling upgrade approach which aims to be 100% online.

The OpenStack Charms team still continue to have a primary upgrade focus on rolling upgrades, minimizing downtime as much as possible for both the control and data plane of the Cloud. The primary focus for this team right now is supporting upgrades of the underlying Ubuntu OS between LTS releases with the imminent release of 18.04 on the horizon in April 2018, so no immediate work is planned on adopting fast-forward upgrades.

The Kolla team also have a primary focus on rolling upgrades, for which support starts at OpenStack Queens or later. There was some general discussion around automated configuration generation using Oslo to ease migration between OpenStack releases.

No one was present to represent the OpenStack Helm team.

Keeping Networking Alive

Challenges around keeping the Neutron data-plane alive during an upgrade were discussed – these included:

  • Minimising Open vSwitch downtime by saving and restoring flows.
  • Use of the ‘neutron-ha-tool’ from AT&T to manage routers across network nodes during an OpenStack cloud upgrade – there was also a bit of bike shedding on approaches to Neutron router HA in larger clouds. Plans are afoot to endeavor to make this part of the neutron code base.

Ceph Upgrades

We had a specific slot to discuss upgrading Ceph as part of an OpenStack Cloud upgrade; some deployment projects upgrade Ceph first (Charms), some last (TripleO), but there was general agreement that Ceph upgrades are pretty much always a rolling upgrade – i.e. no disruption to the storage services being provided. Generally there seems to be less pain in this area so it was not a long session.

Operator Feedback

A number of operators shared experiences of walking their OpenStack deployments through fast forward upgrades including some of the gotchas and trip hazards encountered.

Oath provided a lot of feedback on their experience of fast-forward upgrading their cloud from Juno to Ocata, which included some increased complexity due to the move to using cells internally for Ocata. Ensuring compatibility between OpenStack and supporting projects was one challenge encountered – for example, snapshots worked fine with Juno and Libvirt 1.5.3, however on upgrade live snapshots were broken until Libvirt was upgraded to 2.9.0. Not all test combinations are covered in the gate!

Some of these have been shared on the OpenStack Wiki.

Upgrade SIG

Upgrade discussion has become a regular fixture at PTG’s, Forums, Summits and Meetups over the last few years; getting it right is tricky and the general feeling in the session was that this is something that we should talk about more between events.

The formation of an Upgrade SIG was proposed and supported by key participants in the session. The objective of the SIG is to improve the overall upgrade process for OpenStack Clouds, covering both offline ‘fast-forward’ and online ‘rolling’ upgrades by providing a forum for cross-project collaboration between operators and developers to document and codify best practice for upgrading OpenStack.

The SIG will initially be led by Lujin Luo (Fujitsu), Lee Yarwood (Redhat) and myself (Canonical) – we’ll be sorting out the schedule for bi-weekly IRC meetings in the next week or so – OpenStack operators and developers from across all projects are invited to participate in the SIG and help move OpenStack life cycle management forward!

on March 16, 2018 10:00 AM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In February, about 196 work hours have been dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change but a new platinum sponsor is about to join our project.

The security tracker currently lists 60 packages with a known CVE and the dla-needed.txt file 33. The number of open issues increased significantly and we seem to be behind in terms of CVE triaging.

Thanks to our sponsors

New sponsors are in bold.


on March 16, 2018 08:08 AM

March 15, 2018

S11E02 – A Tale of Two Cities - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

This week we interview Will Cooke, Manager of the Ubuntu Desktop team, about the changes we can expect to see in Ubuntu 18.04.

It’s Season 11 Episode 02 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • We interview Will Cooke about the upcoming Ubuntu Desktop (Bionic Beaver) 18.04 LTS release.

  • Image credit: Kim Gorga

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send us your comments and suggestions, Tweet us, or comment on our Facebook page, Google+ page, or sub-Reddit.

on March 15, 2018 03:00 PM
Since 17.10, netplan has been the default network configuration tool in Ubuntu. Since then, it has grown in features and bug fixes, and even got its package renamed in the archive from "nplan" to "netplan.io". We added better routing, improved handling for bridges, support for marking devices as "optional" for boot (so that the system doesn't wait for them to come up at boot time), lots of documentation updates... There's even been work to get it building for other distros.
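For anyone who hasn't tried it yet, netplan is driven by small YAML files under /etc/netplan/ that are applied with "netplan apply". Here's a minimal sketch (the file name and interface name are just examples) that enables DHCP on one NIC and marks it as "optional" so boot doesn't wait for it:

# write an example config (adjust the interface name to match your system)
cat <<'EOF' | sudo tee /etc/netplan/99-example.yaml
network:
  version: 2
  ethernets:
    enp3s0:
      dhcp4: true
      optional: true
EOF
sudo netplan apply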

We have a website for it, too:

As we get closer to the release of Ubuntu 18.04, it is high time to involve everyone in testing netplan and making sure it is solid and as featureful as possible for a wide range of use cases.

This is where you get to participate.

Let us know about any feature gaps that remain in what netplan supports, so that we can add the features when it's possible, or so that these feature gaps can be properly documented if they can't be closed by release time.

Report any bugs you find in netplan on Launchpad.

If you are unsure whether something is a bug, it might well be, so it doesn't hurt to file a bug. At the very least, we do want to know if something feels really difficult to do, so we can look into improving the experience.

If you're unsure how to do something you can look up questions and answers, or add your own, on AskUbuntu here:

Netplan is being actively developed and we can use your help; so if there's one feature you care deeply about, or a bug that bugs you and you want to have a hand in fixing it, you can also jump right into the code on GitHub:
on March 15, 2018 01:18 PM

March 13, 2018

MAAS 2.4.0 Alpha 2 released!

Andres Rodriguez

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 alpha 2 has now been released and is available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 alpha 2 is available in the Bionic -proposed archive or in the following PPA:

MAAS 2.4.0 (alpha2)

Important announcements

NTP services now provided by Chrony

Starting with 2.4 Alpha 2, and in common with changes being made to Ubuntu Server, MAAS replaces ‘ntpd’ with Chrony for the NTP protocol. MAAS will handle the upgrade process and automatically resume NTP service operation.

Vanilla CSS Framework Transition

MAAS 2.4 is undergoing a transition to a new version of the Vanilla CSS framework, which will bring a fresher look to the MAAS UI. This framework transition is currently a work in progress and not all of the UI has been fully updated. Please expect to see some inconsistencies in this new release.

New Features & Improvements

NTP services now provided by Chrony.

Starting from MAAS 2.4 alpha 2, chrony is now the default NTP service, replacing ntpd. This work has been done to align with the Ubuntu Server and Security teams in supporting chrony instead of ntpd. MAAS will continue to provide NTP services exactly the same way, and users will not be affected by the change; MAAS handles the upgrade process transparently. This means that:

  • MAAS will configure chrony as peers on all Region Controllers
  • MAAS will configure chrony as a client of peers for all Rack Controllers
  • Machines will use the Rack Controllers as they do today
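After the upgrade, a quick way to confirm the new NTP service is healthy (assuming the Ubuntu service name "chrony", which is what the chrony package installs):

systemctl status chrony
chronyc sources -v        # list configured time sources and their sync state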

MAAS Internals optimization

MAAS 2.4 is currently undergoing major surgery to improve various areas of operation that are not visible to the user. These updates will improve the overall performance of MAAS in larger environments. These improvements include:

  • AsyncIO based event loop
    • MAAS has an event loop which performs various internal actions. In older versions of MAAS, the event loop was managed by the default twisted event loop. MAAS now uses an asyncio based event loop, driven by uvloop, which is targeted at improving internal performance.

  • Improved daemon management
    • MAAS has changed the way daemons are run to allow users to see both ‘regiond’ and ‘rackd’ as processes in the process list.
    • As part of these changes, regiond workers are now managed by a master regiond process. In older versions of MAAS each worker was directly run by systemd. The master process is now in charge of ensuring workers are running at all times, and re-spawning new workers in case of failures. This also allows users to see the worker hierarchy in the process list.
  • Ability to increase the number of regiond workers
    • Following the improved way MAAS daemons are run, further internal changes have been made to allow the number of regiond workers to be increased automatically. This allows MAAS to scale to handle more internal operations in larger environments.
    • While this capability is already available, it is not yet available by default. It will become available in the following milestone release.
  • Database query optimizations
    • In the process of inspecting the internal operations of MAAS, it was discovered that multiple unnecessary database queries are performed for various operations. Optimising these requires internal improvements to reduce the footprint of these operations. Some areas that have been addressed in this release include:
      • When saving node objects (e.g. making any update of a machine, device, rack controller, etc), MAAS validated changes across various fields. This required an increased number of queries for fields, even when they were not being updated. MAAS now tracks specific fields that change and only performs queries for those fields.
        • Example: To update a power state, MAAS would perform 11 queries. After these improvements, only 1 query is now performed.
      • On every transaction, MAAS performed 2 queries to update the timestamp. This has now been consolidated into a single query per transaction.
    • These changes greatly improve MAAS performance and database utilisation in larger environments. More improvements will continue to be made as we examine various areas in MAAS.
  • UI optimisations
    • MAAS is now being optimised to reduce the amount of data loaded in the websocket API to render the UI. This is targeted at only processing data for viewable information, improving various legacy areas. Currently, the work done in this area includes:
      • Script results are only loaded for viewable nodes in the machine listing page, reducing the overall amount of data loaded.
      • The node object is updated in the websocket only when something has changed in the database, reducing the data transferred to the clients as well as the amount of internal queries.

Audit logging

Continuing with the audit logging improvements, alpha2 now adds audit logging for all user actions that affect Hardware Testing & Commissioning.

KVM pod improvements

MAAS’ KVM pods were initially developed as a feature to help developers quickly iterate and test new functionality while developing MAAS. This, however, became a feature that allows not only developers but also administrators to make better use of resources across their datacenter. Since the feature was initially created for developers, some capabilities were lacking. As such, in 2.4 we are improving the usability of KVM pods:

  • Pod AZ’s.
    MAAS now allows setting the physical zone for a pod. This helps administrators by conceptually placing their KVM pods in an AZ, which enables them to request/allocate machines on demand based on the AZ. All VMs created from a pod will inherit its AZ.

  • Pod tagging
    MAAS now adds the ability to set tags for a pod. This allows administrators to use tags to allow/prevent the creation of VMs inside a given pod. For example, if the administrator would like a machine with a tag named ‘virtual’, MAAS will filter out all physical machines and only consider other VMs or a KVM pod for machine allocation.

Bug fixes

Please refer to the following for all bug fixes in this release.

on March 13, 2018 12:56 PM

March 12, 2018

I have noticed an interesting pattern when some new projects and initiatives get started: they have an excessive application of governance, in many cases to deliver an impression of “completeness” or project independence. I want to share a few words on how to avoid this over-complexity.

I understand why this happens. Generally, successful collaborative communities strive to be very objective environments, with process, workflow, and governance clearly documented as a means to ensure anyone and everyone can contribute. The governance piece plays a key role in affirming objective leadership and avoiding conflicts of interest.

These are valiant goals, but there needs to be a careful balance of this process and governance, where it is blended with a robust focus on simplicity and efficiency. I have seen many projects unwittingly sacrifice agility and the plain joy of participating with an overly bureaucratic machine that a few governance nerds obsess over.

There are countless examples (that shall remain anonymous here), such as a new advocacy group of around 10 people who had 2 boards to govern them (everyone was on a board) and had excessively long meetings. There was the consensus-based board with 18 members that could never make decisions. There was the community that required excessive commitments from their members to overly bureaucratic governance mantra handed down from on high. In all of these cases those communities were worsened, not strengthened, by governance.

Start Simple

The solution here is to start small and simple, and then observe and iterate.

Just like how a chef applies salt to a dish, you should apply the smallest amount possible and adjust to taste. Start by putting in place the thinnest layer of governance possible to accomplish your goals.

To start with this we need to understand what our governance goals actually are. Again, simplicity is key here. I generally recommend you strive to build an environment in which formal governance is kept out of the day to day of participation in the community and instead focused on the rules and policies that underlie the project. Don’t bottleneck your community by requiring governance approval on decisions unless absolutely necessary (e.g. community-wide policy is a great governance target, but not pull request approvals). Effective governance is as much about knowing what governance boards should steer clear of as where they should focus their attention.

Governance can of course be as long as a piece of string. The simplest start in many cases is no governance at all. See how the project runs and if there is even a requirement for a governance function. In many, many cases you simply don’t need any governance: just a communicative set of community participants who can make decisions collaboratively.

If there is a need for something more expansive, I recommend you start with a simple board of 3 – 5 people whose charter is focused on general community matters (e.g. handling sponsor funds, how the community is moderated, publishing policy, etc). For technology communities, the board would not have any technical authority: that is for the developers to decide (this avoids impacts on engineering agility). You could grandfather in the initial board members, have them meet every month on a public channel, and log outcomes on a wiki. After a set period of time, open up nominations, and form the first independently elected board.

We did this in Ubuntu. We started with some core governance boards (the Community Council, focused on community policy, and the Technical Board, focused on technical policy). The rest of the extensive governance structure came as Ubuntu grew significantly. Our goal was always to keep things as lightweight as possible.

Iterate and Improve

I am a firm believer that the way in which we collaborate should be as much of a collaborative product as the output of a community project. Just like an open source project, we should review and iterate, and then assess the performance of our iterations. We should constantly assess how we can optimize our governance to be as simple and thin as possible. We should build an environment where someone can file a metaphorical or literal pull request with pragmatic ways to optimize how the project is governed. This assures the project is pulling the best insight from members to ensure it is as efficient and as lightweight as possible.

To do this, honestly observe how the governance performs. Is it accomplishing the goals it is designed for? Are governance members enjoying their work and fulfilled in the delivery? Is it supporting the success of community members? Evaluate how the meetings are run, if actions are followed up on, and whether people are late.

On a regular basis (e.g. once a quarter) plan some adjustments and changes based on these observations and track if these changes improve overall performance.

Throughout this process, deliberately practice muntzing (as I wrote about here) to remove anything that isn’t necessary. This keeps your governance to a minimum and ensures there is a culture of challenging current norms and optimizing how the project works. This ultimately results in healthier, more pragmatic communities that still enjoy the many benefits of well-structured governance.

The post Keeping Governance Simple and Uncomplicated appeared first on Jono Bacon.

on March 12, 2018 05:08 AM

March 10, 2018

webkitgtk is the GTK+ port of WebKit. webkitgtk provides web functionality for many things including GNOME Online Accounts’ login panels; Evolution’s HTML email editor and viewer; and the engine for the Epiphany web browser (also known as GNOME Web).

Last year, I announced here that Debian 9 “Stretch” included the latest version of webkitgtk (Debian’s package is named webkit2gtk). At the time, I hoped that Debian 9 would get periodic security and bugfix updates. Nine months later, let’s see how we’ve been doing.

Release History

Debian 9.0, released June 17, 2017, included webkit2gtk 2.16.3 (up to date).

Debian 9.1 was released July 22, 2017 with no webkit2gtk update (2.16.5 was the current release at the time).

Debian 9.2, released October 8, 2017, included 2.16.6 (There was a 2.18.0 release available then but for the first stable update, we kept it simple by not taking the brand new series.)

Debian 9.3 was released December 9, 2017 with no webkit2gtk update (2.18.3 was the current release at the time).

Debian 9.4 released March 10, 2018 (today!), includes 2.18.6 (up to date).

Release Schedule

webkitgtk development follows the GNOME release schedule and produces new major updates every March and September. Only the current stable series is supported (although sometimes there can be a short overlap; 2.14.6 was released at the same time as 2.16.1). Distros need to adopt the new series every six months.

Like GNOME, webkitgtk uses even numbers for stable releases (2.16 is a stable series, 2.16.3 is a point release in that series, but 2.17.3 is a development release leading up to 2.18, the next stable series).

There are webkitgtk bugfix releases, approximately monthly. Debian stable point releases happen approximately every two or three months (the first point release was quicker).

In a few days, webkitgtk 2.20 will be released. Debian 9.5 will need to include 2.20.1 (or 2.20.2) to keep users on a supported release.

Report Card

From five Debian 9 releases, we have been up to date in 2 or 3 of them (depending on how you count the 9.2 release).

Using a letter grade scale, I think I’d give Debian a B or B- so far. But this is significantly better than Debian 8, which offered no webkitgtk updates at all except through backports. In my grading, Debian could get an A- if we consistently updated webkitgtk in these point releases.

To get a full A, I think Debian would need to push the new webkitgtk updates (after a brief delay for regression testing) directly as security updates without waiting for point releases. Although that proposal has been rejected for Debian 9, I think it is reasonable for Debian 10 to use this model.

If you are a Debian Developer or Maintainer and would like to help with webkitgtk updates, please get in touch with Berto or me. I, um, actually don’t even run Debian (except briefly in virtual machines for testing), so I’d really like to turn over this responsibility to someone else in Debian.


I find the Repology webkitgtk tracker to be fascinating. For one thing, I find it humorous how the same package can have so many different names in different distros.

on March 10, 2018 05:25 PM
"The beaver told the rabbit as they stared at the Hoover Dam: No, I didn't
build it myself, but it's based on an idea of mine".
-- Charles Hard Townes

The first beta of the Bionic Beaver (to become 18.04) has now been
released, and is available for download!

This milestone features images for Kubuntu, Ubuntu Budgie, Ubuntu Kylin,
Ubuntu MATE, and Xubuntu.

Pre-releases of the Bionic Beaver are *not* encouraged for anyone needing a
stable system or anyone who is not comfortable running into occasional,
even frequent breakage. They are, however, recommended for Ubuntu flavour
developers and those who want to help in testing, reporting, and fixing
bugs as we work towards getting this release ready.

Beta 1 includes some software updates that are ready for broader testing.
However, it is quite an early set of images, so you should expect some bugs.

While these Beta 1 images have been tested and do work, except as noted in
the release notes, Ubuntu developers are continuing to improve the Bionic
Beaver. In particular, once newer daily images are available, system
installation bugs identified in the Beta 1 installer should be verified
against the current daily image before being reported in Launchpad. Using
an obsolete image to re-report bugs that have already been fixed wastes
your time and the time of developers who are busy trying to make 18.04 the
best Ubuntu release yet. Always ensure your system is up to date before
reporting bugs.

[Kubuntu]
Kubuntu is the KDE-based flavour of Ubuntu. It uses the KDE Plasma desktop
and includes a wide selection of tools from the KDE project.

The Kubuntu 18.04 Beta 1 images can be downloaded from:

More information about Kubuntu 18.04 Beta 1 can be found here:

[Ubuntu Budgie]
Ubuntu Budgie is the Budgie Desktop based flavour of Ubuntu. It combines
the simplicity and elegance of the Budgie interface to produce a
traditional desktop orientated distro with a modern paradigm.

The Ubuntu Budgie 18.04 Beta 1 images can be downloaded from:

More information about Ubuntu Budgie 18.04 Beta 1 can be found here:

[Ubuntu Kylin]
Ubuntu Kylin is a flavour of Ubuntu that is more suitable for Chinese users.

The Ubuntu Kylin 18.04 Beta 1 images can be downloaded from:

More information about Ubuntu Kylin 18.04 Beta 1 can be found here:

[Ubuntu MATE]
Ubuntu MATE is the MATE Desktop based flavour of Ubuntu.  It is ideal for
those who want the most out of their computers and prefer a traditional
desktop metaphor.

The Ubuntu MATE 18.04 Beta 1 images can be downloaded from:

More information about Ubuntu MATE 18.04 Beta 1 can be found here:

[Xubuntu]
Xubuntu is the Xfce Desktop based flavour of Ubuntu.  It is perfect for
those who want the most out of their desktops, laptops and netbooks with a
modern look. It works well on older hardware too.

The Xubuntu 18.04 Beta 1 images can be downloaded from:

More information about Xubuntu 18.04 Beta 1 can be found here:

If you're interested in following the changes as we further develop the
Bionic Beaver, we suggest that you subscribe to the ubuntu-devel-announce
list. This is a low-traffic list (a few posts a week) carrying
announcements of approved specifications, policy changes, beta releases and
other exciting events.


A big thank you to the developers and testers for their efforts to pull
together this Beta release!

On behalf of Ubuntu Release Team,

Dustin Krysak
Originally posted to the Ubuntu Release mailing list on Fri Mar 9 19:51:58 UTC 2018 
by Dustin Krysak, on behalf of the Ubuntu Release Team
on March 10, 2018 04:38 AM

March 09, 2018

The first beta of the Bionic Beaver (to become 18.04) has now been released, and is available for download!

This milestone features images for Kubuntu, Ubuntu Budgie, Ubuntu Kylin, Ubuntu MATE, and Xubuntu.

Pre-releases of the Bionic Beaver are not encouraged for:
* anyone needing a stable system
* anyone who is not comfortable running into occasional,
even frequent breakage.

They are, however, recommended for:
* Ubuntu flavour developers
* those who want to help in testing, reporting, and fixing bugs
as we work towards getting this release ready.

Beta 1 includes some software updates that are ready for broader testing. However, it is quite an early set of images, so you should expect some bugs.

The full text of the announcement:

The Kubuntu 18.04 Beta 1 images can be downloaded from:

More information about Kubuntu 18.04 Beta 1 can be found here:

on March 09, 2018 10:05 PM

We are preparing Ubuntu MATE 18.04 (Bionic Beaver) for distribution on April 26th, 2018. With this Beta pre-release, you can see what we are trying out in preparation for our next (stable) version.

Ubuntu MATE 18.04 Beta 1

What works?

People tell us that Ubuntu MATE is stable. You may, or may not, agree.

Ubuntu MATE Beta Releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Ubuntu MATE Beta Releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Ubuntu MATE, MATE, and GTK+ developers

What changed since the Ubuntu MATE 17.10 final release?

We've been refining Ubuntu MATE since the 17.10 release and making improvements to ensure that Ubuntu MATE offers what our users want today and what they'll need over the life of this LTS release. This is what's changed since 17.10.

MATE Desktop 1.20

As you may have seen, MATE Desktop 1.20 was released in February 2018 and offers some significant improvements:

  • MATE Desktop 1.20 supports HiDPI displays with dynamic detection and scaling.
    • HiDPI hints for Qt applications are also pushed to the environment to improve cross toolkit integration.
    • Toggling HiDPI modes triggers dynamic resize and scale, no log out/in required.
  • Marco now supports DRI3 and Present, if available.
    • Frame rates in games are significantly increased when using Marco.
  • Marco now supports drag to quadrant window tiling, cursor keys can be used to navigate the Alt + Tab switcher and keyboard shortcuts to move windows to another monitor were added.

If your hardware/drivers support DRI3 then Marco compositing is now hardware accelerated. This dramatically improves 3D rendering performance, particularly in games. If your hardware doesn't support DRI3 then Marco will fallback to a software compositor.

You can read the release announcement to discover everything that improved in MATE Desktop 1.20. It is a significant release that also includes a considerable number of bug fixes.

Global Menu and MATE HUD

Ubuntu MATE Global Menu

The Global Menu integration is much improved. When the Global Menu is added to a panel, the application menus are automatically removed from the application window and only presented globally; no additional configuration is required (as was previously the case). Likewise, removing the Global Menu from a panel will restore menus to their application windows.


The HUD now has a 250ms (default) timeout, holding Alt any longer won't trigger the HUD. This is consistent with how the HUD in Unity 7 works. We've fixed a number of issues reported by users of Ubuntu MATE 17.10 regarding the HUD swallowing key presses. The HUD is also HiDPI aware now.

Indicators by default

Ubuntu MATE 18.04 uses Indicators by default in all layouts. These will be familiar to anyone who has used Unity 7 and offer better accessibility support and ease of use over notification area applets. The volume in Indicator Sound can now be overdriven, so it is consistent with the MATE sound preferences. Notification area applets are still supported as a fallback.


MATE Dock Applet

MATE Dock Applet is used in the Mutiny layout, but anyone can add it to a panel to create custom panel arrangements. The new version adds support for BAMF and icon scrolling.

  • MATE Dock Applet no longer uses its own method of matching icons to applications and instead uses BAMF. What this means for users is that from now on the applet will be a lot better at matching applications and windows to their dock icons.
  • Icon scrolling is useful when the dock has limited space on its panel and will prevent it from expanding over other applets. This addresses an issue reported by several users in Ubuntu MATE 17.10.

Brisk Menu

Brisk Menu Dash Launcher

Many users commented that when using the Mutiny layout the "traditional" menu felt out of place. The Solus Project, the maintainers of Brisk Menu, have added a dash-style launcher at our request. Ubuntu MATE 18.04 includes a patched version of Brisk Menu that includes this new dash launcher. When MATE Tweak is used to enable the Mutiny or Cupertino layout, it now switches on the dash launcher, which enables a full screen, searchable application launcher. Similarly, switching to the other panel layouts restores the more traditional Brisk Menu.

MATE Window Applets

The Mutiny layout now integrates the mate-window-applets. You can see these in action alongside an updated Mutiny layout here:

Mutiny undecorated maximised windows

Minimal Installation

If you follow the Ubuntu news closely you may have heard that 18.04 now has a Minimal Install option. Ubuntu MATE was at the front of the queue to take advantage of this new feature.


The Minimal Install is a new option presented in the installer that will install just the MATE Desktop, its utilities, its themes and Firefox. All the other applications such as office suite, email client, video player, audio manager, etc. are not installed. If you're interested, here is the complete list of software that will not be present on a minimal install of Ubuntu MATE 18.04

So, who's this aimed at? There are users who like to uninstall the software they do not need or want and build out their own desktop experience. So for those users, a minimal install is a great platform to build on. For those of you interested in creating "kiosk" style devices, such as home brew Steam machines or Kodi boxes, then a minimal install is another useful starting point.

MATE Tweak

MATE Tweak can now toggle the HiDPI mode between auto detection, regular scaling and forced scaling. HiDPI mode changes are dynamically applied. MATE Tweak has a deeper understanding of Brisk Menu and Global Menu capabilities and manages them transparently while switching layouts. Switching layouts is far more reliable now too. We've removed the Interface section from MATE Tweak. Sadly all the features the Interface section tweaked have been dropped from GTK3 so are now redundant.


We've landed caja-eiciel and caja-seahorse.

  • caja-eiciel - An extension for Caja to edit access control lists (ACLs) and extended attributes (xattr)
  • caja-seahorse - An extension for Caja which allows encryption and decryption of OpenPGP files using GnuPG
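If you are upgrading an existing install rather than doing a fresh one and find these two extensions missing, they should be installable by hand:

sudo apt install caja-eiciel caja-seahorse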

Artwork, Fonts & Emoji

Emoji Picker

We are no longer shipping mate-backgrounds by default. They have served us well, but are looking a little stale now. We have created a new selection of high quality wallpapers comprised of some abstract designs and high resolution photos. The Ubuntu MATE Plymouth theme (boot logo) is now HiDPI aware. Our friends at Ubuntu Budgie have uploaded a new version of Slick Greeter which now fades in smoothly, rather than the stuttering we saw in Ubuntu MATE 17.10. We've switched to Noto Sans for users of Japanese, Chinese and Korean fonts and glyphs. MATE Desktop 1.20 supports emoji input, so we've added a colour emoji font too.

Raspberry Pi images

We're planning on releasing Ubuntu MATE images for the Raspberry Pi around the time 18.04.1 is released, which should be sometime in July. It takes about a month to get the Raspberry Pi images built and tested, and we simply don't have time to do this in time for the April release of 18.04.

Download Ubuntu MATE 18.04 Beta 1

We've even redesigned the download page so it's even easier to get started.


Known Issues

Here are the known issues.

Ubuntu MATE

  • Anyone upgrading from Ubuntu 16.04 or newer may need to use MATE Tweak to reset the panel layout to one of the bundled layouts post upgrade.
    • Migrating panel layouts, particularly those without Indicator support, is hit and miss. Mostly miss.

Ubuntu family issues

This is our known list of bugs that affects all flavours.

You'll also want to check the Ubuntu MATE bug tracker to see what has already been reported. These issues will be addressed in due course.


Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on March 09, 2018 07:00 PM

March 07, 2018

Backing up GPG keys

Mathieu Trudel

Using PGP/GPG keys for a long period of time (either expiring keys, or extending expiration dates) and the potential for travel, for hardware to fail, or for life's other events means that eventually, rather than just potentially, you will end up in a situation where a key is lost or damaged, or where you otherwise need to proceed with some disaster recovery techniques.

These techniques could be as simple as forgetting about the key altogether and letting it live forever on the Internet, without being used. It could also be that you were clever and saved a revocation certificate somewhere different than your private key is backed up, but what if you didn't?

What if you did not print the revocation certificate? Or you just really don't feel very much like re-typing half a gazillion characters?

I wouldn't wish it to anyone, but there will always be the risk of a failure of your "backup options"; so I'm sharing here my personal backup methods.

I back up my GPG keys, which I use both at and outside of work, on multiple different media:

  • "Daily use" happens using a Yubikey that holds securely the private part of the keys (it can't be extracted from the smartcard), as well as the public part. I've already written about this two years ago, on this blog.
  • The first layer of backup is on a LUKS-encrypted USB key. The USB key must obviously be encrypted to block out most attempts at accessing the contents of the key; and it is a key that I usually carry on my person at all times, like the Yubikeys -- I also use it to back up other files I can't live without, such as a password vault, some other certificates, copies of ID documents in case of loss when I travel, etc.
  • The next layer is on paper. Well, cardstock actually, to avoid wanting to fold it. This is the process I want to dig into deeper here.

It turns out that backing up secure keys on paper is pretty straightforward, and something just fine to do. You will obviously want to keep the paper copies in a secure location that only you have access to, as much as possible safe from fire (or at least somewhere unlikely to burn down at the same time as you'd lose the other backups).

paperkey is a generally accepted way of saving the private part of your GPG key. It does a decent job at saving things in a printable form, from which point you would go ahead and re-type, or use OCR to recover the text generated by paperkey:

paperkey --secret-key secret.gpg --output printme.txt

This retains the same security systems as your original key. You should have added a passphrase to it anyway, so even if the paper copy was found and used to recover the key, you would be protected by the complexity of your passphrase.

But this depends on OCR working correctly, especially on an aging medium such as paper, or you spending many hours re-typing the contents, and potentially tracking down typos. There's error correction, but that sounds to me like not fun at all. When you want to recover your key, presumably it is because you really do need it as soon as possible.

Back in 2015 when I generated my latest keys, I found a blog post that explained how to use QR codes to back up data. QR codes have the benefit of being very resilient to corruption, and above all, do not require typing. QR codes are however limited in size, being limited to 177x177 squares, for about 1200 characters storage.

Along with that blog post, I also found out about DataMatrix codes, which are quite similar to QR codes but where each symbol can hold a bit more data (about 1500 bytes per image at the largest size). Pick whichever format you prefer; I picked DataMatrix. Simply adjust the size you split to in the commands below.

One might wish to save the paperkey or the private key directly (obviously, saving the private key might mean more chunks to print), and that can be done using the programs in dmtx-utils:
# Split the paperkey output into 1500-byte chunks, then encode each
# chunk as a printable DataMatrix PNG
cat printme.txt | split -b 1500 - part-
rm printme.txt
for part in part-*; do
    dmtxwrite -e 8 ${part} > ${part}.png
done

You will be left with multiple parts of the file you originally split (without a file extension), as well as a corresponding image in PNG format that can be printed, and later scanned, to recover the original.

Keep these in a safe location and your key should be recoverable years down the line. It's not a bad idea to "pretend" there's a catastrophe and attempt to recover your key every few months, just to be sure you can go through the steps easily and that the paper keys are in good shape.

Recovery is simple:

for file in *.png; do dmtxread $file >> printme.txt; done

If all went well, the original and recovered files should be identical, and you just avoided a couple of hours of typing.
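
From the recovered text, the secret key itself can then be rebuilt by recombining it with your public key (easily fetched from a keyserver or a backup). A minimal sketch, assuming the public key was exported to public.gpg as above:

paperkey --pubring public.gpg --secrets printme.txt --output secret.gpg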

Stay safe!
on March 07, 2018 08:19 PM

Git master of Konsole recently grew integration for downloading new content, along with a new category on the store for Konsole color schemes.

Soon you’ll be able to get a fresh look for your terminal without leaving the window or having to mess with copying around files manually!
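
For the curious, a Konsole color scheme is just a small INI-style file dropped into ~/.local/share/konsole/ with a .colorscheme extension. A minimal hand-written sketch (the values below are only illustrative, not the actual One Dark scheme):

[General]
Description=One Dark
Opacity=1

[Background]
Color=40,44,52

[Foreground]
Color=171,178,191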


To celebrate I’ve also made a new color scheme based on Atom’s One Dark syntax theme.


Happy Hacking!

on March 07, 2018 12:45 PM

Users of Kubuntu 17.10 Artful Aardvark can now upgrade via our backports PPA to the 3rd bugfix release (5.12.3) of the Plasma 5.12 LTS release series from KDE.

(Testers of 18.04 Bionic Beaver will need to be patient as the Ubuntu archive is currently in Beta 1 candidate freeze for our packages, but we hope to update the packages there once the Beta 1 is released)

The full changelog of fixes for 5.12.3 can be found here.

This includes an impressive list of fixes for Plasma Discover software centre, thanks in part to the excellent recent drive to improve and polish this important part of the plasma desktop by our Product Manager and KDE Developer Nate Graham.

Users of 17.10:

To update, add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade


PPA upgrade notes:

~ The Kubuntu backports PPA includes various other backported applications and KDE Frameworks 5.43, so please be aware that enabling the backports PPA for the first time and doing a full upgrade will result in a substantial number of upgraded packages in addition to Plasma 5.12.

~ The PPA will also continue to receive bugfix updates to Plasma 5.12 when they become available, and further updated KDE applications.

~ While we believe that these packages represent a beneficial and stable update, please bear in mind that they have not been tested as comprehensively as those in the main Ubuntu archive, and are supported only on a limited and informal basis. Should any issues occur, please provide feedback on our mailing list [1], IRC [2], and/or file a bug against our PPA packages [3].

1. Kubuntu-devel mailing list:
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on
3. Kubuntu ppa bugs:

on March 07, 2018 11:55 AM

March 06, 2018

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Distro Tracker

Since we switched to salsa, and with the arrival of prospective GSoC students interested in working on distro-tracker this summer, I have been rather active on this project, as can be seen in the project's activity summary. Among the most important changes:

  • The documentation and code coverage analysis are updated on each push.
  • Unit tests, functional tests and style checks (flake8) are run on each push and also on merge requests, allowing contributors to get quick feedback on their code. Implemented with this Gitlab CI configuration (a minimal sketch appears after this list).
  • Multiple bug fixes (and more). Updated the code to use python3-gpg instead of the deprecated python3-gpgme (I had to coordinate with DSA to get the new package installed).
  • More unit tests for team related code. Still a work in progress but I made multiple reviews already.
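
The Gitlab CI configuration itself isn't reproduced here, but a minimal .gitlab-ci.yml doing roughly what is described above (flake8 plus the test suite on every push) might look like the following sketch; the image, requirements file and test command are assumptions rather than distro-tracker's actual setup:

image: debian:stretch

before_script:
  - apt-get update && apt-get -y install python3 python3-pip
  - pip3 install flake8

flake8:
  script:
    - flake8 .

unit-tests:
  script:
    - pip3 install -r requirements.txt  # hypothetical requirements file
    - python3 manage.py test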

Debian Live

I created the live-team on salsa to prepare for the move of the various Debian Live repositories. The move itself was done by Steve McIntyre. In the discussion, we also concluded that the live-images source package can go away. I thus filed its removal request.

Then I spent a whole day reviewing all the pending patches. I merged most of them and left comments on the remaining ones:

  • Merged #885453 cleaning up double slashes in some paths.
  • Merged #885466 allowing to set upperdir tmpfs mount point size.
  • Merged #885455 switching back the live-boot initrd to use busybox’s wget as it supports https now.
  • Merged #886328 simplifying the mount points handling by using /run/live instead of /lib/live/mount.
  • Merged #886337 adding options to build smaller initrd by disabling some features.
  • Merged #866009 fixing a race condition between live-config and systemd-tmpfiles-setup.
  • Reviewed #884355 implementing new hooks in live-boot’s initrd. Not ready for merge yet.
  • Reviewed #884553 implementing cross-architecture linux flavour selection. Not ready for merge yet.
  • Merged #891206 fixing a regression with local mirrors.
  • Merged #867539 lowering the process priority of mksquashfs to avoid rendering the machine completely unresponsive during this step.
  • Merged #885692 adding UEFI support for ARM64.
  • Merged #847919 simplifying the bootstrap of foreign architectures.
  • Merged #868559 fixing fuse mounts by switching back to klibc’s mount.
  • Wrote a patch to fix verify-checksums option in live-boot (see #856482).
  • I released a new version of live-config but wanted some external testing before releasing the new live-boot. Unfortunately, that testing has not happened yet.

Debian LTS

I started a discussion on debian-devel about how we could handle the extension of the LTS program that some LTS sponsors are asking us for.

The responses have been rather mixed so far. It is unlikely that wheezy will be kept on the official mirror after its official EOL date, but it is not clear whether it would be possible to host the wheezy updates on some other server for longer.

Debian Handbook

I moved the git repository of the book to salsa and released a new version in unstable to fix two recent bugs: #888575, asking us to implement some parallel building to speed up the build, and #888578, informing us that a recent debhelper update broke the build process due to the presence of a build directory in the source package.

Debian Packaging

I moved all my remaining packages to salsa and used the opportunity to clean them up:

  • dh-linktree, ftplib, gnome-shell-timer (fixed #891305 later), logidee-tools, publican, publican-debian, vboot-utils, rozofs
  • Some also got a new upstream release for the same price: tcpdf, lpctools, elastalert, notmuch-addrlookup.
  • I orphaned tcpdf in #889731 and I asked for the removal of feed2omb in #742601.
  • I updated django-modeltranslation to 0.12.2 to fix FTBFS bug #834667 (I submitted an upstream pull request at the same time).

Dolibarr. As a sponsor of dolibarr, I filed its removal request and then started a debian-devel discussion, because we should be able to provide such applications to our users even though their development practices do not conform to some of our policies.

Bash. I uploaded a bash NMU (4.4.18-1.1) to fix a regression introduced by the PIE-enabled build (see #889869). I filed an upstream bug against bash but it turns out it’s actually a bug in qemu-user that really ought to be fixed. I reported the bug to qemu upstream but it hasn’t gotten much traction.

pkg-security team. I sponsored many updates over the month: rhash 1.3.5-1, medusa 2.2-5, hashcat, dnsrecon, btscanner, wfuzz 2.2.9, pixiewps 1.4.2-1, inetsim (new from kali). I also made a new upload of sslsniff with the OpenSSL 1.1 patch contributed by Hilko Bengen.

Debian bug reports

I filed a few bug reports:

  • #889814: lintian: Improve long description of epoch-change-without-comment
  • #889816: lintian: Complain when epoch has been bumped but upstream version did not go backwards
  • #890594: devscripts: Implement a salsa-configure script to configure project repositories
  • #890700 and #890701 about missing Vcs-Git fields to siridb-server and libcleri
  • #891301: lintian: privacy-breach-generic should not complain about <link rel="generator"> and others

Misc contributions

Saltstack formulas. I pushed misc fixes to the munin-formula, the samba-formula and the openssh-formula. I submitted two other pull requests: on samba-formula and on users-formula.

QA’s carnivore database. I fixed a bug in a carnivore script that was spewing error messages about duplicate uids. This database links together multiple identifiers (emails, GPG key ids, LDAP entry, etc.) for the same Debian contributor.


See you next month for a new summary of my activities.


on March 06, 2018 07:00 PM

Glasgow's group of Linux nerds has been gathering for 20 years, so I was pleased to eat lots of curry at the Scottish Linux User Group's 20th anniversary dinner.  In the pub afterwards I showed off the new KDE Slimbook II and recorded a little intro.  It's maybe not the slickest presenting, but it's my first time making a video 🙂

The partnership between KDE and Slimbook is unique in the open source world and it's really exciting that they want to continue it with this new, even higher-end model. Faster memory, a faster disk, a larger screen, a larger touchpad, USB-C, a better wifi signal: this baby has it all. It's a bargain too, starting from only 700 euros.


on March 06, 2018 11:39 AM

March 05, 2018

LXD weekly status #37

Ubuntu Insights


So this past week was rather intense. In a nutshell, we've:

  • Merged LXD clustering support
  • Split python3-lxc, lua-lxc and lxc-templates out of the LXC codebase
  • Moved libpam-cgfs from lxcfs to lxc
  • Released 3.0.0 beta1 of python3-lxc and lxc-templates
  • Released 3.0.0 beta1 of lxcfs
  • Released 3.0.0 beta1 of lxc
  • Released 3.0.0 beta1 of lxd
  • Released 3.0.0 beta2 of lxd

So we've finally done it: most of the work that we wanted in for the 3.0 LTS release of all the LXC/LXD/LXCFS repositories has been merged, and we're now focused on a few remaining tweaks, small additions and fixes, with a plan to release the final 3.0 by the end of the month.

With all of this activity we’ve also had to update all the relevant packaging, moving a bunch of stuff around between packages and adding support for all the new features.

For those interested in trying the new betas, the easiest way to see everything working together is through the LXD beta snap:

snap install lxd --beta

Note that the betas aren't supported; you may incur data loss when upgrading or later down the line. Testing would be very much appreciated, but please do this on systems you don't mind reinstalling if something goes wrong 🙂

This week, the entire LXD team is meeting in Budapest, Hungary to go through the list of remaining things and make progress towards the final 3.0 release.

Upcoming conferences and events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

  • Various kernel work
  • Stable release work for LXC, LXCFS and LXD

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.




Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.


  • Uploaded python3-lxc 3.0.0~beta1 to Ubuntu 18.04 and PPAs.
  • Uploaded lxc-templates 3.0.0~beta1 to Ubuntu 18.04 and PPAs.
  • Uploaded lxcfs 3.0.0~beta1 to Ubuntu 18.04.
  • Uploaded lxc 3.0.0~beta1 to Ubuntu 18.04.
  • Uploaded lxd 3.0.0~beta1 to Ubuntu 18.04.
  • Uploaded lxd 3.0.0~beta2 to Ubuntu 18.04.
  • Several follow-up updates as we move content between packages and get automated tests to pass again.


  • Switched to Go 1.10.
  • Updated edge packaging to support LXD clustering.
  • Updated liblxc handling to reduce build time and automatically pick the right version of the library.
  • Created a new beta channel using the latest beta of all components.
on March 05, 2018 06:17 PM

March 04, 2018

It’s here, it’s finally here! The first 1.0 release of Parole Media Player has finally arrived. This release greatly improves the user experience for users without hardware-accelerated video and includes several fixes.

What’s New?

Parole 0.9.x Developments

If you’ve been following along with the stable release channel, you have a lot of updates to catch up on. Here’s a quick recap. For everybody else, skip to the next header.

  • Parole 0.9.0 introduced a new mini mode, boosted X11 playback, and made the central logo clickable. When your playlist is complete, the “play” logo changes to a “replay” logo.
  • Parole 0.9.1 improved support for remote files and live stream playback. Older code was stripped away to make Parole even leaner and faster.
  • Parole 0.9.2 introduced a keyboard shortcuts helper (Help > Keyboard Shortcuts), fixed numerous bugs, and included a huge code cleanup and refactor.

Parole 1.0.0: New Feature, Automatic Video Playback Output

  • We’ve finally resolved the long-standing “Could not initialise Xv output” error (Xfce #11950) that has plagued a number of our users, both in virtual machines and on real hardware.
  • In the past, we were delighted when we were able to implement the Clutter backend to solve this issue, but that API proved to be unstable and difficult to maintain between releases.
  • Now, we are using the "autoimagesink" for our newly defaulted "Automatic" video output option. This element automatically selects the best available sink (according to GStreamer) for the current environment, and should produce great results no matter the setup.

Parole 1.0.0: Bug Fixes

  • Fixed 32-bit crashes when using the MPRIS2 plugin (LP: #1374887)
  • Fixed crash on “Clear History” button press (LP: #1214514)
  • Fixed appdata validation (Xfce #13632)
  • Fixed full debug builds and resolved implicit-fallthrough build warning
  • Replaced stock icon by compliant option (Xfce #13738)

Parole 1.0.0: Translations

Albanian, Arabic, Asturian, Basque, Bulgarian, Catalan, Chinese (China), Chinese (Taiwan), Croatian, Czech, Danish, Dutch, English (Australia), Finnish, French, Galician, German, Greek, Hebrew, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kazakh, Korean, Lithuanian, Malay, Norwegian Bokmål, Occitan (post 1500), Polish, Portuguese, Portuguese (Brazil), Russian, Serbian, Slovak, Spanish, Swedish, Thai, Turkish, Uighur, Ukrainian


Parole Media Player 1.0.0 is included in Xubuntu 18.04. Check it out this week when you test out the Beta!

sudo apt update
sudo apt install parole

The latest version of Parole Media Player can always be downloaded from the Xfce archives. Grab version 1.0.0 from the below link.

  • SHA-256: 6666b335aeb690fb527f77b62c322baf34834b593659fdcd21d21ed3f1e14010
  • SHA-1: ed56ab0ab34db6a5e0924a9da6bf2ee91233da8a
  • MD5: d00d3ca571900826bf5e1f6986e42992
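
If you grab the tarball, it is worth checking it against the published checksums before building; something like the following should do (the file name assumes the usual Xfce tarball naming and may differ):

echo "6666b335aeb690fb527f77b62c322baf34834b593659fdcd21d21ed3f1e14010  parole-1.0.0.tar.bz2" | sha256sum -c -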
on March 04, 2018 01:38 PM

Xfce has been steadily heading towards its GTK+ 3 future with Xfce 4.14, but that doesn't mean our current stable users have been left behind. We've got some new features, bug fixes, and translations for you!

What’s New?

New Features

  • Default monospace font option in the Appearance dialog
  • Improved support for embedded DisplayPort connectors on laptops
  • Show location of the mouse pointer on keypress (as seen in the featured image)

Bug Fixes

  • Leave monitors where they were if possible (Xfce #14096)
  • syncdaemon not starting with certain locales
  • division by 0 crash from gdk_screen_height_mm()

Translation Updates

Arabic, Asturian, Basque, Bengali, Bulgarian, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Croatian, Czech, Danish, Dutch, English (Australia), English (United Kingdom), Finnish, French, Galician, German, Greek, Hebrew, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kazakh, Korean, Lithuanian, Malay, Norwegian Bokmål, Norwegian Nynorsk, Occitan (post 1500), Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Uighur, Ukrainian


The latest version of Xfce Settings can always be downloaded from the Xfce archives. Grab version 4.12.2 from the below link.

  • SHA-256: af0e3c0a6501fc99e874103f2597abd1723f06c98f4d9e19a9aabf1842cc2f0d
  • SHA-1: 5991f4a542a556c24b6d8f5fe4698992e42882ae
  • MD5: 32263f1b704fae2db57517a2aff4232d
on March 04, 2018 12:33 PM

Xubuntu 18.04 “Bionic Beaver” is just around the corner. The first beta milestone arrives next week, and the final release is a little over a month away. 18.04 is an LTS release, meaning it has a 3-year support cycle and is definitely recommended for all users. Or it would be, if we knew it was ready. Stick around… this is a bit of a long read, but it’s important.

The ISO Tracker has seen little activity for the last few development cycles. We know we have some excited users already using and testing 18.04. But without testing results being recorded anywhere, we have to assume that nobody is testing the daily images and milestones. And this has major implications for both the 18.04 release and the project as a whole.

From the perspective of the QA team, and with full support from the development team – If we aren’t able to gauge an ISO at any of the milestones (Beta, Final Beta, Release Candidate, and the LTS Point Release), how can we possibly mark those as “Ready for Release”? And why should we?

It is notable that following any of our releases, often within less than a day, we have multiple reports of issues that were NEVER seen on the ISO Tracker. With the current SRU procedure, this means that all users will now have a minimum of 7 days before they can possibly see a fix. With development and testing time, these fixes may take significantly longer or never even make it into the 3-year support release.

Xubuntu is a community project. That includes all of you. If the community doesn’t care until it’s too late, what should we take from that? In fact, community support is part of the deal every flavor makes with Canonical to enable all of the things that make it possible for the flavor to exist. It’s actually the first bullet point in remaining a recognized flavor:

  • Image has track record of community interested in creating, supporting and promoting its use.

Ready to help? Let’s do this.

It is now time for the community to step up. Test ISOs, test the versions of packages you regularly use, check for any regressions, and record your results! Our ISO builds EVERY day around 0200UTC and the newest daily ISO is then available shortly after. The daily build can always be found on the daily builds page, regardless of the current development release name.
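
As a practical example, one way to grab (and efficiently keep up to date) the current daily image is with zsync; the URL below is a best guess at the usual daily-live location for this cycle and may differ:

zsync http://cdimage.ubuntu.com/xubuntu/daily-live/current/bionic-desktop-amd64.iso.zsync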

For those of you who do not believe you can help… you can!

Regression Testing

How hard is it to check for regression? Use the software you use every day. Does it work differently than it used to?

  • If not, no regression!
  • If it does, but works better than before, no regression!
  • Anything else, you've found a regression. Report it!

ISO Testing

How hard is it to check an ISO? If you have at least 1 GB of disk space available, read on.

  • If you have sufficient disk space for a 10 GB file, you can probably use a virtual machine to run installation and post-installation tests.
  • If you are able to virtualize but lack the disk space for a full installation, consider using a VM to verify that the ISO boots and that applications run on the live disk.
  • If you have physical media available, either a DVD-R (or RW, to avoid wasting media on daily tests) or a USB stick of 2+ GB capacity, you can boot Xubuntu from the media and perform installation, post-installation, and live testing.

More Information

In May of 2017, we ran a session on IRC for prospective testers. Other than our regular visitors, one new prospective tester attended and shared in the discussion. The logs for that session are still available if you want to spend 10 minutes checking out how easy it is to help.

We hope that you’ll join us in making Xubuntu 18.04 a success. We think it’s going to be the best release ever, but if the community can’t find the time to contribute to the release, we can’t guarantee we can have one.

on March 04, 2018 08:21 AM

March 03, 2018

Very often, people hear “SSH” and “two factor authentication” and assume you’re talking about an SSH keypair that’s got the private key protected with a passphrase. And while this is a reasonable approximation of a two factor system, it’s not actually two factor authentication because the server is not using two separate factors to authenticate the user. The only factor is the SSH keypair, and there’s no way for the server to know if that key was protected with a passphrase. However, OpenSSH has supported true two factor authentication for nearly 5 years now, so it’s quite possible to build even more robust security.
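
As a rough illustration only (not a complete recipe), requiring true two factor authentication on the server side boils down to listing more than one method in sshd_config; the PAM/verification-code side has to be configured separately and is left out of this sketch:

# /etc/ssh/sshd_config (excerpt): require a public key AND a PAM-driven
# verification code (for example from a TOTP module)
AuthenticationMethods publickey,keyboard-interactive:pam
ChallengeResponseAuthentication yes
UsePAM yes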


on March 03, 2018 08:00 AM
The Ubuntu team is pleased to announce the release of Ubuntu 16.04.4 LTS
(Long-Term Support) for its Desktop, Server, and Cloud products, as well
as other flavours of Ubuntu with long-term support.

Like previous LTS series', 16.04.4 includes hardware enablement stacks
for use on newer hardware.  This support is offered on all architectures
except for 32-bit powerpc, and is installed by default when using one of
the desktop images.  Ubuntu Server defaults to installing the GA kernel,
however you may select the HWE kernel from the installer bootloader.
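
For an existing 16.04 installation that wants the newer enablement stack, the usual route is the HWE meta-packages; a hedged example using the standard 16.04 HWE package names (double-check the exact names for your flavour):

sudo apt-get install --install-recommends linux-generic-hwe-16.04 xserver-xorg-hwe-16.04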

As usual, this point release includes many updates, and updated
installation media has been provided so that fewer updates will need to
be downloaded after installation.  These include security updates and
corrections for other high-impact bugs, with a focus on maintaining
stability and compatibility with Ubuntu 16.04 LTS.

Kubuntu 16.04.4 LTS, Xubuntu 16.04.4 LTS, Mythbuntu 16.04.4 LTS,
Ubuntu GNOME 16.04.4 LTS, Lubuntu 16.04.4 LTS, Ubuntu Kylin 16.04.4 LTS,
Ubuntu MATE 16.04.4 LTS and Ubuntu Studio 16.04.4 LTS are also now
available. More details can be found in their individual release notes:

Maintenance updates will be provided for 5 years for Ubuntu Desktop,
Ubuntu Server, Ubuntu Cloud, Ubuntu Base, and Ubuntu Kylin.  All the
remaining flavours will be supported for 3 years.

To get Ubuntu 16.04.4

In order to download Ubuntu 16.04.4, visit:

Users of Ubuntu 14.04 will be offered an automatic upgrade to
16.04.4 via Update Manager.  For further information about upgrading,

Originally posted to the Ubuntu Release mailing list on Thu Mar 1 21:09:03 UTC 2018 
by Lukasz Zemczak, on behalf of the Ubuntu Release Team
on March 03, 2018 12:01 AM

March 02, 2018

As mentioned previously, I am advisor to a startup called Moltin which provides a simple yet powerful API for building eCommerce solutions in a variety of places. It has the potential to really change how we think of eCommerce transactions, making it easier, more discoverable, and more convenient for consumers, and more effective for organizations to sell products.

This week the team secured an $8 million Series A round from Underscore.VC and announced their new CEO, Jamus Driscoll.

I was offered an advisory role a little while back and agreed to sign on for a few reasons. Firstly, Jamus delivered great results at DemandWare (and as EIR at Underscore.VC, which is when I first met him). Secondly, the founding team have a great product and community vision, and understand how to run a company. Thirdly, their Series A (and the original introduction) came from Underscore.VC, with whom I have a great relationship and for whom I have enormous respect. Finally, and critically, they are devoted to delivering a solid developer and community experience (which is primarily what I am advising them on). I am only interested in working with companies who want to deliver results, and Moltin clearly fits that mold.

Congratulations to the Moltin team. Looking forward to a fruitful 2018!


on March 02, 2018 09:32 PM
Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 16.04.4 LTS has been released! What Is Lubuntu? Lubuntu is an official Ubuntu flavor based on the Lightweight X11 Desktop Environment (LXDE). The project’s goal is to provide a lightweight yet functional distribution. Lubuntu specifically targets older machines with […]
on March 02, 2018 12:14 AM

March 01, 2018

Hi All,

The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Queens on Ubuntu 16.04 LTS via the Ubuntu Cloud Archive. Details of the Queens release can be found at:

To get access to the Ubuntu Queens packages:

Ubuntu 16.04 LTS

You can enable the Ubuntu Cloud Archive pocket for OpenStack Queens on Ubuntu 16.04 installations by running the following commands:

sudo add-apt-repository cloud-archive:queens
sudo apt update

The Ubuntu Cloud Archive for Queens includes updates for:

aodh, barbican, ceilometer, ceph (12.2.2), cinder, congress, designate, designate-dashboard, dpdk (17.11), glance, glusterfs (3.13.2), gnocchi, heat, heat-dashboard, horizon, ironic, keystone, libvirt (4.0.0), magnum, manila, manila-ui, mistral, murano, murano-dashboard, networking-bagpipe, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-odl, networking-ovn, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-lbaas, neutron-lbaas-dashboard, neutron-taas, neutron-vpnaas, nova, nova-lxd, openstack-trove, openvswitch (2.9.0), panko, qemu (2.11), rabbitmq-server (3.6.10), sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, watcher, and zaqar.

For a full list of packages and versions, please refer to [0].

Branch Package Builds

If you would like to try out the latest updates to branches, we deliver continuously integrated packages on each upstream commit via the following PPAs:

sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka
sudo add-apt-repository ppa:openstack-ubuntu-testing/newton
sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata
sudo add-apt-repository ppa:openstack-ubuntu-testing/pike
sudo add-apt-repository ppa:openstack-ubuntu-testing/queens

Reporting bugs

If you have any issues please report bugs using the ‘ubuntu-bug’ tool to ensure that bugs get logged in the right place in Launchpad:

sudo ubuntu-bug nova-conductor

Thanks to everyone who has contributed to OpenStack Queens, both upstream and downstream!

Have fun and see you in Rocky!

(on behalf of the Ubuntu OpenStack team)


on March 01, 2018 09:31 PM

After some blood, sweat and tears, we finally brought Stacksmith into the world, yay!

It’s been a lengthy and intense process that started with putting together a team to be able to build the product in the first place, and taking Bitnami’s experience and some existing tooling to make the cloud more accessible to everyone. It’s been a good week.

However, I learnt something I didn’t quite grasp before: if you find really good people, focus on the right things, scope projects to an achievable goal and execute well, releases lack a certain explosion of emotions that are associated with big milestones. Compounded with the fact that the team that built the product are all working remotely, launch day was pretty much uneventful.
I'm very proud of what we've built, and we did it with a lot of care and attention. We agonized over trade-offs during the development process, did load testing for capacity planning, added metrics to get hints as to when the user experience would start to suffer, and did CI/CD from day one so deployments were well guarded against breaking changes and did not affect the user experience. We did enough but not too much. We rallied the whole company a few weeks before release to try and break the service, asked people who hadn't used it before to go through the whole process and document each step, and tried doing new and unexpected things with the product. The website was updated! The marketing messaging and material were discussed and tested, analysts were briefed, email campaigns were set up. All the basic checklists were completed. It's uncommon to be able to align all the teams, timelines and incentives.
What I learned this week is that if you do, releases are naturally boring  🙂

I’m not quite sure what to do with that, there’s a sense of pride when rationalizing it, but I can’t help but feel that it’s a bit unfair that if you do things well enough the intrinsic reward seems to diminish.

I guess what I’m saying is, good job, Bitnami team!

on March 01, 2018 01:11 AM

February 28, 2018

Connecting new screens

Sebastian Kügler

Plasma's new screen layout selection dialog
This week, Dan Vratil and I merged a new feature into KScreen, Plasma's screen configuration tool. Up until now, when plugging in a new display (a monitor, projector or TV, for example), Plasma would automatically extend the desktop area to include this screen. In many cases this is the expected behavior, but it's not necessarily clear to the user what just happened. Perhaps the user would rather have the new screen on the other side of the current one, clone the existing screen, switch over to it entirely, or not use it at all for now.
The new behavior is to pop up a selection on-screen display (OSD) on the primary screen or laptop panel, allowing the user to pick the new configuration and thereby making it clear what's happening. When the same display hardware is plugged in again at a later point, this configuration is remembered and applied again (no OSD is shown in that case).
Another change-set, which we're about to merge, pops up the same selection dialog when the user presses the display button found on many laptops. This has been nagging me for quite a while: the display button switched the screen configuration but provided very little visual feedback about what was happening, so it wasn't very user-friendly. This new feature will be part of Plasma 5.13, to be released in June 2018.

on February 28, 2018 01:59 PM

February 27, 2018


Benjamin Mako Hill

XORcise (ɛɡ.zɔʁ.siz) verb 1. To remove observations from a dataset if they satisfy one of two criteria, but not both. [e.g., After XORcising adults and citizens, only foreign children and adult citizens were left.]

on February 27, 2018 07:41 PM