June 24, 2017

We discuss whether a rolling-release Ubuntu should exist, and the obsolescence caused by dropping the 32-bit architecture.

The podcast is available to listen to at:

Ubuntu y otras hierbas S01E06

Taking part in this episode: Francisco Molinero, Francisco Javier Teruelo, Fernando Lanero and Marcos Costales.
on June 24, 2017 09:43 PM

June 23, 2017

on June 23, 2017 09:47 PM


We’ve migrated ubuntu-session to a new unity-session package. This means that the default session is GNOME Shell and people can install Unity 7 and its related packages via unity-session. The migration is working well so far, but we still have some more work to do in order to make sure everything “just works”.


We’re now working on the update-manager UI to add the list of kernel CVEs which are handled by the LivePatch service and a brief description of each.


We’ve done more work on getting desktop themes working better with Snaps. We’re documenting the problems we’ve encountered and are creating some sample Snaps to help with making the improvements we need.


We completed our review of the desktop test plan this week and have set our priorities for this cycle. This will cover installation, upgrades, some core application smoke tests, suspend/resume, Network Manager and translations. We will be publishing a blog on how you can get involved next week.


A new version of PulseAudio is in Xenial proposed (version 1:8.0-0ubuntu3.3). This brings fixes for Bluetooth A2DP audio devices. We’d appreciate testing and feedback.
Updated Chromium beta to 60.0.3112.32, dev to 61.0.3128.3.

Video Acceleration

We’ve got hardware accelerated video decoding working in a Proof-Of-Concept using a GStreamer and VA-API pipeline. The result is 3% CPU usage to play an h264 4K 60FPS video on Haswell. 4K h265 HEVC is also playable but requires a Skylake or later processor. This wiki page has been updated with information about how to try it yourself:


on June 23, 2017 05:41 PM
conjure-up dev summary for week 25


With conjure-up 2.2.2 out the door we bring a whole host of improvements!

sudo snap install conjure-up --classic  

Improved Localhost

We recently switched over to using a bundled LXD and with that change came a few hiccups in deployments. We've been monitoring the error reports coming in and have made several fixes to improve that journey. If you were one of those unable to deploy spells, please give this release another go and get in touch with us if you still run into problems.


The biggest underlying technology we rely on for deployments is Juju. Version 2.2.1 was just released and contains a number of highly anticipated performance improvements:

  • frequent database writes (for logging and agent pings) are batched to significantly reduce database I/O
  • cleanup of log noise to make observing true errors much easier
  • status history is now pruned; previously a bug prevented this, leading to unbounded growth
  • update-status interval is configurable (this value must be set when bootstrapping or performing add-model via the --config option; any changes after that are not noticed until a Juju restart)
  • debug-log include/exclude arguments are now more user friendly (as for commands like juju ssh, you now specify machine/unit names instead of tags; "rabbitmq-server/0" instead of "unit-rabbitmq-server-0")

Capturing Errors

In the past, we’ve tracked errors the same way we track other general usage metrics for conjure-up. This has given us some insight into what issues people run into, but it doesn’t give us much to go on to fix those errors. With this release, we’ve begun using the open source Sentry service (https://sentry.io/) to report some more details about failures, and it has already greatly improved our ability to proactively fix those bugs.

Sentry collects information such as the conjure-up release, the lxd and juju version, the type of cloud (aws, azure, gce, lxd, maas, etc), the spell being deployed, the exact source file and line in conjure-up where the error occurred, as well as some error specific context information, such as the reason why a controller failed to bootstrap.

As with the analytics tracking, you can easily opt out of reporting via the command line. In addition to the existing --notrack option, there is now also a --noreport option. You can also set these options in a ~/.config/conjure-up.conf file. An example of that file would be:

notrack = true  
noreport = true  


Our next article is going to cover the major features planned for conjure-up 2.3! Be sure to check back soon!

on June 23, 2017 05:36 PM

This week Mark goes camping, we interview Michael Hall from Endless Computers, bring you another command line love and go over all your feedback.

It’s Season Ten Episode Sixteen of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Joey Sneddon are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
  • We interview Michael Hall about Endless Computers.

  • We share a Command Line Lurve:

    • nmon – nmon is short for Nigel’s performance Monitor
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on June 23, 2017 02:00 PM

June 22, 2017

ISO Image Writer

Jonathan Riddell

ISO Image Writer is a tool I’m working on which writes .iso files onto a USB disk, ready for installing your lovely new operating system.  Surprisingly, many distros don’t have very slick recommendations for how to do this, but they’re all welcome to try this tool.

It’s based on ROSA Image Writer which has served KDE neon and other projects well for some time.  This adds ISO verification to automatically check the digital signatures or checksums, currently supported is KDE neon, Kubuntu and Netrunner.  It also uses KAuth so it doesn’t run the UI as root, only a simple helper binary to do the writing.  And it uses KDE Frameworks goodness so the UI feels nice.
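
The checksum half of that verification is conceptually simple. Here is an illustrative sketch only (this is not ISO Image Writer's actual code, which is C++ and also verifies GPG signatures; the function name sha256_of is made up) of hashing an image for comparison against a published SHA256SUMS entry:

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream a file in chunks and return its hex SHA-256,
    suitable for comparing against a distro's SHA256SUMS file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()
```

Comparing that hex digest against the published value is the whole checksum check; verifying the digital signature on the checksum file is the harder part.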

First alpha 0.1 is out now.

Download from https://download.kde.org/unstable/isoimagewriter/

Signed by release manager Jonathan Riddell with 0xEC94D18F7F05997E. Git tags are also signed by the same key.

It’s in KDE Git at kde:isoimagewriter and in bugs.kde.org, please do try it out and report any issues.  If you’d like a distro added to the verification please let me know and/or submit a patch. (The code to do this is a bit verbose currently; it needs tidying up.)

I’d like to work out how to make AppImages, Windows and Mac installs for this but for now it’s in KDE neon developer editions and available as source.


on June 22, 2017 07:14 PM
The stress-ng logo
The latest release of stress-ng contains a mechanism to measure latencies via a cyclic latency test.  Essentially this is just a loop that cycles around performing high-precision sleeps and measures the extra latency (overhead) of each sleep compared to the expected time.  This loop runs with either the Round-Robin (rr) or First-In-First-Out (fifo) real time scheduling policy.

The cyclic test can be configured to specify the sleep time (in nanoseconds), the scheduling type (rr or fifo),  the scheduling priority (1 to 100) and also the sleep method (explained later).

The first 10,000 latency measurements are used to compute various latency statistics:
  • mean latency (aka the 'average')
  • modal latency (the most 'popular' latency)
  • minimum latency
  • maximum latency
  • standard deviation
  • latency percentiles (25%, 50%, 75%, 90%, 95.40%, 99.0%, 99.5%, 99.9% and 99.99%)
  • latency distribution (enabled with the --cyclic-dist option)
The latency percentiles indicate the latency below which a given percentage of the samples fall.  For example, the 99% percentile for the 10,000 samples is the latency at or below which 9,900 of the samples lie.

The latency distribution is shown when the --cyclic-dist option is used; one has to specify the distribution interval in nanoseconds and up to the first 100 values in the distribution are output.
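
As an aside, the nearest-rank percentile described above is easy to sketch (illustrative Python only; stress-ng itself is written in C and the helper name percentile is made up):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the value at or below which
    pct% of the sorted samples fall."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(len(ordered) * pct / 100))
    return ordered[rank - 1]

# Toy sample: seven latencies in nanoseconds
latencies = [3050, 4880, 4881, 5191, 5261, 5368, 44818]
print(percentile(latencies, 50))  # → 5191
```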

For an idle machine, one can invoke just the cyclic measurements with stress-ng as follows:

 sudo stress-ng --cyclic 1 --cyclic-policy fifo \
--cyclic-prio 100 --cyclic-method clock_ns \
--cyclic-sleep 20000 --cyclic-dist 1000 -t 5
stress-ng: info: [27594] dispatching hogs: 1 cyclic
stress-ng: info: [27595] stress-ng-cyclic: sched SCHED_FIFO: 20000 ns delay, 10000 samples
stress-ng: info: [27595] stress-ng-cyclic: mean: 5242.86 ns, mode: 4880 ns
stress-ng: info: [27595] stress-ng-cyclic: min: 3050 ns, max: 44818 ns, std.dev. 1142.92
stress-ng: info: [27595] stress-ng-cyclic: latency percentiles:
stress-ng: info: [27595] stress-ng-cyclic: 25.00%: 4881 ns
stress-ng: info: [27595] stress-ng-cyclic: 50.00%: 5191 ns
stress-ng: info: [27595] stress-ng-cyclic: 75.00%: 5261 ns
stress-ng: info: [27595] stress-ng-cyclic: 90.00%: 5368 ns
stress-ng: info: [27595] stress-ng-cyclic: 95.40%: 6857 ns
stress-ng: info: [27595] stress-ng-cyclic: 99.00%: 8942 ns
stress-ng: info: [27595] stress-ng-cyclic: 99.50%: 9821 ns
stress-ng: info: [27595] stress-ng-cyclic: 99.90%: 22210 ns
stress-ng: info: [27595] stress-ng-cyclic: 99.99%: 36074 ns
stress-ng: info: [27595] stress-ng-cyclic: latency distribution (1000 ns intervals):
stress-ng: info: [27595] stress-ng-cyclic: latency (ns) frequency
stress-ng: info: [27595] stress-ng-cyclic: 0 0
stress-ng: info: [27595] stress-ng-cyclic: 1000 0
stress-ng: info: [27595] stress-ng-cyclic: 2000 0
stress-ng: info: [27595] stress-ng-cyclic: 3000 82
stress-ng: info: [27595] stress-ng-cyclic: 4000 3342
stress-ng: info: [27595] stress-ng-cyclic: 5000 5974
stress-ng: info: [27595] stress-ng-cyclic: 6000 197
stress-ng: info: [27595] stress-ng-cyclic: 7000 209
stress-ng: info: [27595] stress-ng-cyclic: 8000 100
stress-ng: info: [27595] stress-ng-cyclic: 9000 50
stress-ng: info: [27595] stress-ng-cyclic: 10000 10
stress-ng: info: [27595] stress-ng-cyclic: 11000 9
stress-ng: info: [27595] stress-ng-cyclic: 12000 2
stress-ng: info: [27595] stress-ng-cyclic: 13000 2
stress-ng: info: [27595] stress-ng-cyclic: 14000 1
stress-ng: info: [27595] stress-ng-cyclic: 15000 9
stress-ng: info: [27595] stress-ng-cyclic: 16000 1
stress-ng: info: [27595] stress-ng-cyclic: 17000 1
stress-ng: info: [27595] stress-ng-cyclic: 18000 0
stress-ng: info: [27595] stress-ng-cyclic: 19000 0
stress-ng: info: [27595] stress-ng-cyclic: 20000 0
stress-ng: info: [27595] stress-ng-cyclic: 21000 1
stress-ng: info: [27595] stress-ng-cyclic: 22000 1
stress-ng: info: [27595] stress-ng-cyclic: 23000 0
stress-ng: info: [27595] stress-ng-cyclic: 24000 1
stress-ng: info: [27595] stress-ng-cyclic: 25000 2
stress-ng: info: [27595] stress-ng-cyclic: 26000 0
stress-ng: info: [27595] stress-ng-cyclic: 27000 1
stress-ng: info: [27595] stress-ng-cyclic: 28000 1
stress-ng: info: [27595] stress-ng-cyclic: 29000 2
stress-ng: info: [27595] stress-ng-cyclic: 30000 0
stress-ng: info: [27595] stress-ng-cyclic: 31000 0
stress-ng: info: [27595] stress-ng-cyclic: 32000 0
stress-ng: info: [27595] stress-ng-cyclic: 33000 0
stress-ng: info: [27595] stress-ng-cyclic: 34000 0
stress-ng: info: [27595] stress-ng-cyclic: 35000 0
stress-ng: info: [27595] stress-ng-cyclic: 36000 1
stress-ng: info: [27595] stress-ng-cyclic: 37000 0
stress-ng: info: [27595] stress-ng-cyclic: 38000 0
stress-ng: info: [27595] stress-ng-cyclic: 39000 0
stress-ng: info: [27595] stress-ng-cyclic: 40000 0
stress-ng: info: [27595] stress-ng-cyclic: 41000 0
stress-ng: info: [27595] stress-ng-cyclic: 42000 0
stress-ng: info: [27595] stress-ng-cyclic: 43000 0
stress-ng: info: [27595] stress-ng-cyclic: 44000 1
stress-ng: info: [27594] successful run completed in 5.00s

Note that stress-ng needs to be invoked using sudo to enable the Real Time FIFO scheduling for the cyclic measurements.

The above example uses the following options:

  • --cyclic 1
    • starts one instance of the cyclic measurements (1 is always recommended)
  • --cyclic-policy fifo 
    • use the real time First-In-First-Out scheduling for the cyclic measurements
  • --cyclic-prio 100 
    • use the maximum scheduling priority  
  • --cyclic-method clock_ns
    • use the clock_nanosleep(2) system call to perform the high precision duration sleep
  • --cyclic-sleep 20000 
    • sleep for 20000 nanoseconds per cyclic iteration
  • --cyclic-dist 1000 
    • enable latency distribution statistics with an interval of 1000 nanoseconds between each data point.
  • -t 5
    • run for just 5 seconds
From the run above, we can see that 99.5% of latencies were less than 9821 nanoseconds and most clustered around the 4880 nanosecond modal point. The distribution data shows that there is some clustering around the 5000 nanosecond point and the samples tail off with a bit of a long tail.
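
If you want to post-process runs like this, the percentile lines are easy to scrape from the log output; a small illustrative sketch (the sample log lines are abbreviated from the run above):

```python
import re

# A couple of percentile lines in the format stress-ng prints
log = """\
stress-ng: info: [27595] stress-ng-cyclic: 99.00%: 8942 ns
stress-ng: info: [27595] stress-ng-cyclic: 99.50%: 9821 ns
"""

# Extract {percentile: latency} pairs from "NN.NN%: NNNN <unit>" lines
pairs = {float(p): int(v)
         for p, v in re.findall(r"(\d+\.\d+)%:\s+(\d+)", log)}
print(pairs[99.5])  # → 9821
```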

Now for the interesting part. Since stress-ng is packed with many different stressors we can run these while performing the cyclic measurements, for example, we can tell stress-ng to run *all* the virtual memory related stress tests and see how this affects the latency distribution using the following:

 sudo stress-ng --cyclic 1 --cyclic-policy fifo \  
--cyclic-prio 100 --cyclic-method clock_ns \
--cyclic-sleep 20000 --cyclic-dist 1000 \
--class vm --all 1 -t 60s

The above invokes all the vm class stressors at the same time (with just one instance of each stressor) for 60 seconds.

The --cyclic-method option specifies the delay mechanism used on each of the 10,000 cyclic iterations.  The default (and recommended) method is clock_ns, using the high precision delay.  The available cyclic delay methods are:
  • clock_ns (use the clock_nanosleep() high-resolution sleep)
  • posix_ns (use the POSIX nanosleep() sleep)
  • itimer (use a high precision clock timer and pause to wait for a signal to measure latency)
  • poll (busy spin-wait on clock_gettime() to eat cycles for a delay)
All the delay mechanisms use the CLOCK_REALTIME system clock for timing.

I hope this is plenty of cyclic measurement functionality to get some useful latency benchmarks against various kernel components when using some or a mix of the stress-ng stressors.  Let me know if I am missing some other cyclic measurement options and I can see if I can add them in.

Keep stressing and measuring those systems!

on June 22, 2017 06:45 PM
Thank you to Oracle Cloud for inviting me to speak at this month's CloudAustin Meetup hosted by Rackspace.

I very much enjoyed deploying Canonical Kubernetes on Ubuntu in the Oracle Cloud, and then exploring Kubernetes a bit, how it works, the architecture, and a simple workload within.  I'm happy to share my slides below, and you can download a PDF here:

If you're interested in learning more, check out:
It was a great audience, with plenty of good questions, pizza, and networking!


on June 22, 2017 03:20 PM

The Ubuntu OpenStack team is pleased to announce the general availability of the OpenStack Pike b2 milestone in Ubuntu 17.10 and for Ubuntu 16.04 LTS via the Ubuntu Cloud Archive.

Ubuntu 16.04 LTS

You can enable the Ubuntu Cloud Archive for OpenStack Pike on Ubuntu 16.04 LTS installations by running the following commands:

sudo add-apt-repository cloud-archive:pike
sudo apt update

The Ubuntu Cloud Archive for Pike includes updates for Barbican, Ceilometer, Cinder, Congress, Designate, Glance, Heat, Horizon, Ironic, Keystone, Manila, Murano, Neutron, Neutron FWaaS, Neutron LBaaS, Neutron VPNaaS, Neutron Dynamic Routing, Networking OVN, Networking ODL, Networking BGPVPN, Networking Bagpipe, Networking SFC, Nova, Sahara, Senlin, Trove, Swift, Mistral, Zaqar, Watcher, Rally and Tempest.

We’ve also now included GlusterFS 3.10.3 in the Ubuntu Cloud Archive in order to provide new stable releases back to Ubuntu 16.04 LTS users in the context of OpenStack.

You can see the full list of packages and versions here.

Ubuntu 17.10

No extra steps required; just start installing OpenStack!

Branch Package Builds

If you want to try out the latest master branch updates, or updates to stable branches, we are maintaining continuous integrated packages in the following PPA’s:

sudo add-apt-repository ppa:openstack-ubuntu-testing/newton
sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata
sudo add-apt-repository ppa:openstack-ubuntu-testing/pike

Bear in mind these are built per commit (with checks for new commits every 30 minutes at the moment), so your mileage may vary from time to time.

Reporting bugs

Any issues please report bugs using the ‘ubuntu-bug’ tool:

sudo ubuntu-bug nova-conductor

This will ensure that bugs get logged in the right place in Launchpad.

Still to come…

In terms of general expectation for the OpenStack Pike release in August we’ll be aiming to include Ceph Luminous (the next stable Ceph release) and Open vSwitch 2.8.0 so long as the release schedule timing between projects works out OK.

And finally – if you’re interested in the general stats – Pike b2 involved 77 package uploads, including 4 new packages for new Python module dependencies!

Thanks and have fun!


on June 22, 2017 10:00 AM

Input Method Editors, or IMEs for short, are ways for a user to input text in another, more complex character set using a standard keyboard, commonly used for Chinese, Japanese, and Korean languages (CJK for short). So in order to type anything in Chinese, Japanese, or Korean, you must have a working IME for that language.

Quite obviously, especially considering the massive userbase of these languages, it’s crucial for IMEs to be quick and easy to set up, and to work in any program you decide to use.

The reality is quite far from this. While there are many problems that exist with IMEs under Linux, the largest one I believe is the fact that there’s no (good) standard for communicating with programs.

IMEs all have to implement a number of different interfaces, the 3 most common being XIM, GTK (2 and 3), and Qt (3, 4, and 5).

XIM is the closest we have to a standard interface, but it’s not very powerful: the pre-edit string doesn’t always work properly, it isn’t extensible to more advanced features, and it doesn’t work well under many window systems (in those I’ve tested, the pre-edit window always appears at the bottom of the window instead of beside the text). It reportedly has a number of other shortcomings as well, though I’m not personally aware of them, since I don’t use IMEs very often.

GTK and Qt interfaces are much more powerful, and work properly, but, as might be obvious, they only work with GTK and Qt. Any program using another widget toolkit (such as FLTK, or custom widget toolkits, which are especially prevalent in games) needs to fall back to the lesser XIM interface. Working around this is theoretically possible, but very difficult in practice, and requires GTK or Qt to be installed anyway.

IMEs also need to provide libraries for every version of GTK and Qt as well. If an IME is not updated to support the latest version, you won’t be able to use the IME in applications using the latest version of GTK or Qt.

This, of course, adds quite a large amount of work for IME developers, and causes quite a problem for IME users, where a user will no longer be able to use an IME they prefer simply because it has not been updated to support programs using a newer version of the toolkit.

I believe these issues make it very difficult for the Linux ecosystem to advance as a truly internationalized environment. First, they limit application developers who truly wish to honor international users to two GUI toolkits, GTK and Qt. Secondly, they force IME developers to constantly update their IMEs to support newer versions of GTK and Qt, requiring a large amount of effort and duplicated code, which can result in many bugs (and abandonment).


I believe fixing this issue would require a unified API that is toolkit agnostic. There are two obvious approaches that come to mind.

  1. A library that an IME would provide that every GUI application would include
  2. A client/server model, where the IME is a server, and the clients are the applications

Option #1 would be the easiest and least painful to implement for IME developers, and I believe is actually the way GTK and Qt IMEs work. But there are also problems with this approach: if the IME crashes, the entire host application crashes as well, and only one IME could be installed at a time (since every IME would need to provide the same library). The latter is not necessarily a big issue for most users, but on multi-user desktops it can be.

Option #2 would require more work from the IME developers, juggling client connections and the like (although this could be abstracted with a library, similar to Wayland’s architecture). However, it would also mean a separate address space (so if the IME crashes, nothing else crashes as a direct result), the possibility of more than one IME being installed and used at once, and even the possibility of hotswapping IMEs at runtime.

The problem with both of these options is the lack of standardization. While they can adhere to a standard for communicating with programs, configuration, dealing with certain common problems, etc. are all left to the IME developers. This is the exact problem we see with Wayland compositors.

However, there’s also a third option: combining the best of both worlds in the options provided above. This would mean having a standard server that will then load a library that provides the IME-related functions. If there are ever any major protocol changes, common issues, or anything of the likes, the server will be able to be updated while the IMEs can be left intact. The library that it loads would be, of course, entirely configurable by the user, and the server could also host a number of common options for IMEs (and maybe also host a format for configuring specific options for IMEs), so if a user decides to switch IMEs, they wouldn’t need to completely redo their configuration.
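
To make that third option a little more concrete, here is a deliberately toy sketch (every name in it — ImeServer, HiraganaEngine, compose — is hypothetical; no such API exists): a fixed server owns the client-facing side and delegates composition to a swappable engine.

```python
class HiraganaEngine:
    """Toy 'IME engine': maps a few romaji syllables to hiragana.
    A real engine would do dictionary lookups, candidate lists, etc."""
    TABLE = {"ni": "に", "ho": "ほ", "n": "ん"}

    def compose(self, syllables):
        return "".join(self.TABLE.get(s, s) for s in syllables)


class ImeServer:
    """Toy 'server' from the third option: it owns the client-facing
    protocol and delegates to whichever engine is configured."""
    def __init__(self, engine):
        self.engine = engine

    def swap_engine(self, engine):
        # Hot-swap the engine without restarting clients
        self.engine = engine

    def handle(self, syllables):
        return self.engine.compose(syllables)


server = ImeServer(HiraganaEngine())
print(server.handle(["ni", "ho", "n"]))  # → にほん
```

The point of the split is that protocol fixes land in the server while engines stay intact, and engines can be swapped at runtime.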

Of course, the server would also be able to provide clients for XIM and GTK/Qt-based frontends, for programs that don’t use the protocol directly.

Since I’m not very familiar with IMEs, I haven’t yet started a project implementing this idea; there may be challenges with an approach like this that have already been discussed, but that I’m not aware of.

This is why I’m writing this post, to hopefully bring up a discussion about how we can improve the state of IMEs under Linux :) I would be very willing to work with people to properly design and implement a better solution for the problem at hand.

on June 22, 2017 07:08 AM

June 21, 2017

The other day some of my fellow Ubuntu developers and I were looking at bug 1692981 and trying to figure out what was going on. While we don’t have an answer yet, we did use some helpful tools (at least one of which somebody hadn’t heard of) to gather more information about the bug.

One such tool is lp-bug-dupe-properties from the lptools package in Ubuntu. With this it is possible to quickly find out information about all the duplicates, 36 in this case, of a bug report. For example, if we wanted to know which releases are affected we can use:

lp-bug-dupe-properties -D DistroRelease -b 1692981

LP: #1692981 has 36 duplicates
Ubuntu 16.04: 1583463 1657243 1696799 1696827 1696863 1696930 1696940
1697011 1697016 1697068 1697099 1697121 1697280 1697290 1697313 1697335
1697356 1697597 1697801 1697838 1697911 1698097 1698100 1698104 1698113
1698150 1698171 1698244 1698292 1698303 1698324 1698670 1699329
Ubuntu 16.10: 1697072 1698098 1699356

While lp-bug-dupe-properties is useful, in this case it’d be helpful to search the bug’s attachments for more information. Luckily there is a tool, lp-grab-attachments (also part of lptools), which will download all the attachments of a bug report and its duplicates if you want. Having done that you can then use grep to search those files.

lp-grab-attachments -dD 1692981

The ‘-d’ switch indicates I want to get the attachments from duplicate bug reports and the ‘-D’ switch indicates that I want to have the bug description saved as Description.txt. While saving the description provides some of the same capability as lp-bug-dupe-properties, it ends up being quicker. Now with the attachments saved I can do something like:

for desc in $(find . -name Description.txt); do grep -E "dpkg 1\.18\.(4|10)" "$desc"; done

dpkg 1.18.4ubuntu1.2
dpkg 1.18.10ubuntu2
dpkg 1.18.10ubuntu1.1
dpkg 1.18.4ubuntu1.2

and find out that a variety of dpkg versions are in use when this is encountered.
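
If you want counts per version rather than raw matches, the same tally is a few lines of Python (illustrative; it only assumes the directory layout lp-grab-attachments produces, with one Description.txt per bug, and the helper name dpkg_versions is made up):

```python
from collections import Counter
from pathlib import Path
import re

def dpkg_versions(root):
    """Count dpkg version strings mentioned across the saved
    Description.txt files under root."""
    counts = Counter()
    for desc in Path(root).rglob("Description.txt"):
        counts.update(re.findall(r"dpkg 1\.18\.\S+", desc.read_text()))
    return counts

# e.g. dpkg_versions(".").most_common() after running lp-grab-attachments
```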

I hope you find these tools useful and I’d be interested to hear how you use them!

on June 21, 2017 05:40 PM
I first started using Ubuntu just a few weeks after Lucid Lynx was released and have used Ubuntu, Kubuntu, Xubuntu, Lubuntu and Ubuntu GNOME since then. Towards the end of 2016 I took early retirement and decided to curtail some of my Ubuntu related activities in favour of some long abandoned interests which went back to the 1960s. Although I had no intention of spending every day sat in front of a computer screen I still wished to contribute to Ubuntu but at a reduced level. However, recent problems relating to my broadband connection, which I am hoping are now over, prompted me to look closely at how I could continue to contribute to Ubuntu if I lost my "always on" internet.


Thanks to my broadband provider, whose high profile front man sports a beard and woolly jumpers, my connection changed from being one that was "always on" to one that was "usually off". There's a limit to how many times I'm prepared to reboot my cable modem on the advice of the support desk, be sent unnecessary replacement modems because the one I'm using must be faulty, to allow engineers into my home to measure signal levels, and be told the next course of action will definitely get my connection working only to find that I'm still off-line the next day and the day after. I kept asking myself: "Just how many engineers will they need to send before someone successfully diagnoses the problem and fixes it?"

Mobile broadband

Much of my recent web browsing, on-line banking, and updating of my Xubuntu installations has been done with the aid of two iPhones acting as access points while connected to the 3 and EE mobile networks. It was far from being an ideal situation, connection speeds were often very low by today's standards but "it worked" and the connections were far more reliable than I thought that they would be. A recent test during the night showed a download speed on a 4G connection to be comparable to that offered by many other broadband providers. But downloading large Ubuntu updates took a long time especially during the evening. As updating the pre-installed apps on a smart phone can quickly use up one's monthly data allowance I made myself aware of where I could find local Wi-Fi hotspots to make some of the important or large phone updates and save some valuable bandwidth for Ubuntu. Interestingly with the right monthly plan and using more appropriate hardware than a mobile phone, I could actually save some money by switching from cable to mobile broadband although I would definitely miss my 100Mb/s download speed that is most welcome when downloading ISO images or large Ubuntu updates.

ISO testing

Unfortunately these problems, which lasted for over three weeks, meant that I had to cease ISO testing due to the amount of data I would need to download several times each week. I had originally intended to get a little more involved with testing of the development release of Xubuntu during the Artful cycle but those plans were put on hold while I waited for my broadband connection to be restored and deemed to have been fixed permanently. During this outage I still managed to submit a couple of bug reports and comment on a few others but my "always on" high speed connection was very much missed.

Connection restored!

How I continue with Ubuntu long-term will now depend on the reliability of my broadband connection which does seem to have now been restored to full working order. I'm finalising this post a week after receiving yet another visit from an engineer who restored my connection in just a matter of minutes. Cables had been replaced and signal levels had been measured and brought to within the required limits. Apparently the blame for the failure of the most recent "fix" was put solely on one of his colleagues who I am told failed to correctly join two cables together. In other words, I wasn't actually connected to their network at all. It must have been so very obvious to my modem/router which sat quietly in the corner of the room forever looking to connect to something that it just could not find and yet was unable to actually tell me so. If only such devices could actually speak....

on June 21, 2017 09:58 AM
Friday, I uploaded an updated nplan package (version 0.24) to change its Priority: field to important, as well as an update of ubuntu-meta (following a seeds update), to replace ifupdown with nplan in the minimal seed.

What this means concretely is that nplan should now be installed by default on all images as part of ubuntu-minimal, with ifupdown dropped at the same time.

For the time being, ifupdown is still installed by default due to the way debootstrap generates the very minimal images used as a base for other images -- it selects its base set of packages based only on the Priority: field. Thus, nplan was added, but ifupdown's priority still needs to be changed (which I will do shortly) for it to disappear from all images.

The intent is that nplan would now be the standard way of configuring networks. I've also sent an email about this to ubuntu-devel-announce@.

I've already written a bit about what netplan is and does, and I have still more to write on the subject (discussing syntax and how to do common things). We especially like how using a purely declarative syntax makes things easier for everyone (and if you can't do what you want that way, then it's a bug you should report).

MaaS, cloud-init and others have already started to support writing netplan configuration.

The full specification (summary wiki page and a blueprint reachable from it) for the migration process is available here.

While I get to writing something comprehensive about how to use the netplan YAML to configure networks, if you want to know more there's always the manpage, which is the easiest to use documentation. It should always be up to date with the current version of netplan available on your release (since we backported the last version to Xenial, Yakkety, and Zesty), and accessible via:

man 5 netplan
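
As a small taste of that declarative syntax, a minimal configuration bringing one interface up via DHCP might look like the following (the file name and interface name here are examples; your interface will differ):

```yaml
# /etc/netplan/01-netcfg.yaml (example file name)
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:          # your interface name will likely differ
      dhcp4: true
```

Recent versions can apply such a configuration with sudo netplan apply.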

To make things "easy" however, you can also check out the netplan documentation directly from the source tree here:


There's also a wiki page I started to get ready that links to the most useful things, such as an overview of the design of netplan, some discussion on the renderers we support and some of the commands that can be used.

We even have an IRC channel on Freenode: #netplan

I think you'll find that using netplan makes configuring networks easy and even enjoyable; but if you run into an issue, be sure to file a bug on Launchpad here:

on June 21, 2017 02:10 AM

June 20, 2017

Now that Ubuntu phones and tablets are gone, I would like to offer my thoughts on why I personally think the project failed and what one may learn from it.
on June 20, 2017 03:00 PM

June 19, 2017

Today I released the second development snapshot (3.25.3) of what will be GNOME Tweak Tool 3.26.

I consider the initial User Interface (UI) rework proposed by the GNOME Design Team to be complete now. Every page in Tweak Tool has been updated, either in this snapshot or the previous development snapshot.

The hard part still remains: making the UI look as good as the mockups. Tweak Tool’s backend makes this a bit more complicated than usual for an app like this.

Here are a few visual highlights of this release.

The Typing page has been moved into an Additional Layout Options dialog in the Keyboard & Mouse page. Also, the Compose Key option has been given its own dialog box.

Florian Müllner added content to the Extensions page that is shown if you don’t have any GNOME Shell extensions installed yet.

A hidden feature that GNOME has had for a long time is the ability to move the Application Menu from the GNOME top bar to a button in the app’s title bar. This is easy to enable in Tweak Tool by turning off the Application Menu switch in the Top Bar page. This release improves how well that works, especially for Ubuntu users where the required hidden appmenu window button was probably not pre-configured.

Some of the ComboBoxes have been replaced by ListBoxes. One example is on the Workspaces page where the new design allows for more information about the different options. The ListBoxes are also a lot easier to select than the smaller ComboBoxes were.

For details of these and other changes, see the commit log or the NEWS file.

GNOME Tweak Tool 3.26 will be released alongside GNOME 3.26 in mid-September.

on June 19, 2017 11:15 PM

This wasn't a joke! As previously announced, a few days ago I attended the GNOME Fractional Scaling Hackfest that Red Hat's Jonas Ådahl and I organized at the Canonical office in Taipei 101.
Although the location was chosen mostly because it was the closest one to Jonas and near enough to my temporary place, it turned out to be the best we could have used, given the huge amount of hardware available there, including some 4K monitors and HiDPI laptops.
Being there also allowed another local Canonical employee (Shih-Yuan Lee) to join our efforts!

That said, I have to thank my employer for allowing me to do this and for sponsoring the event, helping to make GNOME a better desktop for Ubuntu (and not only Ubuntu).

Going deeper into the event (for which we tracked the more technical items in a WIP journal), it was a very tough week, working hard until late while chasing the various edge cases and discovering bugs that the new "logically sized" framebuffers and actors were causing.

In fact, as I've already quickly explained, the whole idea is to paint all the screen actors at the maximum scale value across the displays they intersect, and then use scaled framebuffers when painting, so that we can redefine the screen coordinates in logical pixels rather than pixel units. However, since we want to be able to scale any sized element by (potentially any) fractional value, we may run into problems when we eventually go back to the pixel level, where everything is integer-indexed.

We started by defining the work items for the week and setting up with jhbuild some other HiDPI laptops (mostly Dell XPS 15 and XPS 13) we got from the office; then, as you can see, we defined a list of things to care about:

  • Supporting multiple scaling values: allowing the interface to be scaled up and down (< 1.0), not only to well-known values but across a wider range of supported floats
  • Non-perfect scaling: covering the cases in which an actor (or the whole monitor), when scaled up/down by a fractional value, no longer has a pixel-friendly size, so there are input and output issues to handle due to rounding.
  • GNOME Shell UI: the shell's StWidgets need to be drawn at the proper resource scaling value, so that they won't look blurred when painted.
  • Toolkit support: there are some GTK+ issues when scaling by more than 2x, while Qt has support for fractional scaling.
  • Wayland protocol improvements: related to the point above, we might define a way to tell toolkits the actual fractional scaling value, so that they can be scaled by the real value instead of being asked to scale up to the next integer scaling level. Also, games and video players should not be scaled up/down at all.
  • X11 clients: supporting XWayland clients

What we did

As you can see, the list of things we meant to work on or plan was quite juicy, more than enough for one week; but even though we didn't finish all the tasks (despite the Super-Jonas powers :-)), we were able to start or address the work for most of them, so we know what to work on over the next weeks.

Scaling at 1.25x

As a start, we had to ensure mutter supported various scaling values (including ones < 1.0). We decided (this might change, but given the Unity experience it has proved to work well) to support 8 intermediate values per integer, from 0.5 to 4.0. As said, this leads to trouble with many resolutions (as you can see in the board picture, 1280×720 is an example of a case that doesn't work well when scaled at 1.5, for instance). So we decided to make mutter expose a list of supported scaling values for each mode, and we defined an algorithm to compute the closest "good" scaling level that yields a properly integer-sized logical screen.
This caused a configuration API change, and we updated gnome-settings-daemon and gnome-control-center accordingly, also adding some UI changes to reflect and control this new feature.
Moreover, the ability to use such fractional values caused various glitches in mutter, mostly related to the damage algorithm, which Jonas refactored. Other issues in screenshots and gnome-shell fullscreen animations have also been found and fixed.
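To make the "good scaling level" idea concrete, here is a rough Python sketch of how one could list which fractional values give an integer-sized logical screen for a given mode. This is my own illustration, not mutter's actual code:

```python
# Hypothetical sketch (not mutter's code): list the fractional scales,
# in 1/8 steps from 0.5 to 4.0, that yield an integer-sized logical
# screen for a given mode.
def supported_scales(width, height, min_scale=0.5, max_scale=4.0):
    scales = []
    # "8 intermediate values per integer" means steps of 0.125
    steps = int((max_scale - min_scale) / 0.125) + 1
    for i in range(steps):
        scale = min_scale + i * 0.125
        if (width / scale).is_integer() and (height / scale).is_integer():
            scales.append(scale)
    return scales
```

For example, a 1280×720 mode supports 1.25 (giving a 1024×576 logical screen) but not 1.5, which matches the problem case mentioned above.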

Speaking of the GNOME Shell toolkit, we started some work to fix the drawing of cairo-based areas, while I already had something done for labels that still needs to be tuned. Shih-Yuan fixed a scaling problem in the workspace thumbnails.

On toolkit support we didn't do much (apart from GNOME Shell), as the GTK+ problem is not something that affects us much in normal scenarios yet, but we still debugged the issue; supporting fractional-friendly toolkits via an improved Wayland protocol is probably a future optimization. It is quite important, though, to define such a protocol for apps that don't need to be scaled, such as games; but to do that we need feedback from game developers too, so that we can define it in the best way.

Not much has been done in the XWayland world either (right now everything is scaled to the required value by mutter, but the toolkit will also use scale 1, which leads to somewhat blurred results), but we agreed that we'd probably need to define an X11 protocol for this.

We finally spent some time defining an algorithm for picking the preferred scaling per mode. This is a quite controversial aspect, and anyone might have their own ideas on it (especially OEMs). So far we defined some DPI limits that we'll use to evaluate whether a fractional scaling level should be used or not: outside these limits (which change depending on whether we're handling a laptop panel or an external monitor [potentially in a docked setup]) we use integer scaling; in between them we instead use proportional (fractional) values.
One idea I had was to look at the problem the other way around and instead define the physical size (in tenths of a millimetre) we want a pixel to be at minimum, and then scale to reach those thresholds rather than defining DPIs (again, that physical size should be weighted differently for external monitors, though). Also, hardware vendors might want to be able to tune these defaults, so one idea was to provide a way for them to define defaults by panel serial.
In any case, the final and most important goal, to me, is to provide defaults that guarantee usable and readable HiDPI environments, so that people can then use gnome-control-center to adjust these values if needed.
I also think it could be quite useful to add to the gnome-shell initial-setup wizard an option to choose the scaling level when a high-DPI monitor is detected.
For this reason, we also filled in this wiki page with technical display information for all the hardware we had around, and we encourage you to add your own (if you don't have write access to the wiki, just send it to us).
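As an illustration of the DPI-threshold approach, the heuristic could look roughly like the Python sketch below. The threshold numbers here are invented for illustration, since the post doesn't state GNOME's actual limits:

```python
# Hypothetical sketch of the preferred-scale heuristic; the 170/240 DPI
# thresholds are made up for illustration, not GNOME's real values.
def preferred_scale(width_px, width_mm, lo_dpi=170, hi_dpi=240):
    dpi = width_px / (width_mm / 25.4)  # 25.4 mm per inch
    if dpi <= lo_dpi:
        return 1.0  # below the low limit: plain integer scaling
    if dpi >= hi_dpi:
        return 2.0  # above the high limit: integer scaling again
    # in between the limits: a proportional value, snapped to 1/8 steps
    return round(dpi / lo_dpi * 8) / 8
```

With these invented thresholds, a 13.9" 1920-wide panel stays at 1.0, a 13.6" 4K panel gets 2.0, and panels in between land on fractional values such as 1.375.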

What to do

As you can see in our technical journal TODO, we have plenty of things to do, but the main one currently is fixing the Shell toolkit widgets, while working through various bugs and improving the XWayland clients situation. Then there are multiple optimizations to do at the mutter level too.

When we ship

Our target is to get this landed by GNOME 3.26, even if it might be under an experimental gsettings key, as right now the main blocker is X11 client support.

How to help

The easiest thing you can do is help test the code (using jhbuild to build gnome-shell with a config based on this should be enough); filling in the scale factor tests wiki page would also help. If you want to get involved with the code, these are the git branches to look at.

Read More

You can read a more schematic report that Jonas wrote for this event on the gnome-shell mailing list.


It has been a great event; we did and discussed many things, but above all I was able to become more closely familiar with the GNOME code alongside those who wrote most of it, which certainly helped.
We still have lots of things to do, but we're approaching a state that will allow everyone to use differently scaled monitors at various fractional values with no issues.

Our final board

Check some other pictures in my flickr gallery

Finally, I have to say thanks a lot to Jonas, who initially proposed the event and, apart from being a terrific engineer, has been a great mate to work and hang out with, helping me discover (and survive in) Taipei and its food!

on June 19, 2017 09:03 PM

The second release of the GTK+ 3 powered Xfce Settings is now ready for testing (and possibly general use).  Check it out!

What’s New?

This release now requires xfconf 4.13+.

New Features

  • Appearance Settings: New configuration option for default monospace font
  • Display Settings: Improved support for embedded DisplayPort connectors

Bug Fixes

  • Display Settings: Fixed drawing of displays; it was hit and miss before, now it is reliable
  • Display Settings: Fixed drag-and-drop functionality; the grab area previously occupied the space below the drawn displays
  • Display Settings (Minimal): The mini dialog now runs as a single instance, which should help with some display drivers (Xfce #11169)
  • Fixed linking to dbus-glib with xfconf 4.13+ (Xfce #13633)


  • Resolved gtk_menu_popup and gdk_error_trap_pop deprecations
  • Ignoring GdkScreen and GdkCairo deprecations for now. Xfce shares this code with GNOME and Mate, and they have not found a resolution yet.

Code Quality

  • Several indentation fixes
  • Dropped duplicate drawing code, eliminating another deprecation in the process

Translation Updates

Arabic, Bulgarian, Catalan, Chinese (China), Chinese (Taiwan), Croatian, Danish, Dutch, Finnish, French, Galician, German, Greek, Hebrew, Indonesian, Italian, Japanese, Kazakh, Korean, Lithuanian, Malay, Norwegian Bokmal, Norwegian Nynorsk, Occitan, Portuguese, Portuguese (Brazil), Russian, Serbian, Slovak, Spanish, Swedish, Thai, Ukrainian


The latest version of Xfce Settings can always be downloaded from the Xfce archives. Grab version 4.13.1 from the below link.


  • SHA-256: 01b9e9df6801564b28f3609afee1628228cc24c0939555f60399e9675d183f7e
  • SHA-1: 9ffdf3b7f6fad24f4efd1993781933a2a18a6922
  • MD5: 300d317dd2bcbb0deece1e1943cac368
on June 19, 2017 09:40 AM

The purpose of this update is to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work in progress for the next release of MAAS, and bug fixes in released MAAS versions.

MAAS Sprint

The Canonical MAAS team sprinted at Canonical’s London offices this week. The purpose was to review the previous development cycle & release (MAAS 2.2), as well as discuss and finalize the plans and goals for the next development release cycle (MAAS 2.3).

MAAS 2.3 (current development release)

The team has been working on the following features and improvements:

  • New Feature – support for ‘upstream’ proxy (API only) – Support for upstream proxies has landed in trunk. This iteration contains API-only support. The team continues to work on the matching UI support for this feature.
  • Codebase transition from bzr to git – This week the team focused efforts on updating all processes for the upcoming transition to Git. The progress so far is:
    • Prepared the MAAS CI infrastructure to fully support Git once the transition is complete.
    • Started working on creating new processes for PR’s auto-testing and landing.
  • Django 1.11 transition – The team continues to work through the Django 1.11 transition; we’re down to 130 unittest failures!
  • Network Beaconing & better network discovery – Prototype beacons have now been sent and received! The next steps will be to work on the full protocol implementation, followed by making use of beaconing to enhance rack registration. This will provide a better out-of-the-box experience for MAAS; interfaces which share network connectivity will no longer be assumed to be on separate fabrics.
  • Started the removal of ‘tgt’ as a dependency – We have started removing ‘tgt’ as a dependency. This simplifies the boot process by not loading ephemeral images from tgt, but rather having the initrd download and load the ephemeral environment.
  • UI Improvements
    • Performance Improvements – Improved the loading of elements on the Device Discovery, Node listing and Events pages, which greatly improves UI performance.
    • LP #1695312 – The button to edit dynamic range says ‘Edit’ while it should say ‘Edit reserved range’
    • Removed auto-save on blur for the Fabric details summary row; static content is now applied when not in edit mode.

Bug Fixes

The following issues have been fixed and backported to the MAAS 2.2 branch. They will be available in the next point release of MAAS 2.2 (2.2.1) in the coming weeks:

  • LP: #1678339 – allow physical (and bond) interfaces to be placed on VLANs with a known 802.1q tag.
  • LP: #1652298 – Improve loading of elements in the device discovery page
on June 19, 2017 09:15 AM

Mission Reports

Stephen Michael Kellat

Well, taking just over 60 days to write again is not generally a good sign. Things have been incredibly busy at the day job. Finding out that a Reduction In Force is expected to happen in late September/early October also sharpens the mind as to the state of the economy. Our CEO at work is somewhat odd, to say the least. Certain acts by the CEO remain incredibly confusing if not utterly baffling.

In UK-slang, I guess I could probably be considered a "God-botherer". I've been doing work as an evangelist lately. The only product though has been the Lord's Kingdom. One of the elders at church wound up with their wife in a local nursing home due to advanced age as well as deteriorating health so I got tasked with conducting full Sunday services at the nursing home. Compared to my day job, the work has been far more worthwhile serving people in an extended care setting. Sadly it cannot displace my job that I am apparently about to lose in about 90 days or so anyhow thanks to pending actions of the board and CEO.

One other thing I have had running in the background has been the external review of Outernet. A short research note was drawn up in LaTeX and was submitted somewhere but bounced. Thanks to the magic of Pandoc, I was able to convert it to HTML to tack on to this blog post.

The Outernet rig in the garage

The Outernet rig is based in my garage to simulate a field deployment. The project's goal is to get these boards into the wild in places like the African continent. Those aren't "clean room" testing environments; if anything, temperature control goes out the window. My only indulgence is that I added an uninterruptible power supply due to known failures in the local grid.

The somewhat disconnected Raspberry Pi B+ known as ASTROCONTROL to connect to the Outernet board to retrieve materials

Inside the house a Raspberry Pi running Raspbian is connected via Ethernet to a Wi-Fi extender to reach out to the Outernet board. I have to manually set the time every time that ASTROCONTROL is used. Nothing in the mix is connected to the general Internet. The connection I have through Spectrum is not really all that great here in Ashtabula County.

As seen through ConnectBot, difficulties logging in

The board hit a race condition at one point recently where nothing could log in. A good old-fashioned IT Crowd-style power-cycling resolved the issue.

Pulling files on the Outernet board itself as seen in a screenshot via Cathode on an iPad

Sometimes I have used the Busybox version of tar on the board to gather files to review off the board.

The Outernet UI as seen on a smartphone

The interface gets a little cramped on a smartphone like the one I have.

And now for the text of the paper that didn't make the cut...


A current endeavor is to review the Outernet content distribution system. Outernet is a means to provide access to Internet content in impaired areas.1 This is not the only effort to do so, though. At the 33rd Chaos Communications Congress there was a review of the signals being transmitted with a view to reverse engineering it.2 The selection of content as well as the innards of the mainboard shipped in the do-it-yourself kit wind up being areas of review that continue.

In terms of concern, how is the content selected for distribution over the satellite platform? There is no known content selection policy. Content reception was observed in an attempt to discern any patterns.

As to the software involved, how was the board put together? Although the signals were focused on at the Chaos Communications Congress, it is appropriate to learn what is happening on the board itself. As designed, the system intends for access to be had through a web browser. There is no documented method of bulk access for data. A little sleuthing shows that that is possible, though.

Low-Level Software

The software powering the mainboard, a C.H.I.P. device, was put together in an image using the Buildroot cross-compilation system. Beyond the expected web-based interface, a probe using Nmap found that ports were open for SSH as well as traditional FTP. The default directory for the FTP login is a mount point where all payloads received from the satellite platform are stored. The SSH session is provided by Dropbear and deposits you in a Busybox session.

The mainboard currently in use has been found to have problems with power interruptions. After having to vigorously re-flash the board due to filesystem corruption caused by a minor power disruption, an uninterruptible power supply was purchased to keep it running. Over thirty days of continuous operation, as measured by the Busybox-provided uptime command, was achieved by putting the rig on the uninterruptible power supply. The system also does not cope well with the summer heat observed in northeast Ohio: remote access became unavailable during high-temperature periods, and we had to power-cycle it to reboot.

Currently the Outernet mainboard is being operated air-gapped from other available broadband to observe how it would operate in an Internet-impaired environment. The software operates a Wi-Fi access point on the board, with the board addressable at a fixed address. Maintaining a constant connection through a dedicated Raspberry Pi and associated monitor plus keyboard has not proved simple so far.

Content Selection

Presently a few categories of data are routinely transmitted. Weather data is sent for viewing in a dedicated applet. News ripped from the RSS feeds of selected news outlets such as the Voice of America, Deutsche Welle, and WTOP is sent routinely but is not checked for consistency; for example, one feed routinely pushes a daily page indicating that the entire feed is simply broken. Pages from Wikipedia are sent, but there is no discernible pattern yet as to how the pages are picked.

Currently there is a need to review how Wikipedia may make pages available in an automated fashion. It is an open question as to how these pages are being scraped. Is there a feed? Is there manual intervention at the point of uplink? The pages sent are not the exact web-based versions or PDF exports but rather the printer-friendly versions. For now investigation needs to occur relative to how Wikipedia releases articles to see if there is anything that correlates with what is being released.

There are still open questions that require review. The opacity of the content selection policies and procedures limits the platform's utility. That opacity prevents a user from having a reasonable expectation of what exactly is coming through on the downlink.


A technical platform is only a means. With the computers involved at each end, older ideas for content distribution are reborn for access-impaired areas. Content remains key, though.

  1. Alyssa Danigelis, "'Outernet' Project Seeks Free Internet Access For Earth?: Discovery News," DNews, February 25, 2014, http://news.discovery.com/tech/gear-and-gadgets/outernet-project-seeks-free-internet-access-for-earth-140225.htm.

  2. Reverse Engineering Outernet (Hamburg, Germany, 2016), https://media.ccc.de/v/33c3-8399-reverse_engineering_outernet.

on June 19, 2017 01:41 AM

June 18, 2017

Xfce 4.14 development has been picking up steam in the past few months.  With the release of Exo 0.11.3, things are only going to get steamier.  

What is Exo?

Exo is an Xfce library for application development. It was introduced years ago to aid the development of Xfce applications.  It’s not used quite as heavily these days, but you’ll still find Exo components in Thunar (the file manager) and Xfce Settings Manager.

Exo provides custom widgets and APIs that extend the functionality of GLib and GTK+ (both 2 and 3).  It also provides the mechanisms for defining preferred applications in Xfce.

What’s New in Exo 0.11.3?

New Features

  • exo-csource: Added a new --output flag to write the generated output to a file (Xfce #12901)
  • exo-helper: Added a new --query flag to determine the preferred application (Xfce #8579)

Build Changes

  • Build requirements were updated.  Exo now requires GTK+ 2.24, GTK 3.20, GLib 2.42, and libxfce4ui 4.12
  • Building GTK+ 3 libraries is no longer optional
  • Default debug setting is now “yes” instead of “full”. This means that builds will not fail if there are deprecated GTK+ symbols (and there are plenty).

Bug Fixes

  • Discard preferred application selection if dialog is canceled (Xfce #8802)
  • Do not ship generic category icons, these are standard (Xfce #9992)
  • Do not abort builds due to deprecated declarations (Xfce #11556)
  • Fix crash in Thunar on selection change after directory change (Xfce #13238)
  • Fix crash in exo-helper-1 from GTK 3 migration (Xfce #13374)
  • Fix ExoIconView being unable to decrease its size (Xfce #13402)

Documentation Updates

Available here

  • Add missing per-release API indices
  • Resolve undocumented symbols (100% symbol coverage)
  • Updated project documentation (HACKING, README, THANKS)

Translation Updates

Amharic, Asturian, Catalan, Chinese (Taiwan), Croatian, Danish, Dutch, Finnish, Galician, Greek, Indonesian, Kazakh,  Korean, Lithuanian, Norwegian Bokmal, Norwegian Nynorsk, Occitan, Portuguese (Brazil), Russian, Serbian, Slovenian, Spanish, Thai


The latest version of Exo can always be downloaded from the Xfce archives. Grab version 0.11.3 from the below link.


  • SHA-256: 448d7f2b88074455d54a4c44aed08d977b482dc6063175f62a1abfcf0204420a
  • SHA-1: 758ced83d97650e0428563b42877aecfc9fc3c81
  • MD5: c1801052163cbd79490113f80431674a
on June 18, 2017 05:30 PM

Kubuntu 17.04 – Zesty Zapus

The latest 5.10.2 bugfix update for the Plasma 5.10 desktop is now available in our backports PPA for Zesty Zapus 17.04.

Included with the update is KDE Frameworks 5.35

Kdevelop has also been updated to the latest version 5.1.1

Our backports for Xenial Xerus 16.04 also receive updated Plasma and Frameworks, plus some requested KDE applications.

Kubuntu 16.04 – Xenial Xerus

  • Plasma Desktop 5.8.7 LTS bugfix update
  • KDE Frameworks 5.35
  • Digikam 5.5.0
  • Kdevelop 5.1.1
  • Krita 3.1.4
  • Konversation 1.7.2
  • Krusader 2.6

To update, use the Software Repository Guide to add the following repository to your software sources list:


or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade


Upgrade notes:

~ The Kubuntu backports PPA already contains significant version upgrades of Plasma, applications, Frameworks (and Qt for 16.04), so please be aware that enabling the backports PPA for the 1st time and doing a full upgrade will result in a substantial amount of upgraded packages in addition to the versions in this announcement.  The PPA will also continue to receive bugfix and other stable updates when they become available.

~ While we believe that these packages represent a beneficial and stable update, please bear in mind that they have not been tested as comprehensively as those in the main ubuntu archive, and are supported only on a limited and informal basis. Should any issues occur, please provide feedback on our mailing list [1], IRC [2], file a bug against our PPA packages [3], or optionally contact us via social media.

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

on June 18, 2017 01:08 PM

I’m pleased to announce the Community Data Science Collective Dataverse. Our dataverse is an archival repository for datasets created by the Community Data Science Collective. The dataverse won’t replace work that collective members have been doing for years to document and distribute data from our research. What we hope it will do is get our data — like our published manuscripts — into the hands of folks in the “forever” business.

Over the past few years, the Community Data Science Collective has published several papers where an important part of the contribution is a dataset. These include:

Recently, we’ve also begun producing replication datasets to go alongside our empirical papers. So far, this includes:

In the case of each of the first group of papers, where the dataset was a part of the contribution, we uploaded code and data to a website we created. Of course, even if we do a wonderful job of keeping these websites maintained over time, our research group will eventually cease to exist. When that happens, the data will eventually disappear as well.

The text of our papers will be maintained long after we’re gone in the journal or conference proceedings’ publisher’s archival storage and in our universities’ institutional archives. But what about the data? Since the data is a core part — perhaps the core part — of the contribution of these papers, the data should be archived permanently as well.

Toward that end, our group has created a dataverse. Our dataverse is a repository within the Harvard Dataverse where we have been uploading archival copies of datasets over the last six months. All five of the papers described above are uploaded already. The Scratch dataset, due to access control restrictions, isn’t listed on the main page but it’s online on the site. Moving forward, we’ll be populating it with the new datasets we create as well as replication datasets for our future empirical papers. We’re currently preparing several more.

The primary point of the CDSC Dataverse is not to provide you with a way to get our data, although you’re certainly welcome to use it that way and it might help make some of it more discoverable. The websites we’ve created (like the ones for redirects and for page protection) will continue to exist and be maintained. The Dataverse is insurance so that if, and when, those websites go down, our data will still be accessible.

This post was also published on the Community Data Science Collective blog.

on June 18, 2017 02:35 AM

June 17, 2017

I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in my previous post that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you’re using Apache as the web server, but I’m sure it shouldn’t be too different if your web server of choice is something else.

I use msmtp for sending emails from this blog to notify me of comments and upgrades etc. Here I’m going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too.

To begin, we need to install 3 packages:
sudo apt-get install msmtp msmtp-mta ca-certificates
Once these are installed, a default config is required. By default msmtp will look at /etc/msmtprc, so I created that using vim, though any text editor will do the trick. This file looked something like this:

# Set defaults.
defaults
# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt

# Setup WP account's settings.
account <MSMTP_ACCOUNT_NAME>
host smtp.gmail.com
port 587
auth login
user <LOGIN_EMAIL_ADDRESS>
password <PASSWORD>
from <FROM_EMAIL_ADDRESS>
logfile /var/log/msmtp/msmtp.log

account default : <MSMTP_ACCOUNT_NAME>

Any of the uppercase items (e.g. <PASSWORD>) are placeholders that need replacing with values specific to your configuration. The exception is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.

Once that file is saved, we’ll update the permissions on the above configuration file — msmtp won’t run if the permissions on that file are too open — and create the directory for the log file.

sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc
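Since msmtp refuses to run when its config file is readable by group or others, it is worth double-checking the mode after the chmod. The snippet below demonstrates the check on a scratch file standing in for /etc/msmtprc, so it is safe to try anywhere; substitute the real path on your server:

```shell
# Demonstrate the 0600 requirement on a scratch file standing in for
# /etc/msmtprc; swap in the real path when checking your actual config.
conf=$(mktemp)
chmod 0600 "$conf"
stat -c '%a' "$conf"    # prints: 600
rm -f "$conf"
```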

Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don’t get too large as well as keeping the log directory a little tidier. To do this, we create /etc/logrotate.d/msmtp and configure it with the following file. Note that this is optional, you may choose to not do this, or you may choose to configure the logs differently.

/var/log/msmtp/*.log {
rotate 12
monthly
compress
missingok
notifempty
}

Now that the logging is configured, we need to tell PHP to use msmtp by editing /etc/php/7.0/apache2/php.ini and updating the sendmail path from

sendmail_path =

to

sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT_NAME> -t"
Here I ran into an issue where, even though I specified the account name, it wasn’t sending emails correctly when I tested it. This is why the line account default : <MSMTP_ACCOUNT_NAME> was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved, run sudo service apache2 restart, then run php -a and execute the following:

mail ('personal@email.com', 'Test Subject', 'Test body text');

Any errors that occur at this point will be displayed in the output, so diagnosing problems after the test should be relatively easy. If all is successful, you should now be able to use PHP’s sendmail (which WordPress, at the very least, uses) to send emails from your Ubuntu server using Gmail (or Google Apps).

I make no claims that this is the most secure configuration, so if you come across this and realise it’s grossly insecure or something is drastically wrong please let me know and I’ll update it accordingly.

on June 17, 2017 08:32 PM

June 16, 2017

I am going to be honest with you, I am writing this post out of one part frustration and one part guidance to people who I think may be inadvertently making a mistake. I wanted to write this up as a blog post so I can send it to people when I see this happening.

It goes like this: when I follow someone on Twitter, I often get an automated Direct Message which looks something along these lines:

These messages are invariably trying to (a) get me to look at a product they have created, (b) get me to go to their website, or (c) get me to follow them somewhere else, such as LinkedIn.

Unfortunately, there are two similar approaches which I think are also problematic.

Firstly, some people will have an automated tweet go out (publicly) that “thanks” me for following them (as best an automated bot who doesn’t know me can thank me).

Secondly, some people will even go so far as to record a little video that personally welcomes me to their Twitter account. This is usually less than a minute long and again is published as an integrated video in a public tweet.

Why you shouldn’t do this

There are a few reasons why you might want to reconsider this:

Firstly, automated Direct Messages come across as spammy. Sure, I chose to follow you, but if my first interaction with you is advertising, it doesn’t leave a great taste in my mouth. If you are going to DM me, send me a personal message from you, not a bot (or not at all). Definitely don’t try to make that bot seem like a human: much like someone trying to suppress a yawn, we can all see it, and it looks weird.

Pictured: Not hiding a yawn.

Secondly, don’t send out the automated thank-you tweets to your public Twitter feed. This is just noise that everyone other than the people you tagged won’t care about. If you generate too much noise, people will stop following you.

Thirdly, in terms of the personal video messages (and in a similar way to the automated public thank-you messages), in addition to the noise it all seems a little…well, desperate. People can sniff desperation a mile off: if someone follows you, be confident in your value to them. Wow them with great content and interesting ideas, not fabricated personal thank-you messages delivered by a bot.

What underlies all of this is that most people want authentic human engagement. While it is perfectly fine to pre-schedule content for publication (e.g. lots of people use Buffer to have a regular drip-feed of content), automating human engagement just doesn’t hit the mark with authenticity. There is an uncanny valley that people can almost always sniff out when you try to make an automated message seem like a personal interaction.

Of course, many of the folks who do these things are perfectly well intentioned and are just trying to optimize their social media presence. Instead of doing the above things, see my 10 recommendations for social media as a starting point, and explore some other ways to engage your audience well and build growth.

The post Don’t Use Bots to Engage With People on Social Media appeared first on Jono Bacon.

on June 16, 2017 11:46 PM

This week Alan and Martin go flashing. We discuss Firefox multi-process, Minecraft now has cross platform multiplayer, the GPL is being tested in court and binary blobs in hardware are probably a bad thing.

It’s Season Ten Episode Fifteen of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Joey Sneddon are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on June 16, 2017 09:33 PM

I recently did an interview with Jeff Atwood, co-creator of StackExchange and Discourse, about his approach to building platforms, communities, and more.

Read it here.

The post Interview with Jeff Atwood on Building Communities appeared first on Jono Bacon.

on June 16, 2017 12:16 AM

June 15, 2017

Akademy 2017

Yes, I fear I have let my blog go a bit defunct. I have been very busy with a life re-invented after separation from my 18-year marriage. But all is now well in the land of Scarlett Gately Clark. I have settled into my new life in beautiful Payson, AZ. I landed my dream job with Blue Systems, and recently moved to team Neon, where I will be back at what I am good at: Debian-style packaging! I will also be working on Plasma Mobile! Exciting times. I will be attending Akademy, though out of my own pocket, as I was unable to procure funding. (I did not ask KDE e.V. due to my failure to assist with KDE CI.) I don't know what happened with CI; I turned around and it was all done. At least it got done, thanks Ben. I do plan to assist in the future with CI tickets and the like, as soon as the documentation is done!
Harald and I will be hosting a Snappy BoF at Akademy, hope to see you there!

If you find any of my work useful, please consider a donation or become a patron! I have 500 USD a month in student loans that are killing me, and I also need funding for sprints and Akademy. Thank you for any assistance you can provide!
Patreon for Scarlett Clark (me)

on June 15, 2017 08:16 PM

GNOME Web (Epiphany) in Debian 9 "Stretch"

Debian 9 “Stretch”, the latest stable version of the venerable Linux distribution, will be released in a few days. I pushed a last-minute change to get the latest security and feature update of WebKitGTK+ (packaged as webkit2gtk 2.16.3) in before release.

Carlos Garcia Campos discusses what’s new in 2.16, but there are many, many more improvements since the 2.6 version in Debian 8.

Like many things in Debian, this was a team effort from many people. Thank you to the WebKitGTK+ developers, WebKitGTK+ maintainers in Debian, Debian Release Managers, Debian Stable Release Managers, Debian Security Team, Ubuntu Security Team, and testers who all had some part in making this happen.

As with Debian 8, there is no guaranteed security support for webkit2gtk for Debian 9. This time though, there is a chance of periodic security updates without needing to get the updates through backports.

If you would like to help test the next proposed update, please contact me so that I can help coordinate this.

on June 15, 2017 04:02 PM

Apollo 440

Rhonda D'Vine

It's been a while. And currently I shouldn't even post but rather pack my stuff because I'll get the keys to my flat in 6 days. Yay!

But, for packing I need a good sound track. And today it is Apollo 440. I saw them live at the Sundance Festival here in Vienna 20 years ago. It's been a while, but their music still gives me power to pull through.

So, without further ado, here are their songs:

  • Ain't Talkin' 'Bout Dub: This is the song I first stumbled upon, and got me into them.
  • Stop The Rock: This was featured in a movie I enjoyed, with a great dancing scene. :)
  • Krupa: Also a very up-cheering song!

As always, enjoy!


on June 15, 2017 10:27 AM

I've been working on making the Inkscape CI performant on Gitlab because if you aren't paying developers you want to make developing fun. I started with implementing ccache, which got us a 4x build time improvement. The next piece of low hanging fruit seemed to be the installation of dependencies, which rarely change, but were getting installed on each build and test run. The Gitlab CI runners use Docker and so I set out to turn those dependencies into a Docker layer.

The well worn path for doing a Docker layer is to create a branch on Github and then add an automated build on Docker Hub. That leaves you with a Docker Repository that has your Docker layer in it. I did this for the Inkscape dependencies with this fairly simple Dockerfile:

FROM ubuntu:16.04
RUN apt-get update -yqq 
RUN apt-get install -y -qq <long package list>

For Inkscape though we'd really like to not set up another service and accounts and permissions. Which led me to Gitlab's Container Registry feature. I took the same Git branch and added a fairly generic .gitlab-ci.yml file that looks like this:


image: docker:latest

services:
  - docker:dind

variables:
  IMAGE_TAG: ${CI_REGISTRY_IMAGE}/${CI_COMMIT_REF_NAME}

build:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}
    - docker build --pull -t ${IMAGE_TAG} .
    - docker push ${IMAGE_TAG}

That tells the Gitlab CI system to build a Docker layer with the same name as the Git branch and put it in the project's container registry. For Inkscape you can see the results here:

We then just need to change our CI configuration for the Inkscape CI builds so that it uses our new image:

image: registry.gitlab.com/inkscape/inkscape-ci-docker/master

Overall the results were saving approximately one to two minutes per build. Not the drastic results I was hoping for, but this is likely to be caused by the builders being more IO constrained than CPU constrained, so uncompressing the layer is roughly the same cost as installing the packages. This still results in a 10% savings in total pipeline time. The bigger unexpected benefit is that it has cleaned up the CI build logs to where the first page starts the actual Inkscape build instead of having to scroll through pages of dependency installation (old vs. new).

on June 15, 2017 05:00 AM

June 14, 2017

In my last blog, I described the plan to hold a meeting in Zurich about the OpenAg Food Computer.

The Meetup page has been gathering momentum but we are still well within the capacity of the room and catering budget so if you are in Zurich, please join us.

Thanks to our supporters

The meeting now has sponsorship from three organizations: Project 21 at ETH, the Debian Project and the Free Software Foundation Europe.

Sponsorship funds help with travel expenses and refreshments.

Food is always in the news

In my previous blog, I referred to a number of food supply problems that have occurred recently. There have been more in the news this week: a potential croissant shortage in France due to the rising cost of butter and Qatar's efforts to air-lift 4,000 cows from the US and Australia, among other things, due to the Saudi Arabia embargo.

The food computer isn't an immediate solution to these problems but it appears to be a helpful step in the right direction.

on June 14, 2017 07:53 PM

June 13, 2017

This is the second post in the series about building u-boot based gadget snaps, following Building u-boot gadget snap packages from source.

If you have read the last post in this series, you have likely noticed that there is a uboot.patch file being applied to the board config before building the u-boot binaries. This post will take a closer look at this patch.

As you might know already, Ubuntu Core will perform a fully automatic roll-back of upgrades of the kernel or the core snap (rootfs), if it detects that a reboot after the upgrade has not fully succeeded. If an upgrade of the kernel or core snap gets applied, snapd sets a flag in the bootloader configuration called “snap_mode=” and additionally sets the “snap_try_core=” and/or “snap_try_kernel=” variables.

To set these flags and variables that the bootloader should be able to read at next boot, snapd will need write access to the bootloader configuration.
Now, u-boot is the most flexible of all bootloaders: the configuration can live in a uEnv.txt file, in a boot.scr or boot.ini script on a filesystem, in raw space on the boot media, on some flash storage dedicated to u-boot, or even a combination of these (and I surely forgot other variations in that list). This setup can vary from board to board and there is no actual standard.

Since it would be a massive amount of work and code to support all possible variations of u-boot configuration management in snapd, the Ubuntu Core team had to decide on one default process and pick a standard here.

Ubuntu Core is designed with completely unattended installations in mind, being the truly rolling Ubuntu, it should be able to upgrade itself at any time over the network and should never corrupt any of its setup or configuration, not even when a power loss occurs in the middle of an update or while the bootloader config is updated. No matter if your device is an embedded industrial controller mounted to the ceiling of a multi level factory hall, a cell tower far out in the woods or some floating sensor device on the ocean, the risk of corrupting any of the bootloader config needs to be as minimal as possible.

Opening a file, pulling it to RAM, changing it, then writing it to a filesystem cache and flushing that in the last step is quite a time-consuming thing. The time window where the system is vulnerable to corruption due to power outage is quite big. Instead we want to atomically toggle a value; preferably directly on disk with no caches at all. This cuts the potential corruption time down to the actual physical write operation, but also rules out most of the file based bits from the above list (uEnv.txt or boot.scr/.ini) and leaves us with the raw options.

That said, we can not really enforce an additional partition for a raw environment; a board might have a certain boot process that requires a very specific setup of partitions shipping binary blobs from the vendor before even getting to the bootloader (e.g. the dragonboard-410c: Qualcomm requires 8 partitions with different blobs to initialize the hardware before even getting to u-boot.bin). To not exclude such boards we need to find a more generic setup. The solution here is a compromise between filesystem based and raw … we create an img file with fixed size (which allows the atomic writing we want) but put that on top of a vfat partition (our system-boot partition that also carries kernel, initrd and dtb) for biggest flexibility.

To make it easier for snapd and the user space side, we define a fixed size (the same size on all boards) for this img file. We also tell u-boot and the userspace tools to use redundancy for this file which allows the desired atomic writing.

Let's move on with a real-world example, looking at a board I recently created a gadget snap for [1].

I have an old Freescale SabreLite (IMX6) board lying around here; its native SATA controller and gigabit ethernet make it a wonderful target device for e.g. a NAS or a really fast Ubuntu Core based Nextcloud box.

A little research shows it uses the nitrogen6x configuration from the u-boot source tree which is stored in include/configs/nitrogen6x.h

To find the currently used environment setup for this board we just grep for “CONFIG_ENV_IS_IN” in that file and find the following block:

#if defined(CONFIG_SABRELITE)
#define CONFIG_ENV_IS_IN_MMC
#else
#define CONFIG_ENV_IS_IN_SPI_FLASH
#endif

So this board defines a raw space on the MMC to be used for the environment if we build for the SabreLite, but we want to use CONFIG_ENV_IS_IN_FAT with the right parameters to make use of an uboot.env file from the first vfat partition on the first SD card.

Let's tell this in the config:

 #if defined(CONFIG_SABRELITE)
-#define CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT

If we just set this we’ll run into build errors though, since the CONFIG_ENV_IS_IN_FAT also wants to know which interface, device and filename it should use:

 #if defined(CONFIG_SABRELITE)
-#define CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
+#define FAT_ENV_INTERFACE "mmc"
+#define FAT_ENV_DEVICE_AND_PART "1"
+#define FAT_ENV_FILE "uboot.env"

So here we tell u-boot that it should use mmc device number 1 and read a file called uboot.env.

FAT_ENV_DEVICE_AND_PART can actually take a partition number, but if we do not set it, it will try to automatically use the very first partition found … (so “1” is equivalent to “1:1” in this case … on something like the dragonboard where the vfat is actually the 8th partition we use “1:8”).

While the above patch would already work with some uboot.env file, it would not yet work with the one we need for Ubuntu Core. Remember the atomic writing thing from above? This requires us to set the CONFIG_SYS_REDUNDAND_ENVIRONMENT option too (note I did not typo this, the option is really called “REDUNDAND” for whatever reason).
Setting this option tells u-boot that there is a different header on the file and that write operations should be done atomically.

Ubuntu Core defaults to a fixed file size for uboot.env. We expect the file to be exactly 128k big, so lets find the “CONFIG_ENV_SIZE” option in the config file and adjust it too if it does define a different size:

 /* Environment organization */
-#define CONFIG_ENV_SIZE (8 * 1024)
+#define CONFIG_ENV_SIZE (128 * 1024)
+#define CONFIG_SYS_REDUNDAND_ENVIRONMENT

 #if defined(CONFIG_SABRELITE)
-#define CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
+#define FAT_ENV_INTERFACE "mmc"
+#define FAT_ENV_DEVICE_AND_PART "1"
+#define FAT_ENV_FILE "uboot.env"

Trying to build the above will actually end up with a build error complaining that fat writing is not enabled, so we will have to add the CONFIG_FAT_WRITE option too …

One other bit that Ubuntu Core expects is that we can load a proper initrd.img without having to mangle or modify it in the kernel snap (by e.g. making it a uInitrd or whatnot), so we need to define the CONFIG_SUPPORT_RAW_INITRD option as well since it is not set by default for this board.

Our final patch now looks like:

 /* Environment organization */
-#define CONFIG_ENV_SIZE (8 * 1024)
+#define CONFIG_ENV_SIZE (128 * 1024)
+#define CONFIG_SYS_REDUNDAND_ENVIRONMENT
+#define CONFIG_FAT_WRITE
+#define CONFIG_SUPPORT_RAW_INITRD

 #if defined(CONFIG_SABRELITE)
-#define CONFIG_ENV_IS_IN_MMC
+#define CONFIG_ENV_IS_IN_FAT
+#define FAT_ENV_INTERFACE "mmc"
+#define FAT_ENV_DEVICE_AND_PART "1"
+#define FAT_ENV_FILE "uboot.env"


With this we are now able to build a u-boot.bin that will handle the Ubuntu Core uboot.env file from the system-boot partition, read and write the environment from there and allow snapd to modify the same file from user space on a booted system when kernel or core snap updates occur.

The actual uboot.env file needs to be created using the “mkenvimage” tool with the “-r” (redundant) and “-s 131072” (128k size) options, from an input file. In the branch at [1] you will find the call of this command in the snapcraft.yaml file in the “install” script snippet. It uses the uboot.env.in textfile that stores the default environment we use …
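For illustration only, here is a rough Python sketch of the file layout as I understand it (my own reconstruction: a little-endian CRC32 over the payload, a single flags byte marking the active copy, then NUL-separated key=value pairs padded to the fixed size). The variable names in the example dict are made up; use the real mkenvimage for any actual image:

```python
import binascii
import struct

ENV_SIZE = 128 * 1024   # the fixed size Ubuntu Core expects (mkenvimage -s 131072)
HEADER_LEN = 5          # 4-byte CRC32 + 1-byte flags in the redundant layout

def make_redundant_env(entries, size=ENV_SIZE):
    """Build one copy of a redundant-format u-boot environment blob,
    approximating `mkenvimage -r -s 131072 -o uboot.env uboot.env.in`."""
    payload = b"".join(
        k.encode() + b"=" + v.encode() + b"\x00" for k, v in entries.items()
    )
    payload = payload.ljust(size - HEADER_LEN, b"\x00")
    crc = binascii.crc32(payload) & 0xFFFFFFFF
    # flags byte: marks which of the two redundant copies is active
    return struct.pack("<IB", crc, 1) + payload

blob = make_redundant_env({"snap_mode": "", "snap_core": "core_x1.snap"})
assert len(blob) == ENV_SIZE
```

The fixed size and the CRC header are what make the atomic toggle described earlier possible: u-boot and snapd can verify a copy before trusting it, and a write never changes the file's length or location on disk.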

The next post in this series will take a closer look at the contents of this uboot.env.in file, what we actually need in there to achieve proper rollback handling and how to obtain the default values for it.

If you have any questions about the process, feel free to ask here in the comments or open a thread on https://forum.snapcraft.io in the device category.

[1] https://github.com/ogra1/sabrelite-gadget

on June 13, 2017 11:53 AM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, about 182 work hours have been dispatched among 11 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change and we are thus still a little behind our objective.

The security tracker currently lists 44 packages with a known CVE and the dla-needed.txt file lists 42. The number of open issues is close to last month's.

Thanks to our sponsors

New sponsors are in bold (none this month unfortunately).


on June 13, 2017 07:19 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #510 for the week of June 5 – 11, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • Athul Muralidhar
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on June 13, 2017 05:06 AM

June 12, 2017

Welcome to the second Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

OpenStack Distribution

Stable Releases

The current set of OpenStack Newton point releases have been released:


The next cadence cycle of stable fixes is underway – the current candidate list includes:

Cinder: RBD calls block entire process (Kilo)

Cinder: Upload to image does not copy os_type property (Kilo)

Swift: swift-storage processes die if rsyslog is restarted (Kilo, Mitaka)

Neutron: Router HA creation race (Mitaka, Newton)

We’ll also sweep up any new stable point releases across OpenStack Mitaka, Newton and Ocata projects at the same time:




Development Release

x86_64, ppc64el and s390x builds of Ceph 12.0.3 (the current Luminous development release) are available for testing via PPA whilst misc build issues are resolved with i386 and armhf architectures:


OpenStack Pike b2 was out last week; dependency updates have been uploaded (including 5 new packages) and core project updates are being prepared this week, pending processing of new packages in Ubuntu Artful development.

OpenStack Snaps

We’re really close to a functional all-in-one OpenStack cloud using the OpenStack snaps – work is underway on the nova-hypervisor snap to resolve some issues with use of sudo by the nova-compute and neutron-openvswitch daemons. Once this work has landed expect a more full update on efforts to-date on the OpenStack snaps, and how you can help out with snapping the rest of the OpenStack ecosystem!

If you want to give the current snaps a spin to see what’s possible checkout snap-test.

Nova LXD

Work to support the new LXD feature allowing multiple storage backends has landed in nova-lxd. Support for LXD using storage pools has also been added to the nova-compute and lxd charms.

The Tempest experimental gate is now functional again (hint: use ‘check experimental’ on a Gerrit review). Work is also underway to resolve issues with Neutron linuxbridge compatibility in OpenStack Pike (raised by the OpenStack-Ansible team – thanks!), including adding a new functional gate check for this particular networking option.

OpenStack Charms

Deployment Guide

The charms team will be starting work on the new OpenStack Charms deployment guide in the next week or so; if you’re an OpenStack Charm user and would like to help contribute to a best practice guide to cover all aspects of building an OpenStack cloud using MAAS, Juju and the OpenStack Charms we want to hear from you!  Ping jamespage in #openstack-charms on Freenode IRC or attend our weekly meeting to find out more.

Stable bug backports

If you have bugs that you’d like to see backported to the current stable charm set, please tag them with the ‘stable-backport’ tag (and they will pop-up in the right place in Launchpad) – you can see the current pipeline here.

We’ve had a flurry of stable backports of the last few weeks to fill in the release gap left when the project switched to a 6 month release cadence so be sure to update and test out the latest versions of the OpenStack charms in the charm store.

IRC (and meetings)

You can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.


on June 12, 2017 01:41 PM

June 11, 2017

The Fleb 100K Special

Stuart Langridge

Fleb reviews mechanical puzzles on YouTube. I subscribe to his puzzle review channel and it’s jolly interesting. Today, in celebration of his hitting 100,000 subscribers1, he published a video reviewing the Mini Puzzle Box (sold out, as I write this) and, more intriguingly, a link to a “100k special”, a puzzle produced by Fleb for viewers to look at. It’s at www.flebpuzzles.com/100kspecial2 and it presents the first in a series of four puzzles, for channel viewers.

These are “Puzzle Hunt”-style puzzles; that is, the answer to the puzzle is a word or phrase. Here, I’ll show you how I solved each. Be warned: if you’re looking to solve these yourself, stop reading now. Really.

The Opening

Puzzle 1, at www.flebpuzzles.com/100kspecial, is entitled “The Opening” and consists of a number of clues about puzzles. Each of these clues indicates a puzzle which has been reviewed on Fleb’s channel, but with a single letter alteration. The bracketed letters indicate the length of the puzzle name. So, to make a start on solving this, list all the puzzles reviewed on the channel with the length of the puzzle names, and that’ll help. For example, “This puzzle was about the pigs owned by one of the United States’ greatest presidents. I think it was called the (7 4) puzzle!” suggests The Lincoln Logs Puzzle because its name is of length 7 and 4 (“Lincoln Logs”).

But the clue mentions pigs, and there are no pigs involved? What gives? Well, this is the next step; each puzzle name has a letter changed in it to better match the clue. So LINCOLN LOGS becomes LINCOLN HOGS, and now it’s about the pigs owned by an American President… and the changed letter is “H”. That’s important. The clues in order have these answers:

  • …the last moment of a punch (4 3): CAST BOX → CASH BOX3, or WIT’S END → HIT’S END4
  • …a small chirping insect which is located directly to the right of the solver (4 7): CAST CRICKET → EAST CRICKET
  • …the strange hybrid of a young man and a buzzing insect (3 3): BEE BOX → BEE BOY
  • …a four sided geometric shape embedded in something found on a piece of clothing or a mattress (6 2 3 3): SQUARE IN THE BAG → SQUARE IN THE TAG
  • …the pigs owned by one of the United States’ greatest presidents (7 4): LINCOLN LOGS → LINCOLN HOGS
  • …a rectangular tile and something that connects two points (5 & 4): PANEL & LING → PANEL & LINE
  • …a score of payments for monthly living accommodations (2 4 9): 20 CENT PUZZLEBOX → 20 RENT PUZZLEBOX
  • …a spiral (4): GYRO → GYRE
  • …a brave machine that tells you how long you have until you have to move your car. It was near the sea (4 5 7 5): GOLD COAST PARKING METER → BOLD COAST PARKING METER
  • …the final, cute, regular three dimensional figure with all edges the same side (4 4): CAST CUBY → LAST CUBY
  • …a light source that was run on a black rock (4 2 4): LUMP OF COAL → LAMP OF COAL
  • …a man who checks gas gauges (9): METERMASS??5 (N)
  • …small breads designed for small cats (6 8): BITTEN BISCUITS → KITTEN BISCUITS

Take all the changed letters in order and you get HEY THERE BLANK6. And the puzzle’s named “The Opening”, and how does Fleb open every video? With “Hey there, puzzlers!”. So BLANK is PUZZLERS, and our link to the second puzzle of the 100K special must therefore be www.flebpuzzles.com/PUZZLERS.

The Spoiler Break

This second puzzle has a bunch of pictures where each picture has an associated phrase, and then separately a list of clues for “spoilers” from film and TV history. Our job is to match up the phrases with the spoilers, and that will give us the pictures in order.

Each picture is a 7×2 grid in which one or more coloured squares is placed: one example (the first example) looks roughly like this:

So first, the list of spoiler clues and their answers7:

  • He killed Dumbledore: SNAPE (from the Harry Potter series)
  • He was dead the whole time: BRUCE (Willis from The Sixth Sense)
  • He never existed and was a figment of the imagination: TYLER (Durden from Fight Club)
  • It was his sled’s name: ROSEBUD (from Citizen Kane)
  • It was this planet the whole time: EARTH (from Planet of the Apes)
  • He was Keyser Soze: VERBAL8 (from the Usual Suspects)
  • She shot JR: KRISTEN9 (from Dallas)
  • He is the one: NEO (from The Matrix)

Now we pair those answers with the picture clues:

  • BGREED BAUSE BN BIGHTING: BRUCE (the phrase “Agreed pause in fighting → Truce” with initial letters all replaced by B)
  • FLOWERBUD WITBID THORNBUD: ROSEBUD (the phrase “Flowers with thorns → Roses” with final S replaced by BUD in each word)
  • DOUGHNTIN SHTIN ______ KRETIN: KRISTIN (the phrase “Doughnut shop Krispy Kreme” with the final two letters of each word replaced by TIN)
  • LETTEREO AFTEREO MEO: NEO (letter+eo after+eo m+eo → letter after m + eo → n+eo → neo)
  • NOMPKOP REBPUN YTNPWT EERPT: SNAPE (reverse to give POKEMON NUMBER TWENTY THREE with letters replaced by P; Pokemon 23 is EKANS, which reversed is SNAKE, with one letter replaced by P to give SNAPE)
  • LOOR ROUND IREPLACE: EARTH (the phrase “floor around fireplace: hearth” with initial letters all removed)
  • LYLGE SYLIPED CYL: TYLER (LARGE STRIPED CAT with letter pairs replaced by YL; TIGER with “ig” replaced by “yl” gives TYLER)
  • VYPE VF VEA: VERBAL (the phrase “type of tea: herbal” with initial letters all replaced by V)

This gives us an ordering of (SNAPE/TYLER) BRUCE (SNAPE/TYLER) ROSEBUD EARTH VERBAL KRISTIN NEO. It’s not clear to me which clues the SNAPE and TYLER answers match with and why.10 We’ll go with SNAPE first and TYLER third, because that makes the below answer work.11 So we can take the pictures in that order:

and if we plot the lines drawn by the different colours then we get the following four coloured tracks:

or, slightly fancifully:

the word LOOT, leading us to puzzle 3 at flebpuzzles.com/LOOT.

The Solution

OK, it’s becoming apparent that these puzzles are named after the stages of a Fleb puzzle review: the first puzzle was named the opening, the second the spoiler break, and now the solution. This third puzzle has a series of letter grids, each of which apparently matches with a clue. The letter grids are:


and the clues (of which more later) make it fairly obvious that we’re supposed to extract multiple words from each grid and then use that to work out which clue applies to each grid. Well, I stared at this for a long, long time, trying to find words in the grids, see what the deal was, and so on. The implication from the clues is that we should be extracting at least five words from each grid (one clue refers to “the fourth word”, and then “the final word”, so “final” is more than four) and it’s hard to see how one could consistently remove six or more words from a 5×5 grid, so I figured I was looking for five words. I got nowhere with this for quite some time until it occurred to me that I could brute-force it and see if that gave me an answer. Slightly off the pure puzzler mindset, but, hey, computers are handy. So I wrote a little program that tried every combination of letters from the grids, in this way: we assume that each word takes its first letter(s) from column 1, its second letter(s) from column 2, and so on. So in the fourth grid, above, a word might be S-C-I-NI-LL or S-E-K-N-ER or S-X-N-N-Y or something else… and was it possible to decompose a grid this way into five separate words which used up all the letters?
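I no longer have the exact script, but the brute force can be sketched along these lines (a toy reconstruction under my assumptions about the rule: a word consumes letters column by column, never moving back to an earlier column; the mini-grid and dictionary here are made up for demonstration, not the actual puzzle grids):

```python
from collections import Counter

def consume(word, cols, i=0, j=0):
    """Try to spell `word` from the per-column letter pools in `cols`,
    taking letters left to right (a later letter never comes from an
    earlier column). Mutates `cols` in place on success."""
    if i == len(word):
        return True
    for k in range(j, len(cols)):
        if cols[k][word[i]] > 0:
            cols[k][word[i]] -= 1
            if consume(word, cols, i + 1, k):
                return True
            cols[k][word[i]] += 1  # backtrack
    return False

def decompose(cols, dictionary, n_words):
    """Find n_words dictionary words that together use up every letter."""
    if n_words == 0:
        return [] if all(sum(c.values()) == 0 for c in cols) else None
    for w in dictionary:
        trial = [Counter(c) for c in cols]
        if consume(w, trial):
            rest = decompose(trial, dictionary, n_words - 1)
            if rest is not None:
                return [w] + rest
    return None

# Toy example: a 2-row, 3-column grid whose columns hold the letters of CAT and DOG.
cols = [Counter("CD"), Counter("AO"), Counter("TG")]
words = decompose(cols, ["CAT", "DOG", "COG", "DAT"], 2)
assert set(words) == {"CAT", "DOG"}
```

For the real grids you would run the same search with five words and a full word list, and keep only the grids that decompose completely.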

Well, yes it is. My little script printed out this:


Nothing for grid 1, but that’s probably down to its dictionary. So this helps a lot! I was honestly surprised at how successful that was. The clues look like this:

  • LOTUS: Take the 1st letter of the 4th answer and the second to last letter of the final answer.
  • CAST CRICKET: Take the 3rd letter of the 2nd answer and the 1st letter of the final answer.
  • XBOX: Take the 4th and 5th letters of the last answer.
  • LINCOLN LOGS: Take the 5th letter of the 1st answer and the 4th letter of the last answer.

and they pretty clearly match up:

  • LOTUS matches grid 3 (flowers, like a lotus), so Marigold + orchId
  • CAST CRICKET matches grid 4 (sports), so boXing + Tennis
  • XBOX… we’ll come back to
  • and LINCOLN LOGS matches grid 2 (US president names), so cartEr + rooSevelt

But what of grid 1, matching XBOX? Well, knowing now that it’s about the XBox it’s pretty quick to decompose the square by hand into GAMEBOY, GENESIS12, NEOGEO, PLAYSTATION, SATURN13 and so its letters are satURn. And so our whole word for this puzzle is MI+XT+UR+ES, leading us to puzzle 4 at flebpuzzles.com/MIXTURES.

The Comment Section

So, for this fourth puzzle, we’re confronted with a set of eight images. Since I’m now pretty familiar with Fleb’s selection of puzzle reviews, these images look like they’re each portraying a word I recognise; they are DEVIL, some knots which I think clues FIGURE EIGHT, COAL, LOTUS, HORSESHOE, GYRO, SQUARE, and BEE, each of which is relevant to a reviewed puzzle. Fleb also says at the top of this puzzle: “Don’t forget to respond to comments on old videos! It’ll help to pin the best ones!”, hinting that the comments section for these puzzles may have some extra hints. And indeed it does; Fleb has pinned a comment on each of these videos, reading as follows:

  • DEVIL: “-[opposite of good]”
  • FIGURE EIGHT: “-[black ball number] -[small carpet] -[chemical symbol for iron]”
  • COAL: “-[centilitre, for short] -[universal donor blood type]”
  • LOTUS: “-[remove from power]”
  • HORSESHOE: “-[Kentucky Derby racer] -[female pronoun]”
  • GYRO: “-[cowboy Rogers]”
  • SQUARE: “-[they might be burning] -[17th letter]”
  • BEE: “-[exist]”

And, obviously, each comment is a clue to remove some letters from each of the puzzle clues. So DEVIL - EVIL (the opposite of good) gives D. The others:

  • FIGURE EIGHT - EIGHT - RUG - FE → I
  • COAL - CL - O → A
  • LOTUS - OUST → L
  • HORSESHOE - HORSE - SHE → O
  • GYRO - ROY → G
  • SQUARE - EARS - Q → U
  • BEE - BE → E

and so our final puzzle word and the link to the solution is DIALOGUE.
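If you'd rather not do the crossing-out by hand, the removal step is mechanical; a little helper (mine, not part of the puzzle) does it in a few lines:

```python
from collections import Counter

def strip_clued_letters(word, answers):
    # Remove the letters of each clued answer from the puzzle word;
    # whatever survives is the extracted letter.
    remaining = Counter(word)
    for answer in answers:
        remaining -= Counter(answer)
    return ''.join(sorted(remaining.elements()))

print(strip_clued_letters('DEVIL', ['EVIL']))        # D
print(strip_clued_letters('SQUARE', ['EARS', 'Q']))  # U
```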

If you’ve got thoughts or responses to all this, or answers to the couple of clues I didn’t understand, or want to chat more, best to reply to my YouTube comment linking to these solutions on Fleb’s announcement video, here!

Thank you to Fleb for this fun set of puzzles, and congrats on the milestone. What’s next?

  1. nice one Fleb; a hundred thousand! sweet!
  2. since the website was down when I started writing this, I pinged Fleb and said: install WP Super Cache and talk to your hosting provider, and I added a mirror of puzzle 1’s text in a youtube comment
  3. the letter is definitely H, but I’m pretty dubious about my solution; “a punch” is obviously BOX, but is “the last moment” CASH? Should it be HAST? CHST? CASH is the best I can come up with, but I’m not really sure why it’s the answer
  4. Thank you to JDiMase8 for the correction!
  5. from context, this is obviously an N, but how does it match the clue? METERMANS? But that’d be more than one man. A “gas gauge” is presumably a METER, so is a man who checks them a METERNASS? Maybe a METERMASN? Another one I don’t fully understand.
  6. well, you get _EY THERE BLA_K, and then you ping Fleb for help because you don’t get it
  7. warning! spoilers
  8. Láttam a Keyser Söze-t! Te nem értesz?! Keyser Söze!!
  9. not Sue Ellen, which is who I thought it was until I looked it up
  10. One of the nice things about Puzzle-Hunt-style puzzles is that you don’t necessarily have to have worked out every step in order to get the answer; you’ll see that in both this puzzle and the first one I deduced some parts of the answer from context without actually having a proper solution for the clue which gives that part of the answer
  11. Suggestions as to why LYLGE SYLIPED CYL clues TYLER (lots of Ys in there), or NOMPKOP REBPUN YTNPWT EERPT clues SNAPE, are invited. Suggestions and hints were received from Angus Mills, Robertlavigne1, and PANICFAN1227, for which many thanks!
  12. Americans, eh? Tch. It’s called a Sega Megadrive, yes it is
  13. and now you see why my script didn’t get it; not many dictionaries have “neogeo” as a word
on June 11, 2017 11:53 PM

UPDATE: The URLs below are dead. I no longer work at Canonical, and don’t know if file system benchmarking is still part of their kernel testing process.



I’ve been working to implement file system benchmarking as part of the test process that the kernel team applies to every kernel update. These benchmarks are intended to help us spot performance issues. The following announcement, which I just sent to the Ubuntu kernel mailing list, covers the specifics:

[EDIT] Fixed tags to enable the copied email text to flow.


The Ubuntu kernel team has implemented the first of what we hope will be
a growing set of benchmarks which are run against Ubuntu kernel
releases. The first two benchmarks to be included are iozone file system
tests, with and without fsync enabled. These are being run as part of
the testing applied to all kernel releases.

== Disclaimers ==


1. These benchmarks are not intended to indicate any performance metrics
in any real world or end user situations. They are intended to expose
possible performance differences between releases, and not to reflect
any particular use case.

2. Fixes for file system bugs reduce performance in some cases.
Performance decreases between releases may be a side effect of fixing
bugs, and not a bug in themselves.

3. While assessments of performance are valuable, they are not the only
criteria that should be used to select a file system. In addition to
benchmarks, file systems must be tested for a variety of use cases and
verified for correctness under a variety of conditions.

== General Information ==

1. The top level benchmarking results page is located here:
This page is linked from the top level index at kernel.ubuntu.com

2. The tests are run on the same bare-metal hardware for each release,
on spinning magnetic media.

3. Test partitions are sized at twice system memory size to prevent the
entire test data set from being cached.

4. File systems tested are ext2, ext3, ext4, xfs, and btrfs.

5. For each release, each test is run on each file system five times,
and then the results are averaged.

== Types of results ==

There are three types of results. To find performance regressions, we
(the Ubuntu kernel team) are primarily interested in the second and
third types.

1. The Iozone test generates charts of the data for each individual file
system type. To navigate to these, select the links under the “Ran” or
“Passed” columns in the list of results for each benchmark, then select
the test name (“iozone”, for example) from that page. The graphs for
each run for each file system type will be available from that page in
the “Graphs” column.

The second and third result sets are generated by the
iozone-results-comparator tool, located here:


2. Charts comparing performance among all tested file systems for each
individual release. To navigate to these, select the links under the
“Ran” or “Passed” columns in the list of results, then select the
“charts” link at the top of that page.

3. Charts comparing different releases to each other. These comparisons
are generated for each file system type, and are linked at the bottom of
the index page for each benchmark. These comparisons include:

3A. Comparison between the latest kernel for each Ubuntu series (i.e.
raring, saucy, etc).

3B. Comparison between the latest kernel for each LTS release.

3C. Comparison of successive versions within each series.

on June 11, 2017 04:03 PM

The Wikipedia Adventure

Benjamin Mako Hill

I recently finished a paper that presents a novel social computing system called the Wikipedia Adventure. The system was a gamified tutorial for new Wikipedia editors. Working with the tutorial creators, we conducted both a survey of its users and a randomized field experiment testing its effectiveness in encouraging subsequent contributions. We found that although users loved it, it did not affect subsequent participation rates.

Start screen for the Wikipedia Adventure.

A major concern that many online communities face is how to attract and retain new contributors. Despite its success, Wikipedia is no different. In fact, researchers have shown that after experiencing a massive initial surge in activity, the number of active editors on Wikipedia has been in slow decline since 2007.

The number of active, registered editors (≥5 edits per month) to Wikipedia over time. From Halfaker, Geiger, and Morgan 2012.

Research has attributed a large part of this decline to the hostile environment that newcomers experience when they begin contributing. New editors often attempt to make contributions that are subsequently reverted by more experienced editors for not following Wikipedia’s increasingly long list of rules and guidelines for effective participation.

This problem has led many researchers and Wikipedians to wonder how to onboard newcomers to the community more effectively. How do you ensure that new editors to Wikipedia quickly gain the knowledge they need in order to make contributions that are in line with community norms?

To this end, Jake Orlowitz and Jonathan Morgan from the Wikimedia Foundation worked with a team of Wikipedians to create a structured, interactive tutorial called The Wikipedia Adventure. The idea behind this system was that new editors would be invited to use it shortly after creating a new account on Wikipedia, and it would provide a step-by-step overview of the basics of editing.

The Wikipedia Adventure was designed to address issues that new editors frequently encountered while learning how to contribute to Wikipedia. It is structured into different ‘missions’ that guide users through various aspects of participation on Wikipedia, including how to communicate with other editors, how to cite sources, and how to ensure that edits present a neutral point of view. The sequence of the missions gives newbies an overview of what they need to know instead of having to figure everything out themselves. Additionally, the theme and tone of the tutorial sought to engage new users, rather than just redirecting them to the troves of policy pages.

Those who play the tutorial receive automated badges on their user page for every mission they complete. This signals to veteran editors that the user is acting in good-faith by attempting to learn the norms of Wikipedia.

An example of a badge that a user receives after demonstrating the skills to communicate with other users on Wikipedia.

Once the system was built, we were interested in knowing whether people enjoyed using it and found it helpful. So we conducted a survey asking editors who played the Wikipedia Adventure a number of questions about its design and educational effectiveness. Overall, we found that users had a very favorable opinion of the system and found it useful.

Survey responses about how users felt about TWA.
Survey responses about what users learned through TWA.

We were heartened by these results. We’d sought to build an orientation system that was engaging and educational, and our survey responses suggested that we succeeded on that front. This led us to ask the question – could an intervention like the Wikipedia Adventure help reverse the trend of a declining editor base on Wikipedia? In particular, would exposing new editors to the Wikipedia Adventure lead them to make more contributions to the community?

To find out, we conducted a field experiment on a population of new editors on Wikipedia. We identified 1,967 newly created accounts that passed a basic test of making good-faith edits. We then randomly invited 1,751 of these users via their talk page to play the Wikipedia Adventure. The rest were sent no invitation. Out of those who were invited, 386 completed at least some portion of the tutorial.

We were interested in knowing whether those we invited to play the tutorial (our treatment group) and those we didn’t (our control group) contributed differently in the first six months after they created accounts on Wikipedia. Specifically, we wanted to know whether there was a difference in the total number of edits they made to Wikipedia, the number of edits they made to talk pages, and the average quality of their edits as measured by content persistence.

We conducted two kinds of analyses on our dataset. First, we estimated the effect of inviting users to play the Wikipedia Adventure on our three outcomes of interest. Second, we estimated the effect of playing the Wikipedia Adventure, conditional on having been invited to do so, on those same outcomes.

To our surprise, we found that in both cases there were no significant effects on any of the outcomes of interest. Being invited to play the Wikipedia Adventure therefore had no effect on new users’ volume of participation either on Wikipedia in general, or on talk pages specifically, nor did it have any effect on the average quality of edits made by the users in our study. Despite the very positive feedback that the system received in the survey evaluation stage, it did not produce a significant change in newcomer contribution behavior. We concluded that the system by itself could not reverse the trend of newcomer attrition on Wikipedia.

Why would a system that was received so positively ultimately produce no aggregate effect on newcomer participation? We’ve identified a few possible reasons. One is that perhaps a tutorial by itself would not be sufficient to counter hostile behavior that newcomers might experience from experienced editors. Indeed, the friendly, welcoming tone of the Wikipedia Adventure might contrast with strongly worded messages that new editors receive from veteran editors or bots. Another explanation might be that users enjoyed playing the Wikipedia Adventure, but did not enjoy editing Wikipedia. After all, the two activities draw on different kinds of motivations. Finally, the system required new users to choose to play the tutorial. Maybe people who chose to play would have gone on to edit in similar ways without the tutorial.

Ultimately, this work shows us the importance of testing systems outside of lab studies. The Wikipedia Adventure was built by community members to address known gaps in the onboarding process, and our survey showed that users responded well to its design.

While it would have been easy to declare victory at that stage, the field deployment study painted a different picture. Systems like the Wikipedia Adventure may inform the design of future orientation systems. That said, more profound changes to the interface or modes of interaction between editors might also be needed to increase contributions from newcomers.

This blog post, and the open access paper that it describes, is a collaborative project with Sneha Narayan, Jake Orlowitz, Jonathan Morgan, and Aaron Shaw. Financial support came from the US National Science Foundation (grants IIS-1617129 and IIS-1617468), Northwestern University, and the University of Washington. We also published all the data and code necessary to reproduce our analysis in a repository in the Harvard Dataverse. Sneha posted the material in this blog post over on the Community Data Science Collective Blog.

on June 11, 2017 02:57 AM

June 10, 2017

On this occasion:
We chat about these topics:
  • Where can you buy a PC with Ubuntu preinstalled?
  • The exodus of developers from Canonical.
  • The Chrome browser has won; what can Mozilla do?
The podcast is available to listen to at:

Ubuntu y otras hierbas S01E05
on June 10, 2017 03:43 PM

The Ubuntu Core snap store is architected as a number of smallish, independent services. In this week's post I want to talk about some of the challenges that come from adopting a distributed architecture, and how we're working to resolve them.


In a typical monolithic architecture, the functionality that you ship is contained within a single codebase. Before deploying a new version of your service you probably run an extensive suite of tests that cover everything from high-level functional tests to low-level unit tests. Writing these tests is mostly a solved problem at this point: if you want to write a functional test that covers a feature residing in several units of code, you can do that; it will probably involve mocks or test doubles, but the techniques are well understood.

A diagram of functional tests covering the interaction between many units in one codebase.

Functional tests are relatively easy to write when all the code being covered lives in the same codebase.

Similarly, tests that exercise the interaction between just two components in a system are easy to write, since everything is in the same codebase: we can write a test that calls the first component, causing it to call the second and return a result. To determine whether the interaction was successful we can make assertions on the result of the operation, as well as rely on the language's run-time checks (although this is one area where Python is not as helpful as a statically typed language).

A diagram of unit tests covering the interaction between two units in one codebase.

Testing interactions between two components in a single codebase is easy too.

The key point to realise here is that even in a dynamically-typed language such as Python, we still rely on the language to do a lot of run-time checks for us. For example:

  • When calling a function you must provide the correct number of arguments.
  • When calling a function with keyword arguments you must spell the argument names correctly.
  • When unpacking the result from a function you must know how many arguments to unpack.

There are exceptions to all of these, but the general point remains: even in a dynamically-typed language, the language runtime catches a lot of low-level issues for you. (In terms of connascence, Python mostly takes care of connascence of name and connascence of position for us.)
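A quick illustration of the run-time checks in question:

```python
def announce(name, greeting='hello'):
    return '%s, %s!' % (greeting, name)

print(announce('world'))  # hello, world!

try:
    announce('world', greetng='hi')  # misspelled keyword argument
except TypeError as err:
    print(err)  # ...got an unexpected keyword argument 'greetng'

try:
    first, second = (1, 2, 3)  # wrong number of values to unpack
except ValueError as err:
    print(err)  # too many values to unpack (expected 2)
```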

The problem arises when we want to test a feature that has units of code in more than one service, and where the communication protocol between those services is something reasonably low-level. For the snap store, we've standardised on HTTP to transport JSON payloads between services. All of a sudden, testing inter-service communications starts to look like this:

A diagram of calls across codebases with HTTP.

Consider a function that makes a request to a remote service with a query, and returns the result. This simplified code-snippet shows how such a function might look:

import requests

def set_thing_on_service(username, thing_name):
    url = 'https://remote-service.internal/api/path'
    json_payload = {
        'username': username,
        'thing': thing_name,
    }
    response = requests.post(url, json=json_payload)
    return response.json()['status']

Of course, a real function would contain a lot more error-checking code. The key thing to notice here is that since we're using HTTP, there's nothing stopping us from sending completely the wrong payload from this function. How do we know that json_payload has been created correctly? How do we know that the service response contains a status key? Clearly we need to write some tests to gain confidence that this code does what we want it to. There are several approaches that can be used here...
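As a sketch of what that extra error checking might look like (this is illustrative only; the injectable post parameter is my addition so the function can be exercised without a network, and the specific checks are one possible choice):

```python
import requests

def set_thing_on_service_checked(username, thing_name, post=requests.post):
    url = 'https://remote-service.internal/api/path'
    json_payload = {
        'username': username,
        'thing': thing_name,
    }
    try:
        response = post(url, json=json_payload, timeout=5)
        response.raise_for_status()  # turn HTTP error statuses into exceptions
    except requests.RequestException as exc:
        raise RuntimeError('remote service call failed: %s' % exc)
    body = response.json()
    # Defend against the remote service changing its response shape.
    if not isinstance(body, dict) or 'status' not in body:
        raise RuntimeError('malformed response payload: %r' % body)
    return body['status']
```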

Mocking everything

This is the approach that we've followed up to this point. The basic idea is that we mock out the code that sends data over HTTP, and instead our mock returns a response as if the remote server had responded to us.

The author of the code above writes a test that looks like this:

import io
from unittest import mock
from requests.packages.urllib3.response import HTTPResponse
import production_code  # contains the production code we're testing.

def test_set_thing_on_service():
    # 1.- Set up a mock so we don't actually send anything out on the wire.
    #     instead, our mock will always return a 200 response with the
    #     response payload we expect.
    with mock.patch('production_code.requests.adapters.HTTPAdapter.send') as mock_requests:
        mock_requests.return_value = HTTPResponse(
            body=io.BytesIO(b'{"status": "OK"}'),
            status=200,
        )

        # 2.- Call the code under test, get the returned value.
        return_value = production_code.set_thing_on_service(
            'username', 'thing')

        # 3.- Make assertions on the returned value and the calls that were
        #     made to the remote service:
        mock_requests.assert_called_once_with(...)
        assert return_value == 'OK'

If this were a real test (and not just an example in a blog post) I'd want to refactor this to hide some of the ugly setup code. Hopefully this example adequately illustrates some of the issues with this approach:

  1. While this test asserts that the requests mock was called with arguments that we expected, we have no guarantee that these are the arguments that the remote service expected. Even if we ensure this is the case when this code is written, if the remote service ever changes its API this test will continue to pass, despite the code now being incorrect.
  2. Similarly, we're making assumptions about what the remote service sends in response to our query.
  3. We're even making assumptions about the fact that the API even exists in the first place - our mock will catch all HTTP requests, regardless of destination url or HTTP method.

Finally, it's likely that the test case is written by the same developer who wrote the production code. This makes it much more likely that any faulty assumptions the developer had while writing the production code will also make it into the test, in effect hiding the issues present.

This approach does catch some issues, particularly when the logic that determines exactly what to send to the remote service is complex. However, with refactoring, that complex code can usually be separated and tested on its own. In my experience, the vast majority of bugs in inter-service communication code end up being issues in the format of the requests and responses, rather than in any higher-level code.

No Mocking

At the other end of the extreme from mocking everything is... mocking nothing. If the service is small enough, then instead of mocking the remote service, we can just run it during our test run. This is particularly useful when:

  • The remote service is stateless - i.e. it has no database or other forms of persistence. Even stateful services can be run if the database is lightweight and easy to set up.
  • The service itself is reasonably lightweight. This means: quick to start, has a reasonably small memory and CPU footprint.
  • The service itself does not require any other services to be running in order to be useful (that way madness lies).

A typical test might look like this:

def test_set_on_service():
    # 1.- Start the remote service if it's not running. Configure local DNS
    #     such that the code under test will talk to this running process.

    # 2.- Call the code under test, grab the returned value...
    return_value = production_code.set_thing_on_service(
        'username', 'thing')

    # 3.- Make assertions on the returned data. There is no mock, so we
    #     can't (easily) check what data was sent, but we know that the
    #     communication happened with the _real_ remote service, so that's
    #     probably good enough.
    assert return_value == 'OK'

    # 4.- Stop the service at the end of the test. Probably use 'addCleanup'
    #     or whatever your test framework of choice supports to ensure this
    #     always runs, even if the above assertion fails.

In reality using something like the excellent testresources to manage the remote service process allows you to minimise the overhead of having to start and stop the service for every test.
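The start/stop bookkeeping itself is simple; here's a stdlib-only sketch, with a sleep process standing in for the real service binary (a real harness would also wait for the service's port to open before running the test body):

```python
import subprocess
import unittest

class RemoteServiceTest(unittest.TestCase):

    def setUp(self):
        # 'sleep 60' stands in for the real service binary here.
        self.service = subprocess.Popen(['sleep', '60'])
        # Cleanups run last-in first-out: terminate the process, then reap it.
        self.addCleanup(self.service.wait)
        self.addCleanup(self.service.terminate)

    def test_service_is_running(self):
        self.assertIsNone(self.service.poll())  # still alive during the test
```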

If you can get away with it, this is an excellent approach to testing interactions between components. In reality, this is rarely a practical option.

A third option

We've found that mocking everything doesn't work, and mocking nothing is great, but rarely practical. Is there a third option, a compromise between the two above approaches?

The first insight towards understanding this third option is that the most common source of problems in inter-service communication is the "presentation layer". Most issues are with how the data sent between services is formatted; a few examples that I'm sure I've committed personally in the course of my career include:

  • Typo-ing a value in the production code, and then typo-ing it in the tests as well (editors are particularly bad at enabling this, especially those that "learn" your typos and then helpfully offer them as completions later on).
  • Forgetting the fact that the remote API requires an additional value (perhaps an HTTP header) to be set, and ignoring that fact in my tests as well.
  • Forgetting that some services are more delicate than others about whether trailing '/' are present on URLs, and requesting the wrong resource in my production and test code.
  • Confusing two remote APIs and calling the wrong one in my production code, then writing the unit test to the same assumptions.

Perhaps a good option to find a middle-ground between "mocking everything" and "mocking nothing" would be if we could somehow ensure that the format of the data being sent to a remote service was correct, but still mock the actual operation of the remote service.

Layered services

Once we start thinking about data validation as being separate from the logic of a service, we can start to structure our services using a more layered approach, where presentation, logic, and persistence code are separated within the codebase:

A diagram of a layered service.
  • The presentation layer deals with the external API the service is exposing. In our case this means HTTP, Flask, and JSON.
  • The logic layer deals with the business logic in the service. This is where the actual work happens, but it deals with data types that are internal to the service (for example, we never pass a Flask request object into the business layer).
  • The persistence layer is where state is stored. Many services have some sort of database (in our case it's almost always Postgresql, but the specific database doesn't matter much). Some services may talk to other micro-services in their persistence layer.

The typical flow of a web request is:

  1. The presentation layer receives the request and validates the payload.
    • The presentation layer might return a response right away. For example, the service might be in maintenance mode, the request might be invalid, etc.
    • If the request is valid, the presentation layer typically converts the request into an internal data-type, and then forwards it to the logic layer.
  2. The logic layer receives a request from the presentation layer and actions the request.
    • Sometimes the logic layer can return a response right away - in the case of a stateless service the response is often something that's calculated on the fly (for example, imagine a signing service that verifies GPG signatures on signed documents).
    • Sometimes this means retrieving something from the database or from a third-party service. In both these cases this involves making a call into the persistence layer.
  3. The persistence layer retrieves the data requested from the database or service in question. The persistence layer usually has to deal with some separate concerns from the rest of the system. For example, it might have to talk to an ORM like sqlalchemy, or it might have to speak HTTP to some remote service.

The stack unwinds as you'd expect it to. The last thing that happens is that the presentation layer finally converts the response into something that Flask understands, and that response is sent out over the wire.

This architecture isn't particularly surprising or controversial, but it gets us one step closer to being able to extract the presentation layer validation code so it can be used elsewhere.

Declarative validation

The next step is to make the presentation layer validation declarative, rather than imperative. That is, we want to transform the usual imperative presentation layer validation:

from flask import request

def my_flask_view():
    payload = request.get_json()
    required_keys = {'thing_one', 'thing_two'}
    for key in required_keys:
        if key not in payload:
            return "Error, missing required key: %s" % key, 400
    # MUCH more code to completely validate the payload here...

...into a declarative form:

from flask import request
from acceptable import validate_body

@validate_body({
    'type': 'object',
    'properties': {
        'thing_one': {'type': 'string'},
        'thing_two': {'type': 'string'},
    },
    'additionalProperties': False,
})
def my_flask_view():
    payload = request.get_json()
    # payload is validated if we get here...

Keen-eyed observers will notice that the declarative specification here is jsonschema. This allows us to create some incredibly powerful specifications - much more so than the above example which simply states that both keys must exist, and their values must be strings.

Introducing Acceptable

The from acceptable import validate_body line is the first import from the acceptable package we've seen so far. What is Acceptable? It's a wrapper around Flask, and contains some of the new technology we've had to create in order to build the new snap store. Of particular interest to this blog post is the fact that it contains the mechanism we're using for inter-service testing.

The validate_body decorator takes a jsonschema specification, and will validate all incoming requests against that schema. Only requests that validate successfully cause the view function to be called. There exists a similar decorator named validate_output that operates on the output of an API view instead of the input.
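To make the mechanics concrete, here's a toy, stdlib-only sketch of what a decorator like this does. This is not acceptable's actual implementation: the real decorator validates against full jsonschema and integrates with Flask, whereas this toy checker supports only string-typed properties and treats every listed property as required:

```python
import functools

def validate_body(schema):
    def decorator(view):
        @functools.wraps(view)
        def wrapper(payload):
            errors = _check(payload, schema)
            if errors:
                # Invalid payloads never reach the view function.
                return {'errors': errors}, 400
            return view(payload)
        return wrapper
    return decorator

def _check(payload, schema):
    # Toy subset of jsonschema: required, string-typed properties only.
    if not isinstance(payload, dict):
        return ['payload must be a JSON object']
    errors = []
    for key, spec in schema.get('properties', {}).items():
        if key not in payload:
            errors.append('missing required key: %s' % key)
        elif spec.get('type') == 'string' and not isinstance(payload[key], str):
            errors.append('%s must be a string' % key)
    return errors

@validate_body({'properties': {'thing_one': {'type': 'string'}}})
def my_view(payload):
    return {'status': 'OK'}, 200

print(my_view({'thing_one': 'hello'}))  # ({'status': 'OK'}, 200)
print(my_view({}))                      # rejected with a 400 and an error list
```

The important property is that the schema is plain data: it can be extracted from the codebase and reused, which is exactly what the service-mock tooling below relies on.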

So far we've built a nicely structured service, but we haven't improved the situation with regards to testing code that wants to integrate with this service at all. The final piece of the puzzle is that acceptable includes a command that can scan a codebase (or multiple codebases) for the validate_body and validate_output decorators, extract them, and build "service mocks" with them.

This gives us a library of mock objects that we can instantiate during a test. Setting up a service mock requires only that the test author specify a response they want the mock to return. Setting up one of these mocks causes the following things to be configured in the background:

  1. The correct URL that the remote service exposes for the API you want to integrate against is mocked. No other URLs are mocked, and only the correct HTTP method is mocked, so you can't accidentally mock out more than you intended to.
  2. Any requests to that URL will be validated against the jsonschema specification. Payloads that fail validation will result in an error response.
  3. The response the test author passes to the service mock will be validated against the validate_output schema. If it fails validation the mock will raise an error, and your test will fail. This prevents you from making faulty assumptions about what a remote service returns in your test code.
  4. All calls to the target service are recorded, so you can still make assertions on what was sent to the target service, as well as what the service responded with.

A test author integrating with an acceptable-enabled service gets to write tests that look like this:

from service_mocks import remote_service

def test_set_on_service(self):
    # 1.- configure the service mock for the specific API we're integrating
    #     against. We pass in the response data we want from that service,
    #     and this step will fail if the response data we provide does not
    #     match the response specification in the target service:
    service_mock = self.useFixture(
        remote_service.api_name(output={'status': 'OK'}))

    # 2.- Call our function under test. If this function sends an invalid
    #     request to the correct url, or sends any request to a different
    #     url then the response you get will be what you'd expect: a 400
    #     and a 404, respectively.
    return_value = production_code.set_thing_on_service(
        'username', 'thing')

    # 3.- Assertions based on the code-under-test's return value work as
    #     you'd expect:
    assert return_value == 'OK'
    # ... and you can also assert based on what calls were made during the
    # test:
    assert len(service_mock.calls) == 1

This is far from perfect, but it's a big step forward compared to anything else I've seen. I'm now much more confident when writing integration code between multiple services - I can be reasonably sure that if my tests pass then at least the presentation of the requests and responses in my tests is accurate. Of course, these tests are only as good as the jsonschema specifications on the target service.

Questions and Answers

Can I use it?

You can, but you probably shouldn't, at least not yet. While we're using it in production, and it's performed well for us, we're still not ready to make any promises about keeping APIs stable, even between minor versions. Additionally, there are still several bugs and issues that need more investigation and engineering work.

Having said that, it's open source, and there's nothing to stop you from using the ideas expressed in acceptable in your own codebases. Today's blog post can be summarised as "If you make your presentation layer validation declarative, then you can extract it and use it to build better service mocks for inter-service testing". There's nothing in the implementation that's particularly tricky. Acceptable does a lot more than just presentation layer validation, and I look forward to writing more about it in the future.
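To make that summary concrete, here is a minimal sketch of the idea: declare validation once, next to the view, and keep the schemas in a registry so the same declarations can later drive both the real service and its test-time mock. The decorator, registry, and type-based validator below are hypothetical, not acceptable's actual API:

```python
import functools

SCHEMAS = {}  # api name -> (input schema, output schema)

def tiny_check(payload, schema):
    """Stand-in for jsonschema: required keys and their expected types."""
    return all(k in payload and isinstance(payload[k], t)
               for k, t in schema.items())

def validated(name, input_schema, output_schema):
    """Declarative presentation-layer validation for a view function."""
    def decorator(view):
        # The schemas live in a registry, so a separate tool can extract
        # them and build service mocks from the same source of truth.
        SCHEMAS[name] = (input_schema, output_schema)

        @functools.wraps(view)
        def wrapper(payload):
            # Enforce the input contract before the view runs...
            if not tiny_check(payload, input_schema):
                return 400, {"error": "bad request"}
            status, body = view(payload)
            # ...and the output contract after it.
            assert tiny_check(body, output_schema), "view broke its contract"
            return status, body
        return wrapper
    return decorator

@validated("set_thing", {"username": str, "thing": str}, {"status": str})
def set_thing(payload):
    return 200, {"status": "OK"}
```

Because the schemas are declared as data rather than buried in imperative checks, extracting them for a mock-building library is a matter of reading the registry.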

What are the known issues?

The largest known issue today is that writing assertions against the configured service mocks is somewhat unpleasant. This is because the service mocks all share a single responses mock object, which in turn means the 'calls' list is shared across all service mock instances. Fixing this is on my personal TODO list, probably by moving away from responses.
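Until that is fixed, one workaround is to filter the shared calls list by URL when asserting. The Call shape below is a simplified stand-in for what the responses library actually records:

```python
from collections import namedtuple

# Simplified version of a recorded call; the real `responses` library
# records richer request/response objects.
Call = namedtuple("Call", ["url", "body"])

all_calls = [
    Call("https://svc-a.example/api", '{"a": 1}'),
    Call("https://svc-b.example/api", '{"b": 2}'),
]

def calls_for(calls, url_prefix):
    """Return only the recorded calls made to one service."""
    return [c for c in calls if c.url.startswith(url_prefix)]
```

Filtering this way keeps assertions scoped to one service even though the underlying recording is global.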

What are the plans for the future?

Acceptable is still being evaluated. It has certainly proved useful, but we need to see how useful it's going to be long-term, and whether it's worth the cost of having another codebase to maintain.

If we decide to keep investing in it, there are a few things I'd like to see fixed:

  • We need some basic documentation. Acceptable isn't hard to use once you know it, but the lack of good documentation is a little unfriendly.
  • The script that extracts jsonschema specifications could be easier to use, and the library it generates could be easier to release (there's currently a few manual steps involved).
  • It would be nice to support HTTP responses with Content-Types other than application/json. For example, some of our APIs expose application/hal+json, and acceptable currently has no support for this.
  • Ideally I'd like to see acceptable converge on a more cohesive set of APIs. It's currently a bit of a "grab-bag" of tools. Ideally we'd turn it into a flask-compatible framework with capital-o Opinions about how to build a service.


Writing and testing robust inter-service communications has been one of the main challenges involved in pursuing a more distributed, less monolithic architecture. Acceptable is very much experimental software at this stage, but it's already providing some benefit, and the ideas are transportable to other web frameworks or even other languages. I hope that while acceptable itself might not be useful to others in its current form, the ideas expressed in this post are interesting, and perhaps spur others into developing similar tools for themselves.

As always, if you have any questions, please do not hesitate to ask.

on June 10, 2017 12:00 PM