October 27, 2016

S09E35 – Red Nun - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Thirty-Five of Season-Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Paul Tansom are connected and speaking to your brain.

We are four, made whole by a new guest presenter.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on October 27, 2016 02:00 PM

This is a guest post by Ryan Sipes, community manager at System76. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com


We would like to introduce you to the newest version of the extremely portable Lemur laptop. Like all System76 laptops, the Lemur ships with Ubuntu, and you can choose between 16.04 LTS and the newest 16.10 release.

About System76

System76 is based out of Denver, Colorado and has been making Ubuntu computers for ten years. Creating great machines born to run Linux is our sole purpose. Members of our team are contributors to many different open source projects and we send our work enabling hardware on our computers upstream, to the benefit of everyone running our favorite operating system.

Our products have been praised as the best machines born to run Linux by fans including Chris Fisher of The Linux Action Show and Leo Laporte of This Week in Tech. We pride ourselves on offering fantastic products and providing first-class support to our users. Our support staff are themselves Linux/Ubuntu users and open source contributors, like Emma Marshall, who is a host on the Ubuntu Podcast.


About the Lemur

This is our 7th generation release of the Lemur, and it’s now 10% faster with the 7th gen Intel processor (Kaby Lake). Loaded with the newest Intel graphics, up to 32GB of DDR4 memory, and a USB Type-C port, this Lemur enables more powerful multitasking on the go.

Weighing in at 3.6 lbs, this beauty is light enough to carry from meeting to meeting, or across campus. The Lemur design is thin, built with a handle grip at the back of the laptop, allowing you to easily grasp your Lemur and rush off to your next location.

The Lemur retains its reputation as the perfect option for those who want a high-quality, portable Linux laptop at an affordable price (starting at only $699 USD).

You can see the full tech specs and other details about the Lemur here.


About the author
Ryan Sipes is the Community Manager at System76. He is a regular guest on podcasts over at Jupiter Broadcasting, like The Linux Action Show and Linux Unplugged. He helped organize the first Kansas Linux Fest and the Lawrence Linux User Group. Ryan is also a longtime Ubuntu user (since Warty Warthog), and an enthusiastic open source evangelist.

on October 27, 2016 12:11 PM

Snapping Cuberite

Ubuntu Insights

This is a guest post by James Tait, Software Engineer at Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com


I’m a father of two pre-teens, and like many kids their age (and many adults, for that matter) they got caught up in the craze that is Minecraft. In our house we adopted Minetest as a Free alternative to begin with, and had lots of fun and lots of arguments! Somewhere along the way, they decided they’d like to run their own server and share it with their friends. But most of those friends were using Windows and there was no Windows client for Minetest at the time. And so it came to pass that I would trawl the internet looking for Free Minecraft server software, and eventually stumble upon Cuberite (formerly MCServer), “a lightweight, fast and extensible game server for Minecraft”.

Cuberite is an actively developed project. At the time of writing, there are 16 open pull requests against the server itself, of which five are from the last week. Support for protocol version 1.10 has recently been added, along with spectator view and a steady stream of bug fixes. It is automatically built by Jenkins on each commit to master, and the resulting artefacts are made available on the website as .tar.gz and .zip files. The server itself runs in-place; that is to say that you just unpack the archive and run the Cuberite binary and the data files are created alongside it, so everything is self-contained. This has the nice side-effect that you can download the server once, copy or symlink a few files into a new directory and run a separate instance of Cuberite on a different port, say for testing.
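
To make that concrete, here is roughly what that run-in-place workflow looks like. The archive name and layout, and the choice of files to symlink, are illustrative assumptions; the alternate port for the test instance lives in the configuration files Cuberite generates on first run:

$ mkdir -p ~/cuberite && tar xzf Cuberite.tar.gz -C ~/cuberite
$ cd ~/cuberite/Server && ./Cuberite    # first run creates its data files alongside the binary
$ # A second instance that shares the read-only bits but keeps its own data
$ mkdir -p ~/cuberite-test && cd ~/cuberite-test
$ ln -s ~/cuberite/Server/Cuberite ~/cuberite/Server/Plugins ~/cuberite/Server/Prefabs .
$ ./Cuberite    # then edit the generated settings to listen on a different port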

All of this sounds great, and mostly it is. But there are a few wrinkles that just made it a bit of a chore:

  • No formal releases. OK, while there are official build artifacts, there are no milestones, no version numbers
  • No package management! No version numbers means no managed package. We just get an archive with a self-contained build directory
  • No init scripts. When I restart my server, I want the Minecraft server to be ready to play, so I need init scripts

Now none of these problems is insurmountable. We can put the work in to build distro packages for each distribution from git HEAD. We can contribute upstart and systemd and sysvinit scripts. We can run a cron job to poll for new releases. But, frankly, it just seems like a lot of work.

In truth I’d done a lot of manual work already to build Cuberite from source, create a couple of independent instances, and write init scripts. I’d become somewhat familiar with the build process, which basically amounted to something like:

$ cd src/cuberite
$ git pull
$ git submodule update --init
$ cd Release
$ make

This builds the release binaries and copies the plugins and base data files into the Server subdirectory, which is what the Jenkins builds then compress and make available as artifacts. I’d then do a bit of extra work: I’ve been running this in a dedicated lxc container, and keeping a production and a test instance running so we could experiment with custom plugins, so I would:

$ cd ../Server
$ sudo cp Cuberite /var/lib/lxc/miners/rootfs/usr/games/Cuberite
$ sudo cp brewing.txt crafting.txt furnace.txt items.ini monsters.ini /var/lib/lxc/miners/rootfs/etc/cuberite/production
$ sudo cp brewing.txt crafting.txt furnace.txt items.ini monsters.ini /var/lib/lxc/miners/rootfs/etc/cuberite/testing
$ sudo cp -r favicon.png lang Plugins Prefabs webadmin /var/lib/lxc/miners/rootfs/usr/share/games/cuberite

Then in the container, /srv/cuberite/production and /srv/cuberite/testing contain symlinks to everything we just copied, and some runtime data files under /var/lib/cuberite/production and /var/lib/cuberite/testing, and we have init scripts to chdir to each of those directories and run Cuberite.
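
Each of those init scripts boils down to very little; a simplified sketch of the production one, using the paths above (a real script would also handle stop/status and drop privileges), is:

#!/bin/sh
# Simplified sketch: run the production Cuberite instance from its working directory
cd /srv/cuberite/production || exit 1
exec /usr/games/Cuberite -d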

All this is fine and could no doubt be moulded into packages for the various distros with a bit of effort. But wouldn’t it be nice if we could do all of that for all the most popular distros in one fell swoop? Enter snaps and snapcraft. Cuberite is statically linked and already distributed as a run-in-place archive, so it’s inherently relocatable, which means it lends itself perfectly to distribution as a snap.
This is the part where I confess to working on the Ubuntu Store and being more than a little curious as to what things looked like coming from the opposite direction. So in the interests of eating my own dogfood, I jumped right in.
Now snapcraft makes getting started pretty easy:

$ mkdir cuberite
$ cd cuberite
$ snapcraft init

And you have a template snapcraft.yaml with comments to instruct you. Most of this is straightforward, but for the version here I just used the current date. With the basic metadata filled in, I moved onto the snapcraft “parts”.

Parts in snapcraft are the basic building blocks for your package. They might be libraries or apps or glue, and they can come from a variety of sources. The obvious starting point for Cuberite was the git source, and as you may have noticed above, it uses CMake as its build system. The snapcraft part is pretty straightforward:

    cuberite:
        plugin: cmake
        source: https://github.com/cuberite/cuberite.git
        build-packages:
            - gcc
            - g++
        snap:
            - -include
            - -lib

That last section warrants some explanation. When I built Cuberite at first, it included some library files and header files from some of the bundled libraries that are statically linked. Since we’re not interested in shipping these files, they just add bloat to the final package, so we specify that they are excluded.

That gives us our distributable Server directory, but it’s tucked away under the snapcraft parts hierarchy. So I added a release part to just copy the full contents of that directory and locate them at the root of the snap:

    release:
        after: [cuberite]
        plugin: dump
        source: parts/cuberite/src/Server
        organize:
            "*": "."

Some projects let you specify the output directory with a --prefix flag to a configure script or similar methods, and won’t need this little packaging hack, but it seems to be necessary here.

At this stage I thought I was done with the parts and could just define the Cuberite app – the executable that gets run as a daemon. So I went ahead and did the simplest thing that could work:

    cuberite:
        command: Cuberite
        daemon: forking
        plugs:
            - network
            - network-bind

But I hit a snag. Although this would work with a traditional package, the snap is mounted read-only, and Cuberite writes its data files to the current directory. So instead I needed to write a wrapper script to switch to a writable directory, copy the base data files there, and then run the server:

#!/bin/bash
for file in brewing.txt crafting.txt favicon.png furnace.txt items.ini monsters.ini README.txt; do
    if [ ! -f "$SNAP_USER_DATA/$file" ]; then
        cp --preserve=mode "$SNAP/$file" "$SNAP_USER_DATA"
    fi
done

for dir in lang Plugins Prefabs webadmin; do
    if [ ! -d "$SNAP_USER_DATA/$dir" ]; then
        cp -r --preserve=mode "$SNAP/$dir" "$SNAP_USER_DATA"
    fi
done

exec "$SNAP"/Cuberite -d

Then add the wrapper as a part:

    wrapper:
        plugin: dump
        source: .
        organize:
            Cuberite.wrapper: bin/Cuberite.wrapper

And update the snapcraft app:

    cuberite:
        command: bin/Cuberite.wrapper
        daemon: forking
        plugs:
            - network
            - network-bind

And with that we’re done! Right? Well, not quite…. While this works in snap’s devmode, in strict mode it results in the server being killed. A little digging in the output from snappy-debug.security scanlog showed that seccomp was taking exception to Cuberite using the fchown system call. Applying some Google-fu turned up a bug with a suggested workaround, which was applied to the two places (both in sqlite submodules) that used the offending system call and the snap rebuilt – et voilà! Our Cuberite server now happily runs in strict mode, and can be released in the stable channel.
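
If you want to see that kind of denial for yourself, the log scanner mentioned above comes from the snappy-debug snap; a minimal session (assuming the snap is available for your release) looks like:

$ sudo snap install snappy-debug
$ sudo snappy-debug.security scanlog
$ # in another terminal: start the strictly confined snap and watch for seccomp denials such as fchown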

My build process now looks like this:

$ vim snapcraft.yaml
$ # Update version
$ snapcraft pull cuberite
$ # Patch two fchown calls
$ snapcraft

I can then push it to the edge channel:

$ snapcraft push cuberite_20161023_amd64.snap --release edge
Revision 1 of cuberite created.

And when people have had a chance to test and verify, promote it to stable:

$ snapcraft release cuberite 1 stable

There are a couple of things I’d like to see improved in the process:

  • It would be nice not to have to edit the snapcraft.yaml on each build to change the version. Some kind of template might work for this
  • It would be nice to be able to apply patches as part of the pull phase of a part

With those two wishlist items fixed, I could fully automate the Cuberite builds and have a fresh snap released to the edge channel on each commit to git master! I’d also like to make the wrapper a little more advanced and add another command so that I can easily manage multiple instances of Cuberite. But for now, this works – my boys have never had it so good!
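
In the meantime, a small script can approximate that automation; the sed expression and the snap file glob below are assumptions based on the date-based version and the build steps described above:

#!/bin/bash
# Rough sketch: rebuild the snap with today's date as the version and push it to the edge channel
set -e
sed -i "s/^version: .*/version: $(date +%Y%m%d)/" snapcraft.yaml
snapcraft clean
snapcraft pull cuberite
# apply the two fchown workaround patches here
snapcraft
snapcraft push cuberite_*_amd64.snap --release edge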

Download the Cuberite Snap

on October 27, 2016 11:17 AM

Since my last article, lots of things have happened in the container world! Instead of using LXC, I find myself using the next great thing much more now, namely LXC's big brother, LXD.

As some people asked me, here's my trick to make containers use my host as an apt proxy, significantly speeding up deployment times for both manual and juju-based workloads.

Metal As An Attitude

Setting up a cache on the host

First off, we'll want to setup an apt cache on the host. As is usually the case in the Ubuntu world, it all starts with an apt-get:

sudo apt-get install squid-deb-proxy

This will setup a squid caching proxy on your host, with a specific apt configuration listening on port 8000.

Since it is tuned for larger machines by default, I find myself wanting a slightly smaller disk cache: 2GB instead of the default 40GB is way more reasonable on my laptop.

Simply editing the config file takes care of that:

$EDITOR /etc/squid-deb-proxy/squid-deb-proxy.conf
# Look for the "cache_dir aufs" line and replace with:
cache_dir aufs /var/cache/squid-deb-proxy 2000 16 256 # 2 GB

Of course you'll need to restart the service after that:

sudo service squid-deb-proxy restart
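
Before pointing any containers at it, you can check that the proxy is listening and answering on port 8000; the archive URL is just an illustrative request:

sudo ss -ltnp | grep :8000
curl -s -o /dev/null -w "%{http_code}\n" -x http://127.0.0.1:8000 http://archive.ubuntu.com/ubuntu/dists/xenial/Release

Expect an HTTP 200 from the second command if the proxy is healthy.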

Setting up LXD

Compared to the similar procedure on LXC, setting up LXD is a breeze! LXD comes with configuration templates, and so we can conveniently either create a new template if we want to use the proxy selectively, or simply add the configuration to the "default" template, and all our containers will use the proxy, always!

In the default template

Since I never turn the proxy off on my laptop I saw no reason to apply the proxy selectively, and simply added it to the default profile:

export LXD_ADDRESS=$(ifconfig lxdbr0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
echo -e "#cloud-config\napt:\n proxy: http://$LXD_ADDRESS:8000" | lxc profile set default user.user-data -

Of course the first part of the first command line automates the discovery of your IP address, conveniently, as long as your LXD bridge is called "lxdbr0".

Once set in the default template, all LXD containers you start now have an apt proxy pointing to your host set up!
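
To confirm a fresh container really picked the setting up, launch one and look for the proxy entry that cloud-init writes into its apt configuration (the exact file name varies between cloud-init versions, hence the recursive grep):

lxc launch ubuntu:xenial proxy-test
lxc exec proxy-test -- grep -ri proxy /etc/apt/apt.conf.d/
lxc exec proxy-test -- apt update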

In a new template

Should you not want to alter the default template, you can easily create a new one:

export PROFILE_NAME=proxy
lxc profile create $PROFILE_NAME

Then substitute the newly created profile in the previous command line. It becomes:

export LXD_ADDRESS=$(ifconfig lxdbr0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
echo -e "#cloud-config\napt:\n proxy: http://$LXD_ADDRESS:8000" | lxc profile set $PROFILE_NAME user.user-data -

When launching a new container, add this configuration template so that the container benefits from the proxy configuration:

lxc launch ubuntu:xenial -p $PROFILE_NAME -p default


If for some reason you don't want to use your host as a proxy anymore, it is quite easy to revert the changes to the template:

lxc profile set <template> user.user-data

That's it!

As you can see, it is trivial to set an apt proxy on LXD, and squid-deb-proxy makes the host side of the configuration just as simple.

Hope this helps!

on October 27, 2016 06:00 AM

This is the eleventh blog post in this series about LXD 2.0.

LXD logo


First of all, sorry for the delay. It took quite a long time before I finally managed to get all of this going. My first attempts were using devstack which ran into a number of issues that had to be resolved. Yet even after all that, I still wasn’t able to get networking going properly.

I finally gave up on devstack and tried “conjure-up” to deploy a full Ubuntu OpenStack using Juju in a pretty user friendly way. And it finally worked!

So below is how to run a full OpenStack, using LXD containers instead of VMs and running all of this inside a LXD container (nesting!).


This post assumes you’ve got a working LXD setup, providing containers with network access and that you have a pretty beefy CPU, around 50GB of space for the container to use and at least 16GB of RAM.

Remember, we’re running a full OpenStack here, this thing isn’t exactly light!

Setting up the container

OpenStack is made of a lot of different components, doing a lot of different things. Some require additional privileges, so to make our lives easier, we’ll use a privileged container.

We’ll configure that container to support nesting, pre-load all the required kernel modules and allow it access to /dev/mem (as is apparently needed).

Please note that this means that most of the security benefits of LXD containers are effectively disabled for that container. However the containers that will be spawned by OpenStack itself will be unprivileged and use all the normal LXD security features.

lxc launch ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch"
lxc config device add openstack mem unix-char path=/dev/mem

There is a small bug in LXD where it would attempt to load kernel modules that have already been loaded on the host. This has been fixed in LXD 2.5 and will be fixed in LXD 2.0.6 but until then, this can be worked around with:

lxc exec openstack -- ln -s /bin/true /usr/local/bin/modprobe

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get OpenStack going.

lxc exec openstack -- apt-add-repository ppa:conjure-up/next -y
lxc exec openstack -- apt-add-repository ppa:juju/stable -y
lxc exec openstack -- apt update
lxc exec openstack -- apt dist-upgrade -y
lxc exec openstack -- apt install conjure-up -y

And the last setup step is to configure LXD networking inside the container:

lxc exec openstack -- lxd init

Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)

And that’s it for the container configuration itself, now we can deploy OpenStack!

Deploying OpenStack with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy OpenStack.
This is a nice, user friendly, tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec openstack -- sudo -u ubuntu -i conjure-up

  • Select “OpenStack with NovaLXD”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy OpenStack. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Conjure-Up deploying OpenStack

Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Access the dashboard and spawn a container

The dashboard runs inside a container, so you can’t just hit it from your web browser.
The easiest way around this is to setup a NAT rule with:

lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to <IP>

Where “<IP>” is the dashboard IP address conjure-up gave you at the end of the installation.

You can now grab the IP address of the “openstack” container (from “lxc info openstack”) and point your web browser to: http://<container ip>/horizon
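
A quick way to pull out just that address, assuming a reasonably recent LXD client, is to ask for the name and IPv4 columns only:

lxc list openstack -c n4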

This can take a few minutes to load the first time around. Once the login screen is loaded, enter the default login and password (admin/openstack) and you’ll be greeted by the OpenStack dashboard!


You can now head to the “Project” tab on the left and the “Instances” page. To start a new instance using nova-lxd, click on “Launch instance”, select what image you want, network, … and your instance will get spawned.

Once it’s running, you can assign it a floating IP which will let you reach your instance from within your “openstack” container.


OpenStack is a pretty complex piece of software, it’s also not something you really want to run at home or on a single server. But it’s certainly interesting to be able to do it anyway, keeping everything contained to a single container on your machine.

Conjure-Up is a great tool to deploy such complex software, using Juju behind the scene to drive the deployment, using LXD containers for every individual service and finally for the instances themselves.

It’s also one of the very few cases where multiple level of container nesting actually makes sense!

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

on October 27, 2016 01:10 AM

October 26, 2016

Ubuntu Unleashed 2017

Matthew Helmke

I was the sole editor and contributor of new content for Ubuntu Unleashed 2017 Edition. This book is intended for intermediate to advanced users.

on October 26, 2016 10:51 PM

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2017 takes place 4-5 February 2017 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Saturday, 4 February 2017. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 17 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real-Time devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. You are encouraged to apply to more than one dev-room and to consider proposing a lightning talk as well, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at https://fosdem.org/submit

Main track: the deadline for main track presentations is 23:59 UTC 31 October. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes: presentations aimed at developers of free and open source software about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming, volunteers are needed to assist in this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 4 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (text version) to other mailing lists

See the mailing list discussion for more details about volunteering.

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 and 3 February 2017. XMPP Summit web site - please join the mailing list for details.

We are also considering a more general RTC or telephony summit, potentially in collaboration with the XMPP summit. Please join the Free-RTC mailing list and send an email if you would be interested in participating, sponsoring or hosting such an event.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 3 February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

  • All projects: Free-RTC Planet (http://planet.freertc.org), admin contact: planet@freertc.org
  • XMPP: Planet Jabber (http://planet.jabber.org), admin contact: ralphm@ik.nu
  • SIP: Planet SIP (http://planet.sip5060.net), admin contact: planet@sip5060.net
  • SIP (Español): Planet SIP-es (http://planet.sip5060.net/es/), admin contact: planet@sip5060.net

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.


For any private queries, contact us directly using the address fosdem-rtc-admin@freertc.org and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

on October 26, 2016 06:39 AM

October 25, 2016


I’m proud (yes, really) to announce DNS66, my host/ad blocker for Android 5.0 and newer. It’s been around since last Thursday on F-Droid, but it never really got a formal announcement.

DNS66 creates a local VPN service on your Android device, and diverts all DNS traffic to it, possibly adding new DNS servers you can configure in its UI. It can use hosts files for blocking whole sets of hosts or you can just give it a domain name to block (or multiple hosts files/hosts). You can also whitelist individual hosts or entire files by adding them to the end of the list. When a host name is looked up, the query goes to the VPN which looks at the packet and responds with NXDOMAIN (non-existing domain) for hosts that are blocked.

You can find DNS66 here:

F-Droid is the recommended source to install from. DNS66 is licensed under the GNU GPL 3, or (mostly) any later version.

Implementation Notes

DNS66’s core logic is based on another project,  dbrodie/AdBuster, which arguably has the cooler name. I translated that from Kotlin to Java, and cleaned up the implementation a bit:

All work is done in a single thread by using poll() to detect when to read/write stuff. Each DNS request is sent via a new UDP socket, and poll() polls over all UDP sockets, a Device Socket (for the VPN’s tun device) and a pipe (so we can interrupt the poll at any time by closing the pipe).

We literally redirect your DNS servers: all traffic to your configured DNS server addresses is routed to the VPN. The VPN only understands DNS traffic, though, so you might have trouble if your DNS server also happens to serve something else. I plan to change that at some point to emulate multiple DNS servers with fake IPs, but this was a first step to get it working with fallback: Android can now transparently fall back to other DNS servers without having to be aware that they are routed via the VPN.

We also need to deal with timing out queries that we received no answer for: DNS66 stores the query into a LinkedHashMap and overrides the removeEldestEntry() method to remove the eldest entry if it is older than 10 seconds or there are more than 1024 pending queries. This means that it only times out up to one request per new request, but it eventually cleans up fine.


Filed under: Android, Uncategorized
on October 25, 2016 04:20 PM

October 24, 2016

with automatic updates on changes in CodeCommit Git repository

A number of CloudFormation templates have been published that generate AWS infrastructure to support a static website. I’ll toss another one into the ring with a feature I haven’t seen yet.

In this stack, changes to the CodeCommit Git repository automatically trigger an update to the content served by the static website. This automatic update is performed using CodePipeline and AWS Lambda.

This stack also includes features like HTTPS (with a free certificate), www redirect, email notification of Git updates, complete DNS support, web site access logs, infinite scaling, zero maintenance, and low cost.

Here is an architecture diagram outlining the various AWS services used in this stack. The arrows indicate the major direction of data flow. The heavy arrows indicate the flow of website content.

CloudFormation stack architecture diagram

Sure, this does look a bit complicated for something as simple as a static web site. But remember, this is all set up for you with a simple aws-cli command (or AWS Web Console button push) and there is nothing you need to maintain except the web site content in a Git repository. All of the AWS components are managed, scaled, replicated, protected, monitored, and repaired by Amazon.

The input to the CloudFormation stack includes:

  • Domain name for the static website

  • Email address to be notified of Git repository changes

The output of the CloudFormation stack includes:

  • DNS nameservers for you to set in your domain registrar

  • Git repository endpoint URL

Though I created this primarily as a proof of concept and demonstration of some nice CloudFormation and AWS service features, this stack is suitable for use in a production environment if its features match your requirements.

Speaking of which, no CloudFormation template meets everybody’s needs. For example, this one conveniently provides complete DNS nameservers for your domain. However, that also means that it assumes you only want a static website for your domain name and nothing else. If you need email or other services associated with the domain, you will need to modify the CloudFormation template, or use another approach.

How to run

To fire up an AWS Git-backed Static Website CloudFormation stack, you can click this button and fill out a couple input fields in the AWS console:

Launch CloudFormation stack

I have provided copy+paste aws-cli commands in the GitHub repository. The GitHub repository provides all the source for this stack including the AWS Lambda function that syncs Git repository content to the website S3 bucket:

AWS Git-backed Static Website GitHub repo

If you have aws-cli set up, you might find it easier to use the provided commands than the AWS web console.
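
For orientation, the create-stack call looks roughly like the following; the template URL and parameter names here are placeholders, so use the exact copy+paste commands from the GitHub repository:

aws cloudformation create-stack \
  --stack-name example-com-static-website \
  --capabilities CAPABILITY_IAM \
  --template-url https://s3.amazonaws.com/EXAMPLE-BUCKET/aws-git-backed-static-website.yml \
  --parameters ParameterKey=DomainName,ParameterValue=example.com \
               ParameterKey=NotificationEmail,ParameterValue=you@example.com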

When the stack starts up, two email messages will be sent to the address associated with your domain’s registration and one will be sent to your AWS account address. Open each email and approve these:

  • ACM Certificate (2)
  • SNS topic subscription

The CloudFormation stack will be stuck until the ACM certificates are approved. The CloudFront distributions are created afterwards and can take over 30 minutes to complete.

Once the stack completes, get the nameservers for the Route 53 hosted zone, and set these in your domain’s registrar. Get the CodeCommit endpoint URL and use this to clone the Git repository. There are convenient aws-cli commands to perform these functions in the project’s GitHub repository linked to above.
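
A rough sketch of those two steps with aws-cli (the stack name and repository URL are examples; the repository README has the exact commands):

# Nameservers and the CodeCommit endpoint are exposed as stack outputs
aws cloudformation describe-stacks \
  --stack-name example-com-static-website \
  --query 'Stacks[0].Outputs'

# Clone the Git repository using the endpoint URL from those outputs
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/example.com example.com-site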

AWS Services

The stack uses a number of AWS services including:

  • CloudFormation - Infrastructure management.

  • CodeCommit - Git repository.

  • CodePipeline - Passes Git repository content to AWS Lambda when modified.

  • AWS Lambda - Syncs Git repository content to S3 bucket for website

  • S3 buckets - Website content, www redirect, access logs, CodePipeline artifacts

  • CloudFront - CDN, HTTPS management

  • Certificate Manager - Creation of free certificate for HTTPS

  • CloudWatch - AWS Lambda log output, metrics

  • SNS - Git repository activity notification

  • Route 53 - DNS for website

  • IAM - Manage resource security and permissions


As far as I can tell, this CloudFormation stack currently costs around $0.51 per month in a new AWS account with nothing else running, a reasonable amount of storage for the web site content, and up to 5 Git users. This minimal cost is due to there being no free tier for Route 53 at the moment.

If you have too many GB of content, too many tens of thousands of requests, etc., you may start to see additional pennies being added to your costs.

If you stop and start the stack, it will cost an additional $1 each time because of the odd CodePipeline pricing structure. See the AWS pricing guides for complete details, and monitor your account spending closely.


Thanks to Mitch Garnaat for pointing me in the right direction for getting the aws-cli into an AWS Lambda function. This was important because “aws s3 sync” is much smarter than the other currently available options for syncing website content with S3.

Thanks to AWS Community Hero Onur Salk for pointing me in the direction of CodePipeline for triggering AWS Lambda functions off of CodeCommit changes.

Thanks to Ryan Brown for already submitting a pull request with lots of nice cleanup of the CloudFormation template, teaching me a few things in the process.

Some other resources you might find useful:

Creating a Static Website Using a Custom Domain - Amazon Web Services

S3 Static Website with CloudFront and Route 53 - AWS Sysadmin

Continuous Delivery with AWS CodePipeline - Onur Salk

Automate CodeCommit and CodePipeline in AWS CloudFormation - Stelligent

Running AWS Lambda Functions in AWS CodePipeline using CloudFormation - Stelligent

You are welcome to use, copy, and fork this repository. I would recommend contacting me before spending time on pull requests, as I have specific limited goals for this stack and don’t plan to extend its features much more.

Original article and comments: https://alestic.com/2016/10/aws-git-backed-static-website/

on October 24, 2016 10:00 AM

For much of the past year I have been working on a game. No, not just a game: I’ve been working on change. There are 122 million children in the world today who can’t read or write[1]. They will grow up to join the 775 million adults who can’t. Together that’s almost one billion people who are effectively shut off from the information age. How many of them could make the world a better place, given even half a chance?

I’ve been interested in the intersection of open source and education for underprivileged children for quite some time. I even built a Linux distro towards that end. So when Jono Bacon told me about a new XPRIZE contest to build open source software for teaching literacy skills to children in Africa, of course I was interested. And now, a little more than a year later, I have a game that I firmly believe can deliver on that world-changing ambition.

This is where you come in. Don’t worry, I’m not going to ask you to help build my contest entry, though it is already open source (GPLv3) and on github. But the contest entries only cover English and Kiswahili, which is going to leave a very large part of the illiterate population out. That’s not enough; to change the world it needs to be available to the world. Additional languages won’t be part of the contest entry, but they will be a part of making the world a better place.

I designed Phoenicia from the beginning to be able to support as many languages as possible, with as little additional work as possible. But while it may be capable of handling multiple languages, I sadly am not. So I’m reaching out to the community to help me bring literacy to millions more children than I can do by myself. Children who speak your language, live in your community, who may be your own neighbors.

You don’t need to be a programmer; in fact, there shouldn’t be any programming work needed at all. What I need are early reader words, for each language. From there I can show you how to build a locale pack, record audio help, and add any new artwork needed to support your localization. I’m especially looking to those of you who speak French, Spanish and Portuguese, as those languages will carry Phoenicia into many countries where childhood illiteracy is still a major problem.

on October 24, 2016 09:00 AM

October 23, 2016

Happy 20th birthday, KDE!

Valorie Zimmerman

KDE turned twenty recently, which seems significant in a world that changes so fast. Yet somehow we stay relevant, and excited to continue to build a better future.

Lydia asked recently on the KDE-Community list what we were most proud of.

For the KDE community, I'm proud that we continue to grow and change, while remaining friendly, welcoming, and ever more diverse. Our software shows that. As we change and update, some things get left behind, only to re-appear in fresh new ways. And as people get new jobs, or build new families, sometimes they disappear for a while as well. And yet we keep growing, attracting students, hobbyist programmers, writers, artists, translators, designers and community people, and sometimes we see former contributors re-appear too. See more about that in our 20 Years of KDE Timeline.

I'm proud that we develop whole new projects within the community. Recently Peruse, Atelier, Minuet, WikitoLearn, KDEConnect, Krita, Plasma Mobile and neon have all made the news. We welcome projects from outside as well, such as Gcompris, Kdenlive, and the new KDE Store. And our established projects continue to grow and extend. I've been delighted to hear about Calligra Author, for instance, which is for those who want to write and publish a book or article in pdf or epub. Gcompris has long been available for Windows and Mac, but now you can get it on your Android phone or tablet. Marble is on Android, and I hear that Kstars will be available soon.

I'm proud of how established projects continue to grow and attract new developers. The Plasma team, hand-in-hand with the Visual Design Group, continues to blow testers and users away with power, beauty and simplicity on the desktop. Marble, Kdevelop, Konsole, Kate, KDE-PIM, KDElibs (now Frameworks), KOffice (now Calligra), KDE-Edu, KDE-Games, Digikam, kdevplatform, Okular, Konversation and Yakuake, just to mention a few, continue to grow as projects, stay relevant and often be offered on new platforms. Heck, KDE 1 runs on modern computer systems!

For myself, I'm proud of how the KDE community welcomed in a grandma, a non-coder, and how I'm valued as part of the KDE Student Programs team, and the Community Working Group, and as an author and editor. Season of KDE, Google Summer of Code, and now Google Code-in all work to integrate new people into the community, and give more experienced developers a way to share their knowledge as mentors. I'm proud of how the Amarok handbook we developed on the Userbase wiki has shown the way to other open user documentation. And thanks to the wonderful documentation and translation teams, the help is available to millions of people around the world, in multiple forms.

I'm proud to be part of the e.V., the group supporting the fantastic community that creates the software we offer freely to the world.
on October 23, 2016 05:22 AM

October 20, 2016


Kees Cook

My prior post showed my research from earlier in the year at the 2016 Linux Security Summit on kernel security flaw lifetimes. Now that CVE-2016-5195 is public, here are updated graphs and statistics. Due to their rarity, the Critical bug average has now jumped from 3.3 years to 5.2 years. There aren’t many, but, as I mentioned, they still exist, whether you know about them or not. CVE-2016-5195 was sitting on everyone’s machine when I gave my LSS talk, and there are still other flaws on all our Linux machines right now. (And, I should note, this problem is not unique to Linux.) Dealing with knowing that there are always going to be bugs present requires proactive kernel self-protection (to minimize the effects of possible flaws) and vendors dedicated to updating their devices regularly and quickly (to keep the exposure window minimized once a flaw is widely known).

So, here are the graphs updated for the 668 CVEs known today:

  • Critical: 3 @ 5.2 years average
  • High: 44 @ 6.2 years average
  • Medium: 404 @ 5.3 years average
  • Low: 216 @ 5.5 years average

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

on October 20, 2016 11:02 PM

Last year I went to All Things Open for the first time and did a keynote. You can watch the keynote here.

I was really impressed with All Things Open last year and have subsequently become friends with the principal organizer, Todd Lewis. I loved how the team put together a show with the right balance of community and corporation, great content, exhibition and more.

All Things Open 2016 is happening next week and I will be participating in a number of areas:

  • I will be MCing the keynotes for the event. I am looking forward to introducing such a tremendous group of speakers.
  • Jeremy King, CTO of Walmart Labs, and I will be having a fireside chat. I am looking forward to delving into the work they are doing.
  • I will also be participating in a panel about openness and collaboration, and delivering a talk about building a community exoskeleton.
  • It is looking pretty likely I will be doing a book signing with free copies of The Art of Community to be given away thanks to my friends at O’Reilly!

The event takes place in Raleigh, and if you haven’t registered yet, do so right here!

Also, a huge thanks to Red Hat and opensource.com for flying me out. I will be joining the team for a day of meetings prior to All Things Open – looking forward to the discussions!

The post All Things Open Next Week – MCing, Talks, and More appeared first on Jono Bacon.

on October 20, 2016 08:20 PM

Working to make Juju more accessible

Canonical Design Team

In the middle of July the Juju team got together to work towards making Juju more accessible. For now the aim was to reach Level AA compliance, with the intention of reaching AAA in the future.

We started by reading through the W3C accessibility guidelines and distilling each principle into sentences that made sense to us as a team and documenting this into a spreadsheet.

We then created separate columns as to how this would affect the main areas across Juju as a product, namely static pages on jujucharms.com, the GUI and the inspector element within the GUI.




GUI live on jujucharms.com




Inspector within the GUI




Example of static page content from the homepage




The Juju team working through the accessibility guidelines



Tackling this as a team meant that we were all on the same page as to which areas of the Juju GUI were affected by not being AA compliant and how we could work to improve it.

We also discussed the amount of design effort needed for each of the areas that isn’t AA compliant and how long we thought it would take to make improvements.

You can have a look at the spreadsheet we created to help us track the changes that we need to make to Juju to make it more accessible:




Spreadsheet created to track changes and improvements needed to be done



This workflow has helped us manage and scope the tasks ahead and clear up uncertainties that we had about which tasks needed to be done and which requirements need to be met to achieve the level of accessibility we are aiming for.



on October 20, 2016 04:22 PM

The Yakkety Yak 16.10 is released and now you can download the new wallpaper by clicking here. It’s the latest part of the set for the Ubuntu 2016 releases following Xenial Xerus. You can read about our wallpaper visual design process here.

Ubuntu 16.10 Yakkety Yak


Ubuntu 16.10 Yakkety Yak (light version)


on October 20, 2016 02:54 PM

It’s Episode Thirty-Four of Season-Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Emma Marshall are connected and speaking to your brain.

The three amigos are back with our new amiga!

In this week’s show:

Thing Explainer Competition!

  • Prize: Signed copies of “What If?” and “Thing Explainer” by Randall Munroe (creator of XKCD)
  • Question: Listen to the podcast for instructions
  • Send your entries to competition AT ubuntupodcast DOT org. We’ll pick our favourite and announce the winner on the show.
  • Here are some examples to help get you in the groove:


I write words that are read by a computer. Students who want to learn about something ask their computer for part of a book. Their computer talks to another computer over phone lines, and that computer uses the words I’ve written to send them the book part they want. Sometimes students want new types of book parts that they can use to share their learning with other students. I have to work out the right words for the computer to let them do this, and write them. When I can, I share my words with other people so that their computers can send better book parts to their students.


I talk to people about computer things to help make the stuff they make and the stuff we make better. Also I sometimes write things that the computer gets but I am not great at that. We give away a lot of the things we make which is not like the way some other people share their work. It makes me happy inside that we do this.


I help write a group of books that a computer reads and stores. These books make the computer work much better. When a computer has stored the books I help make you can do things with your computer, like write to people and send what you wrote to the other peoples computers. Or you can ask your computer to talk to other computers to learn things, look at moving pictures, listen to music or buy shopping.

The group of books I help write are free for anyone to give to their computer. You are also free to change these books and share those changes with anyone. This way everyone can help make the books even better so your computer can do more for you.


I help people change their computer to something better. I fix things that are broken and make people happy again. I talk to a lot of people about computers all day. I put my heart into every conversation so people feel like they are talking with a human instead of speaking with a pretend human.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on October 20, 2016 02:00 PM

One of the projects proposed for this round of Outreachy is the PGP / PKI Clean Room live image.

Interns, and anybody who decides to start using the project (it is already functional for command line users) need to decide about purchasing various pieces of hardware, including a smart card, a smart card reader and a suitably secure computer to run the clean room image. It may also be desirable to purchase some additional accessories, such as a hardware random number generator.

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Choice of smart card

For standard PGP use, the OpenPGP card provides a good choice.

For X.509 use cases, such as VPN access, there are a range of choices. I recently obtained one of the SmartCard HSM cards, Card Contact were kind enough to provide me with a free sample. An interesting feature of this card is Elliptic Curve (ECC) support. More potential cards are listed on the OpenSC page here.

Choice of card reader

The technical factors to consider are most easily compared across the three options:

  • Software: free/open for keys kept on disk; mostly free/open for smartcard readers, with proprietary firmware in the reader itself
  • Key extraction: possible when the key lives on disk; not generally possible from a smartcard
  • Passphrase compromise attack vectors: without a PIN-pad (on disk or plain reader), hardware or software keyloggers, phishing and user error (unsophisticated attackers); with a PIN-pad reader, exploiting firmware bugs over USB (only sophisticated attackers)
  • Other factors: keys on disk need no extra hardware; readers without a PIN-pad come in a small, USB key form-factor; readers with a PIN-pad have the largest form factor

Some are shortlisted on the GnuPG wiki and there has been recent discussion of that list on the GnuPG-users mailing list.

Choice of computer to run the clean room environment

There are a wide array of devices to choose from. Here are some principles that come to mind:

  • Prefer devices without any built-in wireless communications interfaces, or where those interfaces can be removed
  • Even better if there is no wired networking either
  • Particularly concerned users may also want to avoid devices with opaque micro-code/firmware
  • Small devices (laptops) that can be stored away easily in a locked cabinet or safe to prevent tampering
  • No hard disks required
  • Having built-in SD card readers or the ability to add them easily

SD cards and SD card readers

The SD cards are used to store the master private key, used to sign the certificates/keys on the smart cards. Multiple copies are kept.

It is a good idea to use SD cards from different vendors, preferably not manufactured in the same batch, to minimize the risk that they all fail at the same time.

For convenience, it would be desirable to use a multi-card reader, although the software experience will be much the same if lots of individual card readers or USB flash drives are used.

Other devices

One additional idea that comes to mind is a hardware random number generator (TRNG), such as the FST-01.

Can you help with ideas or donations?

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

on October 20, 2016 07:25 AM

October 19, 2016

Introducting the Canonical Livepatch Service

Ubuntu 16.04 LTS’s 4.4 Linux kernel includes an important new security capability in Ubuntu -- the ability to modify the running Linux kernel code, without rebooting, through a mechanism called kernel livepatch.

Today, Canonical has publicly launched the Canonical Livepatch Service -- an authenticated, encrypted, signed stream of Linux livepatches that apply to the 64-bit Intel/AMD architecture of the Ubuntu 16.04 LTS (Xenial) Linux 4.4 kernel, addressing the highest and most critical security vulnerabilities, without requiring a reboot in order to take effect.  This is particularly amazing for Container hosts -- Docker, LXD, etc. -- as all of the containers share the same kernel, and thus all instances benefit.

I’ve tried to answer below some questions that you might have. As you have others, you’re welcome to add them to the comments below or on Twitter with hashtag #Livepatch.

Retrieve your token from ubuntu.com/livepatch

Q: How do I enable the Canonical Livepatch Service?

A: Three easy steps, on a fully up-to-date 64-bit Ubuntu 16.04 LTS system.

  1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token
  2. Install the canonical-livepatch snap
     $ sudo snap install canonical-livepatch
  3. Enable the service with your token
     $ sudo canonical-livepatch enable [TOKEN]

And you’re done! You can check the status at any time using:

$ canonical-livepatch status --verbose

Q: What are the system requirements?

A: The Canonical Livepatch Service is available for the generic and low latency flavors of the 64-bit Intel/AMD (aka, x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) kernel, which is a Linux 4.4 kernel. Canonical livepatches work on Ubuntu 16.04 LTS Servers and Desktops, on physical machines, virtual machines, and in the cloud. The safety, security, and stability firmly depend on unmodified Ubuntu kernels and network access to the Canonical Livepatch Service (https://livepatch.canonical.com:443). You will also need to apt update/upgrade to the latest version of snapd (at least 2.15).
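
A quick way to verify you meet that snapd requirement before enabling the service:

$ snap version                                   # snapd should report 2.15 or newer
$ sudo apt update && sudo apt install --only-upgrade snapd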

Q: What about other architectures?

A: The upstream Linux livepatch functionality is currently limited to the 64-bit x86 architecture, at this time. IBM is working on support for POWER8 and s390x (LinuxOne mainframe), and there’s also active upstream development on ARM64, so we do plan to support these eventually. The livepatch plumbing for 32-bit ARM and 32-bit x86 is not under upstream development at this time.

Q: What about other flavors?

A: We are providing the Canonical Livepatch Service for the generic and low latency (telco) flavors of the Linux kernel at this time.

Q: What about other releases of Ubuntu?

A: The Canonical Livepatch Service is provided for Ubuntu 16.04 LTS’s Linux 4.4 kernel. Older releases of Ubuntu will not work, because they’re missing the Linux kernel support. Interim releases of Ubuntu (e.g. Ubuntu 16.10) are targeted at developers and early adopters, rather than Long Term Support users or systems that require maximum uptime. We will consider providing livepatches for the HWE kernels in 2017.

Q: What about derivatives of Ubuntu?

A: Canonical livepatches are fully supported on the 64-bit Ubuntu 16.04 LTS Desktop, Cloud, and Server operating systems. On other Ubuntu derivatives, your mileage may vary! These are not part of our automated continuous integration quality assurance testing framework for Canonical Livepatches. Canonical Livepatch safety, security, and stability will firmly depend on unmodified Ubuntu generic kernels and network access to the Canonical Livepatch Service.

Q: How does Canonical test livepatches?

A: Every livepatch is rigorously tested in Canonical's in-house CI/CD (Continuous Integration / Continuous Delivery) quality assurance system, which tests hundreds of combinations of livepatches, kernels, hardware, physical machines, and virtual machines. Once a livepatch passes CI/CD and regression tests, it's rolled out on a canary testing basis, first to a tiny percentage of the Ubuntu Community users of the Canonical Livepatch Service. Based on the success of that microscopic rollout, a moderate rollout follows. And assuming those also succeed, the livepatch is delivered to all free Ubuntu Community and paid Ubuntu Advantage users of the service. Systemic failures are automatically detected and raised for inspection by Canonical engineers. Ubuntu Community users of the Canonical Livepatch Service who want to eliminate the small chance of being randomly chosen as a canary should enroll in the Ubuntu Advantage program (starting at $12/month).

Q: What kinds of updates will be provided by the Canonical Livepatch Service?

A: The Canonical Livepatch Service is intended to address high and critical severity Linux kernel security vulnerabilities, as identified by Ubuntu Security Notices and the CVE database. Note that there are some limitations to the kernel livepatch technology -- some Linux kernel code paths cannot be safely patched while running. We will do our best to supply Canonical Livepatches for high and critical vulnerabilities in a timely fashion whenever possible. There may be occasions when the traditional kernel upgrade and reboot might still be necessary. We’ll communicate that clearly through the usual mechanisms -- USNs, Landscape, Desktop Notifications, Byobu, /etc/motd, etc.

Q: What about non-security bug fixes, stability, performance, or hardware enablement updates?

A: Canonical will continue to provide Linux kernel updates addressing bugs, stability issues, performance problems, and hardware compatibility on our usual cadence -- about every 3 weeks. These updates can be easily applied using ‘sudo apt update; sudo apt upgrade -y’, using the Desktop “Software Updates” application, or Landscape systems management. These standard (non-security) updates will still require a reboot, as they always have.

Q: Can I rollback a Canonical Livepatch?

A: Currently rolling-back/removing an already inserted livepatch module is disabled in Linux 4.4. This is because we need a way to determine if we are currently executing inside a patched function before safely removing it. We can, however, safely apply new livepatches on top of each other and even repatch functions over and over.

Q: What about low and medium severity CVEs?

A: We’re currently focusing our Canonical Livepatch development and testing resources on high and critical security vulnerabilities, as determined by the Ubuntu Security Team. We'll livepatch other CVEs opportunistically.

      Q: Why are Canonical Livepatches provided as a subscription service?

      A: The Canonical Livepatch Service provides a secure, encrypted, authenticated connection, to ensure that only properly signed livepatch kernel modules -- and most importantly, the right modules -- are delivered directly to your system, with extremely high quality testing wrapped around it.

      Q: But I don’t want to buy UA support!

      A: You don’t have to! Canonical is providing the Canonical Livepatch Service to community users of Ubuntu, at no charge for up to 3 machines (desktop, server, virtual machines, or cloud instances). A randomly chosen subset of the free users of Canonical Livepatches will receive their Canonical Livepatches slightly earlier than the rest of the free users or UA users, as a lightweight canary testing mechanism, benefiting all Canonical Livepatch users (free and UA). Once those canary livepatches apply safely, all Canonical Livepatch users will receive their live updates.

      Q: But I don’t have an Ubuntu SSO account!

      A: An Ubuntu SSO account is free, and provides services similar to Google, Microsoft, and Apple for Android/Windows/Mac devices, respectively. You can create your Ubuntu SSO account here.

      Q: But I don’t want to log in to ubuntu.com!

      A: You don’t have to! Canonical Livepatch is absolutely not required to maintain the security of any Ubuntu desktop or server! You may continue to freely and anonymously ‘sudo apt update; sudo apt upgrade; sudo reboot’ as often as you like, and receive all of the same updates, and simply reboot after kernel updates, as you always have with Ubuntu.

      Q: But I don't have Internet access to livepatch.canonical.com:443!

      A: You should think of the Canonical Livepatch Service much like you think of Netflix, Pandora, or Dropbox.  It's an Internet streaming service for security hotfixes for your kernel.  You have access to the stream of bits when you can connect to the service over the Internet.  On the flip side, your machines are already thoroughly secured, since they're so heavily firewalled off from the rest of the world!

      Q: Where’s the source code?

      A: The source code of livepatch modules can be found here.  The source code of the canonical-livepatch client is part of Canonical's Landscape system management product and is commercial software.

      Q: What about Ubuntu Core?

      A: Canonical Livepatches for Ubuntu Core are on the roadmap, and may be available in late 2016, for 64-bit Intel/AMD architectures. Canonical Livepatches for ARM-based IoT devices depend on upstream support for livepatches.

      Q: How does this compare to Oracle Ksplice, RHEL Live Patching and SUSE Live Patching?

      A: While the concepts are largely the same, the technical implementations and the commercial terms are very different:

      • Oracle Ksplice uses its own technology, which is not in upstream Linux.
      • RHEL and SUSE currently use their own homegrown kpatch/kgraft implementations, respectively.
      • Canonical Livepatching uses the upstream Linux Kernel Live Patching technology.
      • Ksplice is free, but unsupported, for Ubuntu Desktops, and only available for Oracle Linux and RHEL servers with an Oracle Linux Premier Support license ($2299/node/year).
      • It’s a little unclear how to subscribe to RHEL Kernel Live Patching, but it appears that you need to first be a RHEL customer, and then enroll in the SIG (Special Interests Group) through your TAM (Technical Account Manager), which requires Red Hat Enterprise Linux Server Premium Subscription at $1299/node/year.  (I'm happy to be corrected and update this post)
      • SUSE Live Patching is available as an add-on to SUSE Linux Enterprise Server 12 Priority Support subscription at $1,499/node/year, but does come with a free music video.
      • Canonical Livepatching is available for every Ubuntu Advantage customer, starting at our entry level UA Essential for $150/node/year, and available for free to community users of Ubuntu.

      Q: What happens if I run into problems/bugs with Canonical Livepatches?

      A: Ubuntu Advantage customers will file a support request at support.canonical.com where it will be serviced according to their UA service level agreement (Essential, Standard, or Advanced). Ubuntu community users will file a bug report on Launchpad and we'll service it on a best effort basis.

      Q: Why does canonical-livepatch client/server have a proprietary license?

      A: The canonical-livepatch client is part of the Landscape family of tools available to Canonical support customers. We are enabling free access to the Canonical Livepatch Service for Ubuntu community users as a mark of our appreciation for the broader Ubuntu community, and in exchange for occasional, automatic canary testing.

      Q: How do I build my own livepatches?

      A: It’s certainly possible for you to build your own Linux kernel live patches, but it requires considerable skill, time, and computing power to produce, and even more effort to comprehensively test. Rest assured that this is the real value of using the Canonical Livepatch Service! That said, Chris Arges blogged a howto for the curious a while back:


      Q: How do I get notifications of which CVEs are livepatched and which are not?

      A: You can, at any time, query the status of the canonical-livepatch daemon using: ‘canonical-livepatch status --verbose’. This command will show any livepatches successfully applied, any outstanding/unapplied livepatches, and any error conditions. Moreover, you can monitor the Ubuntu Security Notices RSS feed and the ubuntu-security-announce mailing list.

      Q: Isn't livepatching just a big ole rootkit?

      A: Canonical Livepatches inject kernel modules to replace sections of binary code in the running kernel. This requires the CAP_SYS_MODULE capability. This is required to modprobe any module into the Linux kernel. If you already have that capability (root does, by default, on Ubuntu), then you already have the ability to arbitrarily modify the kernel, with or without Canonical Livepatches. If you’re an Ubuntu sysadmin and you want to disable module loading (and thereby also disable Canonical Livepatches), simply ‘echo 1 | sudo tee /proc/sys/kernel/modules_disabled’.

      Keep the uptime!
      on October 19, 2016 03:50 PM

      Like each month, here comes a report about the work of paid contributors to Debian LTS.

      Individual reports

      In September, about 152 work hours have been dispatched among 13 paid contributors. Their reports are available:

      • Balint Reczey did 15 hours (out of 12.25 hours allocated + 7.25 remaining, thus keeping 4.5 extra hours for October).
      • Ben Hutchings did 6 hours (out of 12.3 hours allocated + 1.45 remaining, he gave back 7h and thus keeps 9.75 extra hours for October).
      • Brian May did 12.25 hours.
      • Chris Lamb did 12.75 hours (out of 12.30 hours allocated + 0.45 hours remaining).
      • Emilio Pozuelo Monfort did 1 hour (out of 12.3 hours allocated + 2.95 remaining) and gave back the unused hours.
      • Guido Günther did 6 hours (out of 7h allocated, thus keeping 1 extra hour for October).
      • Hugo Lefeuvre did 12 hours.
      • Jonas Meurer did 8 hours (out of 9 hours allocated, thus keeping 1 extra hour for October).
      • Markus Koschany did 12.25 hours.
      • Ola Lundqvist did 11 hours (out of 12.25 hours assigned thus keeping 1.25 extra hours).
      • Raphaël Hertzog did 12.25 hours.
      • Roberto C. Sanchez did 14 hours (out of 12.25h allocated + 3.75h remaining, thus keeping 2 extra hours).
      • Thorsten Alteholz did 12.25 hours.

      Evolution of the situation

      The number of sponsored hours reached 172 hours per month thanks to maxcluster GmbH joining as silver sponsor and RHX Srl joining as bronze sponsor.

      We only need a couple of supplementary sponsors now to reach our objective of funding the equivalent of a full time position.

      The security tracker currently lists 39 packages with a known CVE and the dla-needed.txt file 34. It’s a small bump compared to last month, but almost all issues are assigned to someone.

      Thanks to our sponsors

      New sponsors are in bold.


      on October 19, 2016 10:29 AM

      In several of my recent presentations, I’ve discussed the lifetime of security flaws in the Linux kernel. Jon Corbet did an analysis in 2010, and found that security bugs appeared to have roughly a 5 year lifetime. As in, the flaw gets introduced in a Linux release, and then goes unnoticed by upstream developers until another release 5 years later, on average. I updated this research for 2011 through 2016, and used the Ubuntu Security Team’s CVE Tracker to assist in the process. The Ubuntu kernel team already does the hard work of trying to identify when flaws were introduced in the kernel, so I didn’t have to re-do this for the 557 kernel CVEs since 2011.

      As the README details, the raw CVE data is spread across the active/, retired/, and ignored/ directories. By scanning through the CVE files to find any that contain the line “Patches_linux:”, I can extract the details on when a flaw was introduced and when it was fixed. For example CVE-2016-0728 shows:

       break-fix: 3a50597de8635cd05133bd12c95681c82fe7b878 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2

      This means that CVE-2016-0728 is believed to have been introduced by commit 3a50597de8635cd05133bd12c95681c82fe7b878 and fixed by commit 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2. If there are multiple lines, then there may be multiple SHAs identified as contributing to the flaw or the fix. And a “-” is just short-hand for the start of Linux git history.
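
      A rough shell sketch of this extraction, and of the release lookup described in the next paragraph (this is not the actual analysis script; it assumes a checkout of the Ubuntu CVE Tracker and a Linux kernel git tree in ../linux):

       grep -rh 'break-fix:' active/ retired/ ignored/ | while read -r _ introduced fixed; do
           # "-" means the flaw predates git history, so skip the introduction lookup
           [ "$introduced" != "-" ] && git -C ../linux describe --contains "$introduced"
           git -C ../linux describe --contains "$fixed"
       done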

      Then for each SHA, I queried git to find its corresponding release, and made a mapping of release version to release date, wrote out the raw data, and rendered graphs. Each vertical line shows a given CVE from when it was introduced to when it was fixed. Red is “Critical”, orange is “High”, blue is “Medium”, and black is “Low”:

      CVE lifetimes 2011-2016

      And here it is zoomed in to just Critical and High:

      Critical and High CVE lifetimes 2011-2016

      The line in the middle is the date from which I started the CVE search (2011). The vertical axis is actually linear time, but it’s labeled with kernel releases (which are pretty regular). The numerical summary is:

      • Critical: 2 @ 3.3 years
      • High: 34 @ 6.4 years
      • Medium: 334 @ 5.2 years
      • Low: 186 @ 5.0 years

      This comes out to roughly 5 years lifetime again, so not much has changed from Jon’s 2010 analysis.

      While we’re getting better at fixing bugs, we’re also adding more bugs. And for many devices that have been built on a given kernel version, there haven’t been frequent (or some times any) security updates, so the bug lifetime for those devices is even longer. To really create a safe kernel, we need to get proactive about self-protection technologies. The systems using a Linux kernel are right now running with security flaws. Those flaws are just not known to the developers yet, but they’re likely known to attackers, as there have been prior boasts/gray-market advertisements for at least CVE-2010-3081 and CVE-2013-2888.

      (Edit: see my updated graphs that include CVE-2016-5195.)

      © 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

      on October 19, 2016 04:46 AM

      October 18, 2016

      Plasma’s road ahead

      Sebastian Kügler

      My Plasma Desktop in 2016
      On Monday, KDE’s Plasma team held its traditional kickoff meeting for the new development cycle. We took this opportunity to also look and plan ahead a bit further into the future. In what areas are we lacking, where do we want or need to improve? Where do we want to take Plasma in the next two years?

      Our general direction points towards professional use-cases. We want Plasma to be a solid tool, a reliable work-horse that gets out of the way, allowing its users to get the job done quickly and elegantly. We want it to be faster and of better quality than the competition.

      With these big words out there, let’s have a look at some specifics we talked about.

      Release schedule until 2018

      Our plan is to move from 4 to 3 yearly releases in 2017 and 2018, which we think strikes a nice balance between our pace of development and the stabilization periods around it. Our discussion of the release schedule resulted in the following plan:

      • Plasma 5.9: 31 January 2017
      • Plasma 5.10: May 2017
      • Plasma 5.11: September 2017
      • Plasma 5.12: December 2017
      • Plasma 5.13: April 2018
      • Plasma 5.14 LTS: August 2018

      A cautionary note: we can’t know if everything plays out exactly like this, as the schedule depends to a degree on external factors, such as Qt’s release schedule. This is what we intend to do; it is really our “best guess”. Still, this aligns with Qt’s plans, as they are also looking at an LTS release in summer 2018. So, what will these upcoming releases bring?

      Breeze Look and Feel

      UI and Theming

      The Breeze icon theme will see further completion work and refinements to the details of its existing icons. Icon usage over the whole UI will see more streamlining work as well. We also plan to tweak the Breeze-themed scrollbars a bit, so watch out for changes in that area. A Breeze-themed Firefox theme is planned, as well as more refinement in the widget themes for Qt, GTK, etc. We do not plan any radical changes in the overall look and feel of our Breeze theme, but will further improve and evolve it, both in its light and dark flavors.

      Feature back-log

      The menu button is a first sign of the global menu returning to Plasma
      One thing that many of our users are missing is support for a global menu similar to how MacOS displays application menus outside of the app’s window (for example at the top of the screen). We’re currently working on bringing this feature, which was well-supported in Plasma 4, back in Plasma 5, modernized and updated to current standards. This may land as soon as the upcoming 5.9 release, at least for X11.

      Better support for customizing the locale (the system which shows things like time, currencies, numbers in the way the user expects them) is on our radar as well. In this area, we lost some features in the transition to Frameworks 5, or rather to QLocale, away from kdelibs’ custom, but sometimes incompatible, locale handling classes.


      The next releases overall will bring further improvements to our Wayland session. Currently, Plasma’s KWin brings an almost feature-complete Wayland display server, which already works for many use-cases. However, it hasn’t seen the real-world testing it needs, and it is lacking certain features that our users expect from their X11 session, or new features which we want to offer to support modern hardware better.
      We plan to improve multi-screen rendering on Wayland and the input stack in areas such as relative pointers, pointer confinement, touchpad gestures, wacom tablet support, clipboard management (for example, Klipper). X11 dependencies in KWin will be further reduced with the goal to make it possible to start up KWin entirely without hard X11 dependencies.
      One new feature which we want to offer in our Wayland session is support for scaling the contents of each output individually, which allows users to use multiple displays with vastly varying pixel densities more seamlessly.
      There are also improvements planned around virtual desktops under Wayland, as well as their relation to Plasma’s Activities features. Output configuration as of now is also not complete, and needs more work in the coming months. Some features we plan will also need changes in QtWayland, so there’s some upstream bug-fixing needed, as well.

      One thing we’d like to see to improve our users’ experience under Wayland is to have application developers test their apps under Wayland. It happens still a bit too often that an application ends up running into a code-path that makes assumptions that X11 is used as the display server protocol. While we can run applications in backwards-compatible XWayland mode, applications can benefit from the better rendering quality under Wayland only when actually using the Wayland protocol. (This is mostly handled transparently by Qt, but applications do their own thing, so unless it’s tested, it will contain bugs.)


      Plasma’s Mobile flavor will be further stabilized and its stack cleaned up; we are further reducing the stack’s footprint without losing important functionality. The recently-released Kirigami framework, which allows developers to create convergent applications that work on both mobile and desktop form-factors, will be adjusted to use the new, more light-weight QtQuick Controls 2. This makes Kirigami a more attractive technology to create powerful, yet lean applications that work across a number of mobile and desktop operating systems, such as Plasma Mobile, Android, iOS, and others.

      Discover, Plasma’s software center, integrates online content from the KDE Store; its convergent user interface is provided by the Kirigami framework

      Online Services

      Planned improvements in our integration of online services include dependency handling for assets installed from the store. This will allow us to support installation of meta-themes directly from the KDE Store. We also want to improve our support for online data storage, prioritizing Free services, but also offering support for proprietary services, such as the GDrive support we recently added to Plasma’s feature-set.

      Developer Recruitment

      We want to further increase our contributor base. We plan to work towards an easier on-boarding experience, through better documentation, mentoring and communication in general. KDE is recruiting, so if you are looking for a challenging and worthwhile way to work as part of a team, or on your individual project, join our ranks of developers, artists, sysadmins, translators, documentation writers, evangelists, media experts and free culture activists and let us help each other.

      on October 18, 2016 12:29 PM
      Incomplete bug reports

      Since writing an earlier post on the subject I've continued to monitor new bug reports. I have been very disappointed to see that so many have to be marked as being “incomplete” as they give little information about the problem and don't really give anyone an incentive to work on them and help fix them. So many are very vague about the problem being reported, while some are just an indication that a problem exists. Reports which just say something along the lines of:
      • help
      • bug
      • i don't know
      • dont remember
      don't do a lot to point to the problem that is being reported. Maybe some information can be gleaned from any attached log files, but please, bug reporters, tell us what the problem is, as it will greatly increase the chances of your issue being investigated, fixed or (re)assigned to the correct package. Reporters need to reply when asked for further information about the bug or the version of Ubuntu being used, even if it is to say that, for whatever reason, the problem no longer affects them. And I say to all novice reporters: “Please don't keep the Ubuntu version or flavour that you are using a secret!”

      Bug report or support request?

      Some reports are probably submitted as a desperate measure when help is needed and no-one is around to help. Over the last couple of months I've seen dozens of bug reports being closed as they have "expired" because there has been no response to a request for information within 59 days of the request being made. Obviously Ubuntu users are having problems, but are their issues being resolved? Are those users moving back to Windows or to another Linux distribution because they aren't getting the help they need and don't know how to ask for it?

      Many of the issues that I'm referring to should have been posted initially as support requests at the Ubuntu Forums, Ask Ubuntu or Launchpad Answers and then filed as bug reports once sufficient help and guidance had been obtained and the presence of a bug confirmed.

      A bug with the bug reporting tool ubuntu-bug?

      Sometimes trying to establish the correct package against which to file a bug is a difficult task especially if you are not conversant with the inner workings of Ubuntu. Launchpad can often guide the reporter but it seems many reports are being incorrectly filed against the xorg package in error. Bug #1631748 (ubuntu-bug selects wrong category) seems to confirm this widespread problem. If a bug is reported against the wrong package and no description of the issue is given there is no chance of the issue being investigated.

      Further reading

      The following links will give those who are new to bug reporting some help in filing a good bug report that can be worked on by a bug triager or developer.

      How to Report Bugs
      How to Report Bugs Effectively
      Improving Ubuntu: A Beginners Guide to Filing Bug Reports
      How do I report a bug?

      To the future and some events of September 1973

      In just a couple of weeks I'll no longer have to worry about getting up early for work, fighting my way through the local traffic and aiming for an 8 o'clock start, which is something that I seldom manage to achieve these days. No doubt I'll be able to devote much more time to work on Ubuntu and, who knows, I may well revisit some of the teams and projects that I've left over the past couple of years.

      Looking at Mark Shuttleworth's Wikipedia page it seems that he was born just a week or two after I started my working life in September 1973. A lot has changed since then. We didn't have personal computers or mobile phones and as far as I can remember we managed perfectly well without them. Back then I had very different interests, some of which I've recently returned to but obviously I had no idea what was in store for me around 40 years later.

      Thanks for everything so far Mark!

      zz = Zesty Zapus, a mouse that jumped

      So we now have a code-name for the next Ubuntu release, which Mark has confirmed will be Zesty Zapus. Apparently a zapus is a North American mouse that jumps. So, now that we've reached the end of the alphabet, what next?

      Prediction: There will be much discussion about the code-name for the 17.10 release and its announcement will probably be the most anticipated yet.
      on October 18, 2016 05:16 AM

      From Yakkety To Zesty

      Stephen Michael Kellat

      I've seen Ms. Belkin go ahead and wrap up the Y (Yakkety) season while giving a look ahead to the Z (Zesty) season. I'm afraid I cannot give as much of a report. My participation in the Ubuntu realm has been a bit held back by things beyond my control.

      During the Y season I was stuck at work. I have a hefty commute to work which pretty much wrecks my day when combined with my working hours. My work is considered seasonal, which for a federal civil servant means that it is subject to workload constraints. Apparently we did not have a proper handle on workload this year. The estimate was that our work would be done by a certain date and we would go on "seasonal release" or furlough until we are recalled to duty. We missed that date by quite a long shot. After quite a bit of attrition, angry resignations, people checking into therapy, people developing cardiac issues, and worse, my unit received "seasonal release" only last Friday. Recall could be as soon as two weeks away.

      The only main action I really wanted to handle during Y was to get a backport in of dianara and pumpa if they dropped new releases. I was a little late in doing so, but I just got the backport bug for dianara filed tonight. I kept stating I would wait for furlough to do the testing, but furlough took long enough that a couple of versions of dianara went by in the interim. Folks looking at pump.io have to remember that even a server's website is itself a client, and new features have to be implemented in clients for the main server engine to pass around. The website isn't the point of pump.io; rather, using a client of your choice is, and a list of clients is being maintained.

      I don't really know what the plan is for Z for me. Right now many eyes around the world are focused on the election for President of the United States. People regard that office as the so-called leader of the free world. That person also happens to be head of the civil service in the United States. Neither of the major party candidates have nice plans for my employing agency. Both scare me. A good chunk of the attrition and angry resignations at work has been people fleeing for the safety of the private sector in light of what is expected from either major party candidate.

      Backporting will continue subject to resource restrictions. I remain a student in the Emergency Management and Planning Administration program at Lakeland Community College with graduation expected in May 2017 subject to course availability. Right now I'm working on learning about the Incident Command System and how it is applied in addition to Continuity of Operations.

      Graphic From FEMA's Emergency Management Institute IS-800 Class of the Incident Command System

      Time will tell where things go. Clues are not readily available to me. I wish they were, perhaps...

      on October 18, 2016 02:43 AM

      Welcome to the Ubuntu Weekly Newsletter. This is issue #484 for the weeks October 3 – 16, 2016, and the full version is available here.

      In this issue we cover:

      The issue of The Ubuntu Weekly Newsletter is brought to you by:

      • Elizabeth K. Joseph
      • Chris Guiver
      • Simon Quigley
      • Mary Frances Hull
      • Chris Sirrs
      • And many others

      If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

      Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License

      on October 18, 2016 12:47 AM

      to not breathe

      Sebastian Kügler

      I’ve always loved diving down while snorkeling or swimming, and it’s been intriguing to me how long I can hold my breath, how far and deep I could go just like that. (The answer so far, 14m.)

      Last week, I met with Jeanine Grasmeijer. Jeanine is one of the world’s top freedivers, two times world record holder, 11 times Dutch national record holder. She can hold her breath for longer than 7 minutes. Just last month she dove down to -92m without fins. (For the mathematically challenged, that’s 6.6 times 14m.)

      Diving with Jeanine Grasmeijer
      Jeanine showed me how to not breathe properly.
      We started with relaxation and breathing exercises on dry land. Deep relaxation, breathing using the proper and most effective technique, then holding the breath and recovering.
      In the water, this actually got a bit easier. Water has better pressure characteristics on the lungs, and the mammalian diving reflex helps shut off the airways, leading to a yet more efficient breath hold. A cycle starts with breathing in the water through the snorkel for a few minutes, focusing on a calm, regular, relaxed breathing rhythm. After a few cycles of static apnea (breath holding under water, no movement), I passed the three-minute-mark at 3:10.
      We then moved on to dynamic apnea (swimming a horizontal distance under water on one breath). Jeanine did a careful weight check with me, making sure my position would need as few correction movements as possible while swimming. With a reasonable trim achieved, I swam some 50m, though we mainly focused not on distance, but on finning technique, arm usage and horizontal trim.
      The final exercise in the pool was about diving safety. We went over the procedure to surface an unconscious diver, and get her back to her senses.

      Freediving, as it turns out, is a way to put the world around on pause for a moment. You exist in the here and now, as if the past and future do not exist. The mind is in a completely calm state, while your body floats in a world of weightless balance. As much as diving is a physical activity, it can be a way to enter a state of Zen in the under water world.

      Jeanine has not only been a kind, patient and reassuring mentor to me, but opened the door to a world which has always fascinated and intrigued me. A huge, warm thanks for so much inspiration of this deep passion!

      Harbor porpoise Michael: the cutest whale in the world!

      In other news on the “mammals that can hold their breath really well” topic: I’ve adopted a cute tiny orphaned whale!

      on October 18, 2016 12:10 AM

      October 17, 2016

      Seeking a new role

      Elizabeth K. Joseph

      Today I was notified that I am being laid off from the upstream OpenStack Infrastructure job I have through HPE. It’s a workforce reduction and our whole team at HPE was hit. I love this job. I work with a great team on the OpenStack Infrastructure team. HPE has treated me very well, supporting travel to conferences I’m speaking at, helping to promote my books (Common OpenStack Deployments and The Official Ubuntu Book, 9th and 8th editions) and other work. I spent almost four years there and I’m grateful for what they did for my career.

      But now I have to move on.

      I’ve worked as a Linux Systems Administrator for the past decade and I’d love to continue that. I live in San Francisco so there are a lot of ops positions around here that I can look at, but I really want to find a place where my expertise with open source, writing and public speaking will be used and appreciated. I’d also be open to a more Community or Developer Evangelist role that leverages my systems and cloud background.

      Whatever I end up doing next the tl;dr (too long; didn’t read) version of what I need in my next role are as follows:

      • Most of my job to be focused on open source
      • Support for travel to conferences where I speak (6-12 per year)
      • Work from home
      • Competitive pay

      My resume is over here: http://elizabethkjoseph.com

      Now the long version, and a quick note about what I do today.

      OpenStack project Infrastructure Team

      I’ve spent nearly four years working full time on the OpenStack project Infrastructure Team. We run all the services that developers on the OpenStack project interact with on a daily basis, from our massive Continuous Integration system to translations and the Etherpads. I love it there. I also just wrote a book about OpenStack.

      HPE has paid me to do this upstream OpenStack project Infrastructure work full time, but we have team members from various companies. I’d love to find a company in the OpenStack ecosystem willing to pay for me to continue this and support me like HPE did. All the companies who use and contribute to OpenStack rely upon the infrastructure our team provides, and as a root/core member of this team I have an important role to play. It would be a shame for me to have to leave.

      However, I am willing to move on from this team and this work for something new. During my career thus far I’ve spent time working on both the Ubuntu and Debian projects, so I do have experience with other large open source projects, and reducing my involvement in them as my life dictates.

      Most of my job to be focused on open source

      This is extremely important to me. I’ve spent the past 15 years working intensively in open source communities, from Linux Users Groups to small and large open source projects. Today I work on a team where everything we do is open source. All system configs, Puppet modules, everything but the obvious private data that needs to be private for the integrity of the infrastructure (SSH keys, SSL certificates, passwords, etc). While I’d love a role where this is also the case, I realize how unrealistic it is for a company to have such an open infrastructure.

      An alternative would be a position where I’m one of the ops people who understands the tooling (probably from gaining an understanding of it internally) and then going on to help manage the projects that have been open sourced by the team. I’d make sure best practices are followed for the open sourcing of things, that projects are paid attention to and contributors outside the organization are well-supported. I’d also go to conferences to present on this work, write about it on a blog somewhere (company blog? opensource.com?) and be encouraging and helping other team members do the same.

      Support for travel to conferences where I speak (6-12 per year)

      I speak a lot and I’m good at it. I’ve given keynotes at conferences in Europe, South America and right here in the US. Any company I go to work for will need to support me in this by giving me the time to prepare and give talks, and by compensating me for travel for conferences where I’m speaking.

      Work from home

      I’ve been doing this for the past ten years and I’d really struggle to go back into an office. Since operations, open source and travel don’t need me to be in an office, I’d prefer to stick with the flexibility and time working from home gives me.

      For the right job I may be willing to consider going into an office or visiting client/customer sites (SF Bay Area is GREAT for this!) once a week, or some kind of arrangement where I travel to a home office for a week here and there. I can’t relocate for a position at this time.

      Competitive pay

      It should go without saying, but I do live in one of the most expensive places in the world and need to be compensated accordingly. I love my work, I love open source, but I have bills to pay and I’m not willing to compromise on this at this point in my life.

      Contact me

      If you think your organization would be interested in someone like me and can help me meet my requirements, please reach out via email at lyz@princessleia.com

      I’m pretty sad today about the passing of what’s been such a great journey for me at HPE and in the OpenStack community, but I’m eager to learn more about the doors this change is opening up for me.

      on October 17, 2016 11:23 PM

      The mouse that jumped

      Mark Shuttleworth

      The naming of Ubuntu releases is, of course, purely metaphorical. We are a diverse community of communities – we are an assembly of people interested in widely different things (desktops, devices, clouds and servers) from widely different backgrounds (hello, world) and with widely different skills (from docs to design to development, and those are just the d’s).

      As we come to the end of the alphabet, I want to thank everyone who makes this fun. Your passion and focus and intellect, and occasionally your sharp differences, all make it a privilege to be part of this body incorporate.

      Right now, Ubuntu is moving even faster to the centre of the cloud and edge operations. From AWS to the zaniest new devices, Ubuntu helps people get things done faster, cleaner, and more efficiently, thanks to you. From the launch of our kubernetes charms which make it very easy to operate k8s everywhere, to the fun people seem to be having with snaps at snapcraft.io for shipping bits from cloud to top of rack to distant devices, we love the pace of change and we change the face of love.

      We are a tiny band in a market of giants, but our focus on delivering free software freely together with enterprise support, services and solutions appears to be opening doors, and minds, everywhere. So, in honour of the valiantly tiny, long-tailed mouse leaping over the obstacles of life, our next release, which will be Ubuntu 17.04, is hereby code-named the ‘Zesty Zapus’.


      on October 17, 2016 12:23 PM


      What are snaps?

      Snaps were introduced a little while back as a cross-distro package format allowing upstreams to easily generate and distribute packages of their application in a very consistent way, with support for transactional upgrade and rollback as well as confinement through AppArmor and Seccomp profiles.

      It’s a packaging format that’s designed to be upstream friendly. Snaps effectively shift the packaging and maintenance burden from the Linux distribution to the upstream, making the upstream responsible for updating their packages and taking action when a security issue affects any of the code in their package.

      The upside is that upstream is now in complete control of what’s in the package and can distribute a build of the software that matches their test environment, and do so within minutes of the upstream release.

      Why distribute LXD as a snap?

      We’ve always cared about making LXD available to everyone. It’s available for a number of Linux distributions already, with a few more actively working on packaging it.

      For Ubuntu, we have it in the archive itself, push frequent stable updates, maintain official backports in the archive and also maintain a number of PPAs to make our releases available to all Ubuntu users.

      Doing all that is a lot of work and it makes tracking down bugs that much harder as we have to care about a whole lot of different setups and combination of package versions.

      Over the next few months, we hope to move away from PPAs and some of our backports in favor of using our snap package. This will allow a much shorter turnaround time for new releases and give us more control on the runtime environment of LXD, making our lives easier when dealing with bugs.

      How to get the LXD snap?

      These instructions have only been tested on fully up-to-date Ubuntu 16.04 LTS or Ubuntu 16.10 with snapd installed. Please use a system that doesn’t already have LXD containers, as the LXD snap will not be able to take over existing containers.

      LXD snap example

      1. Make sure you don’t have a packaged version of LXD installed on your system.
        sudo apt remove --purge lxd lxd-client
      2. Create the “lxd” group and add yourself to it.
        sudo groupadd --system lxd
        sudo usermod -G lxd -a <username>
      3. Install LXD itself
        sudo snap install lxd

      This will get the current version of LXD from the “stable” channel.
      If your user wasn’t already part of the “lxd” group, you may now need to run:

      newgrp lxd

      Once installed, you can set it up and spawn your first container with:

      1. Configure the LXD daemon
        sudo lxd init
      2. Launch your first container
        lxd.lxc launch ubuntu:16.04 xenial
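
      If the launch succeeds, a few follow-up commands (a quick sketch using the same snap-prefixed client) let you confirm the container is running, get a shell inside it, and clean it up afterwards:

        lxd.lxc list                     # the new "xenial" container and its IP should show up here
        lxd.lxc exec xenial -- bash      # open a shell inside the container
        lxd.lxc delete --force xenial    # remove the container when you are done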

      Channels and updates

      The Ubuntu Snap store offers 4 different release “channels” for snaps:

      • stable
      • candidate
      • beta
      • edge

      For LXD, we currently use “stable”, “candidate” and “edge”.

      • “stable” contains the latest stable release of LXD.
      • “candidate” is a testing area for “stable”.
        We’ll push new releases there a couple of days before releasing to “stable”.
      • “edge” is the current state of our development tree.
        This channel is entirely automated with uploads triggered after the upstream CI confirms that the development tree looks good.

      You can switch between channels by using the “snap refresh” command:

      snap refresh lxd --edge

      This will cause your system to install the current version of LXD from the “edge” channel.

      Be careful when hopping channels though as LXD may break when moving back to an earlier version (going from edge to stable), especially when database schema changes occurred in between.

      Snaps automatically update, either on schedule (typically once a day) or through push notifications from the store. On top of that, you can force an update by running “snap refresh lxd”.
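
      To check which version the snap currently provides before or after hopping channels, “snap list” does the job. A small sketch (the “--candidate” and “--stable” shorthands mirror the “--edge” one above and assume a recent snapd; mind the earlier warning about moving back to an older version):

      snap list lxd                    # installed LXD version and revision
      snap refresh lxd --candidate     # try the next release a little early
      snap refresh lxd --stable        # return to the stable channel when done testing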

      Known limitations

      Those are all pretty major usability issues and will likely be showstoppers for a lot of people.
      We’re actively working with the Snappy team to get those issues addressed as soon as possible and will keep maintaining all our existing packages until such time as those are resolved.

      Extra information

      More information on snap packages can be found at: http://snapcraft.io
      Bug reports for the LXD snap: https://github.com/lxc/lxd-pkg-ubuntu/issues

      The main LXD website is at: https://linuxcontainers.org/lxd
      Development happens on Github at: https://github.com/lxc/lxd
      Mailing-list support happens on: https://lists.linuxcontainers.org
      IRC support happens in: #lxcontainers on irc.freenode.net
      Try LXD online: https://linuxcontainers.org/lxd/try-it

      PS: I have not forgotten about the remaining two posts in the LXD 2.0 series, the next post has been on hold for a while due to some issues with OpenStack/devstack.

      on October 17, 2016 05:55 AM

      October 16, 2016

      I’m very pleased to announce the release of budgie-remix based on the solid 16.10 Ubuntu foundations. For the uninitiated, budgie-remix utilises the wonderful budgie-desktop graphical interface from the Solus team. This is our first release following the standard Ubuntu release … Continue reading
      on October 16, 2016 07:31 PM

      The Silicon Canal tech awards are coming up here in Birmingham, so I thought I’d write down who I’ve nominated and why! Along with a few categories where I had difficulty deciding, in which an honourable mention or two may be awarded, although such things do not get submitted to the actual award ceremony :-)

      Best Tech Start-Up

      ImpactHub Birmingham

      As ImpactHub say, “We want to empower a collective movement to bring about change in our city, embracing a diverse range of people and organisations with a whole host of experiences and skills.” ImpactHub is a place enabling the tech scene in Birmingham, which is the most important part of it all; that’s what makes Birmingham great and more than just some half-baked clone of London or San Francisco. Bringing tech companies together with the rest of the city also hugely increases the number of connections made and opportunities created right here in Birmingham itself, and helps tech entrepreneurs meet other communities and unify everyone’s goals.

      Most Influential Female in Technology

      Jessica Rose

      Jess tirelessly advocates technology and Birmingham, both inside and outside the city. She’s great at connecting dots, showing people who they can work with to get things done, and advising on how best to grow a community or a company into areas you might not have otherwise pursued. And she’s helpful and engaging and good to work with, and knows basically everyone. That’s influence, and she’s using it to better the Brum tech scene as a whole, and that deserves reward.

      Runner up: Immy Kaur for setting up ImpactHub :-)

      Small Tech Company of the Year

      Technical Team Solutions

      TTS are heavily invested in the tech life of Birmingham itself. They sponsor events, they’ve partnered with Silicon Canal as exclusive recruitment agents, and most importantly they’re behind Fusion, a regular and vibrant quarterly tech conference drawn from the city and supporting both local tech and local street food vendors. This isn’t like some other conferences which basically are in Birmingham by coincidence; Fusion is intimately involved with the Brum tech scene, as are TTS themselves, and that should be massively encouraged.

      Runner up: Jump 24, web design and development studio getting good stuff done and run by a very smart and very short Welshman [1] :-)

      Large Tech Company of the Year (revenue over £10 million)


      Talis

      Talis are strong supporters of the Birmingham tech scene, a successful large scaleup here in the city, and willing to work openly with others in pursuit of those goals. They regularly sponsor tech events with money or by providing space to host meetups, hold hack days and write about them afterwards, donate time and money to helping others in the city including events for entrepreneurs as well as developers, and run their own events (such as Codelicious) to add more to the growing vibrancy of Brum. It’s great to see a company of this size be cognisant of the city and their life within it, and this certainly deserves to be recognised.

      Most Influential Male in Technology

      Roy Meredith

      A jolly good way to make connections in the city is through Roy, who is connected to all sorts of people via being responsible for the tech sectors in Marketing Birmingham. I’m not sure the government marketing agency are always perfect, but I am sure that Roy is a person to know. He’s an engaging public speaker, he’s got a background in industry (with a list of AAA games he’s worked on that’d blow your mind), and he’s approachable and smart and everyone listens to him. If that’s not influence, I don’t know what is.

      Outstanding Technology Individual of the Year

      Mary Matthews from Memrica

      Mary describes herself as “passionate about using technology to make a difference to people’s lives” and, unlike quite a few people who might say that, I think she actually means it. It was marvellous to see Memrica get recognised as part of the UberPITCH consultancy earlier this year, and her trip out to meet Travis Kalanick not only will have helped her continue her long history of doing good tech things but also helped elevate Birmingham’s profile as a place for internationally recognised startups. That’s pretty outstanding, in my opinion.

      Runner up: Jess Rose


      Best Angel or Seed Investor of the Year: no nomination here because I have no idea! I know a couple, but haven’t worked with them.

      Graduate Developer of the Year: no nomination here because I don’t know enough graduates. I’d have nominated @jackweirdy if he hadn’t left us :)

      Developer of the Year: no nomination here because, well, too contentious. I don’t know who I’d pick as the best, and I do know that everyone I don’t pick will never buy me a pint again, so I’m not sure who to say here. Maybe I should have just picked myself :-)

      Now, your turn

      Maybe you agree, maybe you don’t. That’s what I think. You will notice that I primarily care about the tech life of the city; if you do a bunch of good stuff here in Birmingham and you’re proud of that, I like what you do. If you do interesting things but never talk about them here in the city, I’m less interested in your things. Perhaps you have different criteria: you should now go and say what you think. Go and add your nominations, Birmingham people; let’s get everyone’s voices heard.

      1. sorry, Dan; Fusion just tips it for TTS, but maybe you should run a conference as well to lobby for the vote :)
      on October 16, 2016 06:02 PM

      It’s hard to believe that Ubuntu 16.10 is already released (I think I may have lost track of time this cycle) and it seems that it’s time for the next cycle’s goals.  But first, I wish to reflect on the goals from the last cycle.  I found out that I’m (somehow) in no mood for coding and/or hacking, but I was able to do something for Linux Padawan, which was BuddyPress, though it needs tweaking to get it to work right.

      As for this cycle’s goals, they will be centered around the Grailville Pond project, Ubuntu (Touch), and Linux Padawan:

      Grailville Pond Project

      Work on the Raspberry Pi as I stated here and also work on a temperature inversion catching script for R.

      Ubuntu (Touch)

      Work on a demo because I’m planning to go Ohio Linux Fest next year and I want to bring something cool.

      Linux Padawan

      Work on community building in order to increase growth and also try to get BuddyPress to work.

      Hopefully this time I can complete them.

      on October 16, 2016 04:57 PM

      October 15, 2016



      We, the Kubuntu Team are very happy to announce that Kubuntu 16.10 is finally here!


      After 6 months of hard but fun work we have a bright new release for you all!


      We packaged some great updates from the KDE Community such as:

      – Plasma 5.7.5
      – Applications 16.04.3
      – Frameworks 5.26.0

      We also have updated to version 4.8 of the Linux kernel with improvements across the board such as Microsoft Surface 3 support.

      For a list of other application updates, upgrading notes and known bugs be sure to read our release notes!

      Download 16.10 or read about how to upgrade from 16.04.

      on October 15, 2016 10:49 PM

      A very bright future ahead.

      Aaron Honeycutt


      It’s only half way though October but it has already been a very busy month for us at Kubuntu. We have welcomed Rik Mills (acheronuk on IRC) as a new Kubuntu/Ubuntu member, Clive Johnston (clivejo on IRC) as a Kubuntu Developer and pushed a new Kubuntu release out the doors!

      Be sure to let us know how much you love it in the #kubuntu-devel IRC Channel, Telegram group or the Mailing List. Your reply might be featured in the next Kubuntu Podcast!

      on October 15, 2016 01:34 PM

      October 14, 2016

      We are happy to announce the release of our latest version, Ubuntu Studio 16.10 Yakkety Yak! As a regular version, it will be supported for 9 months. Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list […]
      on October 14, 2016 11:26 PM

      As you’re probably aware Ubuntu 16.10 was released yesterday and brings with it the Unity 8 desktop session as a preview of what’s being worked on right now and a reflection of the current state of play.

      You might have already logged in and kicked the proverbial tyres. If not, I would urge you to do so. Please take the time to install a couple of apps as laid out here:


      The main driver for getting Unity 8 in to 16.10 was the chance to get it in the hands of users so we can get feedback and bug reports.  If you find something doesn’t work, please, log a bug.  We don’t monitor every forum or comments section on the web so the absolute best way to provide your feedback to people who can act on it is a bug report with clear steps on how to reproduce the issue (in the case of crashes) or an explanation of why you think a particular behaviour is wrong.  This is how you get things changed or fixed.

      You can contribute to Ubuntu by simply playing with it.

      Read about logging bugs in Ubuntu here: https://help.ubuntu.com/community/ReportingBugs

      And when you are ready to log a bug, log it against Unity 8 here: https://bugs.launchpad.net/ubuntu/+source/unity8




      on October 14, 2016 01:09 PM

      To celebrate KDE’s 20th birthday today, the great KDE developer Helio Castro has launched KDE 1, the ultimate in long term support software with a 20 year support period.

      KDE neon has now, using the latest containerised continuous integration technologies released KDE1 neon Docker images for your friendly local devop to deploy.

      Give it a shot with:

      apt install docker xserver-xephyr
      adduser <username> docker
      <log out and in again>
      Xephyr :1 -screen 1024x768 &
      docker pull jriddell/kde1neon
      docker run -v /tmp/.X11-unix:/tmp/.X11-unix jriddell/kde1neon

      (The Docker image isn’t optimised at all and probably needs to download 10GB, have fun!)

      on October 14, 2016 11:12 AM

      Juju 2.0 is here!

      The Fridge

      Juju 2.0 is here! This release has been a year in the making. We’d like to thank everyone for their feedback, testing, and adoption of juju 2.0 throughout its development process! Juju brings refinements in ease of use, while adding support for new clouds and features.

      New to juju 2?

      You can check our documentation at https://jujucharms.com/docs/2.0/getting-started

      Need to install it?

      If you are running Ubuntu, you can get it from the juju stable ppa:

      sudo add-apt-repository ppa:juju/stable
      sudo apt update
      sudo apt install juju-2.0

      Or install it from the snap store

      snap install juju --beta --devmode

      Windows, Centos, and MacOS users can get a corresponding installer at:


      Want to upgrade to GA?

      Those of you running an RC version of juju 2 can upgrade to this release by running:

      juju upgrade-juju
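
      As a quick sanity check after installing or upgrading (a small sketch; subcommand spellings can vary slightly between juju releases):

      juju version     # confirm the client now reports 2.0.x
      juju clouds      # list the clouds this release knows how to drive
      juju status      # on an already bootstrapped controller, confirm your models still respond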

      Feedback Appreciated!

      We encourage everyone to subscribe to the mailing list at juju at lists.ubuntu.com and join us on #juju on freenode. We would love to hear your feedback and how you are using juju.

      Originally posted to the juju mailing list on Fri Oct 14 04:34:41 UTC 2016 by Nicholas Skaggs

      on October 14, 2016 04:49 AM

      October 13, 2016

      Kubuntu is a friendly, elegant operating system. The system uses the Linux kernel and Ubuntu core. Kubuntu presents KDE software and a selection of other essential applications.

      We focus on elegance and reliability. Please join us and contribute to an exciting international Free and Open Source Software project.

      Install Kubuntu and enjoy friendly computing. Download the latest version:

      Download kubuntu 64-bit (AMD64) desktop DVD    Torrent

      Download kubuntu (Intel x86) desktop DVD            Torrent

      For PCs with the Windows 8 logo or UEFI firmware, choose the 64-bit download. Visit the help pages for more information.

      Ubuntu Release notes
      For a full list of issues and features common to Ubuntu, please refer to the Ubuntu release notes.
      Known problems
      For known problems, please see our official Release Announcement.
      on October 13, 2016 11:38 PM
      Thanks to all the hard work from our contributors, Lubuntu 16.10 has been released! With the codename Yakkety Yak, Lubuntu 16.10 is the 11th release of Lubuntu, with support until July 2017. We even have Lenny the Lubuntu mascot dressed up for the occasion! What is Lubuntu? Lubuntu is an official Ubuntu flavor based on […]
      on October 13, 2016 11:19 PM
      Another six months have come and gone, and it’s been a relatively slow cycle for Xubuntu development.  With increased activity in Xfce as it heads towards 4.14 and full GTK+3 support, few changes have landed in Xubuntu this cycle.  The gears are in motion… but for now, let’s celebrate a new release! Y is for… Yakkety … Continue reading Xubuntu 16.10 “Yakkety Yak” Released
      on October 13, 2016 10:30 PM