December 13, 2017

We are pleased to announce that officially certified FIPS 140-2 level 1 cryptographic packages are now available for Ubuntu 16.04 LTS for Ubuntu Advantage Advanced customers and as a separate, stand-alone product.

In 2016 Canonical began the process of completing the Cryptographic Module Validation Program to obtain FIPS 140-2 validation for Ubuntu 16.04 LTS. This has been successfully completed and Canonical now offers key components of Ubuntu 16.04 LTS compliant with the FIPS 140-2 level 1 standard. The FIPS compliant modules are available to Ubuntu Advantage Advanced subscribers in the Ubuntu Advantage private archive.

We currently use Ubuntu Linux because of its superior development environment and frequent LTS releases. As a business that develops software, one of our customer’s requirements is to utilize FIPS 140-2 validated software. We have been able to start rolling out the Ubuntu FIPS modules without needing to reinstall the operating system. This keeps our developers happy and productive as Ubuntu is their preferred environment and minimizes transition cost. The FIPS modules also include a VPN solution which we look forward to implementing to allow our developers to work remotely but still meet our customer’s requirements.

-Alex Stuart, North Point Defense


Users interested in FIPS 140-2 compliant modules on Ubuntu 16.04 can purchase Ubuntu Advantage by contacting the Canonical Sales Team.

For further information please visit



What is FIPS?

FIPS stands for Federal Information Processing Standards, a set of publications developed and maintained by the National Institute of Standards and Technology (NIST), a United States federal agency. These publications define the security criteria required for government computers and telecommunication systems.

What is the FIPS 140-2 standard?

According to NIST, FIPS 140-2 “specifies the security requirements that will be satisfied by a cryptographic module used within a security system protecting sensitive but unclassified information.”

Why should I use the FIPS 140-2 modules?

Government, defence, healthcare, and finance organizations worldwide operate in highly regulated industries and are required to meet the security requirements defined in the FIPS 140-2 standard. This includes the United States, Canadian, and United Kingdom governments as well as government contractors.

Where can I find out more about FIPS?

General information about the Federal Information Processing Standards can be found on the NIST website. More detailed information about FIPS 140-2 itself can be found in the Federal Information Processing Standards Publication 140-2 document.

Which modules are included?

What versions of Ubuntu have FIPS certified modules?

Currently only Ubuntu 16.04 LTS has FIPS certified modules.

How Can I Find Out More?

Click here to make an inquiry, and somebody from our team will get back to you!

on December 13, 2017 02:00 PM

Because of the distributed nature of Ubuntu development, it is sometimes a little difficult for me to keep track of the "special" URLs for various actions or reports that I'm regularly interested in.

Therefore I started gathering them in my personal wiki (I use the excellent "zim" desktop wiki), and realized some of my colleagues and friends would be interested in that list as well. I'll do my best to keep this blog post up-to-date as I discover new ones.

A magic book

If you know of other candidates for this list, please don't hesitate to get in touch!

Behold, tribaal's "secret URL" list!

Pending SRUs

Once a package has been uploaded to a -proposed pocket, it needs to be verified as per the SRU process. Packages pending verification end up in this list.

Sponsorship queue

People who don't have upload rights for the package they fixed need to request sponsorship. This queue is the place to check if you're waiting for someone to pick it up and upload it.

Upload queue

A log of what got uploaded (and to which pocket) for a particular release, and also a queue of packages that have been uploaded and are now waiting for review before entering the archive.

For the active development release this is for brand new packages; for frozen releases these are SRU packages. Once approved at this step, the packages enter -proposed.

The launchpad build farm

A list of all the builders Launchpad currently has, broken down by architecture. You can watch jobs being built in real time, and see the load on the whole build farm here as well.

Proposed migration excuses

For the currently in-development Ubuntu release, packages are first uploaded to -proposed; a set of conditions then needs to be met before a package can be promoted to the release pocket. The list of packages that have failed this automatic migration, and the reasons why, can be found on this page.


Not really a "magic" URL, but this system gathers information and lists for the automatic merging system, which merges Debian packages into the development release of Ubuntu.

Transitions tracker

This page tracks transitions, which are toolchain changes or other package updates with "lots" of dependencies. It tracks the build status of those dependencies.

on December 13, 2017 09:05 AM


Rhonda D'Vine

I long thought about whether I should post a/my #metoo. It wasn't a rape. Nothing really happened. And a lot of these stories are very disturbing.

And yet it still bothers me every now and then. I was of school age, late elementary or lower school ... In my hometown there is a cinema. Young as we were, we weren't allowed to see Rambo/Rocky. Not that I was very interested in the movie ... But there the door to the screening room stood open. And curious as we were, we looked through the door. The projectionist saw us and waved us in. It was exciting to see a movie from that perspective that was forbidden to us.

He explained to us how the machines worked, showed us how the film rolls were put in and showed us how to see the signals on the screen which are the sign to turn on the second projector with the new roll.

During these explanations he was standing very close to us. Really close. He put his arm around us. The hand moved towards the crotch. It was unpleasant and we knew that it wasn't all right. But screaming? We weren't allowed to be there ... So we thanked him nicely and retreated, disturbed. The movie wasn't that good anyway.

Nothing really happened, and we didn't say anything.

/personal | permanent link | Comments: 0 | Flattr this

on December 13, 2017 08:48 AM

December 12, 2017

Hello Ubuntu Server!

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

Spotlight: Ubuntu Bionic: Netplan

Josh on the Canonical Server team took a look at Netplan on Ubuntu Bionic. He shows some initial use cases and provides examples of some configurations.
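As a minimal illustration of the kind of configuration Netplan uses (the file name and interface name below are assumptions for the sketch, not taken from Josh's post), a simple DHCP setup might look like:

```yaml
# /etc/netplan/01-netcfg.yaml — hypothetical example
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:          # substitute your actual interface name
      dhcp4: true
```

A configuration like this would then be activated with sudo netplan apply.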


  • Added ‘status’ subcommand to report whether cloud-init is ‘running’, ‘done’ or ‘error’. Also a tool for scripts to block on cloud-init completion with cloud-init status --wait
  • Added ‘clean’ subcommand as a developer tool to easily remove cloud-init artifacts and re-run cloud-init on reboot.
  • Cloud-init datasources now store standardized instance metadata in /run/cloud-init/instance-data.json which can be referenced by scripts to get instance-related variables such as region, availability-zone, instance-id and more.
  • Update pylint to 1.7.4 and run on tests and tools dirs
  • EC2 uses instance-identity doc from metadata to obtain instance-id and region [Andrew Jorgensen]
  • SUSE: remove delta in systemd local template for SUSE [Robert Schweikert]
  • VMware: Support for user provided pre and post-customization scripts [Maitreyee Saikia]
  • Fix ds-identify warning on VMWare platform by correctly identifying the OVF datasource. ds-identify identifies OVF when an iso9660 filesystem exists on cdrom containing ovf.env content (LP: #1731868)
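As a quick sketch of how a script might consume the standardized instance metadata mentioned above (the exact key names and layout vary between cloud-init versions and datasources, so the sample JSON below is illustrative rather than the real schema):

```shell
# Write a stand-in for /run/cloud-init/instance-data.json; the real
# file is produced by cloud-init and its key layout may differ.
cat > /tmp/instance-data.json <<'EOF'
{"v1": {"region": "us-east-1", "availability-zone": "us-east-1a", "instance-id": "i-0abc123"}}
EOF

# Pull a single value out with Python (jq would work equally well).
python3 -c 'import json; d = json.load(open("/tmp/instance-data.json")); print(d["v1"]["region"])'
```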

Bug Work and Triage

Contact the Ubuntu Server team

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Uploads to the Development Release

apache2, 2.4.29-1ubuntu2, paelzer
asterisk, 1:13.18.3~dfsg-1ubuntu2, doko
asterisk, 1:13.18.3~dfsg-1ubuntu1, costamagnagianfranco
cloud-init, 17.1-58-g703241a3-0ubuntu1, smoser
cloud-init, 17.1-53-ga5dc0f42-0ubuntu1, smoser
cloud-init, 17.1-51-g05b2308a-0ubuntu1, smoser
curtin, 17.0~bzr552-0ubuntu1, smoser
iproute2, 4.14.1-0ubuntu2, paelzer
iproute2, 4.14.1-0ubuntu1, paelzer
python-ldappool, 2.1.0-0ubuntu1, corey.bryant
samba, 2:4.7.3+dfsg-1ubuntu1, mdeslaur
sosreport, 3.5-1ubuntu1, sil2100
sysstat, 11.6.0-1ubuntu2, paelzer
uvtool, 0~git136-0ubuntu1, racb
Total: 14

Uploads to the Supported Releases

iproute2, artful, 4.9.0-1ubuntu2.1, paelzer
iproute2, zesty, 4.9.0-1ubuntu1.1, paelzer
iproute2, xenial, 4.3.0-1ubuntu3.16.04.3, paelzer
iproute2, trusty, 3.12.0-2ubuntu1.2, paelzer
iproute2, trusty, 3.12.0-2ubuntu1.1, nacc
iscsitarget, trusty,, cascardo
lxd, xenial, 2.0.11-0ubuntu1~16.04.4, stgraber
lxd, xenial, 2.0.11-0ubuntu1~16.04.4, stgraber
lxd, xenial, 2.0.11-0ubuntu1~16.04.3, stgraber
lxd, xenial, 2.0.11-0ubuntu1~16.04.2, stgraber
qemu, artful, 1:2.10+dfsg-0ubuntu3.1, paelzer
rsync, artful, 3.1.2-2ubuntu0.1, leosilvab
rsync, zesty, 3.1.2-1ubuntu0.1, leosilvab
rsync, xenial, 3.1.1-3ubuntu1.1, leosilvab
rsync, trusty, 3.1.0-2ubuntu0.3, leosilvab
sysstat, xenial, 11.2.0-1ubuntu0.2, slashd
sysstat, zesty, 11.4.3-1ubuntu1, slashd
sysstat, artful, 11.5.7-1ubuntu2, slashd
Total: 18
on December 12, 2017 08:02 PM

With each new release, the Xfce PulseAudio Plugin becomes more refined and better suited for Xfce users. The latest release adds support for the MPRIS Playlists specification and improves support for Spotify and other media players.

What’s New?

New Feature: MPRIS Playlists Support

  • This is a basic implementation of the MediaPlayer2.Playlists specification.
  • The 5 most recently played playlists are displayed (if supported by the player). Admittedly, I have not found a player that seems to implement the ordering portion of this specification.

New Feature: Experimental libwnck Support

  • libwnck is a window management library. This feature adds the “Raise” method for media players that do not support it, allowing the user to display the application window after clicking the menu item in the plugin.
  • Spotify for Linux is the only media player that I have found which does not implement this method. Since this is the media player I use most of the time, this was an important issue for me to resolve.


  • Unexpected error messages sent via DBUS are now handled gracefully. The previous release of Pithos (1.1.2) produced a Python error during DBUS queries, which crashed the plugin.
  • Numerous memory leaks were patched.

Translation Updates

Chinese (Taiwan), Croatian, Czech, Danish, Dutch, French, German, Hebrew, Japanese, Korean, Lithuanian, Polish, Russian, Slovak, Spanish, Swedish, Thai


The latest version of Xfce PulseAudio Plugin can always be downloaded from the Xfce archives. Grab version 0.3.4 from the below link.

  • SHA-256: 43fa39400eccab1f3980064f42dde76f5cf4546a6ea0a5dc5c4c5b9ed2a01220
  • SHA-1: 171f49ef0ffd1e4a65ba0a08f656c265a3d19108
  • MD5: 05633b8776dd3dcd4cda8580613644c3
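The checksums above can be used to verify the download before installing. The snippet below sketches that workflow with a stand-in file; substitute the real tarball and the SHA-256 value listed above.

```shell
# Create a stand-in for the downloaded tarball.
printf 'example contents' > /tmp/download.tar.bz2

# Compute its SHA-256, then verify it the same way you would check
# the real file against the published hash (two spaces between the
# hash and the file name, as sha256sum -c expects).
sum=$(sha256sum /tmp/download.tar.bz2 | cut -d' ' -f1)
echo "$sum  /tmp/download.tar.bz2" | sha256sum -c -
```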
on December 12, 2017 12:12 PM

December 11, 2017

Today’s daily ISO for Bionic Beaver 18.04 sees an experimental switch to the Breeze-Dark Plasma theme by default.

Users running the 18.04 development version who have not deliberately opted to use Breeze/Breeze-Light in their systemsettings will also see the change after upgrading packages.

Users can easily revert to the Breeze/Breeze-Light Plasma themes by changing this in systemsettings.

Feedback on this change will be very welcome:

You can reach us on the Kubuntu IRC channel or Telegram group, on our user mailing list, or post feedback on the (unofficial) Kubuntu web forums

Thank you to Michael Tunnell from for kindly suggesting this change.

on December 11, 2017 01:15 AM

December 10, 2017

Yes! A new edition for ubunteros around the world! :))

Ubucons around the world

Is Ubucon made for me?

This event is just for you! ;) You don't need to be a developer: you'll enjoy a lot of talks about everything you can imagine about Ubuntu and share great moments with other users.
Even the language won't be a problem. There, you'll meet people from everywhere and surely someone will speak your language :)

You can read different posts about the previous Ubucon in Paris here:
Another in Spanish:


Gijón/Xixón, Asturies, Spain
Antiguo Instituto, just in the city center, built in 1797:
Antiguo Instituto


27th, 28th and 29th of April 2018.

Organized by

  • Francisco Javier Teruelo de Luis 
  • Francisco Molinero 
  • Sergi Quiles Pérez 
  • Antonio Fernandes 
  • Paul Hodgetts 
  • Santiago Moreira 
  • Joan CiberSheep 
  • Fernando Lanero 
  • Manu Cogolludo 
  • Marcos Costales

Get in touch!

We're still working on a few details, so please don't book a flight yet. Join our Telegram channel, Google+ or Twitter now to get the latest news and future discounts on hotels and transport.
on December 10, 2017 07:21 PM
This time, Francisco Molinero, Francisco Javier Teruelo, Fernando Lanero and Marcos Costales chat about the following topics:

  • Derivative distros. Yes or no?
  • Interview with the developer of uNav
  • Linux on Galaxy

Ubuntu y otras hierbas S02E03

And pay attention to the news we announce at the end of the show ;) Follow the Ubucon on Telegram, Google+ or Twitter.

The podcast is available to listen to on:
on December 10, 2017 02:59 PM

December 09, 2017

Earlier this year I talked about using KDE Connect to send and receive SMS messages via your connected device. Back then sending messages was a bit of a faff and involved having to use the terminal, but as of today this is no longer an issue!

Meet the KDEConnect SMS sender Plasmoid, which was uploaded to the KDE Store earlier today.  Once installed on your system you can add it to your desktop as a widget (as shown above).  On first use you need to tell it which connection to use by going to the Settings page.



Once you have it configured to use the correct device, you type the phone number of the person you wish to send the message to into the first box (as below).  Please note this needs to include the international dialling code (i.e. +44 for the UK, +353 for Ireland).  Then type your message and click the Send button, it’s that simple!

Your mobile device will then send the message.  The project has a GitHub page – so head over there for the code, new releases and bug reports/feedback.

You can try it out yourself, on Xenial (16.04), Artful (17.10) or Bionic (18.04) by adding my PPA:

sudo add-apt-repository ppa:clivejo/plasma-kdeconnect-sms
sudo apt update
sudo apt install plasma-kdeconnect-sms

on December 09, 2017 04:07 PM

December 08, 2017

S10E40 – Clammy Eminent Spot - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

This week an old man is confused by a modern gaming mouse. We talk to Ikey Doherty from the Solus project about Linux Steam Integration and how snaps are improving game delivery for all users of Linux. We have a multi-player love and go over your feedback.

It’s Season Ten Episode Forty of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Ikey Doherty are connected and speaking to your brain.

In this week’s show:

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This weeks cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on December 08, 2017 11:00 PM

Being on furlough from your job for just under four full months and losing 20 pounds during that time can hardly be considered healthy. If anything, it means that something is wrong. I allude in various fora that I work for a bureau of the United States of America's federal government as a civil servant. I am not particularly high-ranking as I only come in at GS-7 Step 1 under "CLEVELAND-AKRON-CANTON, OH" locality pay. My job doesn't normally have me working a full 12 months out of the year (generally 6-8 months depending upon the needs of the bureau) and I am normally on-duty only 32 hours per week.

More recent headshot of Stephen Michael Kellat

As you might imagine, I have been trying to leave that job. Unfortunately, working for this particular government bureau makes any resume look kinda weird. My local church has some domestic missions work to do and not much money to fund it. I already use what funding we have to help with our mission work reaching out to one of the local nursing homes to provide spiritual care as well as frankly one of the few lifelines to the outside world some of those residents have. Xubuntu and the bleeding edge of LaTeX2e plus CTAN help greatly in preparing devotional materials for use in the field at the nursing home. Funding held us back from letting me assist with Hurricane Harvey or Hurricane Maria relief especially since I am currently finishing off quite a bit of training in homeland security/emergency management. But for the lack of finances to back it up as well as the lack of a large enough congregation, there is quite a bit to do. Unfortunately the numbers we get on a Sunday morning are not what they once were when the congregation had over a hundred in attendance.

I don't like talking about numbers in things like this. If you take 64 hours in a two-week pay period, multiply that by the minimum of 20 pay periods that generally occur, and then multiply by the hourly equivalent rate for my grade and step, it comes out to a pre-tax gross under $26,000. I rounded up to a whole number. Admittedly it isn't too much.

At this time of the year last year, many people across the Internet burned cash by investing in the Holiday Hole event put on by the Cards Against Humanity people. Over $100,000 was raised to dig a hole about 90 miles outside Chicago and then fill the thing back in. This year people spent money to help buy a piece of land to tie up the construction of President Trump's infamous border wall and even more which resulted in Cards Against Humanity raking in $2,250,000 in record time.

Now, the church I would be beefing up the missionary work with doesn't have a web presence. It doesn't have an e-mail address. It doesn't have a fax machine. Again, it is a small church in rural northeast Ohio. According to IRS Publication 526, contributions to them are deductible under current law provided you read through the stipulations in that thin booklet and are a taxpayer in the USA. Folks outside the USA could contribute in US funds but I don't know what the rules are for foreign tax administrations to advise about how such is treated if at all.

The congregation is best reached by writing to:

 West Avenue Church of Christ
 5901 West Avenue
 Ashtabula, OH  44004
 United States of America

With the continuing budget shenanigans about how to fund Fiscal Year 2018 for the federal government, I get left wondering if/when I might be returning to duty. Helping the congregation fund me to undertake missions for it removes that as a concern. Besides, any job that gives you gray hair and puts 30 pounds on you during eight months of work cannot be good for you to remain at. Too many co-workers took rides away in ambulances at times due to the pressures of the job during the last work season.

Creative Commons License
Not Messing With Hot Wheels Car Insertion by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

on December 08, 2017 06:03 AM
Simple Scan recently migrated to the new infrastructure. With modern infrastructure I now have the opportunity to enable Continuous Integration (CI), which is a fancy name for automatically building and testing your software when you make changes (and it can do more than that too).

I've used CI in many projects in the past, and it's a really handy tool. However, I've never had to set it up myself and when I've looked it's been non-trivial to do so. The great news is this is really easy to do in GitLab!

There's lots of good documentation on how to set it up, but to save you some time I'll show how I set it up for Simple Scan, which is a fairly typical GNOME application.

To configure CI you need to create a file called .gitlab-ci.yml in your git repository. I started with the following:

build_ubuntu:
  image: ubuntu:rolling
  before_script:
    - apt-get update
    - apt-get install -q -y --no-install-recommends meson valac gcc gettext itstool libgtk-3-dev libgusb-dev libcolord-dev libpackagekit-glib2-dev libwebp-dev libsane-dev
  script:
    - meson _build
    - ninja -C _build install

The first line is the name of the job - "build_ubuntu". This is going to define how we build Simple Scan on Ubuntu.

The "image" is the name of a Docker image to build with. You can see all the available images on Docker Hub. In my case I chose an official Ubuntu image and used the "rolling" tag, which points to the most recently released Ubuntu version.

The "before_script" defines how to set up the system before building. Here I just install the packages I need to build simple-scan.

Finally the "script" is what is run to build Simple Scan. This is just what you'd do from the command line.

And with that, every time a change is made to the git repository Simple Scan is built on Ubuntu and tells me if that succeeded or not! To make things more visible I added the following to the top of the

[![Build Status](](

This gives the following image that shows the status of the build:

pipeline status

And because there are many more consumers of Simple Scan than just Ubuntu, I added the following to .gitlab-ci.yml:

build_fedora:
  image: fedora:latest
  before_script:
    - dnf install -y meson vala gettext itstool gtk3-devel libgusb-devel colord-devel PackageKit-glib-devel libwebp-devel sane-backends-devel
  script:
    - meson _build
    - ninja -C _build install

Now it builds on both Ubuntu and Fedora with every commit!

I hope this helps you getting started with CI and Happy hacking.
on December 08, 2017 12:40 AM

December 05, 2017

Join Phabricator

Lubuntu Blog

Inspired by the wonderful KDE folks, Lubuntu has created a Phabricator instance for our project. Phabricator is an open source, version control system-agnostic collaborative development environment similar in some ways to GitHub, GitLab, and perhaps a bit more remotely, like Launchpad. We were looking for tools to organize, coordinate, and collaborate, especially across teams within […]
on December 05, 2017 07:54 PM

You are using LXD from a Linux distribution package and you would like to migrate your existing installation to the Snap LXD package. Let’s do the migration together!

This post is not about live container migration in LXD. Live container migration is about moving a running container from one LXD server to another.

If you do not have LXD installed already, then look for another guide about the installation and set up of LXD from a snap package. A fresh installation of LXD as a snap package is easy.

Note that from the end of 2017, LXD will be generally distributed as a Snap package. If you run LXD 2.0.x from Ubuntu 16.04, you are not affected by this.


Let’s check the version of LXD (Linux distribution package).

$ lxd --version

$ apt policy lxd
 Installed: 2.20-0ubuntu4~16.04.1~ppa1
 Candidate: 2.20-0ubuntu4~16.04.1~ppa1
 Version table:
*** 2.20-0ubuntu4~16.04.1~ppa1 500
      500 xenial/main amd64 Packages
      100 /var/lib/dpkg/status
    2.0.11-0ubuntu1~16.04.2 500
      500 xenial-updates/main amd64 Packages
    2.0.2-0ubuntu1~16.04.1 500
      500 xenial-security/main amd64 Packages
    2.0.0-0ubuntu4 500
      500 xenial/main amd64 Packages

In this case, we run LXD version 2.20, and it was installed from the LXD PPA repository.

If you had not enabled the LXD PPA repository, you would have an LXD version 2.0.x, the version that was released with Ubuntu 16.04 (as shown in the version table above). LXD version 2.0.11 is currently the default version for Ubuntu 16.04.3 and will be supported in that form until 2016 + 5 = 2021. LXD version 2.0.0 is the original LXD version in Ubuntu 16.04 (as originally released) and LXD version 2.0.2 is the security update of that LXD 2.0.0.

We are migrating to the LXD snap package. Let’s see how many containers will be migrated.

$ lxc list | grep RUNNING | wc -l

Counting the running containers again after the migration is a good test to check whether something went horribly wrong.

Let’s check the available incoming LXD snap packages.

$ snap info lxd
name: lxd
summary: System container manager and API
publisher: canonical
description: |
 LXD is a container manager for system containers.
 It offers a REST API to remotely manage containers over the network, using an
 image based workflow and with support for live migration.
 Images are available for all Ubuntu releases and architectures as well as for
 a wide number of other Linux distributions.
 LXD containers are lightweight, secure by default and a great alternative to
 virtual machines.
snap-id: J60k4JY0HppjwOjW8dZdYc8obXKxujRu
 stable: 2.20 (5182) 44MB -
 candidate: 2.20 (5182) 44MB -
 beta: ↑ 
 edge: git-b165982 (5192) 44MB -
 2.0/stable: 2.0.11 (4689) 20MB -
 2.0/candidate: 2.0.11 (4770) 20MB -
 2.0/beta: ↑ 
 2.0/edge: git-03e9048 (5131) 19MB -

There are several channels to choose from. The stable channel has LXD 2.20, just like the candidate channel. When the LXD 2.21 snap is ready, it will first be released in the candidate channel and stay there for 24 hours. If everything goes well, it will then be propagated to the stable channel. LXD 2.20 was released some time ago, which is why both channels have the same version (at the time of writing this blog post).

There is the edge channel, which has the auto-compiled version from the git source code repository. It is handy to use this channel if you know that a specific fix (that affects you) has been added to the source code, and you want to verify that it actually fixed the issue. Note that the beta channel is not used, therefore it inherits whatever is found in the channel below it: the edge channel.

Finally, there are these 2.0/ tagged channels that correspond to the stock 2.0.x LXD versions in Ubuntu 16.04. It looks like those who use the 5-year supported LXD (because of Ubuntu 16.04) have the option to switch to a snap version after all.

Installing the LXD snap

Install the LXD snap.

$ snap install lxd
lxd 2.20 from 'canonical' installed

Migrating to the LXD snap

Now, the LXD snap is installed, but the DEB/PPA package LXD is the one that is running. We need to run the migration script lxd.migrate that will move the data from the DEB/PPA version over to the Snap version of LXD. In practical terms, it will move files from /var/lib/lxd (old DEB/PPA LXD location), to

$ sudo lxd.migrate 
=> Connecting to source server
=> Connecting to destination server
=> Running sanity checks

=== Source server
LXD version: 2.20
LXD PID: 4414
 Containers: 6
 Images: 3
 Networks: 1
 Storage pools: 1

=== Destination server
LXD version: 2.20
LXD PID: 30329
 Containers: 0
 Images: 0
 Networks: 0
 Storage pools: 0

The migration process will shut down all your containers then move your data to the destination LXD.
Once the data is moved, the destination LXD will start and apply any needed updates.
And finally your containers will be brought back to their previous state, completing the migration.

Are you ready to proceed (yes/no) [default=no]? yes
=> Shutting down the source LXD
=> Stopping the source LXD units
=> Stopping the destination LXD unit
=> Unmounting source LXD paths
=> Unmounting destination LXD paths
=> Wiping destination LXD clean
=> Moving the data
=> Moving the database
=> Backing up the database
=> Opening the database
=> Updating the storage backends
=> Starting the destination LXD
=> Waiting for LXD to come online

=== Destination server
LXD version: 2.20
LXD PID: 2812
 Containers: 6
 Images: 3
 Networks: 1
 Storage pools: 1

The migration is now complete and your containers should be back online.
Do you want to uninstall the old LXD (yes/no) [default=no]? yes

All done. You may need to close your current shell and open a new one to have the "lxc" command work.

Testing the migration to the LXD snap

Let’s check that the containers managed to start successfully,

$ lxc list | grep RUNNING | wc -l

But let’s check that we can still run Firefox from an LXD container, according to the following post,

How to run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

Yep, all good. The artifact in the middle (over the c in packaged) is the mouse cursor in wait mode, while GNOME Screenshot is about to take the screenshot. I did not find a report about that in the GNOME Screenshot bugzilla. It is a minor issue and there are several workarounds (1. try one more time, 2. use timer screenshot).

Let’s do some actual testing,

Yep, works as well.

Exploring the LXD snap commands

Let’s type lxd and press Tab.

$ lxd<Tab>
lxd lxd.check-kernel lxd.migrate 
lxd.benchmark lxd.lxc

There are two commands left to try out, lxd.check-kernel and lxd.benchmark. The snap package is called lxd, therefore any additional commands are prefixed with lxd.. lxd is the actual LXD server executable. lxd.lxc is the lxc command that we use for all LXD actions. The LXD snap package makes the appropriate symbolic link so that we just need to type lxc instead of lxd.lxc.

Trying out lxd.check-kernel

Let’s run lxd.check-kernel.

$ sudo lxd.check-kernel
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /lib/modules/4.10.0-40-generic/build/.config
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
newuidmap is not installed
newgidmap is not installed
Network namespace: enabled

--- Control groups ---
Cgroups: enabled

Cgroup v1 mount points: 

Cgroup v2 mount points:

Cgroup v1 clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
Macvlan: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
Vlan: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
Bridges: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
Advanced netfilter: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
CONFIG_NF_NAT_IPV4: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
CONFIG_NF_NAT_IPV6: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
CONFIG_IP_NF_TARGET_MASQUERADE: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabledmodprobe: ERROR: missing parameters. See -h.
, not loadedCONFIG_NETFILTER_XT_MATCH_COMMENT: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
FUSE (for use with lxcfs): enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded

--- Checkpoint/Restore ---
checkpoint restore: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /snap/lxd/5182/bin/lxc-checkconfig

This is an important tool if you have issues getting LXD to run. In this example, the Misc section shows some errors about missing parameters. I suppose these are issues with the tool itself, as the appropriate kernel modules are indeed loaded; my installation of the LXD snap works okay.

Trying out lxd.benchmark

Let’s try out the command without parameters.

$ lxd.benchmark 
Usage: lxd-benchmark launch [--count=COUNT] [--image=IMAGE] [--privileged=BOOL] [--start=BOOL] [--freeze=BOOL] [--parallel=COUNT]
 lxd-benchmark start [--parallel=COUNT]
 lxd-benchmark stop [--parallel=COUNT]
 lxd-benchmark delete [--parallel=COUNT]

--count (= 100)
 Number of containers to create
 --freeze (= false)
 Freeze the container right after start
 --image (= "ubuntu:")
 Image to use for the test
 --parallel (= -1)
 Number of threads to use
 --privileged (= false)
 Use privileged containers
 --report-file (= "")
 A CSV file to write test file to. If the file is present, it will be appended to.
 --report-label (= "")
 A label for the report entry. By default, the action is used.
 --start (= true)
 Start the container after creation

error: A valid action (launch, start, stop, delete) must be passed.
Exit 1

lxd.benchmark is a tool that creates many containers for benchmarking; we can then use the same tool to remove those containers. There is an issue with the default number of containers, 100, which is too high: if you run lxd-benchmark launch without specifying a smaller count, you will mess up your LXD installation because you will run out of memory and possibly disk space. The original bug report got buried in a pull request and needs to be reopened; ideally, the default count should be 1, and the user should knowingly select a bigger number. A new pull request has been opened for this.

Let’s carefully try lxd-benchmark.

$ lxd.benchmark launch --count 3
Test environment:
 Server backend: lxd
 Server version: 2.20
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.10.0-40-generic
 Storage backend: zfs
 Storage version:
 Container backend: lxc
 Container version: 2.1.1

Test variables:
 Container count: 3
 Container mode: unprivileged
 Startup mode: normal startup
 Image: ubuntu:
 Batches: 0
 Batch size: 4
 Remainder: 3

[Dec 5 13:24:26.044] Found image in local store: 5f364e2e3f460773a79e9bec2edb5e993d236f035f70267923d43ab22ae3bb62
[Dec 5 13:24:26.044] Batch processing start
[Dec 5 13:24:28.817] Batch processing completed in 2.773s

It took just 2.8s to launch them on this computer. lxd-benchmark launched 3 containers, named benchmark-%d (benchmark-1, benchmark-2, and so on), so refrain from using the word benchmark as a name for your own containers. Let’s see these containers.

$ lxc list --columns ns4
| NAME          | STATE   | IPV4                 |
| benchmark-1   | RUNNING | (eth0) |
| benchmark-2   | RUNNING | (eth0)  |
| benchmark-3   | RUNNING | (eth0) |

Let’s stop them, and finally remove them.

$ lxd.benchmark stop
Test environment:
 Server backend: lxd
 Server version: 2.20
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.10.0-40-generic
 Storage backend: zfs
 Storage version:
 Container backend: lxc
 Container version: 2.1.1

[Dec 5 13:31:16.517] Stopping 3 containers
[Dec 5 13:31:16.517] Batch processing start
[Dec 5 13:31:20.159] Batch processing completed in 3.642s

$ lxd.benchmark delete
Test environment:
 Server backend: lxd
 Server version: 2.20
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.10.0-40-generic
 Storage backend: zfs
 Storage version:
 Container backend: lxc
 Container version: 2.1.1

[Dec 5 13:31:24.902] Deleting 3 containers
[Dec 5 13:31:24.902] Batch processing start
[Dec 5 13:31:25.007] Batch processing completed in 0.105s

Note that the lxd-benchmark actions follow the naming of the lxc actions (launch, start, stop and delete).


Error “Target LXD already has images”

$ sudo lxd.migrate 
=> Connecting to source server
=> Connecting to destination server
=> Running sanity checks
error: Target LXD already has images, aborting.
Exit 1

This means that the snap version of LXD already has some images, so it is not clean. lxd.migrate requires the snap version of LXD to be clean. Solution: remove the LXD snap and install it again.

$ snap remove lxd
lxd removed

$ snap install lxd
lxd 2.20 from 'canonical' installed

Which “lxc” command am I running?

This is the lxc command of the DEB/PPA package,

$ which lxc
/usr/bin/lxc
This is the lxc command from the LXD snap package.

$ which lxc
/snap/bin/lxc
If you installed the LXD snap but you do not see the /snap/bin/lxc executable, it could be an artifact of your Unix shell. You may have to close that shell window and open a new one.

Error “bash: /usr/bin/lxc: No such file or directory”

If you get the following,

$ which lxc
/snap/bin/lxc
but the lxc command is not found,

$ lxc
bash: /usr/bin/lxc: No such file or directory
Exit 127

then you must close the terminal window and open a new one.

Note: if you loudly refuse to close the current terminal window, you can just type

$ hash -r

which will refresh the list of executables from the $PATH. Applies to bash, zsh. Use rehash if on *csh.
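The reason a new shell (or hash -r) is needed: bash remembers the full path of every command after its first use, so it can keep pointing at a removed /usr/bin/lxc. You can observe the cache yourself, here using date as a stand-in command:

```shell
# bash caches the resolved path of each command after its first run;
# `hash -r` empties that cache so the next lookup searches $PATH again
hash -r            # forget all remembered command locations
date > /dev/null   # running a command caches its resolved path
hash               # list the cache: the path of date now appears in it
```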


on December 05, 2017 01:35 PM

December 04, 2017


Sebastian Heinlein

I am glad to announce aptdaemon: it is a DBus-controlled and PolicyKit-using package management...
on December 04, 2017 08:01 PM

December 03, 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h but I only spent 10h. During this time, I managed the LTS frontdesk during one week, reviewing new security issues and classifying the associated CVE (16 commits to the security tracker).

I prepared and released DLA-1171-1 on libxml-libxml-perl.

I prepared a new update for simplesamlphp (1.9.2-1+deb7u1) fixing 6 CVEs. I have not released any DLA yet, since I have not been able to test the updated package. I’m hoping that the current maintainer can do it, since he wanted to work on the update a few months ago.

Distro Tracker

Distro Tracker has seen a high level of activity in the last month. Ville Skyttä continued to contribute patches; notably, he helped to get rid of the last blocker for a switch to Python 3.

I then worked with DSA to get the production instance upgraded to stretch with Python 3.5 and Django 1.11. This resulted in a few regressions related to the Python 3 switch (despite the large number of unit tests) that I had to fix.

In parallel, Pierre-Elliott Bécue showed up on the debian-qa mailing list and started to contribute. I have been exchanging with him almost daily on IRC to help him improve his patches. He has been very responsive and I’m looking forward to continuing to cooperate with him. His first patch enabled the use of the “src:” and “bin:” prefixes in the search feature, to specify whether to look up source packages or binary packages.

I did some cleanup/refactoring work after the switch of the codebase to Python 3 only.

Misc Debian work

Sponsorship. I sponsored many new packages: python-envparse 0.2.0-1, python-exotel 0.1.5-1, python-aws-requests-auth 0.4.1-1, pystaticconfiguration 0.10.3-1, python-jira 1.0.10-1, python-twilio 6.8.2-1, python-stomp 4.1.19-1. All those are dependencies for elastalert 0.1.21-1 that I also sponsored.

I sponsored updates for vboot-utils 0~R63-10032.B-2 (new upstream release for openssl 1.1 compat), aircrack-ng 1:1.2-0~rc4-4 (introducing airgraph-ng package) and asciidoc 8.6.10-2 (last upstream release, tool is deprecated).

Debian Installer. I submitted a few patches a while ago to support finding ISO images in LVM logical volumes in the hd-media installation method. Colin Watson reviewed them and made a few suggestions and expressed a few concerns. I improved my patches to take into account his suggestions and I resolved all the problems he pointed out. I then committed everything to the respective git repositories (for details review #868848, #868859, #868900, #868852).

Live Build. I merged 3 patches for live-build (#879169, #881941, #878430).

Misc. I uploaded Django 1.11.7 to stretch-backports. I filed an upstream bug on zim for #881464.


See you next month for a new summary of my activities.


on December 03, 2017 05:52 PM

I have been a loyal customer for a password manager called LastPass for a number of years now.  It all started when I decided to treat myself to an early Christmas present by purchasing the “Premium” version back in 2013, in order to take advantage of the extra features such as the mobile app.

Now, don’t get me wrong, I do think $12 is very good value for money and I was very happy with LastPass, but I must say this article really, really got my back up (apparently I’m an “entitled user”). Not only that, but not one but three of the Google ads on the page are for LastPass (now there’s a spooky coincidence!).

I do agree with a lot of other users that doubling the price for absolutely no benefit is an extremely bitter pill to swallow, especially as there are a number of issues I have been having regarding the security of the mobile app. But anyway, I calmed down and the topic went out of my head until I received an email reminding me that they would automatically charge my credit card at the new $24 price. Then, about a week later, while watching a YouTube video by TuxDigital, I heard mention of another password manager called bitwarden.

So a big thank you to Michael for bringing this to my attention. Not only does it have way more features than LastPass, but it is also open source (code on GitHub), self-hostable, and the “Premium” version is only $10. My issues with the LastPass mobile app are gone in bitwarden, replaced with the option to lock the app with your fingerprint or a PIN code, which is a nice happy medium compared to having to log out of LastPass and re-enter your entire master password to regain access!

Another feature I *beeping* love (excuse my French) is that the app and vault allow you to store a “Google Authenticator” key in the vault; it then automatically generates a One Time Password (OTP) on the fly and copies it to the device clipboard. This makes it easy to paste in when auto-filling the username and password, great for those who use this feature on their blogs.

on December 03, 2017 04:42 PM

December 02, 2017

See my earlier post on how to set up and test LXD on Ubuntu (or another Linux distribution).

In this post we see how to set up the timezone in a newly created container.

The problem

The default timezone for a newly created container is Etc/UTC, which is what we used to call Greenwich Mean Time.

Let’s observe.

$ lxc launch ubuntu:16.04 mycontainer
Creating mycontainer
Starting mycontainer

$ lxc exec mycontainer -- date
Sat Dec 2 11:40:57 UTC 2017

$ lxc exec mycontainer -- cat /etc/timezone
Etc/UTC
That is, the observed time in a container follows a timezone that differs from the settings on most of our computers. When we connect with a shell inside the container, the time and date are not the same as on our computer.

The time is recorded correctly inside the container; it is just the way it is presented that is off by a few hours.

Depending on our use of the container, this might or might not be an issue to pursue.

The workaround

We can set the environment variable TZ (for timezone) of each container to our preferred timezone setting.

$ lxc exec mycontainer -- date
Sat Dec 2 11:50:37 UTC 2017

$ lxc config set mycontainer environment.TZ Europe/London

$ lxc exec mycontainer -- date
Sat Dec 2 11:50:50 GMT 2017

That is, we use the lxc config set action to set, for mycontainer, the environment variable TZ to the proper timezone (here, Europe/London). UTC time and Europe/London time happen to be the same during the winter.
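Nothing container-specific is happening here: the C library simply consults the TZ environment variable when formatting times, so the same trick works in any shell:

```shell
# The same instant displayed under two timezones, controlled only by TZ
TZ=Etc/UTC date
TZ=Europe/London date
```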

How do we unset the container timezone and return back to Etc/UTC?

$ lxc config unset mycontainer environment.TZ

Here we used the lxc config unset action to unset the environment variable TZ.

The solution

LXD supports profiles and you can edit the default profile in order to get the timezone setting automatically applied to any containers that follow this profile. Let’s get a list of the profiles.

$ lxc profile list
| NAME    | USED BY |
| default |       7 |

Only one profile, called default. It is used by 7 containers already on this LXD installation.

We set the environment variable TZ in the profile with the following,

$ lxc exec mycontainer -- date
Sat Dec 2 12:02:37 UTC 2017

$ lxc profile set default environment.TZ Europe/London

$ lxc exec mycontainer -- date
Sat Dec 2 12:02:43 GMT 2017

How do we unset the profile timezone and get back to Etc/UTC?

lxc profile unset default environment.TZ

Here we used the lxc profile unset action to unset the environment variable TZ.


on December 02, 2017 12:06 PM

December 01, 2017

The hackerspace in Lausanne, Switzerland has started this weekend's VR Hackathon with a somewhat low-tech 2D hack: using the FSFE's Public Money Public Code stickers in lieu of sticky tape to place the NO CLOUD poster behind the bar.

Get your free stickers and posters

FSFE can send you these posters and stickers too.

on December 01, 2017 08:27 PM

November 30, 2017

This week we’ve seen The Darkness and failed to get a PowerColor RX Vega 56 to work in a Razer Core. In a crowd-ed news segment, we discuss the Gameshell, the Nimabtus drone simulator, the Gemini PDA and the Librem 5 ringtones.

It’s Season Ten Episode Thirty-Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on November 30, 2017 06:00 PM

November 29, 2017

Kubuntu Kafe Live approaching

Kubuntu General News

This Saturday (December 2nd) the second Kubuntu Kafe Live, our online video cafe, will take place from 21:00 UTC.
Join the Kubuntu development community and guests, as our intrepid hosts

  • Aaron Honeycutt
  • Ovidiu-Florin Bogdan
  • Rick Timmis

discuss a cornucopia of topics in this free-format, magazine-style show.

This show includes Technical Design and Planning for a Kubuntu CI Rebuild, a Live Testing workshop in the Kubuntu Dojo, Kubuntu product development and more.

We will be attempting to run a live stream on our YouTube channel, although we encourage you to come and join us on our Big Blue Button conference server: use your name, feel welcome to join room 1, and come and interact with us and be part of the show.

See you there

on November 29, 2017 08:32 PM

Out of curiosity, I decided to try and package this blog as a snap package, and it turns out to be an extremely easy and convenient way to deploy a static blog!

An image of fingers snapping, with the caption "aw snap"


There are several advantages that the snappy packaging format brings to the table as far as application developers are concerned (which I am, my application in this case being my blog).

Snapcraft makes it very easy to package things, there are per-application jails sandboxing your applications/services that basically come for free, and it also comes with a distribution mechanism that takes care of auto-upgrading your snap on any platform.



Since this blog is generated using the excellent "pelican" static blog generator from a bunch of markdown articles and a theme, there's not a lot of things to package in the first place :)

A webserver for the container age

A static blog obviously needs to be served by a webserver.

Packaging a "full" traditional webserver like apache2 (what I used before) or nginx is a little outside the scope of what I would have liked to do with my spare time, so I looked around for another way to serve it. My requirements were:


  • A static files webserver.
  • Able to set headers for cache control and HSTS.
  • Ideally self-contained / statically linked (because snapping the whole thing would be much faster/easier this way)
  • SSL ready. I've had an A+ rating on SSLlabs for years and intend to keep it that way.
  • Easy to configure.

After toying with the idea of writing my own in rust, I instead settled on an already existing project that fits the bill perfectly and is amazingly easy to deploy and configure: Caddy.

A little bit of snapcraft magic

Of course, a little bit of code was needed in the snapcraft recipe to make it all happen.

All of the code is available on a github project, and most of the logic can be found in the snapcraft.yaml file.

Simply copying the Caddyfile and the snap/ subfolder to your existing pelican project should be all you need to get going, then run the following to get a snap package:

# On an Ubuntu system.
snap install snapcraft
snapcraft

With your site's FQDN added to the Caddyfile and pushed to production, you can marvel at all the code and configuration you did not have to write to get an A+ rating with SSLlabs :)
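For illustration, a minimal Caddyfile meeting the requirements listed above might look like the following. The domain and path are hypothetical, the syntax is Caddy v1 (current at the time of writing), and HTTPS certificates are obtained automatically for a public site address:

```
# Hypothetical Caddyfile for a static pelican blog (Caddy v1 syntax).
# "root" points at pelican's generated output directory.
example.com {
    root /srv/blog/output
    gzip
    header / {
        Strict-Transport-Security "max-age=31536000"
        Cache-Control "max-age=3600"
    }
}
```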

Questions? Comments?

As usual, feel free to reach out with any question or comment you may have!

on November 29, 2017 09:31 AM

I wish this was about Ubuntu MATE. It isn't, alas. With the general freak-out again over net neutrality in the United States let alone the Internet blackout in Pakistan, it is time to run some ideas.1

The Internet hasn't been healthy for a while. Even with net neutrality rules in the United States, I have my Internet Service Provider neutrally blocking all IPv6 traffic and throttling me. As you can imagine, that now makes an apt update quite a pain. When I have asked my provider, they have said they have no plans to offer this on residential service. When I have raised the point that my employer wants me to verify the ability to potentially work from home in crisis situations, they said I would need to subscribe to "business class" service and said they would happily terminate my residential service for me if I tried to use a Virtual Private Network.

At this point, my view of the proposed repeal of net neutrality rules in the United States is simple. To steal a line from a former presidential candidate: What difference at this point does it make?2 I have exactly one broadband provider available to me.3 Unless I move to HughesNet or maybe something exotic, I have what is generally available.4

The Internet, if we can even call it a coherent whole anymore, has been quite stressed over the past few years. After all, a simple hurricane can wipe out Internet companies with their servers and networks based in New York City.5 In Puerto Rico, mail carriers of the United States Postal Service were the communications lifeline for quite a while until services could come back online.6 It can be popular on the African continent to simply make Internet service disappear at times to meet the needs of the government of the day.7 Sometimes bad things simply happen, too.8

Now, this is not to say people aren't trying to drive forward. I have found concept papers with ideas that are not totally "pie in the sky".9 Librarians see the world as one where it is littered with PirateBoxes that are instead called LibraryBoxes.10 Alphabet's own Project Loon has been field tested in the skies of Puerto Rico thanks to the grant of a "Special Temporary Authority" by the Federal Communications Commission's Office of Engineering Technology.11

Now, I can imagine life without an Internet. My first e-mail address was tremendously long, as it had a gateway or two in it to get the message to the BBS I dialed into that was tied into FidoNet. I was hunting around for FidoNews and, after reading a recent issue, noticed some names that correlate in an interesting fashion with the Debian & Ubuntu realms. That was a very heartening thing for me to find. With the seeding of apt-offline on at least the Xubuntu installation disc, I know that I would be able to update a Xubuntu installation whenever I actually found access somewhere, even if it was not readily available to me at home. Thankfully, with that bit of seeding, we solved the "chicken and egg" problem of how to install the very tool you need when you don't have the access to fetch it.

We can and likely will adapt. We can and likely will overcome. These bits of madness come and go. As it was, I already started pricing the build-out of a communications hub with a Beverage antenna as well as an AN-FLR9 Wullenweber array at a minimum. On a local property like a couple acres of farm land I could probably set this up for just under a quarter million dollars with sufficient backups.12 One farm was positioned close enough to a physical corridor to the PIT Internet Exchange Point but that would still be a little over 100 miles to traverse. As long as I could get the permissions, could get the cable laid, and find a peer, peering with somebody who uses YYZ as their Internet Exchange Point is oddly closer due to quirks of geography.

Earlier in today's news, it appeared that the Democratic People's Republic of Korea made yet another unauthorized missile launch.13 This one appears to have been an ICBM that landed offshore from Japan.14 The DPRK's leader has threatened missile strikes of various sorts over the past year on the United States.15 A suborbital electromagnetic pulse blast near our Pacific coast, for example, would likely wipe out the main offices of companies ranging from Google to Yahoo to Apple to Amazon to Microsoft in terms of their computers and other electronic hardware.16

I'm not really worried right now about the neutrality of internetworking. I want there to still be something carried on it. With the increasingly real threat of an EMP possibly wiping out the USA's tech sector due to one rogue missile, bigger problems exist than mere paid prioritization.17

  1. Megan McArdle, "The Internet Had Already Lost Its Neutrality," Bloomberg.Com, November 21, 2017, ; M. Ilyas Khan, "The Politics behind Pakistan's Protests," BBC News, November 26, 2017, sec. Asia,

  2. The candidate in this case is Hillary Clinton. That sentence, often taken out of context, was uttered before the Senate Foreign Relations Committee in 2013.

  3. Sadly the National Broadband Map project was not funded to be continually updated. It would have continued to show that, even though cell phone services are available, those are not meant for use in place of a wired broadband connection. Updates stopped in 2014.

  4. I am not made of gold but this is an example of an offering on the Iridium constellation:

  5. Sinead Carew, "Hurricane Sandy Disrupts Northeast U.S. Telecom Networks," Reuters, October 30, 2012,

  6. Hugh Bronstein, "U.S. Mail Carriers Emerge as Heroes in Puerto Rico Recovery," Reuters, October 9, 2017,

  7. "Why Has Cameroon Blocked the Internet?," BBC News, February 8, 2017, sec. Africa,

  8. "Marshall Islands' 10-Day Internet Blackout Extended," BBC News, January 9, 2017, sec. News from Elsewhere,

  9. Pekka Abrahamsson et al., "Bringing the Cloud to Rural and Remote Areas - Cloudlet by Cloudlet," ArXiv:1605.03622 [Cs], May 11, 2016,

  10. Jason Griffey, "LibraryBox: Portable Private Digital Distribution," Make: DIY Projects and Ideas for Makers, January 6, 2014,

  11. Nick Statt, "Alphabet's Project Loon Deploys LTE Balloons in Puerto Rico," The Verge, October 20, 2017,

  12. One property reviewed with a house, two barns, and a total of six acres of land came to $130,000. The rest of the money would be for licensing, equipment, and construction.

  13. "North Korea Fires New Ballistic Missile." BBC News, November 28, 2017, sec. Asia.

  14. "N Korea 'Tested New Long-Range Missile.'" BBC News, November 29, 2017, sec. Asia.

  15. Kim, Christine, and Phil Stewart. "North Korea Says Tests New ICBM, Can Reach All U.S. Mainland." Reuters, November 29, 2017.

  16. For example: Zimmerman, Malia. "Electromagnetic Pulse Attack on Hawaii Would Devastate the State." Fox News, May 12, 2017.

  17. Apparently the last missile test can reach the Pacific coast of the United States. See: Smith, Josh. "How North Korea’s Latest ICBM Test Stacks up." Reuters, November 29, 2017.

on November 29, 2017 04:40 AM

November 28, 2017

At fleetster we have our own instance of Gitlab and we rely a lot on Gitlab CI. Also our designers and QA guys use (and love) it, thanks to its advanced features.

Gitlab CI is a very powerful system of Continuous Integration, with a lot of different features, and new features land with every release. It has very rich technical documentation, but it lacks a generic introduction for those who want to use it in an already existing setup. A designer or a tester doesn’t need to know how to autoscale it with Kubernetes or the difference between an image and a service.

But they still need to know what a pipeline is, and how to see a branch deployed to an environment. In this article, therefore, I will try to cover as many features as possible, highlighting how end users can enjoy them; in the last months I explained these features to some members of our team, including developers: not everyone knows what Continuous Integration is or has used Gitlab CI in a previous job.

If you want to know why Continuous Integration is important, I suggest reading this article, while for the reasons for using Gitlab CI specifically, I leave the job to Gitlab itself.


Every time a developer changes some code he saves his changes in a commit. He can then push that commit to Gitlab, so other developers can review the code.

Gitlab will also start some work on that commit, if the Gitlab CI has been configured. This work is executed by a runner. A runner is basically a server (it can be a lot of different things, even your PC, but we can simplify it as a server) that executes the instructions listed in the .gitlab-ci.yml file, and reports the result back to Gitlab itself, which will show it in its graphical interface.

When a developer has finished implementing a new feature or a bugfix (an activity that usually requires multiple commits), they can open a merge request, where other members of the team can comment on the code and the implementation.

As we will see, designers and testers can (and really should!) join this process too, giving feedback and suggesting improvements, especially thanks to two features of Gitlab CI: environments and artifacts.


Every commit that is pushed to Gitlab generates a pipeline attached to that commit. If multiple commits are pushed together, the pipeline will be created only for the last of them. A pipeline is a collection of jobs split into different stages.

All the jobs in the same stage run concurrently (if there are enough runners), and the next stage begins only if all the jobs from the previous stage have finished successfully.

As soon as a job fails, the entire pipeline fails. There is an exception to this, as we will see below: if a job is marked as manual, a failure will not make the pipeline fail.

The stages are just a logical division between batches of jobs, where it doesn’t make sense to execute the next jobs if the previous ones failed. We can have a build stage, where all the jobs that build the application are executed, and a deploy stage, where the built application is deployed. It doesn’t make much sense to deploy something that failed to build, does it?

A job shouldn’t have any dependency on any other job in the same stage, while it can expect results from jobs of a previous stage.
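As an illustration, a .gitlab-ci.yml with the build and deploy stages described above could be sketched like this (job names and scripts are invented):

```yaml
# Two stages: the build jobs run concurrently, and "deploy"
# starts only if every job of the build stage succeeded
stages:
  - build
  - deploy

build:app:
  stage: build
  script:
    - make app

build:docs:
  stage: build
  script:
    - make docs

deploy:
  stage: deploy
  script:
    - ./deploy.sh
```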

Let’s see how Gitlab shows information about stages and stages’ status.




A job is a collection of instructions that a runner has to execute. You can see in real time what the output of the job is, so developers can understand why a job fails.

A job can be automatic, so it starts automatically when a commit is pushed, or manual. A manual job has to be triggered by someone. This can be useful, for example, to automate a deploy but still deploy only when someone manually approves it. There is also a way to limit who can run a job, so that only trustworthy people can deploy, to continue the previous example.

A job can also build artifacts that users can download; for example, it can create an APK you can download and test on your device. In this way both designers and testers can download an application and test it without having to ask developers for help.

Other than creating artifacts, a job can deploy an environment, usually reachable by a URL, where users can test the commit.

Job statuses are the same as stage statuses: indeed, stages inherit their status from their jobs.



As we said, a job can create an artifact that users can download to test. It can be anything, like an application for Windows, an image generated by a PC, or an APK for Android.
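In the .gitlab-ci.yml, a job declares its artifacts by listing the paths to keep after the script has run; a sketch with invented paths:

```yaml
# Hypothetical job keeping the built APK as a downloadable artifact
build:android:
  stage: build
  script:
    - ./gradlew assembleDebug
  artifacts:
    paths:
      - app/build/outputs/apk/
    expire_in: 1 week
```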

So you are a designer, and the merge request has been assigned to you: you need to validate the implementation of the new design!

But how to do that?

You need to open the merge request, and download the artifact, as shown in the figure.

Every pipeline collects all the artifacts from all the jobs, and every job can have multiple artifacts. When you click on the download button, a dropdown will appear where you can select which artifact you want. After the review, you can leave a comment on the MR.

You can always download artifacts also from pipelines that do not have an open merge request ;-)

I am focusing on merge requests because that is usually where testers, designers, and stakeholders in general enter the workflow.

But merge requests are not tied to pipelines: while they integrate nicely with one another, they do not have any direct relation.



In a similar way, a job can deploy something to an external server, so you can reach it through the merge request itself.
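A deploy job declares its environment with a name and a URL, which is what makes the link appear on the merge request; a sketch with invented names:

```yaml
# Hypothetical manual deploy job attached to a "staging" environment
deploy:staging:
  stage: deploy
  script:
    - ./deploy.sh staging
  environment:
    name: staging
    url: https://staging.example.com
  when: manual   # triggered by hand from the Gitlab UI
```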

As you can see, the environment has a name and a link. Just click the link to go to a deployed version of your application (of course, if your team has set it up correctly).

You can also click on the name of the environment, because Gitlab has other cool features for environments, like monitoring.



This was a small introduction to some of the features of Gitlab CI: it is very powerful, and using it in the right way allows the whole team to use just one tool to go from planning to deploying. A lot of new features are introduced every month, so keep an eye on the Gitlab blog.

For setting it up, or for more advanced features, take a look at the documentation.

At fleetster we use it not only for running tests, but also for automatic versioning of the software and automatic deploys to testing environments. We have automated other jobs as well (building apps and publishing them on the Play Store, and so on).

Speaking of which, do you want to work in a young and dynamic office with me and a lot of other amazing guys? Take a look at the open positions at fleetster!

Kudos to the Gitlab team (and the other people who help in their free time) for their awesome work!

If you have any question or feedback about this blog post, please drop me an email at or tweet me :-) Feel free to suggest additions, or ways to rephrase paragraphs more clearly (English is not my mother tongue).

Bye for now,

P.S: if you have found this article helpful and you’d like us to write others, would you mind helping us reach the Ballmer Peak by buying me a beer?

on November 28, 2017 09:00 PM

KDE’s Goal: Privacy

Sebastian Kügler

by Banksy
At Akademy 2016, the KDE community started a long-term project to invigorate its development (both technically and organizationally) with more focus. This process of soul-searching has already yielded some very useful results, the most important one so far being agreement on a common community-wide vision:

A world in which everyone has control over their digital life and enjoys freedom and privacy.

This presents a very high-level vision, so a logical follow-up question has been how it influences KDE’s activities and actions in practice. KDE, being a fairly loose community with many separate sub-communities and products, is not an easy target to align to a common goal. A common goal may have very different implications for each of KDE’s products: for an email and groupware client, it may be very straightforward (e.g. support high-end crypto, work very well with privacy-respecting and/or self-hosted services); for others, it may be mostly irrelevant (a natural painting app such as Krita simply doesn’t have a lot of privacy exposure); yet for a product such as Plasma, the implications may be fundamental and varied.
So in the pursuit of common ground and a common goal, we had to concentrate on what unites us. There’s of course Software Freedom, but that is somewhat vague as well, and it’s already entrenched in KDE’s DNA. It’s not a very useful goal since it doesn’t give us something to strive for, but rather something we maintain anyway. A “good goal” has to be more specific, yet it should have a clear connection to Free Software, since that is probably the single most important thing that unites us. Almost two years ago, I posited that privacy is Free Software’s new milestone, trying to set a new goalpost for us to head for. Now the point where these streams join has come, and KDE has chosen privacy as one of its primary goals for the next 3 to 4 years. The full proposal can be read here.
“In 5 years, KDE software enables and promotes privacy”

Privacy being a vague concept, especially given the diversity in the KDE community, it needs some explanation and some operationalization to make it specific and to establish how we can create software that enables privacy. There are three general focus areas we will concentrate on: security, privacy-respecting defaults, and offering the right tools in the first place.


Security

Improving security means improving our processes to make it easier to spot and fix security problems, and avoiding single points of failure in both software and development processes. This entails code review and quick turnaround times for security fixes.

Privacy-respecting defaults

Defaulting to encrypted connections where possible and storing sensitive data in a secure way. Users should be able to expect that KDE software Does The Right Thing and protects their data in the best possible way. Surprises should be avoided as much as possible, and reasonable expectations should be met with best effort.

Offering the right tools

KDE prides itself on providing a very wide range of useful software. From a privacy point of view, some functions are more important than others, of course. We want to offer the tools that most users need in a way that allows them to lead their lives privately, so the toolset needs to be comprehensive and cover as many needs as possible. The tools themselves should make it easy and straightforward to achieve privacy. Some examples:

  • An email client allowing encrypted communication
  • Chat and instant messaging with state-of-the-art protocol security
  • Support for online services that can be operated as a private instance, not depending on a third-party provider

Of course, this is only a small part, and the needs of our userbase vary widely.

Onwards from here…

In the past, KDE software has come a long way in providing privacy tools, but the toolset is neither comprehensive, nor are privacy and its implications widely seen as critical to our success in this area. Setting privacy as a central goal for KDE means that we will put more focus on this topic, which will lead to improved tools that allow users to increase their level of privacy. Moreover, it will set an example for others to follow and hopefully raise standards across the whole software ecosystem. There is much work to do, and we’re excited to put our shoulders to the wheel and get started.

on November 28, 2017 07:29 PM

Rumors abound that WeWork is to acquire Meetup for $30 million. I wanted to share a few thoughts here. The caveat: I have no behind-the-scenes knowledge here; these are just some thoughts based on a somewhat cursory knowledge of both organizations. This is also (currently) speculation, so the nature and numbers of an acquisition might be different.

It is unsurprising that WeWork would explore an acquisition of Meetup. From merely a lead-generation perspective, making it simple for the hundreds of thousands of meetups around the world to easily host their events at WeWork spaces will undoubtedly have a knock-on effect of people registering as WeWork members, either as individual entrepreneurs, or hiring hosted office space for their startups.

Some of the biggest hurdles for meetups are (a) sourcing sponsorship funds to cover costs, and (b) actually providing what those funds pay for, such as food and beverages, AV equipment, and promotional collateral. WeWork could obviously provide not just space and equipment but also potentially broker sponsorships too. As with all ecosystem-to-ecosystem acquisitions, bridging those ecosystems exposes value (e.g. Facebook and Instagram.)

The somewhat surprising element here to me is the $30 million valuation. Meetup used to publish their growth stats, but they don’t seem to be available any more. The most recent stats I could find (from 2012, on Quora) suggested 11.1 million users and 349,000+ meetups. There is a clear source of revenue here, and while it may be relatively limited in potential growth, I would have expected the revenue projection, brand recognition, and current market lead to be worth more than $30 million.

Mind you, and with the greatest of respect to the wonderful people at Meetup, I feel they have somewhat missed the mark in terms of their potential for innovation. There are all kinds of things they could have done to capitalize on their market position by breaking down the onboarding and lifecycle of a meetup (from new formation to regular events) and optimizing and simplifying every element of this for organizations.

There are all kinds of services models that could have been hooked in here such as partnerships with providers (e.g. food, equipment, merch etc) and partner organizations (e.g. major potential consumers and sponsors of the service), and more. I also think they could have built out the profile elements of their service to glue different online profiles together (e.g. GitHub, LinkedIn, Reddit) to not just source groups, but to become a social platform that doesn’t just connect you to neat groups, but to neat people too.

As I have been saying for a while, there is also a huge missed opportunity in converting the somewhat transitory nature of a meetup (you go along and have a good time, but after the meetup finishes, nothing happens) into a broader and more consistently connected set of engagements between members. Doing this well requires community building and collaboration experience that, I would proffer, most organizers probably don’t have.

All of this seems like a bit of a missed opportunity, but as someone sitting on the outside of the organization, who am I to judge? Running a popular brand and service is a lot of work, and from what I understand, they have a fairly small team. There is only so much you can do.

My suspicion is that Meetup were shopping around a little for a sale and that prospective buyers were primarily interested in their brand potential and integration (with an expected limited cap on revenues). As such, $30 million might make sense, and would strike me as a steal for WeWork.

Either way, congratulations to WeWork and Meetup on their future partnership.

The post WeWork to Acquire Meetup: Some Thoughts appeared first on Jono Bacon.

on November 28, 2017 06:30 AM

November 27, 2017

Development on the Xfce PulseAudio Plugin has been moving along at a steady pace, and the latest release marks the completion of another great feature for the Sound Indicator replacement applet.

What’s New?

New Feature: Multimedia Key Support

Multimedia keyboard support has been hit and miss in the Linux space for as long as there have been multimedia keyboards. Support for these keys has been entirely dependent on support baked into each individual application. The best current example of this is the Spotify Linux client: users can control the media player with various panel plugins, but not with their keyboards.

With the new multimedia key support in Xfce PulseAudio Plugin 0.3.3, the recently added MPRIS2 integration has been complemented with key bindings for the Play/Pause, Previous, Next, and Stop keys. When these keys are pressed, any actively running player known to the plugin will be notified, enabling keyboard playback control.

You can check out the new feature in the video below, where I very excitedly inundate my media players with playback commands.

General Improvements

  • Simplified device menus: The bold section headers have been replaced in favor of a single menu per input and output device. If there’s only one option available, the menu is no longer displayed.
  • Improved volume scale increments: The old defaults were steps of 6% and a max of 153%. These seemed a bit unusual, and have been replaced with a more sensible 5% and 150%.

Bug Fixes

  • Fixed builds with clang (Xfce #13889) (0.3.2)
  • Fixed panel icon size with high DPI (Xfce #13894) (0.3.2)
  • Show volume change notifications when changed with another application (Xfce #13677)
  • Change default device when changed with another application (Xfce #13908)
  • Fixed flag in g_bus_watch_name_on_connection() method (Xfce #13961)
  • Fix plugin size calculation with multiple rows (Xfce #13998)

Translation Updates

Chinese (China), Croatian, Czech, Danish, Dutch, French, German, Indonesian, Kazakh, Korean, Norwegian Bokmål, Polish, Portuguese (Brazil), Swedish, Ukrainian



The latest version of Xfce PulseAudio Plugin can always be downloaded from the Xfce archives. Grab version 0.3.3 from the below link.

  • SHA-256: d6aae9409714c5ddea975c350b4d517e078a5550190165b17ca062d0eb69f9a6
  • SHA-1: 5921f7c17b96dda09f035e546e06945f40398dc9
  • MD5: d3d3e012369af6d2302d4b70a7720a17
on November 27, 2017 11:23 AM

November 26, 2017

I’ve gotten this question several times since I started developing the Suru icon set: “why aren’t you including third-party application icons?”

I’m of the position that if you are a software vendor, you should not infringe on the brands of third-party software that may be installed on your platform for the simple reason that the developers of that software deserve to have their brands respected (regardless of whether or not it is open source).

You should not infringe on the brands of third-party software.

Once a platform vendor decides to ship an icon theme that overrides the brands of tens or hundreds of applications that users may install on their platform, it immediately infringes on the rights of all those application developers. For example, Mozilla invested in creating a brand and icon for Firefox, and no Linux vendor should replace or modify it without Mozilla’s permission; the same applies to the hundreds of other apps.

Shipping an icon theme that overrides the brands of applications infringes on the rights of application developers.

This would be like Apple or Google deciding they don’t like the icons of certain applications on their platforms and shipping built-in icons to override them ahead of time. Problematic, no? While Linux distributions don’t have the same scale as Android or iOS, the principle should be the same.

“What about choice?”

While an individual user is free to choose to install and use icon themes from the community (since I believe people (should) have the right to do what they want with their own personal setup), it shouldn’t be the position of a software vendor to make those choices beyond the scope of their platform.

That said, I do think a vendor is free to make available any number of themes or customization options to users as they see fit. But in the case of a Linux distribution (and I’ve said this to a few distribution maintainers who’ve contacted me about using my icons), it behooves a maintainer to be aware of the responsibility they have not only to their users but to the Free Software developer community, and not to make sweeping choices that may be against the will of some developers.

Linux distributions should be aware of the responsibility they have not only to their users but to the Free Software developer community.

All this is to say if Ubuntu (or any other serious distribution vendor) is thinking about shipping a custom icon theme, it has a responsibility not just to users but to the developer community to not infringe on their rights.

So better safe than sorry.

on November 26, 2017 07:00 PM

About 3 months, a GStreamer Conference, and two bug-fix releases have passed since the GStreamer Rust bindings release 0.8.0. Today version 0.9.0 was released (followed by 0.9.1, with a small bugfix to export some forgotten types), with a couple of API improvements and lots of additions and cleanups. This new version depends on the new set of releases of the gtk-rs crates (glib etc.).

The full changelog can be found here, but below is a short overview of the (in my opinion) most interesting changes.


The basic tutorials 1 to 8 were ported from C to Rust by various contributors. The C versions and the corresponding explanatory text can be found here, and it should be relatively easy to follow the text together with the Rust code.

This should make learning to use GStreamer from Rust much easier, in combination with the few example applications that exist in the repository.

Type-safety Improvements

Previously, querying the current playback position from a pipeline (and various other analogous things) gave you a plain 64-bit integer, just like in C. However, in Rust we can easily do better.

The main problem with just getting an integer was that there are “special” values with the meaning of “no value known”, specifically GST_CLOCK_TIME_NONE for values in time. In C this often causes bugs: code ignores this special case and then does calculations with such a value, resulting in completely wrong numbers. In the Rust bindings these are now expressed as an Option<_> so that the special case has to be handled separately. In combination with that, timed values use a new type called ClockTime that implements the arithmetic traits and others, so you can still do normal arithmetic operations on the values while the implementation of those operations takes care of GST_CLOCK_TIME_NONE.

Previously it was also easy to take a value in bytes and add it to a value in time. Whenever multiple formats are possible, a new type called FormatValue is now used that combines the value itself with its format to prevent such mistakes.
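To illustrate the idea, here is a minimal std-only sketch of a ClockTime-like type. This is not the actual gstreamer crate API, just an illustration of how Option-based arithmetic makes the "no value known" case impossible to ignore:

```rust
use std::ops::Add;

// Hypothetical stand-in for gst::ClockTime: nanoseconds that may be unknown.
#[derive(Debug, Clone, Copy, PartialEq)]
struct ClockTime(Option<u64>);

impl Add for ClockTime {
    type Output = ClockTime;
    fn add(self, rhs: ClockTime) -> ClockTime {
        // "No value known" is contagious: NONE + anything = NONE,
        // instead of silently producing a wrong number as in C.
        match (self.0, rhs.0) {
            (Some(a), Some(b)) => ClockTime(Some(a + b)),
            _ => ClockTime(None),
        }
    }
}

fn main() {
    let pos = ClockTime(Some(1_000_000_000)); // 1 second
    let none = ClockTime(None);               // GST_CLOCK_TIME_NONE
    assert_eq!(pos + pos, ClockTime(Some(2_000_000_000)));
    assert_eq!(pos + none, ClockTime(None)); // no bogus arithmetic
    println!("ok");
}
```

The real ClockTime in the bindings works along these lines, while also implementing the remaining arithmetic and formatting traits.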

Error Handling

Various operations in GStreamer can fail with a custom enum type: linking pads (PadLinkReturn), pushing a buffer (FlowReturn), changing an element’s state (StateChangeReturn). Previously, handling these was not as convenient as the usual Result-based error handling in Rust. With this release, all these types provide an into_result() function that converts them into a Result, splitting the enum into its good and bad cases, e.g. FlowSuccess and FlowError. Based on this, the usual Rust error handling is possible, including usage of the ?-operator. Once the Try trait is stable, it will also be possible to use the ?-operator directly on FlowReturn and the others, without converting into a Result first.
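The pattern can be sketched with a simplified stand-in for FlowReturn (the real enums in the bindings have more variants, so take this as illustrative only):

```rust
// Hypothetical, reduced stand-in for gst::FlowReturn and friends.
#[derive(Debug, PartialEq)]
enum FlowReturn { Ok, Eos, Error }

#[derive(Debug, PartialEq)]
enum FlowSuccess { Ok, Eos }

#[derive(Debug, PartialEq)]
enum FlowError { Error }

impl FlowReturn {
    // Split the C-style enum into its good and bad halves.
    fn into_result(self) -> Result<FlowSuccess, FlowError> {
        match self {
            FlowReturn::Ok => Ok(FlowSuccess::Ok),
            FlowReturn::Eos => Ok(FlowSuccess::Eos),
            FlowReturn::Error => Err(FlowError::Error),
        }
    }
}

fn push_buffer(ret: FlowReturn) -> Result<FlowSuccess, FlowError> {
    // After conversion, the usual ?-operator works.
    let success = ret.into_result()?;
    Ok(success)
}

fn main() {
    assert_eq!(push_buffer(FlowReturn::Ok), Ok(FlowSuccess::Ok));
    assert!(push_buffer(FlowReturn::Error).is_err());
    println!("ok");
}
```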

All these enums are also marked as #[must_use] now, which causes a compiler warning if code does not specifically handle them (which can also mean explicitly ignoring them), making it even harder to silently ignore failures of such operations.

In addition, all the examples and tutorials now make use of the above, and many examples were ported to the failure crate and now implement proper error handling in all situations, for example the decodebin example.

Various New API

Apart from all of the above, a lot of new API was added, both for writing GStreamer-based applications (and making that easier) and for writing GStreamer plugins in Rust. For the latter, the gst-plugin-rs repository with various crates (and plugins) was ported to the GStreamer bindings and completely rewritten, but more on that in another blog post in the next couple of days, once the gst-plugin crate is released and published.

on November 26, 2017 06:59 PM

There is an "auto mode" for automating Debian/Ubuntu installations using preseeding.

You can check Debian's or Ubuntu's documentation for details.

Basically, if we have prepared a "preseed.cfg" in the right place, then we can use "auto" to install Debian stretch with that "preseed.cfg".
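For reference, a typical invocation looks something like this at the installer boot prompt (the host and path are hypothetical; point them at wherever your preseed.cfg actually lives):

```
auto url=http://example.com/d-i/stretch/preseed.cfg

# "auto" here is the boot alias documented in the Debian Installation Guide;
# it expands to roughly:
install auto=true priority=critical url=http://example.com/d-i/stretch/preseed.cfg
```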

A "preseed.cfg" is usually a static file, so I came up with an idea: how about making it dynamic, so we can share it with others?

If you are interested, please check

on November 26, 2017 10:07 AM

November 24, 2017

Full Circle Magazine #127

Full Circle Magazine

This month:
* Command & Conquer
* How-To : Ubuntu Base, Intro To FreeCAD, and Great Cow Basic [NEW!]
* Graphics : Inkscape
* Researching With Linux
* My Opinion: Plasma 5, or Plasma 4?
* KODI Room
* Review: FixMeStick
* Ubuntu Games: Humble Bundles
plus: News, Q&A, and much more.
Get it while it’s hot!
on November 24, 2017 08:12 PM

There is an increasing number of events where free software enthusiasts can meet in an alpine environment for hacking and fun.

In Switzerland, Swiss Linux is organizing the fourth edition of the Rencontres Hivernales du Libre in the mountain resort of Saint-Cergue, a short train ride from Geneva and Lausanne, 12-14 January 2018. The call for presentations is still open.

In northern Italy, not far from Milan (Malpensa) airport, Debian is organizing a Debian Snow Camp, a winter getaway for developers and enthusiasts in a mountain environment where the scenery is as diverse as the Italian culinary options. It is hoped the event will take place 22-25 February 2018.

on November 24, 2017 08:31 AM

November 23, 2017

Warm white lights
Since I’ve been playing with various home automation technologies for some time already, I thought I’d also start writing about it. Be prepared for some blogs about smart lighting, smart home and related technologies.

Most recently, I’ve gotten myself a few items from IKEA’s new range of smart lights, called trådfri (Swedish for “wireless”). These lights can be remote-controlled using a smartphone app or various kinds of switches. The products are still fairly young, so I thought I’d give them a try. Overall, the system seems well thought-through and feels fairly high-end; I didn’t notice any major annoyances.

First Impressions

Trådfri hub and dimmer

My first impressions are actually pretty good. Initially, I bought a hub which is used to control the lights centrally. This hub is required to be able to use the smartphone app or update the firmware of any component (more on that later!). If you just want to use one of the switches or dimmers that come separately, you won’t need the hub.
Setting everything up is straightforward, the documentation is fine, and no special skills are needed to install these smart lights. Unpacking unfortunately means the usual fight with blister packaging (will it ever stop?), but after that, a few handy surprises awaited me. What I liked:
Hub hides cables

  • The light is nice and warm. The GU10 bulbs I got give 400 lumens and are dimmable. For my taste, they could be a bit darker at the lower end of the scale, but overall the light feels comfy and warm: not too cold, but not too yellow either. The GU10 bulbs are spec’ed at 2700 Kelvin. No visible flickering either.
  • Trådfri components are relatively inexpensive. A hub, a dimmer, and 4 warm-white GU10 bulbs set me back about 75€, way cheaper than comparable smart lights, for example Philips Hue. As needs are fairly individual, exact prices are best looked up on IKEA’s website.
  • The hub has a handy cable storage function: you can roll up excess cable inside the hub, a godsend if you want to have the slightest chance of preventing a spaghetti situation.
  • The hub is USB-powered, 1A power supply suffices, so you may be able to plug it into the USB port of some other device, or share a power supply.
  • The dimmer can be removed from the cradle. The cradle can be stuck on any flat surface, it doesn’t need additional cabling, and you can easily take out the dimmer and carry it around.
  • The wireless technology used is ZigBee, which is a standard thing and also used by other smarthome technologies, most notably, Philips Hue. I already own (and love) some Philips Hue lights, so in theory I should be able to pair up the Trådfri lights with my already existing Hue lights. (This is a big thing for me, I don’t want to have different lighting networks around in my house, but rather concert the whole lighting centrally.)

Pairing IKEA Trådfri with Philips Hue

Let’s call this “work in progress”, meaning I haven’t yet been able to pair a Trådfri bulb with my Hue system. I’ll dig some more into it, and I’m pretty sure I’ll make it work at some point. If you’re interested in combining Hue and Trådfri bulbs, here are a couple of pointers:

If you want to try this yourself, make sure you get the most recent lights from the store (the clerk was helpful and knowledgeable, good advice there!). You’ll also likely need a hub, at least for updating the firmware. If you’re just planning to use the bulbs together with a Hue system, you won’t need the hub later on, so that may seem like 30€ down the drain. Bit of a bummer, but depending on how many lights you’ll be buying, given the difference in price between IKEA and Philips, it may well be worth it.

Edit: After a few more tries, the bulb is now paired to the Philips Hue system. More testing will ensue, and I’ll either update this post or write a new one.

on November 23, 2017 02:28 PM

November 22, 2017

I’ve been thinking about gifts for Hackers and Makers lately as the holiday season arrives. I decided I’d build a public list of some of my favorite things (and perhaps some things I’d like myself as well!) I’ll break it down into a few categories for different kinds of hackers (and different kinds of gifters as well). Prices are current as of writing, but not something I’ll be updating.

Stocking Stuffers (Under about $20)

Yubico U2F Security Key

Yubico U2F Security Key

The U2F Security Key by Yubico is a hardware two-factor authentication token compatible with the FIDO Alliance Universal 2nd Factor (U2F) standard. Supported sites include Google (GMail), GitHub, GitLab, Bitbucket, Dropbox, and Facebook. Unlike SMS, U2F can’t be intercepted by an adversary (even in countries with government-run telcos). It continues to work when your smartphone battery is dead, and is backed by a hardware secure element. It still won’t protect you against malware on your computer, but it’s a dramatic increase in security for most threat models. Everyone should have two security keys: one for daily use, and a backup (already enrolled) kept in a safe place in case something happens to the primary. $17 at Amazon

Red Team Field Manual

Red Team Field Manual

The Red Team Field Manual is a versatile guide for anyone who quickly needs to perform security tasks on both Windows and Linux. Though mostly targeted towards penetration testers & red teamers, this is useful for system administrators who spend most of their time on one platform but need to work on the other occasionally, or for budding infosec students getting used to working on their non-native platform. It provides command lines for a number of different tasks on both platforms, including:

  • Networking Commands (ip address, routing table, etc.)
  • Common file operations (search, replace, extract, hash, etc.)
  • Common file locations (password hashes, configuration, etc.)
  • Basic scripting operations (bash, python, powershell)

This is not reading material – it’s strictly a reference, but in a quite handy format & form factor. $9 at Amazon

iFixit Essentials Toolkit

iFixit Essentials Toolkit

The iFixit Essentials Toolkit is a smaller version of my favorite toolkit, the iFixit Pro Tech Toolkit. It contains a high quality screwdriver handle, the most frequently used screwdriver bits, and several tools useful for opening all kinds of devices, including smartphones, routers, and pretty much any other IoT device out there. It comes in a nice case that uses neodymium magnets to hold it closed. It also supports iFixit, who produce some really high quality teardowns and post it all online for free. $19.99 at Amazon

For Penetration Testers & Red Teamers

WiFi Pineapple Nano

WiFi Pineapple Nano

Hak5’s WiFi Pineapple may be the best-known piece of hacking hardware out there. In the current generation, the Nano offers two radios built-in, making it perfect for a repeater style setup. In addition to the use as an attack device, allowing penetration testers to conduct wireless audits, attacks on clients, and other kinds of applied attacks, the Pineapple is also great for the hacker on the go. I often use mine to connect to hotel WiFi on one radio, perform a VPN link back to a VPN server, and provide a WPA2 hotspot on the other radio. Few other travel APs can provide this kind of functionality, so bringing the Pineapple Nano on the road with me always gives a lot of flexibility and options. $99.99 at HakShop

Packet Squirrel

Packet Squirrel

If the WiFi Pineapple is a Swiss Army Knife for WiFi networks, then the Packet Squirrel is that for wired Ethernet. As a physical man-in-the-middle (MitM) device, the Packet Squirrel allows you to perform network attacks, modify traffic, or just VPN your own devices. (Even multiple devices, if a network switch is connected behind the Packet Squirrel.) While the WiFi Pineapple may be the classic Hak5 tool (perhaps excepting the USB Rubber Ducky), the Packet Squirrel is the newest member of the family. I haven’t had a chance to do much with mine yet, but it looks promising and has a ton of cool features. $59.99 at HakShop

Red Team: How to Succeed By Thinking Like the Enemy

Red Team Book

Not just about information security red teaming, this book by Micah Zenko describes the way in which adversarial simulation helps organizations strengthen their posture. By taking a look at the role played by assumed attackers, it demonstrates how understanding the enemy leads to better defenses, and how playing the enemy leads to finding previously unknown weaknesses in those defenses. $20.32 at Amazon

Unauthorised Access: Physical Penetration Testing for IT Security Teams

Unauthorised Access

Red Teaming takes on many forms, and understanding physical security is important for any penetration tester or red teamer, even if he or she does not actually execute physical attacks. Knowing about the possibilities in the physical space helps to understand risks and compensating controls. Unauthorised Access: Physical Penetration Testing for IT Security Teams by Will Allsopp describes the basics of penetration testing physical access controls (mostly buildings and datacenters) and will help you to look at these controls in an entirely new light. $25.43 at Amazon

For Hardware Hackers & Electronics Makers

Brymen BM235 Multimeter (EEVBlog Model)


When doing any electronics work, whether it’s making, debugging, reverse engineering, or any other form of hacking, being able to take voltage, current, resistance and other readings is critical. The typical tool of choice for this is the handheld multimeter, and the Brymen BM235 is my favorite multimeter. While there are surely better multimeters out there (the Fluke 87V is probably the best known multimeter for electronics work), this Brymen offers most of the features at a significantly lower price. Most hobbyists and hardware hackers don’t need the resolution of the 87V or similar multimeters, but the Brymen still offers good functionality, and most importantly, is a quality multimeter with proper safety features. $125 at Amazon

Dremel Cordless Rotary Tool

Dremel 8220

The Dremel 8220 is a 12V cordless rotary tool. You can use it to cut, grind, or drill all kinds of materials. I’ve used mine to cut openings in project boxes for several electronics projects, to open ultrasonically welded electronics devices, and even the occasional home improvement project. While they have corded models as well, I find the cordless model more convenient, especially when working on my patio. (Being in Silicon Valley, I don’t exactly have room for a full workshop.) $99.00 at Amazon

TUMPA Multi-Protocol Adapter


The TIAO USB Multi-Protocol Adapter, or TUMPA for short, is a multi-protocol interface, allowing for JTAG, SPI, UART, RS-232, and SWD. All of this is useful for interfacing with all kinds of hardware, like dumping flash, using JTAG to examine the running state of a CPU, or even just basic UART interfacing. Like so many of these multi-interface systems, it uses an FTDI FT-2232H chip, but this one has neatly designed interface connections and a great support wiki. $39.99 at Amazon

Ubertooth One

Ubertooth One

Given the proliferation of Bluetooth devices, the Ubertooth One is an essential tool for assessing modern Internet of Things devices. The Ubertooth is essentially a Software Defined Radio (SDR) for Bluetooth, allowing the security professional to examine, capture, modify, and replay Bluetooth frames. Find out what your gadgets are sending to each other, or look for bugs in the firmware itself. $127.95 at Amazon

Adafruit & Sparkfun Gift Certificates

Adafruit and Sparkfun are retailers of a variety of maker & hardware hacking supplies. Both have a wide variety of tools and parts and both support Open Source Hardware and the maker movement. Get an Adafruit or Sparkfun Gift Certificate if you don’t know what your favorite maker might want.

For InfoSec Students & N00bs

Hacking: The Art of Exploitation

Hacking: TAoE

Hacking: The Art of Exploitation may not be the most recent book, but it’s still a good read for those new to the binary exploitation areas of security. It’s an excellent introduction, and contains lots of still-relevant material, even if it doesn’t include bypasses for all the latest mitigations. $39.55 at Amazon

DT2000 Hardware Encrypted Flash Drive


The DT2000 is a flash drive from Kingston that features a keypad for entering a PIN, which grants access to the hardware-encrypted contents. Contents are encrypted with 256-bit AES, and I have it on good authority that the encryption on this device is fairly properly implemented. They’re obviously significantly more expensive than stock flash drives, but the encryption and the case make them a great place to protect important documents and files, including backups of password managers (you do use a password manager, don’t you?), financial records, medical records, GPG keys, and other sensitive data. I use an older USB 2.0 encrypted flash drive and have been looking at an upgrade; the DT2000 would be at the top of my list. $124.88 at Amazon

Offensive Security Training

There are few things I’m prouder of than holding both the OSCP and OSCE certifications. They teach hands-on practical Offensive Security (hence the name) and do an incredible job of it, especially for those who learn best by doing. With fully immersive labs and exams that require doing instead of answering some multiple choice questions, these really push security professionals to the next level. If you know someone who can “Try Harder”, this is a great gift to get for them.

Raspberry Pi 3

Raspberry Pi 3

Ever since the Raspberry Pi first hit the market, it’s been a popular option with Hackers and Makers. This starter kit gives you everything you need to get started with the Raspberry Pi 3, the latest iteration of the full-sized Raspberry Pi. The 3 includes integrated WiFi and Bluetooth, so there’s no more need for a dongle for that. One of the nicest features of the Raspberry Pi is how trivially you can switch your operating system: just swap to another MicroSD card. You can have one card with Raspbian, another with Kali, etc. Likewise, if you manage to terribly misconfigure your system, you can either move the MicroSD to another computer to fix it or just reflash it to a stock system. Though the Raspberry Pi 3 alone is $35, the kit with a case, power supply, heatsinks, MicroSD card, etc., is $69.99 at Amazon.

Geek & Hacker Apparel

Despite the Security Weekly suggestion to Hack Naked, there are a couple of providers of fine hacking apparel to be found because most hackerspaces and offices do require clothing:

Young Hackers & Makers

You’ll have to make your own decisions about the age appropriateness of each of the options here for the young hackers & makers in your life. I’m clearly not an expert in that area, but decided I’d share my thoughts anyway. (Plus, many of these items are fun for older hackers exploring new areas too!)

Circuit Playground Express

Circuit Playground Express

For a first foray into embedded systems and microcontrollers, I recommend the Circuit Playground Express from Adafruit. It allows programming the device in MicroPython, and loading your code is as simple as plugging it in and seeing it appear as a USB mass storage device. Save your MicroPython program to the device, hit reset, and see it run. It contains 10 NeoPixel-style RGB LEDs, a thermometer, light sensor, accelerometer, sound sensor, speaker, buttons, switches, and more! It does require a bit of understanding of electronics, but it’s a great start into programming for the physical world. $32.99 on Amazon or $24.95 direct from Adafruit

Lego Mindstorms EV3

Lego Mindstorms EV3

The LEGO Mindstorms EV3 is the robotics kit I wish I had when I was a kid. While I did eventually get an original Mindstorms kit, the modern LEGO robotics kit has far more features, including three kinds of sensors and two kinds of motors. Instructions for building multiple robots are included. If you (or your young hacker) get bored of the built-in firmware and programming interface, it turns out the EV3 programmable brick is actually a fully-featured Debian Linux computer, for which a community has sprung up and built replacement firmware allowing so much more. Imagine a swarm of EV3-powered robots. The kit is pricey, but it’s good for most ages and might inspire the next generation of robotics engineers. $349.95 at Amazon

on November 22, 2017 08:00 AM

November 21, 2017

Dear Ubuntu Community,

we’re happy to report that the five vacant seats on the LoCo Council have, at long last, been filled. Your new council members are as follows:

  1. Nathan Haines (incumbent) (@nhaines)
  2. Carla Sella (@carla-sella)
  3. Kyle Fazzari (@kyrofa)
  4. Ken VanDine (@kenvandine)
  5. Gustavo Silva (@gsilvapt)

A big congratulations to them!

The Local Community (LoCo) project is an international effort which strives to evangelize and support the Ubuntu project around the world. The LoCo Council acts on the delegation of the Community Council to support this worldwide movement. Most notably, they have been involved in reviewing verification requests.

The Community Council will be working closely with the LoCo Council in the upcoming months to give new value to both the LoCo project and the LoCo Council itself, so exciting times are ahead!

This post was initially posted on the Ubuntu Community Hub by Martin Wimpress from the Ubuntu Community Council.

on November 21, 2017 04:39 PM

MAAS 2.3.0 (final) Released!

Andres Rodriguez

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 (final) is now available!
This new MAAS release introduces a set of exciting features and improvements to the overall user experience. It now becomes the focus of maintenance, as it fully replaces MAAS 2.2.

To provide sufficient notice, please be aware that 2.3.0 will replace MAAS 2.2 in the Ubuntu Archive in the coming weeks. In the meantime, MAAS 2.3 is available in a PPA and as a snap.

PPA Availability

MAAS 2.3.0 is currently available in ppa:maas/next for the coming week.
sudo add-apt-repository ppa:maas/next
sudo apt-get update
sudo apt-get install maas
Please be aware that MAAS 2.3 will replace MAAS 2.2 in ppa:maas/stable within a week.
Snap Availability
For those wanting to use the snap, you can obtain it from the stable channel:
sudo snap install maas --devmode --stable

MAAS 2.3.0 (final)

Important announcements

Machine network configuration now deferred to cloud-init.

Starting from MAAS 2.3, machine network configuration is now handled by cloud-init. In previous MAAS (and curtin) releases, the network configuration was performed by curtin during the installation process. In an effort to improve robustness, network configuration has now been consolidated in cloud-init. MAAS will continue to pass network configuration to curtin, which in turn, will delegate the configuration to cloud-init.

Ephemeral images over HTTP

As part of the effort to reduce dependencies and improve reliability, MAAS ephemeral (network boot) images are no longer loaded using iSCSI (tgt). By default, the ephemeral images are now obtained using HTTP requests to the rack controller.

After upgrading to MAAS 2.3, please ensure you have the latest available images. For more information please refer to the section below (New features & improvements).
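If you manage MAAS from the CLI, refreshing the images after the upgrade can be sketched roughly as follows. This assumes a logged-in CLI profile named `<user>`, as used elsewhere in these notes; the images can also be re-imported from the web UI.

```shell
# Trigger a fresh import of the boot/ephemeral images from the image source.
maas <user> boot-resources import

# List the imported resources to confirm the latest images are present.
maas <user> boot-resources read
```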

Advanced network configuration for CentOS & Windows

MAAS 2.3 now supports the ability to perform network configuration for CentOS and Windows. The network configuration is performed via cloud-init. MAAS CentOS images now use the latest available version of cloud-init that includes these features.

New features & improvements

CentOS network configuration

MAAS can now perform machine network configuration for CentOS 6 and 7, providing networking feature parity with Ubuntu for those operating systems. The following can now be configured for MAAS deployed CentOS images:

  • Bonds, VLAN and bridge interfaces.
  • Static network configuration.
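As a rough illustration, a bond defined in MAAS will now be rendered onto the deployed CentOS machine via cloud-init. The exact parameter names below are an assumption based on the MAAS interfaces API, so check `maas <user> interfaces create-bond --help` for the authoritative list; `<user>`, `<system_id>`, and the parent interface IDs are placeholders.

```shell
# Hypothetical sketch: define a bond on a machine so that the deployed
# CentOS image is configured with it via cloud-init.
maas <user> interfaces create-bond <system_id> \
    name=bond0 parents=<iface_id_1> parents=<iface_id_2> \
    bond_mode=802.3ad
```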

Our thanks to the cloud-init team for improving the network configuration support for CentOS.

Windows network configuration

MAAS can now configure NIC teaming (bonding) and VLAN interfaces for Windows deployments. This uses the native NetLBFO in Windows 2008+. Contact us for more information.

Improved Hardware Testing

MAAS 2.3 introduces a new and improved hardware testing framework that significantly improves the granularity and reporting of hardware testing feedback. These improvements include:

  • An improved testing framework that allows MAAS to run each component individually. This allows MAAS to run tests against storage devices for example, and capture results individually.
  • The ability to describe custom hardware tests with a YAML definition:
    • This provides MAAS with information about the tests themselves, such as script name, description, required packages, and other metadata about what information the script will gather, all of which MAAS uses to render the test in the UI.
    • Determines whether the test supports a parameter, such as storage, allowing the test to be run against individual storage devices.
    • Provides the ability to run tests in parallel by setting this in the YAML definition.
  • Capture performance metrics for tests that can provide it.
    • CPU performance tests now offer a new ‘7z’ test, providing metrics.
    • Storage performance tests now include a new ‘fio’ test providing metrics.
    • Storage test ‘badblocks’ has been improved to provide the number of badblocks found as a metric.
  • The ability to override a machine that has been marked ‘Failed testing’. This allows administrators to acknowledge that a machine is usable despite it having failed testing.
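From the CLI, overriding a machine that failed testing could look roughly like this. The action name reflects the MAAS 2.3 machine API, but verify it with `maas <user> machine --help`; `<user>` and `<system_id>` are placeholders.

```shell
# Sketch: acknowledge that a machine which failed testing is still usable.
maas <user> machine override-failed-testing <system_id>
```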

Hardware testing improvements include the following UI changes:

  • Machine Listing page
    • Displays whether a test is pending, running or failed for the machine components (CPU, Memory or Storage).
    • Displays whether a test not related to CPU, Memory or Storage has failed.
    • Displays a warning when the machine has been overridden and has failed tests, but is in a ‘Ready’ or ‘Deployed’ state.
  • Machine Details page
    • Summary tab – Provides hardware testing information about the different components (CPU, Memory, Storage).
    • Hardware Tests/Commission tab – Provides an improved view of the latest test run and its runtime, as well as an improved view of previous results. It also adds more detailed information about specific tests, such as status, exit code, tags, runtime and logs/output (such as stdout and stderr).
    • Storage tab – Displays the status of specific disks, including whether a test is OK or failed after running hardware tests.

For more information please refer to

Network discovery & beaconing

In order to confirm network connectivity and aid with the discovery of VLANs, fabrics and subnets, MAAS 2.3 introduces network beaconing.

MAAS now sends out encrypted beacons, facilitating network discovery and monitoring. Beacons are sent using IPv4 and IPv6 multicast (and unicast) to UDP port 5240. When registering a new controller, MAAS uses the information gathered from the beaconing protocol to ensure that newly registered interfaces on each controller are associated with existing known networks in MAAS. This aids MAAS by providing better information on determining the network topology.

Using network beaconing, MAAS can better correlate which networks are connected to its controllers, even if interfaces on those controllers are not configured with IP addresses. Future uses for beaconing could include validation of networks from commissioning nodes, MTU verification, and a better user experience for registering new controllers.

Upstream Proxy

MAAS 2.3 now enables an upstream HTTP proxy to be used while allowing MAAS deployed machines to continue to use the caching proxy for the repositories. Doing so provides greater flexibility for closed environments, including:

  • Enabling MAAS itself to use a corporate proxy while allowing machines to continue to use the MAAS proxy.
  • Allowing machines that don’t have access to a corporate proxy to gain network access using the MAAS proxy.

Adding upstream proxy support also includes an improved configuration on the settings page. Please refer to Settings > Proxy for more details.
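From the CLI, pointing MAAS at a corporate upstream proxy while machines keep using the MAAS caching proxy might be sketched as below, using the same `set-config` form shown later in these notes. The setting names `http_proxy` and `use_peer_proxy` and the proxy URL are assumptions; confirm them under Settings > Proxy.

```shell
# Hypothetical sketch: route the MAAS proxy through a corporate upstream proxy.
maas <user> maas set-config name=http_proxy value=http://proxy.example.com:3128/
maas <user> maas set-config name=use_peer_proxy value=True
```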

Ephemeral Images over HTTP

Historically, MAAS has used ‘tgt’ to provide images over iSCSI for the ephemeral environments (e.g. commissioning, deployment, rescue mode, etc.). MAAS 2.3 changes the default behaviour by now providing images over HTTP.

These images are now downloaded directly by the initrd. The change means that the initrd loaded on PXE will contact the rack controller to download the image to load in the ephemeral environment. Support for using ‘tgt’ is being phased out in MAAS 2.3, and will no longer be supported from MAAS 2.4 onwards.

Users who would like to continue loading their ephemeral images via ‘tgt’ can disable HTTP boot with the following command:

  maas <user> maas set-config name=http_boot value=False

UI Improvements

Machines, Devices, Controllers

MAAS 2.3 introduces an improved design for the machines, devices and controllers detail pages that include the following changes.

  • “Summary” tab now only provides information about the specific node (machine, device or controller), organised across cards.
  • “Configuration” has been introduced, which includes all editable settings for the specific node (machine, device or controllers).
  • “Logs” consolidates the commissioning output and the installation log output.

Other UI improvements

Other UI improvements that have been made for MAAS 2.3 include:

  • Added a DHCP status column on the ‘Subnets’ tab.
  • Added architecture filters.
  • Updated VLAN and Space details page to no longer allow inline editing.
  • Updated VLAN page to include the IP ranges tables.
  • Zones page converted to AngularJS (away from YUI).
  • Added warnings when changing a Subnet’s mode (Unmanaged or Managed).
  • Renamed “Device Discovery” to “Network Discovery”.
  • Discovered devices where MAAS cannot determine the hostname now show the hostname as “unknown” and greyed out instead of using the MAC address manufacturer as the hostname.

Rack Controller Deployment

MAAS 2.3 can now automatically deploy rack controllers when deploying a machine. This is done by providing cloud-init user data; once a machine is deployed, cloud-init will install and configure the rack controller. Upon rack controller registration, MAAS will automatically detect that the machine is now a rack controller and transition it accordingly. Rack controllers can be deployed via the API (or CLI), e.g.:

maas <user> machine deploy <system_id> install_rackd=True

Please note that this feature makes use of the MAAS snap to configure the rack controller on the deployed machine. Since snap store mirrors are not yet available, the machine will need internet access in order to install the MAAS snap.

Controller Versions & Notifications

MAAS now surfaces the version of each running controller and notifies users of any version mismatch between the region and rack controllers. This helps administrators identify mismatches when upgrading a multi-node MAAS cluster, such as an HA setup.

Improved DNS Reloading

This new release introduces various improvements to the DNS reload mechanism. This allows MAAS to be smarter about when to reload DNS after changes have been automatically detected or made.

API Improvements

The machines API endpoint now provides more information on the configured storage and provides additional output that includes volume_groups, raids, cache_sets, and bcaches fields.
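For example, the new fields can be inspected from the CLI. This is a sketch that assumes the `jq` JSON processor is available; `<user>` and `<system_id>` are placeholders.

```shell
# Read a machine and pull out the new storage-related fields.
maas <user> machine read <system_id> | \
    jq '{volume_groups, raids, cache_sets, bcaches}'
```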

Django 1.11 support

MAAS 2.3 now supports the latest Django LTS version, Django 1.11. This allows MAAS to work with the newer Django version in Ubuntu Artful, which serves as a preparation for the next Ubuntu LTS release.

  • Users running MAAS in Ubuntu Artful will use Django 1.11.
  • Users running MAAS in Ubuntu Xenial will continue to use Django 1.9.
on November 21, 2017 03:34 PM


Ante Karamatić



After giving up on Hitro yesterday (looking for parking around the Fina building in Vukovarska is pure madness), I tried again today. Unlike the branch in Šibenik, the one in Zagreb wouldn’t even take my papers. The explanation was “You filled in the form by hand” and “You need to bring a copy of a dictionary of foreign words for the word ‘solutions’”. The form in question is only available as a PDF, so I suppose I should now buy software to edit the file I need in order to ask the state whether I can name my company what I want. And I might understand the unfamiliarity with English and the need for a definition of the word ‘solutions’ if this were the first time it appeared in the court register. But it isn’t: there are dozens, if not hundreds, of companies with the word ‘solutions’ in their names.

My final decision is to keep trying every day until the end of this week. If I don’t succeed by Friday, I will not incorporate the company in the Republic of Croatia.

on November 21, 2017 07:36 AM

November 20, 2017

About powerline

Powerline does some font substitutions that allow additional theming for terminal applications such as tmux, vim, zsh, bash and more. The powerline font has been packaged in Debian for a while now, and I’ve packaged two powerline themes for vim and zsh. They’re currently only in testing, but once my current todo list on packages looks better, I’ll upload them to stretch-backports.

For vim, vim-airline

vim-airline is different from previous vim powerline plugins in that it doesn’t depend on Perl or Python; it’s implemented purely in vim config files.


Here’s a gif from the upstream site; they also demo various themes there that you can get in Debian by installing the vim-airline-themes package.

Vim Airline demo gif

How to enable

Install the vim-airline package, and add the following to your .vimrc file:

" Vim Airline theme
let g:airline_theme='powerlineish'
let g:airline_powerline_fonts = 1
set laststatus=2

The vim-airline-themes package contains additional themes that can be defined in the snippet above.

For zsh, powerlevel9k


Here’s a gif from upstream that walks through some of its features. You can configure it to display all kinds of system metrics and also information about VCS status in your current directory.

Powerline demo gif

Powerlevel9k has lots of options and features. If you’re interested in it, you should probably take a look at their readme file on GitHub for all the details.

How to enable

Install the zsh-theme-powerlevel9k package and add the following to your .zshrc file:

source /usr/share/powerlevel9k/powerlevel9k.zsh-theme
on November 20, 2017 07:22 PM

I’ve been using Kitten Block for years, since I don’t really need the blood pressure spike caused by accidentally following links to certain UK newspapers. Unfortunately it hasn’t been ported to Firefox 57. I tried emailing the author a couple of months ago, but my email bounced.

However, if your primary goal is just to block the websites in question rather than seeing kitten pictures as such (let’s face it, the internet is not short of alternative sources of kitten pictures), then it’s easy to do with uBlock Origin. After installing the extension if necessary, go to Tools → Add-ons → Extensions → uBlock Origin → Preferences → My filters, and add and, each on its own line. (Of course you can easily add more if you like.) Voilà: instant tranquility.

Incidentally, this also works fine on Android. The fact that it was easy to install a good ad blocker without having to mess about with a rooted device or strange proxy settings was the main reason I switched to Firefox on my phone.

on November 20, 2017 12:00 AM

November 17, 2017

I am now a Debian Developer

Jonathan Carter

It finally happened

On the 6th of April 2017, I finally took the plunge and applied for Debian Developer status. On 1 August, during DebConf in Montréal, my application was approved. If you’re paying attention to the dates you might notice that that was nearly 4 months ago already. I was trying to write a story about how it came to be, but it ended up long. Really long (current draft is around 20 times longer than this entire post). So I decided I’d rather do a proper bio page one day and just do a super short version for now so that someone might end up actually reading it.

How it started

In 1999… no wait, I can’t start there, as much as I want to, this is a short post, so… In 2003, I started doing some contract work for the Shuttleworth Foundation. I was interested in collaborating with them on tuXlabs, a project to get Linux computers into schools. For the few months before that, I was mostly using SuSE Linux. The open source team at the Shuttleworth Foundation all used Debian though, which seemed like a bizarre choice to me since everything in Debian was really old and its “boot-floppies” installer program kept crashing on my very vanilla computers. 

SLUG (Schools Linux Users Group) group photo. SLUG was founded to support the tuXlab schools that ran Linux.

My contract work later turned into a full-time job there. This was a big deal for me, because I didn’t want to support Windows ever again, and I didn’t ever think it would even be possible for me to get a job where I could work on free software full time. Since everyone in my team used Debian, I thought that I should probably give it another try. I did, and I hated it. One morning I went to talk to my manager, Thomas Black, and told him that I just didn’t get it and needed some help. Thomas was a big mentor to me during this phase. He told me that I should try upgrading to testing, which I did, and somehow I ended up on unstable, and I loved it. Before that I used to subscribe to a website called “freshmeat” that listed new releases of upstream software, and then I would download and compile them myself so that I always had the newest versions of everything. Debian unstable made that whole process obsolete, and I became a huge fan of it. Early on I also hit a problem where two packages tried to install the same file, and I was delighted to find how easily I could locate the package state and maintainer scripts and fix them to get my system going again.

Thomas told me that anyone could become a Debian Developer and maintain packages in Debian and that I should check it out and joked that maybe I could eventually snap up “”. I just laughed because back then you might as well have told me that I could run for president of the United States, it really felt like something rather far-fetched and unobtainable at that point, but the seed was planted :)

Ubuntu and beyond

Ubuntu 4.10 default desktop – Image from distrowatch

One day, Thomas told me that Mark was planning to provide official support for Debian unstable. The details were sparse, but this was still exciting news. A few months later Thomas gave me a CD with just “warty” written on it and said that I should install it on a server so that we could try it out. It was great: it used the new debian-installer, installed fine everywhere I tried it, and the software was nice and fresh. Later Thomas told me that this system was going to be called “Ubuntu” and that the desktop edition had naked people on it. I wasn’t sure what he meant and was kind of dumbfounded, so I just laughed and said something like “Uh, ok”. At least it made a lot more sense when I finally saw the desktop pre-release version and when it got the byline “Linux for Human Beings”. Fun fact: one of my first jobs at the foundation was to register the domain name. Unfortunately I found it was already owned by a domain squatter, and it was eventually handled by legal.

Closer to Ubuntu’s first release, Mark brought a whole bunch of Debian developers who were working on Ubuntu over to the foundation, and they were around for a few days getting some sun. Thomas kept saying “Go talk to them! Go talk to them!”, but I felt so intimidated by them that I couldn’t even bring myself to walk up and say hello.

In the interest of keeping this short, I’m leaving out a lot of history but later on, I read through the Debian packaging policy and really started getting into packaging and also discovered Daniel Holbach’s packaging tutorials on YouTube. These helped me tremendously. Some day (hopefully soon), I’d like to do a similar video series that might help a new generation of packagers.

I’ve also been following DebConf online since DebConf 7, which was incredibly educational for me. Little did I know that just 5 years later I would even attend one, and another 5 years after that I’d end up being on the DebConf Committee and have also already been on a local team for one.

DebConf16 Organisers, Photo by Jurie Senekal.

It’s been a long journey for me, and I would like to help anyone who is also interested in becoming a Debian maintainer or developer. If you ever need help with your package, upload it to and if I have some spare time I’ll certainly help you out and sponsor an upload. Thanks to everyone who has helped me along the way; I really appreciate it!

on November 17, 2017 05:48 PM