December 10, 2019

When designers put their heart and soul into making super-fast, easy-to-use software to help take Internet of Things (IoT) apps to the next level, installation of that software needs to meet the same high standards.

ObjectBox is a database and synchronisation solution for rapid, efficient edge computing on mobile and IoT devices. Rather than each device sending all its data back to a cloud/server, ObjectBox enables data storage and processing within the device. Developers get simplicity and ease of implementation with native language APIs instead of SQL. Users can work with data on edge devices faster and with fewer resources.

Markus Junginger, CTO and co-founder of ObjectBox explains, “Moving decision making to the edge means faster response rates, less traffic to the cloud, and lower costs. We built an edge database with a minimal device footprint of just 1 MB for high on-device performance.” ObjectBox also synchronises data between devices and servers/the cloud for an ‘always-on’ feeling and improved data reliability. With ObjectBox, an application always works – whether the device is online or offline. 

However, the user-friendliness of installation and the limited installation options were both concerns. Until now, ObjectBox had been offered only as a GitHub package. According to Markus, “GitHub is developer-centric. While it’s useful for getting started, we wanted to offer something additional that would broaden our market reach and be easy for users in general”.

ObjectBox found the installation option it was looking for through the company’s involvement in the EdgeX Foundry™ IoT Edge Platform. This vendor-neutral open source project hosted by The Linux Foundation provides a common open framework for IoT edge computing. EdgeX is also available for download and installation as a snap, the containerised software package for easy, secure installation in any Linux environment, including desktop, cloud, and IoT devices.

When Markus saw how easily the EdgeX snap could be downloaded and installed from the Snap Store, he immediately saw the potential of a snap for ObjectBox. As he says, “Although EdgeX is complex, the snap is very easy for users. Whereas in other approaches, like Docker, users must specify the setup location of the file system, they do not need to do this with a snap. Snap updates with new binaries are very simple too.”

The EdgeX snap was a convenient model for ObjectBox to work with. Markus and his team built their own snap based on the EdgeX snap, swapping out the default database and replacing it with ObjectBox. He adds, “We loaded the shared libraries and other library files to get a snap with ObjectBox, EdgeX core, security, and support services, as well as Consul, Kong, Vault, a Java Runtime Environment (JRE), and a set of basic device services, all with a simpler YAML file for installation.”
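A snap like this is described by a snapcraft.yaml recipe. The following is a hypothetical, heavily simplified sketch of what bundling a database with EdgeX services might look like — the name, part, and app entries here are illustrative assumptions, not ObjectBox’s actual recipe:

```yaml
# Hypothetical, heavily simplified snapcraft.yaml sketch -- not the real recipe
name: edgex-objectbox
base: core18
version: '1.1.0'
summary: EdgeX Foundry services backed by the ObjectBox database
description: |
  Bundles EdgeX core, security and support services together with the
  ObjectBox shared libraries, Consul, Kong, Vault and a JRE.
grade: stable
confinement: strict

apps:
  core-data:
    command: bin/core-data
    daemon: simple
    plugs: [network, network-bind]

parts:
  edgex:
    plugin: dump          # illustrative; a real recipe would build from source
    source: ./edgex-binaries
```

Once published, users install the result with a single `snap install` command, and updates with new binaries are delivered automatically through the store channels.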

Markus and ObjectBox believe that improved user discovery and adoption will be among the benefits of using snaps. Snaps will help make the advanced ObjectBox technology accessible to all businesses that want to use IoT to its full potential. Markus says, “We make it easy to handle high data volumes in different sizes of devices and systems. It’s great that installation can now be so easy too. No big instruction manual is required, and all Linux platforms can be served by snaps, from entry level like Raspberry Pi and upwards.”

ObjectBox made its snap available in the stable channel of the Snap Store. Publication coincided with the new EdgeX release. Markus says, “As performance is such a vital element for ObjectBox, we’re interested in any performance improvements for snaps in the future, but we’re also pretty happy for the moment.”

What advice does Markus have for other developers thinking about using snaps for the installation of their own software? He says, “Although documentation for snaps is good, it really helps to see an example of how somebody else has done it. For example, with the EdgeX snap, we could see how EdgeX had done it. Talking to other developers also is a great way to pick up a new tool speedily. From a user perspective, snaps are a great tool and I look forward to them being used more and more!”

The ObjectBox snap is available to install here.

on December 10, 2019 05:32 PM

Canonical announces full enterprise support for Kubernetes 1.17, with support covering Charmed Kubernetes, MicroK8s and Kubeadm.

MicroK8s will be updated with Kubernetes 1.17, giving users access to the latest upstream release with a single-line command in under 60 seconds. MicroK8s now brings Machine Learning deployments in seconds with the Kubeflow add-on. The MetalLB load balancer add-on is now part of MicroK8s, along with enhancements, upgrades and bug fixes. With MicroK8s 1.17, users can develop and deploy enterprise-grade Kubernetes on any Linux desktop, server or VM across 42 Linux distros. It’s a full Kubernetes in a small package, perfect for IoT, Edge and your laptop!

Canonical’s Charmed Kubernetes 1.17 will come with exciting changes like CIS benchmarking ability, Snap coherence and Nagios checks.

Charmed Kubernetes 1.17

CIS Benchmark

The Center for Internet Security (CIS) maintains a Kubernetes benchmark that is helpful to ensure clusters are deployed in accordance with security best practices. Charmed Kubernetes clusters can now be checked for compliance with this benchmark.

Snap Coherence

Beginning with Charmed Kubernetes 1.17, revisions of snap packages used by `kubernetes-master` and `kubernetes-worker` charms can be controlled by a snap store proxy.

Nagios checks

Additional Nagios checks have been added for the `kubernetes-master` and `kubernetes-worker` charms. These checks enhance the monitoring and reporting available via Nagios by collecting data on node registration and API server connectivity.


A list of bug fixes and other minor feature updates in this release can be found at Launchpad.

MicroK8s 1.17

  • Kubeflow add-on. Give it a try with `microk8s.enable kubeflow`.
  • MetalLB Loadbalancer add-on, try it with `microk8s.enable metallb`.
  • Separate front proxy CA.
  • Linkerd updated to v2.6.0.
  • Jaeger operator updated to v1.14.0.
  • Prometheus operator updated to the latest version.
  • Istio upgraded to v1.3.4.
  • Helm upgraded to 2.16.0.
  • Helm status reported in `microk8s.status`.
  • Set default namespace of `microk8s.ctr` to ``.
  • Better exception handling in the clustering agent.
  • Fixes in cluster upgrades.
  • `microk8s.inspect` now cleans priority and storage classes.
  • `microk8s.inspect` will detect missing cgroups v1 and suggest changes on Fedora 31.

Kubernetes 1.17 Changes

Cloud provider labels

Cloud provider labels have now reached general availability. All Kubernetes components have been updated to populate and react to those labels. Cloud provider labels can be used to target certain workloads to certain instance types, ensure that pods are placed in the same zone as the volumes they claim, configure node affinity, and so on. All of those specs are portable among different cloud providers.
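As an illustration, node affinity against the GA topology labels can be expressed directly in a pod spec. The zone value below is a placeholder; substitute whatever your cloud provider reports:

```yaml
# Illustrative pod spec pinning a workload to a zone via topology labels
apiVersion: v1
kind: Pod
metadata:
  name: zonal-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                  - us-east-1a   # example zone value
  containers:
    - name: app
      image: nginx
```

Because the label keys are now standardised, the same spec works unchanged across providers that populate them.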

Volume snapshots

The volume snapshot feature was introduced in Kubernetes 1.12 and is now moving to beta. It enables creating snapshots of persistent volumes, which can later be used to restore a point-in-time copy of the volume. This provides backup and restore functionality for Kubernetes volumes, allowing users to benefit from increased agility with regard to workload operations.
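A snapshot is requested declaratively with a VolumeSnapshot object referencing an existing claim. The class and claim names below are illustrative assumptions for a cluster with a CSI driver that supports snapshots:

```yaml
# Illustrative VolumeSnapshot of an existing PVC (beta API in 1.17)
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # example class name
  source:
    persistentVolumeClaimName: data-pvc             # example claim name
```

A new PersistentVolumeClaim can then reference the snapshot in its `dataSource` to restore the point-in-time copy.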

CSI migration

The CSI (container storage interface) migration enables the replacement of existing storage plugins with a corresponding CSI driver. Prior to CSI, Kubernetes provided a variety of so-called “in-tree” storage plugins, which were part of the core Kubernetes code and shipped together with the Kubernetes binaries. To resolve the issues associated with ongoing support of these storage plugins, CSI was introduced in Kubernetes 1.13. The migration feature is now available in beta. The entire process aims to be fully transparent to end users.
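The beta migration is switched on per driver through feature gates on the kubelet and controller manager; a sketch of the flags, with the AWS gate as just one example of the per-driver switches:

```
--feature-gates=CSIMigration=true,CSIMigrationAWS=true
```

With these set, in-tree volume operations for that provider are routed to the corresponding installed CSI driver, with no changes to existing PersistentVolume objects.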

Windows-specific options

This feature provides enhancements in the Kubernetes pod spec to capture Windows-specific security options. This includes external resources and the RunAsUserName option which allows users to specify a string that represents a username to run the entrypoint of Windows containers. This increases the security of the workloads and provides an easy-to-use interface for defining those options.
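In the pod spec, the option lives under `securityContext.windowsOptions`. The image and user name below are illustrative:

```yaml
# Illustrative pod spec running a Windows container under a specific user
apiVersion: v1
kind: Pod
metadata:
  name: windows-pod
spec:
  securityContext:
    windowsOptions:
      runAsUserName: "ContainerUser"   # example Windows user name
  containers:
    - name: app
      image: mcr.microsoft.com/windows/servercore:ltsc2019
```

The same field can also be set per container to override the pod-level default.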

Other changes

  • Topology aware routing of services feature is now available in an alpha state
  • Taint node by condition feature has graduated to a stable state
  • Configurable pod process namespace sharing feature has graduated to a stable state
  • Schedule DaemonSet pods by kube-scheduler feature has graduated to a stable state
  • Dynamic maximum volume count feature has graduated to a stable state
  • Kubernetes CSI topology support feature has graduated to a stable state
  • Provide environment variables expansion in SubPath mount feature has graduated to a stable state
  • Defaulting of custom resources feature has graduated to a stable state
  • Move frequent kubelet heartbeats to lease API feature has graduated to a stable state
  • Break apart the Kubernetes test tarball feature has graduated to a stable state
  • Add watch bookmarks support feature has graduated to a stable state
  • Behavior-driven conformance testing feature has graduated to a stable state
  • Finalizer protection for service load balancers feature has graduated to a stable state
  • Avoid serializing the same object independently for every watcher feature has graduated to a stable state
  • Ongoing support for IPv4/IPv6 dual stack

Get in touch

If you are interested in Kubernetes support, consulting, or training, please get in touch!

on December 10, 2019 04:06 PM

December 09, 2019

Welcome to the Ubuntu Weekly Newsletter, Issue 608 for the week of December 1 – 7, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on December 09, 2019 10:13 PM

Welcome to Ghost

Sean Davis

A few things you should know

  1. Ghost is designed for ambitious, professional publishers who want to actively build a business around their content. That's who it works best for.
  2. The entire platform can be modified and customised to suit your needs. It's very powerful, but does require some knowledge of code. Ghost is not necessarily a good platform for beginners or people who just want a simple personal blog.
  3. It's possible to work with all your favourite tools and apps with hundreds of integrations to speed up your workflows, connect email lists, build communities and much more.

Behind the scenes

Welcome to Ghost

Ghost is made by an independent non-profit organisation called the Ghost Foundation. We are 100% self funded by revenue from our Ghost(Pro) service, and every penny we make is re-invested into funding further development of free, open source technology for modern publishing.

The version of Ghost you are looking at right now would not have been made possible without generous contributions from the open source community.

Next up, the editor

The main thing you'll want to read about next is probably: the Ghost editor. This is where the good stuff happens.

By the way, once you're done reading, you can simply delete the default Ghost user from your team to remove all of these introductory posts!
on December 09, 2019 02:25 AM

Just start writing

Writing posts with Ghost ✍️

Ghost has a powerful visual editor with familiar formatting options, as well as the ability to add dynamic content.

Select your text to add formatting such as headers or to create links. Or use Markdown shortcuts to do the work for you - if that's your thing.


Rich editing at your fingertips

The editor can also handle rich media objects, called cards, which can be organised and re-ordered using drag and drop.

You can insert a card either by clicking the + button, or typing / on a new line to search for a particular card. This allows you to efficiently insert images, markdown, HTML, embeds and more.

For example:

  • Insert a video from YouTube directly by pasting the URL
  • Create unique content like buttons or forms using the HTML card
  • Need to share some code? Embed code blocks directly
<header class="site-header outer">
    <div class="inner">
        {{> "site-nav"}}
    </div>
</header>

It's also possible to share links from across the web in a visual way using bookmark cards that automatically render information from a website's metadata. Paste any URL to try it out:

Ghost: The #1 open source headless Node.js CMS
The world’s most popular modern open source publishing platform. A headless Node.js CMS used by Apple, Sky News, Tinder and thousands more. MIT licensed, with 30k+ stars on Github.

Working with images in posts

You can add images to your posts in many ways:

  • Upload from your computer
  • Click and drag an image into the browser
  • Paste directly into the editor from your clipboard
  • Insert using a URL

Image sizes

Once inserted you can blend images beautifully into your content at different sizes and add captions and alt tags wherever needed.


Image galleries

Tell visual stories using the gallery card to add up to 9 images that will display as a responsive image gallery:

Image optimisation

Ghost will automatically resize and optimise your images with lossless compression. Your posts will be fully optimised for the web without any extra effort on your part.

Next: Publishing Options

Once your post is looking good, you'll want to use the publishing options to ensure it gets distributed in the right places, with custom meta data, feature images and more.

on December 09, 2019 02:25 AM

With LXD you can run system containers, which are similar to virtual machines. Normally, you would use a system container to run network services. But you can also run X11 applications. See the following discussion and come back here. In this post, we further refine and simplify the instructions for the second way to run X applications. Previously I have written several tutorials on this.

LXD GUI profile

Here is the updated LXD profile to set up a LXD container to run X11 applications on the host’s X server. Copy the following text and put it in a file, x11.profile. Note that the X1 value below should be adapted for your case; the number is derived from the environment variable $DISPLAY on the host. If the value is :1, use X1 (as it already is below). If the value is :0, change the profile to X0 instead.

config:
  environment.DISPLAY: :0
  environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native
  nvidia.driver.capabilities: all
  nvidia.runtime: "true"
  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
description: GUI LXD profile
devices:
  PASocket1:
    bind: container
    connect: unix:/run/user/1000/pulse/native
    listen: unix:/home/ubuntu/pulse-native
    security.gid: "1000"
    security.uid: "1000"
    uid: "1000"
    gid: "1000"
    mode: "0777"
    type: proxy
  X0:
    bind: container
    connect: unix:@/tmp/.X11-unix/X1
    listen: unix:@/tmp/.X11-unix/X0
    security.gid: "1000"
    security.uid: "1000"
    type: proxy
  mygpu:
    type: gpu
name: x11
used_by: []

Then, create the profile with the following commands. This creates a profile called x11.

$ lxc profile create x11
Profile x11 created
$ cat x11.profile | lxc profile edit x11

To create a container, run the following.

lxc launch ubuntu:18.04 --profile default --profile x11 mycontainer

To get a shell in the container, run the following.

lxc exec mycontainer -- sudo --user ubuntu --login

Once you get a shell inside the container, you can run the following diagnostic commands.

$ glxinfo -B
name of display: :0
display: :0  screen: 0
direct rendering: Yes
OpenGL vendor string: NVIDIA Corporation
$ nvidia-smi 
 Mon Dec  9 00:00:00 2019       
 | NVIDIA-SMI 430.50       Driver Version: 430.50       CUDA Version: 10.1     |
 |-------------------------------+----------------------+----------------------+
 | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
 | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
$ pactl info
 Server String: unix:/home/ubuntu/pulse-native
 Library Protocol Version: 32
 Server Protocol Version: 32
 Is Local: yes
 Client Index: 43
 Tile Size: 65472
 User Name: myusername
 Host Name: mycomputer
 Server Name: pulseaudio
 Server Version: 11.1
 Default Sample Specification: s16le 2ch 44100Hz
 Default Channel Map: front-left,front-right
 Default Sink: alsa_output.pci-0000_01_00.1.hdmi-stereo-extra1
 Default Source: alsa_output.pci-0000_01_00.1.hdmi-stereo-extra1.monitor
 Cookie: f228:e515

You can run xclock, which is an Xlib application. If it runs, it means that unaccelerated (standard X11) applications are able to run successfully.
You can run glxgears, which requires OpenGL. If it runs, it means that you can run GPU-accelerated software.
You can run paplay to play audio files. This is the PulseAudio audio player.
If you want to test with ALSA, install alsa-utils and use aplay to play audio files.


Let’s dissect the LXD profile piece by piece.

We set two environment variables in the container: $DISPLAY for X and PULSE_SERVER for PulseAudio. Irrespective of the DISPLAY on the host, the DISPLAY in the container is always mapped to :0. While the PulseAudio Unix socket is often located under /run/user, in this case we put it in the home directory of the non-root account of the container. This makes PulseAudio accessible to snap packages in the container, as long as they support the home interface.

environment.DISPLAY: :0
environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native

This enables the NVidia runtime with all the capabilities, if such a GPU is available. Setting the capabilities to all enables all of compute, display, graphics, utility and video. If you would rather restrict the capabilities, then graphics is for running OpenGL applications and compute is for CUDA applications. If you do not have an NVidia GPU, these directives will silently fail.

nvidia.driver.capabilities: all
nvidia.runtime: "true"

Here we use cloud-init to have the container perform the following tasks the first time it starts. The sed command disables shm support in PulseAudio, which means that it enables Unix socket support. Additionally, the three listed packages are installed, with utilities to test plain X11 applications, OpenGL applications and audio.

user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio

This device shares the Unix socket of the PulseAudio server on the host with the container. In the container it appears as /home/ubuntu/pulse-native. The security configuration refers to the host. The uid, gid and mode refer to the Unix socket in the container. This is a LXD proxy device, and it binds into the container, meaning that it makes the host’s Unix socket appear in the container.

bind: container
connect: unix:/run/user/1000/pulse/native
listen: unix:/home/ubuntu/pulse-native
security.gid: "1000"
security.uid: "1000"
uid: "1000"
gid: "1000"
mode: "0777"
type: proxy

This part shares the Unix socket of the X server on the host to the container. If $DISPLAY on your host is also :1, then keep the default shown below to X1. Otherwise, adjust the number accordingly. The @ character means that we are using abstract Unix sockets, which means that there is no actual file on the filesystem. Although /tmp/.X11-unix/X0 looks like an absolute path, it is just a name. We could have used myx11socket instead, for example. We use an abstract Unix socket so that it is also accessible by snap packages. We would have used an abstract Unix socket for PulseAudio, but PulseAudio does not support them. The security uid and gid refer to the host.

bind: container
connect: unix:@/tmp/.X11-unix/X1
listen: unix:@/tmp/.X11-unix/X0
security.gid: "1000"
security.uid: "1000"
type: proxy
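The abstract-socket behaviour described above can be demonstrated outside LXD too. The following is a minimal, self-contained sketch (the socket name is made up for the demo) showing that a name beginning with a NUL byte lives in the Linux abstract namespace, so no file ever appears on the filesystem:

```python
import os
import socket

ADDR = "\0demo-abstract-socket"  # leading NUL byte selects the abstract namespace

def demo() -> bytes:
    """Round-trip a message over an abstract Unix socket (Linux-only)."""
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(ADDR)            # no file is created anywhere on the filesystem
    server.listen(1)
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(ADDR)
    conn, _ = server.accept()
    client.sendall(b"hello")
    data = conn.recv(5)
    for s in (client, conn, server):
        s.close()
    return data

print(demo())                                   # b'hello'
print(os.path.exists("/demo-abstract-socket"))  # False: nothing on disk
```

This is why `@/tmp/.X11-unix/X0` in the profile is just a name rather than a real path, and why snap-confined applications can still reach it.
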

We make the host’s GPU available to the container. We do not need to specify explicitly which GPU we are using if we only have a single GPU.

type: gpu

Installing software

You can install any graphical software. For example,

sudo apt-get install -y firefox

Then, run as usual.

Firefox running in a container.


This is the latest iteration of instructions on running GUI or X11 applications and having them appear on the host’s X server.

Note that the applications in the container have full access to the X server (due to how the X server works as there are no access controls). Do not run malicious or untrusted software in the container.

on December 09, 2019 12:27 AM

December 08, 2019

With LXD, you can create system containers. These system containers are similar to virtual machines, while at the same time they are very lightweight.

In a VM, you boot a full Linux kernel and run your favorite Linux distribution in a virtualized environment that has a fixed disk size and a dedicated allocation of RAM. To get a graphics application to run in a VM, you need a virtualized GPU, one that has hardware-accelerated access to the host’s graphics driver.

In contrast, in a system container you keep using the running Linux kernel of the host, and you just start the container image (runtime, aka rootfs) of your favorite Linux distribution. Your container uses as much disk space as it needs from a common storage pool, and the same goes for memory (you can also impose strict limits, if you need to). To get a graphics application to run in a container, you need to pass in a Unix socket of your existing X server (or of a new, isolated X server).

In this post we are going to discuss the details of running X11 applications from within a LXD system container. There are a few different ways, so we explain them here.

  1. The X11 application in the container accesses the host’s X server through a network protocol. For example, connecting from the host to the container with ssh -X ... for X11 forwarding.
  2. The X11 application in the container directly uses the X server of the host (by having access to the X Unix socket or X port). It is easy to set up, with GPU acceleration, but you do not get isolation between the container and the host. I have written several tutorials on this.
  3. The X11 application in the container uses a separate X server running on the host (such as xpra or Xephyr). There is isolation between the container and the host. You may get GPU acceleration with this. I have not written a tutorial on this yet.
  4. The container starts its own X server on the computer. There is a post for LXC using a privileged container but not for LXD yet.
  5. Using X2Go in the container to run either individual X11 applications or even a full desktop. You need to install X2Go components both on the container and the host. There is isolation but there is no GPU acceleration.


  1. Initial post.
on December 08, 2019 08:55 PM

December 06, 2019


Rhonda D'Vine

It's been a while. And to be honest, I'm overdue with a few things that I want to get out. One of those things is … Brazil doesn't let me go. I've been watching this country for over a year now, which is hopefully understandable given the political changes last year and this year's DebConf being there, and I promise to go into more detail on that in the future because there is more and more to it …

Because one of those things that showed me that Brazil doesn't want to let me go was stumbling upon this artist. They were shared by some friends, and I instantly fell for them. This is about Oxa, but see for yourself:

  • Toy: Their first performance at the show »The Voice of Germany«, where they also stated that they are non-binary. And the song is lovely.
  • Born This Way: With this one, the spoken word interlude gave me goosebumps and I'm astonished that this was possible to get into the show. Big respect!
  • I'm Still Standing: The lyrics in this song are also just as powerful as the other chosen ones. Extremely fine selection!

I'm absolutely in love with this person on so many levels – and yes, they are from Brazil originally. Muito obrigado, Brazil!

/music | permanent link | Comments: 0 | Flattr this

on December 06, 2019 11:01 PM

S12E35 – Feud

Ubuntu Podcast from the UK LoCo

This week we’ve been talking to the BBC about Thinkpads and Ubuntu goes Pro. We round up the news from the Ubuntu community and discuss our picks from the wider tech news.

It’s Season 12 Episode 35 of the Ubuntu Podcast! Alan Pope and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on December 06, 2019 07:00 PM

December 05, 2019

Ep 67 – PicoHoHoHo

Podcast Ubuntu Portugal

In this “Episódio 67 – PicoHoHoHo” we were once again a duo, with updates on the work of the UBports community, PicoCMS (a powerful CMS), and the return of 3D printing. In short, another normal week.



This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–at–

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you like.

If you are interested in other bundles, append ?partner=pup to the end of the link for any bundle (the same way as in the suggested link) and you will also be supporting us.

Attribution and licences

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](

This episode and the image used are licensed under Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), the full text of which can be read here. We are open to licensing to allow other types of use; contact us for validation and authorisation.

on December 05, 2019 11:00 PM

DNS over HTTP may be harmful?

Stephen Michael Kellat

Hacker News pointed out a blog post on the PowerDNS Blog discussing why DNS over HTTP may not be such a good idea. The Hacker News comments were on-brand. The comments overlook something pretty simple from the article.

The original author wrote in pertinent part:

We have to keep in mind that if a DNS lookup is slow, the entire internet feels sluggish. Slow DNS = Slow internet.

Right now my current domestic broadband provider is providing inconsistent service as it is. Having requests to a variety of known-good sites mysteriously timeout and crash is not unheard of. Having sites become mysteriously inaccessible is not unheard of either. I’m not living anywhere drastic either as this is just northeast Ohio about fifty miles outside Cleveland. It should not provide me with a performance boost when I disable this feature in Firefox.

Unfortunately I get such a performance boost. I don’t think it is something wrong with my machine or my in-house LAN. I’ve looked at the maps of the concept and frankly there are spots where this paradigm breaks down hard if viewed from a Red Team perspective.
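One way to check whether DNS latency is contributing to the sluggishness is to time lookups directly. A minimal sketch using only Python's standard library (`resolve_ms` is a made-up helper, and a real measurement should average many lookups against different resolvers):

```python
import socket
import time

def resolve_ms(host: str, port: int = 443) -> float:
    """Time a single name resolution in milliseconds."""
    start = time.perf_counter()
    socket.getaddrinfo(host, port)
    return (time.perf_counter() - start) * 1000.0

# localhost never leaves the machine, so it gives a lower-bound baseline;
# compare against real hostnames to see what your provider's resolver adds.
print(f"localhost resolved in {resolve_ms('localhost'):.2f} ms")
```

If remote names consistently take hundreds of milliseconds while the baseline is near zero, the resolver path is the bottleneck, which matches the article's "Slow DNS = Slow internet" point.
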

I’ve looked at the lack of competition in my local area on the FCC broadband deployment map. I’ve even considered dumping the current provider for somebody else. Unfortunately I don’t really have a choice beyond my current provider’s random loss of packets, disappearances of known active sites, and generally horrible maintenance of inherited rural legacy infrastructure that they probably aren’t making much revenue from.

Looking at traceroute output like this is getting unreal…

on December 05, 2019 10:23 PM

December 04, 2019

I’m voting for Owen Thompson and the SNP at the UK election on December 12th.  Normally for an election I would look through the manifestos and compare them along with consideration of the candidates and the party leaders to decide.  But this election is a single issue election.  It was called because the flawed 2016 referendum on EU membership did not ask what people wanted, it asked what they didn’t want (EU citizenship) but because there was no question asking what people did want instead it led to three years of parliament being stuck.  The SNP policy is for a double proposal to have a referendum on the UK’s EU membership against the Withdrawal Deal as currently negotiated, and then to have a referendum on Scottish independence.  This offers me the best chance to keep my EU citizenship and the freedoms it brings, while offering a good chance to get rid of a corrupt and pointless layer of government.

As I’ve said before, all the political parties let us down in 2016 by not effectively campaigning for EU membership and letting the racists and populists win out. They continue to let us down here on those measures. Not one party proposes to ban political advertising online, as is done with TV, despite the well-documented populism it enables. Not one seems to have a commitment to reform the rules of election and referendum campaigns to stop the illegal behaviour that Johnson’s Vote Leave campaign used in 2016. And I’ve never heard anyone point out that asking a referendum question which only says what you don’t want, and not what you do want instead, is a pointless question.

But here’s a quick look at the manifestos anyway.

SNP: Good stuff about referendums, no nuclear bombs and a critique of why Westminster is broken. The usual vague stuff about ending austerity without defining it, and promises for the NHS with no explanation of why that public service deserves them more than every other public service. Various good ideas for things to be devolved, like broadcasting or employment law. They do want to fix the voting franchise for UK elections to include non-UK EU citizens and people from age 16. They seem to think the UK government will allow an independence referendum while also de-legitimising the idea that there is no need for anyone to allow Scotland to have a referendum; this is a dangerous stance to take as well as incorrect, as no other country considers that it has to ask its neighbour for permission for independence. Climate emergency comes in a bit later in the manifesto than I’d like to see, but I suppose there’s not much the SNP can do at the UK level since the right layers of government for this are the EU and Scottish layers. Complying with international law to allow the return of residents of Diego Garcia is pleasingly in there, but not Catalonia. I’ve done door knocking with their candidate Owen Thompson this election; he is an experienced politician from the local and UK layers and I’m happy to support him.

Labour doesn’t get round to the Brexit question until page 80. The central issue of the election, which determines whether I will have freedoms and a functional economy in a year’s time, and they can’t be arsed to highlight their policy on it. When they do, they say they’ll negotiate a hard Brexit (outside the customs union; outside the single market) and then have a referendum on it. This sounds faffy and dislikeable. The leaflet from their candidate said she would campaign to remain and reform, but with no suggestion of what that reform would be, and there’s nothing about it in this manifesto, so I think she’s lying on that point. They support weapons of mass destruction despite the party membership in Scotland voting against them and the UK and Scottish leaders campaigning against them, which shows what a mess this organisation is. Lots of interesting stuff about renationalising public services, which I think is a strong part of the cause of the party leadership wanting to leave the EU: EU law will mean having to pay full rate for renationalising these industries, while outwith the EU they can pay below market rate. But on the whole I’m against cheating the rules of a functional economy; after all, this is my pension scheme they’d be cheating. No mention of complying with international law on Diego Garcia or Catalonia. Fixing the voting franchise is in there. The climate emergency is pleasingly put as a headline item.

The Lib Dems have clear constitutional positions, which is fine, but being against referendums on them is hypocritical. They compare Scottish independence to Brexit, which is nonsense. The climate emergency doesn’t come until halfway through. No mention of Diego Garcia or Catalonia. No mention of nuclear bombs. Nothing devolved to Scotland. Pleasingly, they do want to fix the undemocratic process where we get a prime minister without a vote of parliament or people, and they do want to fix the shutting down of parliament. Otherwise largely underwhelming.

The Conservative party is now a radicalised, dangerous nationalistic vehicle which supports shutting down parliament, corruption of referendums, limiting the voting franchise, blocking the release of reports on foreign interference in voting, and ignoring international law. Everyone should vote to stop them from getting power. They will start the Brexit process with the Withdrawal Agreement, but still with only a minimal plan for how to implement Brexit; their lie that this will “get Brexit done”, rather than the truth that it is only the start of the process, seems to be ignored by the media. Their hard Brexit will put up new borders, shut off supply chains, limit the economy and take away my freedoms. The headline item of course is to stop a referendum on independence, which is as hypocritical as it comes. The climate emergency doesn’t seem to feature. There is scary protectionist British nationalism, like “When we leave the EU, we will be able to encourage the public sector to ‘Buy British’”, which goes against basic economics and shows how far they have fallen from their Margaret Thatcher free-market policies, which, as simplistic and damaging as they were, at least were consistent. This party is run by people who ran illegal campaigns in 2016, take power without a vote, ignore international and national law and shut down parliament. They are not democratically accountable; they need to be stopped.

[ Update to the below paragraph, in my rush I missed the Green manifesto which is full of good stuff.]

The Greens aren’t standing in my constituency and don’t have a manifesto, and because of the voting system they won’t get any result except maybe helping the SNP lose where they should win, so despite being a party member I can’t advocate voting for them. They make the point that the climate emergency is more important than Brexit, but alas the EU is the right layer of government to take the lead on it, so EU membership is vital to helping prevent or limit it, and the votes this election need to be directed towards that.

So hopefully an SNP win in Scotland (like they have had in every election for the last decade) will help them support a Labour government in England to hold a referendum (with rules fixed to make it a valid and fair one) on EU membership vs Johnson’s hard Brexit proposal, and then a referendum on Scottish independence. But it probably won’t be that simple.

on December 04, 2019 10:18 PM

Python and AArch64

Marcin Juszkiewicz

Python runs everywhere, right? All those libraries are just one ‘pip install’ away. And we are used to it. Unless you are on AArch64.

On AArch64, when you do pip install SOMETHING you may end up with “no compiler installed” or “No lapack/blas resources found.” messages, all due to the lack of wheel files built for this architecture… And even if you have all the dependencies installed, building takes more time than installing an existing wheel file does.

But there is a light!

PEP 599 defined “manylinux2014” target which is the first supporting something else than just “x86(-64)” architecture. The new arrivals are AArch64, PPC64 (Big and Little Endian) and 32-bit Arm.
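The manylinux2014 tag defined by PEP 599 shows up directly in wheel filenames, which is how pip decides whether a prebuilt wheel matches your machine. A minimal sketch of pulling that tag out of a filename (the example wheel name below is made up for illustration):

```python
# Wheel filenames follow the pattern
#   {dist}-{version}(-{build})?-{python}-{abi}-{platform}.whl
# so the platform tag is the last dash-separated field before ".whl".

def wheel_platform_tag(filename: str) -> str:
    """Return the platform tag of a wheel filename."""
    stem = filename[: -len(".whl")]
    return stem.split("-")[-1]

# A hypothetical manylinux2014 wheel built for 64-bit Arm:
print(wheel_platform_tag("numpy-1.17.4-cp38-cp38-manylinux2014_aarch64.whl"))
# manylinux2014_aarch64
```

If pip finds no wheel whose platform tag matches your interpreter and machine, it falls back to building from source, which is exactly the slow path described above.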

I helped get them working on AArch64. I do not remember how I ended up there, to be honest ;D

And those images are now available in Pypa repository on

And there is more!

If your project uses Travis CI for testing and releases, then adding support for non-x86 architectures is just one edit away. All you need is an entry in the “arch” list of the build “matrix”:

   - os: linux
     arch: amd64
   - os: linux
     arch: arm64

And it really works!

I added support for it in “dumb-init”, so now kolla does not need to use any workarounds and can grab the binary straight from the project’s releases. It took just a few simple lines in the “.travis.yml” file.
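As a sketch, the matrix entries above slot into a full “.travis.yml” roughly like this (the language and script lines are assumptions for a hypothetical project, not taken from dumb-init):

```yaml
language: c

matrix:
  include:
    - os: linux
      arch: amd64
    - os: linux
      arch: arm64

script:
  - make
  - make check
```

Each entry in the include list becomes its own build job, so the same script runs once on an x86-64 machine and once on a 64-bit Arm machine.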

What is left to do now?

There are many Python projects out there. Most of them do not know that manylinux2014 got released and what it brings, or that Travis CI can give them non-x86 architecture support. I have slowly started creating issues and bug reports in them to make sure they are aware.

on December 04, 2019 09:53 AM

Your booth team makes or breaks your entire event strategy! If they’re not equipped to be successful, or set up with the rules of the road, then you’ve wasted your time, effort and budget, and this is compounded if you’re a start-up.

During the event, it’s you and your team of magical pugs who are the first point of interaction with your organisation, your product, and your company culture. Making it a good, memorable first in-person experience usually leads to more meaningful conversations down the road. (You know, what the marketing people call engagement. 🙂)

Your amazing colleagues’ role is to create and establish a relationship; they’ll follow up via email, maybe arrange a Lunch and Learn at a company office, or just speak more to folks about their individual interests.

Still, if you’re new to setting up a conference presence, or to being an amazing booth pug, you may not know the do’s and don’ts of booth duty. If you’re a busy events human at a new start-up, or just a busy human, you may want to copy this list as a starting point while you’re prepping for your next show.

So here we go …

Here are some best practices and guidelines to follow when preparing for conference season and booth duty!

Before the event:

  • Best practice: It’s useful, but not always possible to have a briefing call before the event so all the team members know what’s happening.
  • Always: Create an Event Briefing Document. It should have all relevant details: event date, location, booth number, event themes, key messages for your company, booth staffer list with contact details, and a list of items to expect on your booth. Bonus points if you add the tracking numbers of your swag shipments or items being delivered to your booth. You know at some point you’re going to have to call FedEx. Or DHL. Or ….
  • Get a good night’s sleep before the conference starts and throughout the event. I know there are often many social events going on the day before you have to stand for 8 hours straight, so try not to make it a late one. You need to be fresh, lively and ready to interface with the people who’ve taken the time to stop by! And remind your magical pugs to do the same in your pre-briefing call. 😉

At the event!

Allow enough time: We are all busy, but we must allow enough time to do each event properly. For example, arrive the evening before rather than the morning of the conference. Things often go wrong; let’s give ourselves enough time to fix a delayed flight or a lost bag of cables.

Be punctual! Show up well before the attendees. Remember, you’re on duty as a representative of your organisation, so you should be on the show floor 30 minutes before it opens for a final briefing and to find out where everything is.

Demos: The demo Gods can be cruel. Check your display each morning to make sure it (still) works.

Dress code: We live in the world of Insta, but we are professionals. Figure out whether your organisation has a preferred dress code for an event, e.g. if there is a specific t-shirt that needs to be worn for a launch. Trust me when I say this: wear comfortable shoes. I’d go as far as to say bring alternative shoes for different days. Standing is difficult; make it easier on your little twinkle toes!

Be prepared: If you are in charge of a demo, make sure the laptop is set up and ready the day before; turning up at the event to get it set up or installed is not a good use of your time. Make sure the laptop is charged the night before. Bring your charger with you (not everyone has the same connector), and an adaptor if you’re travelling in a different country, to be on the safe side!

Be approachable.

Avoid eating at the booth, or holding extended conversations with coworkers. It is only human nature not to be rude and want to interrupt people, so eating or chatting comes across like you don’t want to be interrupted. Instead, allow people to leave the booth to go and grab food, but also understand that you may not get your usual full hour for lunch during a busy conference.

Avoid sitting behind the desks at booth duty — people won’t engage with you if they think you’re working. Do not sit there with your laptop open, working. Your role at these events is to talk to people. There is nothing worse than walking past a booth at a conference, stopping to look at its messaging, and seeing people working on their laptops. Most will continue to walk on, and you’ve now lost an opportunity to talk to someone.

Be courteous: there’s always one person who wants to spark a debate, and the booth may not be the best place for that to happen. Prepare a disengagement line or two. The best ones are “Thanks for stopping by; I’m sorry I couldn’t help, but let me pass your details on to someone who can” or “How would you like me to follow up?”

Take notes. There are so many people and so little time. Brief notes will help you to be more effective with your follow-up. Most of the time you will have some sort of scanning tool, either your phone or a device that has been given to you, and these usually have a notes feature; use it, as it’s useful for following up. If not, use contact cards, a notes app, or even a blank email you send to yourself.

Stay upbeat: It’s easy to get discouraged when person after person walks by your booth seemingly without a glance in your direction. Even with the best booth out there, this will happen sometimes. The key is staying motivated and remaining approachable. Look for opportunities to engage with passers-by, even if they don’t initiate a conversation. Make the first move: draw people in by asking whether they’ve heard of Couchbase or, if we’re running a competition, whether they’ve entered. Booths are hard work, and social interaction is hard work: that’s the job we have.

Stay refreshed. Let workers take turns going on a break, either for a brisk walk around the venue to get some oxygen, or a relaxing sit down at the snack bar. Fresh workers bring more liveliness into their presentations and encounters, and people will respond better.

No sessions: This one causes debate depending on your role and the size of your team and organisation. Being on booth duty at a conference does not mean you are there to go to sessions. In most cases the sponsorship will not cover attendance at talks. Your role at the event is to be on the booth, not sitting in lecture rooms.

Know Your Stuff. Grab Their Attention Fast. You will only have a few seconds to capture attendees’ attention in the midst of all the other lights, sounds, and happenings at the event.

Social Media: Use it. It can help drive people to your booth, let them know what you have to offer, and advertise demos, raffles, or a guest to meet on the booth.

Tweet pictures using your conference and product tag, if you have one, and the event hashtag. Work with your social media team before the event to schedule tweets encouraging people to stop by your booth or attend a talk you are presenting.

I hope this list helps. It’s definitely not definitive, but it can certainly help when you’re starting out!

on December 04, 2019 08:13 AM

December 03, 2019

Full Circle Weekly News #156

Full Circle Magazine

Canonical Donates More Ubuntu Phones to UBports
A Linux-Based Smartphone Promises to Keep You Anonymous
Sparky Linux Releases Special Editions
Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust

on December 03, 2019 07:31 PM
The Lubuntu Team is pleased to announce we are running a Focal Fossa wallpaper competition, giving you, our community, the chance to submit and get your favorite wallpapers included in the Lubuntu 20.04 LTS (Long Term Support) release. Show Your Artwork: To enter, simply post your image into this thread on our Discourse forum. We […]
on December 03, 2019 12:53 AM

December 02, 2019

Welcome to the Ubuntu Weekly Newsletter, Issue 607 for the week of November 24 – 30, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on December 02, 2019 09:19 PM

December 01, 2019

TL;DR: Mostly a bunch of package sponsoring this month. :)

2019-11-11: Sponsor package python-tempora (1.14.1-1) for Debian unstable (Python team request).

2019-11-11: Sponsor package python-jaraco.functools (2.0-2) for Debian unstable (Python team request).

2019-11-11: Review package fpylll (Needs some more work) (Python team request).

2019-11-12: Sponsor package fpylll (0.4.1+ds1-7) for Debian unstable (Python team request).

2019-11-12: Review package python-flask-openid (Needs some more work) (Python team request).

2019-11-12: Upload package calamares (3.2.16-1) to Debian unstable.

2019-11-12: Review package python-six (Deferred to maintainer) (Python team request).

2019-11-12: Upload package gnome-shell-extension-draw-on-your-screen (14.1-1) to Debian unstable.

2019-11-12: Upload package vim-airline (11-1) to Debian unstable.

2019-11-12: Upload package gnome-shell-extension-arc-menu (38-dev-3) to Debian unstable.

2019-11-12: Sponsor package python-opentimestamps (0.4.1-1) for Debian unstable (Python team request).

2019-11-12: Sponsor package sphinx-autodoc-typehints (1.9.0-1) for Debian unstable (Python team request).

2019-11-12: Sponsor package flask-principal (0.4.0-2) for Debian unstable (Python team request).

2019-11-13: Sponsor package runescape (0.6-2) for Debian unstable ( request).

2019-11-13: Sponsor package trace-cmd (2.8.3-1) for Debian unstable ( request).

2019-11-13: Sponsor package gexiv (0.12.0-1) for Debian unstable ( request).

2019-11-13: Sponsor package notepadqq (2.0.0~beta1-1) for Debian unstable ( request).

2019-11-13: Sponsor package hijra (0.4.1-2) for Debian unstable ( request).

2019-11-13: Sponsor package simple-scan (3.34.1-2) for Debian unstable ( request).

2019-11-13: Sponsor package gyros (0.3.12) for Debian unstable ( request).

2019-11-13: Sponsor package sysbench (1.0.18+ds-1) for Debian unstable ( request).

2019-11-14: Sponsor package onedrivesdk (1.1.8-2) for Debian unstable (Python team request).

2019-11-14: Sponsor package pympler (0.7+dfsg1-1) for Debian unstable (Python team request).

2019-11-14: Sponsor package python3-portend (2.5-1) for Debian unstable (Python team request).

2019-11-14: Sponsor package clamfs (1.1.0-1) for Debian unstable ( request).

2019-11-14: Sponsor package xautolock (2.2-6) for Debian unstable ( request).

2019-11-14: Review package piper (0.3-1) (Needs some more work) ( request).

2019-11-14: Review package srcpy (1.10+ds-1) (Needs some more work) ( request).

2019-11-14: Sponsor package python-ebooklib (0.17-1) for Debian unstable ( request).

2019-11-14: Sponsor package plowshare (2.1.7-4) for Debian unstable ( request).

2019-11-14: Sponsor package py-libzfs (0.0+git20191113.2991805-1) for Debian unstable (Python team request).

2019-11-18: Sponsor package rpl (1.6.3-1) for Debian unstable ( request).

2019-11-19: Upload new package feed2toot (0.12-1) to Debian unstable.

2019-11-21: Sponsor package isbg (2.2.1-2) for Debian unstable (Python team request).

2019-11-21: Sponsor package python-elasticsearch (7.1.0-1) for Debian unstable (Python team request).

2019-11-21: Review package python-fsspec (0.6.0-1) (Needs some more work) (Python team request).

2019-11-21: Sponsor package blastem ( for Debian unstable ( request).

2019-11-21: Review package ledmon (0.93-1) for Debian unstable (needs some more work) ( request).

2019-11-21: Sponsor package ripser (1.1-1) for Debian unstable ( request).

2019-11-21: Sponsor package surgescript (0.5.4-1) for Debian unstable ( request).

2019-11-21: Upload package power (1.4+dfsg-4) to Debian unstable (Closes: #854887).

2019-11-22: Sponsor package sqlobject (3.7.3+dfsg-1) for Debian unstable (Python team request).

2019-11-22: Upload package foliate (1.5.3+dfsg1-1) to Debian experimental.

2019-11-28: Sponsor package micropython (1.11-1) (E-mail request).

2019-11-28: Sponsor package qosmic (1.6.0-2) ( request).

on December 01, 2019 12:14 PM

Meat as Technology

Bryan Quigley

My son and I tried some plant-based "meat" products. Primarily doing this for the planet/climate. Dates are estimates as I'm doing this from memory.

Date     Meal      Brand        Product   Cost  Notes
2019-06  Burger    Impossible   Ruby's    $14   Tried the Impossible Burger at Ruby's; not impressed. In-N-Out across the street sells better burgers for ~$3. Not worth it.
2019-09  Burritos  MorningStar  ground    ????  Meat for burritos - I don't remember the exact one, but it was inedible for both of us.
2019-10  Sausage   Beyond Meat  Sausage   $9    Cooked like kielbasa (may not be recommended). We both ate it, but wouldn't do it again. Described as "rotted chicken nuggets".
2019-10  Burritos  Beyond Meat  ground    $9    Good, but we still have a slight preference for the ground turkey (ranging from $2-$7) we usually get.
2019-11  Burritos  Impossible   ground    $9    Both of us prefer it to our previous ground turkey, by a large margin.
on December 01, 2019 07:00 AM

November 29, 2019

Black Friday Alternative Planning

Stephen Michael Kellat

It has become an odd tradition in America to effectively “battle shop” on the Friday following the Thanksgiving holiday. It is already weird that we even have the Thanksgiving holiday here as it is something that pretty much only Canada and the USA have. Having previously worked in consumer electronics retail I have effectively sworn it off after a few too many rounds of handling people who actually like to “battle shop”.

If you’re safe at home and want to do something else to push Ubuntu forward, may I put forward some ideas? How about:

As for me, I am going to be avoiding the stores Friday if at all possible. Going out in the craziness is just not worth it at this time…

on November 29, 2019 04:34 AM

November 28, 2019

Ep 66 – Bestas à solta!

Podcast Ubuntu Portugal

In this “Episódio 66 – Bestas à solta!” (Episode 66 – Beasts on the loose!): Another episode in which we were once again joined by Luís da Costa, director of Libretrend and tamer of wildebeests in his spare time, to share with our listeners the details of his new equipment.



This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1, or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want.

If you are interested in other bundles, appending ?partner=pup to the end of the link for any bundle (just as in the suggested link) will also support us.

Attribution and licences

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, and is licensed under the terms of the [CC0 1.0 Universal License](

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on November 28, 2019 05:23 PM

S12E34 – Buggy Boy

Ubuntu Podcast from the UK LoCo

This week we’ve been in Vancouver and planning for Ubuntu 20.04. We respond to all your distro hopping feedback and bring you a command line love.

It’s Season 12 Episode 34 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
    • Martin has been in Vancouver at the Product Strategy sprint for Ubuntu 20.04
  • We discuss a segment.

  • We share a type of Lurve:

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • Image taken from Buggy Boy published in 1987 for Commodore 64 by Elite Systems.

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on November 28, 2019 03:00 PM

November 27, 2019

I wanted to put together a few thoughts I had on gifts for my fellow hackers this holiday season. I’m including a variety of different things to appeal to almost anyone involved in information security or hardware hacking, but I’m obviously a bit biased to my own areas of interest. I’ve tried to roughly categorize things, but they tend to transcend boundaries somewhat. Got a suggestion I missed? Hit me up on Twitter.


Quick Reference Manuals (RTFM, BTFM, HashCrack)


Though some have questioned the usefulness of having this material in printed form, I sometimes like being able to thumb through these for a quick reference. The 3 quick references I’ve used a bunch of times are:

Each of these is a quick translation of information for cases where you are not immediately familiar with the relevant details, such as needing to run shell commands on a platform that is less familiar to you, or looking up esoteric post-exploitation information at the last minute. Though internet access through a cell phone makes it less critical, having these on site can be a quick win, and if you ever need to test or assess in an area with no reception, it’s even more of a benefit.

Breaking and Entering

Breaking and Entering: The Extraordinary Story of a Hacker Called “Alien” is a mostly-true story about a professional hacker (penetration tester), detailing her start while a student at MIT through her career as a penetration tester. It details not only some of the information security-related hacks, but also other clever hacks and explorations in her life. It’s an exciting read, and I was super happy to see how detailed and accurate the recollection is.

Cult of the Dead Cow


Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World is a great history of the early days of hacking and hacking groups. cDc is probably the single most influential hacking group, and is responsible for many of the tools that are used by information security professionals today. With significant overlap with l0pht (another influential group), the group helped to shape what hacking is today and has had significant influence that many individuals may not realize. Those outside the hacking scene may take many things for granted that are influenced by cDc. Whether you realize it or not, they shaped the internet of today, and this is a well-researched read into their history and influence.


Encrypted Flash Drive


This encrypted flash drive has hardware-based encryption to protect the data contained on it. Because the security is hardware-based, it works with any operating system or hardware platform, and it’s immune to hardware keylogging. It uses a PIN pad for entering the passcode, and the operating system can’t interact with the drive at all until it is unlocked. The downside of hardware encryption is that it is very hard to verify or audit, but I believe the quality of this particular flash drive to be relatively high. Though these drives are expensive for the capacity, they’re not all about dollars per gigabyte – the features are in the firmware. It’s available at least up to 64GB.

Keysy RFID Duplicator


The Keysy RFID duplicator lets you clone various forms of RFID credentials either into its internal memory, or into a writable card. Though it’s dead simple to use, it’s not as flexible or sophisticated as an RFID hacking tool like the Proxmark. It’s super useful for cloning things like apartment gate fobs or basic proxcard authentication systems for access control. It won’t work for anything using encryption or handshaking. It also only works on low-frequency (125 kHz) RFID technology, which tends to be the unencrypted cards anyway.

Proxmark3 RDV4


The Proxmark3 RDV4 is an RFID hacking kit. Unlike the Keysy, this is for those interested in more advanced RFID hacking, including cracking encrypted RFID cards, or researching custom RFID implementations. This supports both 125 kHz and 13.56 MHz RFID cards, and due to the customizable firmware, it can be adapted to run just about any protocol known. There are even some offline modes that don’t require a connection to a computer to hack the RFID cards (such as automatic cloning or replay).

  • Can pretend to be either a card or a reader.
  • Can sniff communications between other readers and cards.
  • Can operate standalone.
  • Supports multiple RFID modes.

Yubikey 5


The Yubikey 5 is a hardware security token for a variety of purposes. Most obviously, it offers support for U2F/FIDO2/WebAuthn for account login on sites like Google, GitHub, Dropbox, Coinbase, Lastpass, and more. This is a 2nd factor authentication mechanism that can’t be phished or attacked via mobile malware.

The Yubikey 5 also supports a smartcard mode where it can store OpenPGP and/or SSH keys in its secure memory, protecting them against local malware attempts to steal the keys.

Packet Squirrel


The Hak5 Packet Squirrel is a great little Man-in-the-Middle device. With dual ethernet interfaces, it’s a physical MITM, so not vulnerable to the kinds of detection that work against ARP spoofing and other techniques. It’s a great option for a penetration test drop box, as well as things as simple as debugging network problems when you can’t run a packet capture on the endpoint itself. It’s USB micro powered and supports pre-programmable payloads via an external switch, so you can have it ready to perform any of several roles, depending on the situation you find yourself in. It runs a full Linux stack, so lots of capabilities available there.

Shark Jack


The Hak5 Shark Jack is a tiny implant with a small built-in battery. The battery allows it to be completely self-contained, needing only an ethernet port to plug into. This is perfect for when you find that ethernet port behind a piece of furniture and can deploy your implant quickly. This isn’t an implant to leave in place – the battery only lasts about 10-15 minutes. (Though I suppose you could power it via USB-C, so it’s not impossible to run it that way.) By default it will do a quick nmap scan and save the results to the internal flash, but you can, of course, script it to do anything you want when plugged in. Like the Packet Squirrel, there’s a switch to choose the mode you want to run.


Anker 60W Dual USB-PD Charger

Anker USB-PD Power Supply

Anker is one of my favorite manufacturers of chargers, USB battery banks, etc. I’ve had very good experiences with their products, and this 60W USB-PD charger with 2 USB-C ports is one of the most compact and capable USB-PD chargers I’ve found. It can charge two phones or laptops at the same time, or – best of all – one of each, which makes it great for travel (laptop + phone). Using Gallium Nitride (GaN) instead of Silicon transistors makes it smaller and more efficient than other models of USB power supplies.

Anker USB-PD Battery Pack

Anker USB-PD Battery

Like their USB-PD chargers, I’m super happy with Anker battery packs. My main travel USB power supply is a smaller and older version of this battery pack, but I have no doubt this one is also a great option. This battery is 26800mAh, which is about 99Wh, so just barely under the FAA 100 Wh limit for lithium ion batteries – in other words, this is the largest possible battery bank you can bring on an airplane in the US. It’s about 7 full charges of a cell phone, or a complete recharge of your USB-C powered laptop.

Travel Router


It’s amazing how useful a travel router is on the road. My current favorite is the AR-750S (Slate) from GL.iNet. The stock firmware is based on OpenWRT, which means you can do a wide variety of things by installing a stock OpenWRT build and then using the wide OpenWRT ecosystem of packages to enhance your router. Alternatively, you can build your own custom OpenWRT build with whatever features and configuration you’d like, using OpenWRT’s buildroot system. The router has a lot of hardware features, including both a core NOR flash and an expanded NAND flash.

Some of the reasons for using a travel router include:

  • Single connection for multiple devices when you’re limited to a single connection (e.g., hotels)
  • VPN connection for all connected devices.
  • Drop device for penetration testing engagements.
  • Local network for client-to-client comms (e.g., Chromecast)

There are also cheaper options like the GL-MT300N or the GL-AR300M if you don’t need all the power of the Slate.

Keyport Pivot


More than just travel, the Keyport Pivot is part of my every day carry. I have a tendency to carry too much, and that includes keys. My keys feel so much more compact and so much more organized when in the Keyport Pivot, which keeps them all together at once. I also carry the MOCA Multi-tool in my Pivot, which is a nice multi-tool that meets TSA guidelines, so is great for those that travel regularly. In fact, the MOCA is also available in a standalone format with a handle and paracord pull.


Raspberry Pi 4


The Raspberry Pi 4 is the latest generation of the venerable Raspberry Pi single board computer. This generation has entered the realm of being a full desktop for web surfing, etc., as well as an option for a home theater PC or emulating older console game titles. It even has enough processing power to pair with the Analog Discovery 2 to be a PC-based oscilloscope/logic analyzer. They’ve finally upgraded to Gigabit ethernet, so much better on the network (though it retains built-in WiFi support as well).

Circuit Playground Express


Adafruit’s Circuit Playground Express is the ultimate introduction to embedded devices. Programmable in either the popular Arduino IDE or CircuitPython, it takes the concept of an Arduino one step further. Instead of having to hook up lights, buttons, or sensors, they are all integrated into the one board. It includes two push buttons, an accelerometer, 10 RGB LEDs, temperature, light and sound sensors, and much more. It’s powered by an ARM microcontroller at 48 MHz to let you make use of all these inputs and outputs, and it can be extended via the connections around the outside (which are large enough to use with alligator clips). If you or someone you know wants to learn embedded development without needing to do wiring or circuits themselves, this is a great way to get started.

iFixit Tool Kit

This iFixit Tool Kit is my go-to toolkit for opening and working with electronics. It has all the bits I’ve found on devices, including the “security Torx” bits, precision Phillips and slotted bits, and hex bits. The handle is a great size to both get a grip but also fit into tight spaces, and the flex shaft helps in even tighter spaces. The plastic spudgers and pry tool are great for getting into devices held together with clips instead of (or in addition to) screws. (Like the base plate on my Lenovo laptop, and so many electronic devices.) I’ve used some of the cheaper clones of these kits, and the pieces just don’t hold up as well as these do, or are made with materials that don’t perform as well.

Pocket Flashlight


This USB-rechargeable pocket flashlight is a very useful tool to have on hand. Flashlights are obviously useful, but being rechargeable is both eco-friendly and convenient, and being pocket-sized ensures you can always have it with you. (Or carry it in your bag instead of your pocket, which is my approach so I don’t lose it.) Streamlight is a well-known brand with strong ratings and a history of quality, so this one will keep going reliably.


Sugru

Sugru is a “Mouldable Glue”, which is basically an adhesive putty that holds fast when it cures. It is useful for making custom hooks, adding protection or strain relief to cables, waterproofing around openings, or repairing small breaks. As it’s silicone based, it remains slightly flexible and holds up well against water. I’ve used it before to seal around cables going through openings in enclosures and to make custom cable organizers.

Skeletool CX


Like the pocket flashlight, I like to carry a multitool with me daily. I’ve carried the Skeletool for a few years now, including a replacement after I accidentally got to the TSA checkpoint with one. (Oops.) This has a knife blade, pliers, and the ability to hold interchangeable bits in the handle for a screwdriver. I’ve run into many occasions where this was useful, ranging from quick repairs to taking something apart on a whim, to opening packages. There’s even a bottle opener at the base of the handle, which is perfect for those adult beverages. (No corkscrew for you wine drinkers. Try screwtop bottles.)

on November 27, 2019 08:00 AM

November 26, 2019

Full Circle Weekly News #155

Full Circle Magazine

Microsoft Edge Will be Available on Linux
SINGA becomes top-level project of the Apache Software Foundation
Canonical Will Fully Support Ubuntu Linux on All Raspberry Pi Boards
Ubuntu Bug Reveals Your Media Files To Others Without Warning
Libarchive vulnerability can lead to code execution on Linux, FreeBSD, NetBSD
Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust

on November 26, 2019 07:04 PM

November 21, 2019

While much of the work on kernel Control Flow Integrity (CFI) is focused on arm64 (since kernel CFI is available on Android), a significant portion is in the core kernel itself (and especially the build system). Recently I got a sane build and boot on x86 with everything enabled, and I’ve been picking through some of the remaining pieces. I figured now would be a good time to document everything I do to get a build working in case other people want to play with it and find stuff that needs fixing.

First, everything is based on Sami Tolvanen’s upstream port of Clang’s forward-edge CFI, which includes his Link Time Optimization (LTO) work, which CFI requires. This tree also includes his backward-edge CFI work on arm64 with Clang’s Shadow Call Stack (SCS).

On top of that, I’ve got a few x86-specific patches that get me far enough to boot a kernel without warnings pouring across the console. Along with that are general linker script cleanups, CFI cast fixes, and x86 crypto fixes, all in various states of getting upstreamed. The resulting tree is here.

On the compiler side, you need a very recent Clang and LLD (i.e. “Clang 10”, or what I do is build from the latest git). For example, here’s how to get started. First, check out, configure, and build Clang (and include a RISC-V target just for fun):

# Check out latest LLVM
mkdir -p $HOME/src
cd $HOME/src
git clone https://github.com/llvm/llvm-project.git
mkdir llvm-build
cd llvm-build
# Configure
cmake -DCMAKE_BUILD_TYPE=Release \
      -DLLVM_ENABLE_PROJECTS='clang;lld' \
      -DLLVM_TARGETS_TO_BUILD='X86;RISCV' \
      ../llvm-project/llvm
# Build!
make -j$(getconf _NPROCESSORS_ONLN)
# Install cfi blacklist template (why is this missing from "make" above?)
mkdir -p $(echo lib/clang/*)/share
cp ../llvm-project/compiler-rt/lib/cfi/cfi_blacklist.txt lib/clang/*/share/cfi_blacklist.txt

Then check out, configure, and build the CFI tree. (This assumes you’ve already got a checkout of Linus’s tree.)

# Check out my branch
cd ../linux
git remote add kees https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git
git fetch kees
git checkout kees/kspp/cfi/x86 -b test/cfi
# Configure (this uses "defconfig" but you could use "menuconfig"), but you must
# include CC and LD in the make args or your .config won't know about Clang.
make defconfig \
     CC=$HOME/src/llvm-build/bin/clang LD=$HOME/src/llvm-build/bin/ld.lld
# Enable LTO and CFI.
scripts/config \
     -e CONFIG_LTO -e CONFIG_LTO_CLANG \
     -e CONFIG_CFI_CLANG -e CONFIG_CFI_PERMISSIVE
# Enable LKDTM if you want runtime fault testing:
scripts/config -e CONFIG_LKDTM
# Build!
make -j$(getconf _NPROCESSORS_ONLN) \
     CC=$HOME/src/llvm-build/bin/clang LD=$HOME/src/llvm-build/bin/ld.lld

Do not be alarmed by various warnings, such as:

ld.lld: warning: cannot find entry symbol _start; defaulting to 0x1000
llvm-ar: error: unable to load 'arch/x86/kernel/head_64.o': file too small to be an archive
llvm-ar: error: unable to load 'arch/x86/kernel/head64.o': file too small to be an archive
llvm-ar: error: unable to load 'arch/x86/kernel/ebda.o': file too small to be an archive
llvm-ar: error: unable to load 'arch/x86/kernel/platform-quirks.o': file too small to be an archive
WARNING: EXPORT symbol "page_offset_base" [vmlinux] version generation failed, symbol will not be versioned.
WARNING: EXPORT symbol "vmalloc_base" [vmlinux] version generation failed, symbol will not be versioned.
WARNING: EXPORT symbol "vmemmap_base" [vmlinux] version generation failed, symbol will not be versioned.
WARNING: "__memcat_p" [vmlinux] is a static (unknown)
no symbols

Adjust your .config as you want (but, again, make sure the CC and LD args are pointed at Clang and LLD respectively). This should(!) result in a happy bootable x86 CFI-enabled kernel. If you want to see what a CFI failure looks like, you can poke LKDTM:

# Log into the booted system as root, then:
cat <(echo CFI_FORWARD_PROTO) >/sys/kernel/debug/provoke-crash/DIRECT

Here’s the CFI splat I see on the console:

[   16.288372] lkdtm: Performing direct entry CFI_FORWARD_PROTO
[   16.290563] lkdtm: Calling matched prototype ...
[   16.292367] lkdtm: Calling mismatched prototype ...
[   16.293696] ------------[ cut here ]------------
[   16.294581] CFI failure (target: lkdtm_increment_int$53641d38e2dc4a151b75cbe816cbb86b.cfi_jt+0x0/0x10):
[   16.296288] WARNING: CPU: 3 PID: 2612 at kernel/cfi.c:29 __cfi_check_fail+0x38/0x40
[   16.346873] ---[ end trace 386b3874d294d2f7 ]---
[   16.347669] lkdtm: Fail: survived mismatched prototype function call!

The claim of “Fail: survived …” is due to CONFIG_CFI_PERMISSIVE=y. This allows the kernel to warn but continue with the bad call anyway. This is handy for debugging. In a production kernel that would be removed and the offending kernel thread would be killed. If you run this again with the config disabled, there will be no continuation from LKDTM. :)

Enjoy! And if you can figure out before me why there is still CFI instrumentation in the KPTI entry handler, please let me know and help us fix it. ;)

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

on November 21, 2019 05:09 AM

November 20, 2019

Today, November 20th, is Trans Day of Remembrance. It is about remembering the victims of hate crimes who are no longer among us. Last year we learned of at least 331 murdered trans people; the real number is, as always, higher. Also as always, it mostly affects trans women of color, who are the target of multiple discriminatory patterns.

What is also a pattern is that Brazil accounts for a fair chunk of those murders. Unfortunately the country has been at the top of that statistic for a while, but last year's election of a right-wing, outspokenly queer-hating person as president gave those who feel that hate a sense of legitimacy, which obviously makes it harder to survive these days. My thoughts thus are specifically with the people of Brazil who fight for their survival.

Right-wing parties, though, are rising all around the globe, spreading hate. Our Debian Free Software Guidelines say in #5, "No Discrimination Against Persons or Groups", and this is something that we can't limit only to software licenses but must also extend to the way we work as a community.

If you ask what you can do: Support your local community spaces and support groups. I had the pleasure to meet Grupo Dignidade during my stay in Curitiba for DebConf 19, and was very thankful for a representative of that group to join my Debian Diversity BoF. Thanks again, Ananda, it was lovely having you!

Meu Corpo é Político - my body is political.


on November 20, 2019 01:33 PM

November 19, 2019

Managing dynamic inventory in private subnets using bastion jump box
The title of this post is quite long, but it describes something I ran into over the last few weeks. I had a VPC in AWS, creating x amount of instances in a private network, and it was quite complex to manage these instances using static inventory files. So I will explain how to manage this problem with Ansible.
Before continuing, I want to say that these articles are really good and can help you with these issues.
So you may be asking: if these articles are so good, why are you writing about this again? Easy: I'm doing this in Gitlab CI, and I suppose other CIs will run into similar issues. It's not possible to connect to the instances using the instructions above.

First Step

We get our inventory in a dynamic way. For this we will use the EC2 dynamic inventory scripts.
We need to modify the ec2.ini file, uncommenting vpc_destination_variable and setting its value to private_ip_address.
An example:
# For server inside a VPC, using DNS names may not make sense. When an instance
# has 'subnet_id' set, this variable is used. If the subnet is public, setting
# this to 'ip_address' will return the public IP address. For instances in a
# private subnet, this should be set to 'private_ip_address', and Ansible must
# be run from within EC2. The key of an EC2 tag may optionally be used; however
# the boto instance variables hold precedence in the event of a collision.
# WARNING: - instances that are in the private vpc, _without_ public ip address
# will not be listed in the inventory until You set:
vpc_destination_variable = private_ip_address
#vpc_destination_variable = ip_address
Be sure your ansible.cfg has the following line:
host_key_checking = False
This is useful because we're running this in CI and can't hit Enter to accept the connection in the terminal.
Then we begin working with our YAML file. As I'm running this in a container, I need to create the .ssh directory and the config file. Here it's important to add StrictHostKeyChecking=no. If we don't, this will fail in our CI, as we can't hit Enter. If you don't include it and run it locally, it will still work.
- name: Creates ssh directory
  file:
    path: ~/.ssh/
    state: directory

- name: Create ssh config file in local computer
  copy:
    dest: ~/.ssh/config
    content: |
      Host 10.*.*.*
        User ubuntu
        IdentityFile XXXXX.pem
        ProxyCommand ssh -q -W %h:%p {{ lookup('env', 'IP') }}

      Host {{ lookup('env', 'IP') }}
        User ubuntu
        IdentityFile XXXXX.pem
        ForwardAgent yes

And finally we test it by running the ping module:
- name: test connection
  ping:

In case you need the code :
on November 19, 2019 04:09 PM

November 18, 2019

Linux Applications Summit

Jonathan Riddell

I had the pleasure of going to the Linux Applications Summit last week in Barcelona.  A week of talks and discussion about getting Linux apps onto people’s computers.  It’s the third of these summits, but the first ones started out with a smaller scope (and were located in the US), being more focused on Gnome tech, while this renamed summit was true cross-project collaboration.


Oor Aleix here opening the conference (Gnome had a rep there too of course).


It was great to meet with Heather here from Canonical’s desktop team who does Gnome Snaps, catching up with Alan and Igor from Canonical too was good to do.


Here is oor Paul giving his talk about the language used.  I had been minded to use “apps” for the stuff we make but he made the point that most people don’t associate that word with the desktop and maybe good old “programs” is better.


Oor Frank gave a keynote asking why can’t we work better together?  Why can’t we merge the Gnome and KDE foundations for example?  Well there’s lots of reasons why not but I can’t help think that if we could overcome those reasons we’d all be more than the sum of our parts.

I got to chat with Ti Lim from Pine64 who had just shipped some developer models of his Pine Phone (meaning he didn’t have any with him).

Purism were also there talking about the work they’ve done using Gnomey tech for their Librem5 phone.  No word on why they couldn’t just use Plasma Mobile where the work was already largely done.

This conference does confirm to me that we were right to make it a goal of KDE to be All About the Apps; the new technologies and stores we have to distribute our programs mean we can finally get our stuff out to the users directly and quickly.

Barcelona was of course beautiful too, here’s the cathedral in moonlight.


on November 18, 2019 03:04 PM

November 15, 2019

A Debian LTS logo

Like each month, here comes a report about
the work of paid contributors
to Debian LTS.

Individual reports

In October, 214.50 work hours have been dispatched among 15 paid contributors. Their reports are available:

  • Abhijith PA did 8.0h (out of 14h assigned) and gave the remaining 6h back to the pool.
  • Adrian Bunk didn’t get any hours assigned as he had been carrying 26h from September, of which he gave 8h back, thus carrying over 18h to November.
  • Ben Hutchings did 22.25h (out of 22.75h assigned), thus carrying over 0.5h to November.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 46.25h (out of 21.75h assigned at the beginning of the month and 24.5h assigned at the end of the month).
  • Hugo Lefeuvre did 46.5h (out of 22.75h assigned and 23.75h from September).
  • Jonas Meurer didn’t get any hours assigned and gave back the 14.5h he was carrying from September as he did nothing.
  • Markus Koschany did 22.75h (out of 22.75h assigned).
  • Mike Gabriel did 11.75h (out of 10h assigned and 1.75h from September).
  • Ola Lundqvist did 8.5h (out of 8h assigned and 14h from September), thus carrying over 13.5h to November.
  • Roberto C. Sánchez did 8h (out of 8h assigned).
  • Sylvain Beucler did 22.75h (out of 22.75h assigned).
  • Thorsten Alteholz did 22.75h (out of 22.75h assigned).
  • Utkarsh Gupta did 10.0h (out of 10h assigned).

Evolution of the situation

In October Emilio spent many hours bringing firefox-esr 68 to jessie and stretch, thus expanding the impact from Debian LTS to stable security support. For jessie firefox-esr needed these packages to be backported: llvm-toolchain, gcc-mozilla, cmake-mozilla, nasm-mozilla, nodejs-mozilla, cargo, rustc and rust-cbindgen.
October was also the month in which we saw the first paid contributions from Utkarsh Gupta, who was a trainee in September.

Starting in November we also have a new trainee, Dylan Aïssi. Welcome to the team, Dylan!

We currently have 59 LTS sponsors sponsoring 212h per month. Still, as always we are welcoming new LTS sponsors!

The security tracker currently lists 35 packages with a known CVE and the dla-needed.txt file has 35 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


on November 15, 2019 02:26 PM

Previously: v5.2.

Linux kernel v5.3 was released! I let this blog post get away from me, but it’s up now! :) Here are some security-related things I found interesting:

heap variable initialization
In the continuing work to remove “uninitialized” variables from the kernel, Alexander Potapenko added new “init_on_alloc” and “init_on_free” boot parameters (with associated Kconfig defaults) to perform zeroing of heap memory either at allocation time (i.e. all kmalloc()s effectively become kzalloc()s), at free time (i.e. all kfree()s effectively become kzfree()s), or both. The performance impact of the former under most workloads appears to be under 1%, if it’s measurable at all. The “init_on_free” option, however, is more costly but adds the benefit of reducing the lifetime of heap contents after they have been freed (which might be useful for some use-after-free attacks or side-channel attacks). Everyone should enable CONFIG_INIT_ON_ALLOC_DEFAULT_ON=1 (or boot with “init_on_alloc=1”), and the more paranoid system builders should add CONFIG_INIT_ON_FREE_DEFAULT_ON=1 (or “init_on_free=1” at boot). As workloads are found that cause performance concerns, tweaks to the initialization coverage can be added.
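As a sketch, those recommendations map onto a kernel config fragment like the following (note that Kconfig spells boolean defaults with “=y”):

```
# Zero heap memory at allocation time (kmalloc() behaves like kzalloc()):
CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y
# For the more paranoid: also zero memory at free time (costlier, but
# shortens the lifetime of freed heap contents):
CONFIG_INIT_ON_FREE_DEFAULT_ON=y
```

The same behavior is available without rebuilding by booting with “init_on_alloc=1” (and optionally “init_on_free=1”).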

pidfd_open() added
Christian Brauner has continued his pidfd work by creating the next needed syscall: pidfd_open(), which takes a pid and returns a pidfd. This is useful for cases where process creation isn’t yet using CLONE_PIDFD, and where /proc may not be mounted.

-Wimplicit-fallthrough enabled globally
Gustavo A.R. Silva landed the last handful of implicit fallthrough fixes left in the kernel, which allows for -Wimplicit-fallthrough to be globally enabled for all kernel builds. This will keep any new instances of this bad code pattern from entering the kernel again. With several hundred implicit fallthroughs identified and fixed, something like 1 in 10 were missing breaks, which is way higher than I was expecting, making this work even more well justified.

x86 CR4 & CR0 pinning
In recent exploits, one of the steps for making the attacker’s life easier is to disable CPU protections like Supervisor Mode Access (and Execute) Prevention (SMAP and SMEP) by finding a way to write to CPU control registers to disable these features. For example, CR4 controls SMAP and SMEP, where disabling those would let an attacker access and execute userspace memory from kernel code again, opening up the attack to much greater flexibility. CR0 controls Write Protect (WP), which when disabled would allow an attacker to write to read-only memory like the kernel code itself. Attacks have been using the kernel’s CR4 and CR0 writing functions to make these changes (since it’s easier to gain that level of execute control), but now the kernel will attempt to “pin” sensitive bits in CR4 and CR0 to avoid them getting disabled. This forces attacks to do more work to enact such register changes going forward. (I’d like to see KVM enforce this too, which would actually protect guest kernels from all attempts to change protected register bits.)

additional kfree() sanity checking
In order to avoid corrupted pointers doing crazy things when they’re freed (as seen in recent exploits), I added additional sanity checks to verify kmem cache membership and to make sure that objects actually belong to the kernel slab heap. As a reminder, everyone should be building with CONFIG_SLAB_FREELIST_HARDENED=1.

KASLR enabled by default on arm64
Just as Kernel Address Space Layout Randomization (KASLR) was enabled by default on x86, now KASLR has been enabled by default on arm64 too. It’s worth noting, though, that in order to benefit from this setting, the bootloader used for such arm64 systems needs to either support the UEFI RNG function or provide entropy via the “/chosen/kaslr-seed” Device Tree property.
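For illustration, the “/chosen/kaslr-seed” property has the following shape in Device Tree source; in practice the bootloader overwrites the value with fresh entropy on every boot, so the constant below is purely a placeholder:

```
/ {
        chosen {
                /* two 32-bit cells forming the 64-bit seed; placeholder value,
                   normally written by firmware at boot */
                kaslr-seed = <0x12345678 0x9abcdef0>;
        };
};
```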

hardware security embargo documentation
As there continues to be a long tail of hardware flaws that need to be reported to the Linux kernel community under embargo, a well-defined process has been documented. This will let vendors unfamiliar with how to handle things follow the established best practices for interacting with the Linux kernel community in a way that lets mitigations get developed before embargoes are lifted. The latest (and HTML rendered) version of this process should always be available here.

Those are the things I had on my radar. Please let me know if there are other things I should add! Linux v5.4 is almost here…

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

on November 15, 2019 01:36 AM

November 13, 2019

When talking to various people at conferences over the last year, a recurring topic was the belief that the GTK Rust bindings are not ready for use yet.

I don’t know where that perception comes from but if it was true, there wouldn’t have been applications like Fractal, Podcasts or Shortwave using GTK from Rust, or I wouldn’t be able to do a workshop about desktop application development in Rust with GTK and GStreamer at the Linux Application Summit in Barcelona this Friday (code can be found here already) or earlier this year at GUADEC.

One reason I sometimes hear is that there is no support for creating subclasses of GTK types in Rust yet. While that was true in the past, it is not anymore. But even more important: unless you want to create your own special widgets, you don’t need it. Many examples and tutorials in other languages make use of inheritance/subclassing for the application’s architecture, but that’s because it is the idiomatic pattern in those languages. In Rust, however, other patterns are more idiomatic, and even for those examples and tutorials subclassing wouldn’t be the one and only way to design applications.

Almost everything is included in the bindings at this point, so seriously consider writing your next GTK UI application in Rust. While some minor features are still missing from the bindings, none of those should prevent you from successfully writing your application.

And if something is actually missing for your use-case or something is not working as expected, please let us know. We’d be happy to make your life easier!


Some people are already experimenting with new UI development patterns on top of the GTK Rust bindings. So if you want to try developing a UI application but want something different from the usual signal/callback spaghetti code, also take a look at those.

on November 13, 2019 03:02 PM

November 05, 2019

I am excited to be back for another Reddit Ask Me Anything on Wed 20th November 2019 at 8.30am Pacific / 11.30am Eastern.

For those unfamiliar with Reddit AMAs, it is essentially a way in which people can ask questions that someone will respond to. You simply add your questions (serious or fun) and I will respond to as many as I can. It has been a while since my last AMA, so I am looking forward to this one!

Feel free to ask any questions you like. Here is some food for thought:

  • The value of building communities, what works, and what doesn’t
  • The methods and approaches to community management, leadership, and best practice.
  • My new book, ‘People Powered: How communities can supercharge your business, brand, and teams‘, what is in it, and what it covers.
  • Recommended tools, techniques, and tricks to build communities and get people involved.
  • Working at Canonical, GitHub, XPRIZE, and elsewhere.
  • The open source industry, how it has changed, and what the future looks like.
  • Remote working and online collaboration, and what the future looks like
  • The projects I have been involved in such as Ubuntu, GNOME, KDE, and others.
  • The driving forces behind people and groups, behavioral economics, etc.
  • My other things such as my music, conferences, writing etc.
  • Anything else – politics, movies, news, tech…ask away!

If you want to ask about something else though, go ahead! 🙂

How to Join

Joining the AMA is simple. Just follow these steps:

  • Be sure to have a Reddit account. If you don’t have one, head over here and sign up.
  • On Wednesday 20th November 2019 at 8.30am Pacific / 11.30am Eastern (see other time zone times here) I will share the link to my AMA on Twitter (I am not allowed to share it until we run the AMA). You can look for this tweet by clicking here. Here are the times for the AMA for different timezones:

  • Click the link in my tweet to go to the AMA and then click the text box to add your question(s).
  • Now just wait until I respond. Feel free to follow up, challenge my response, and otherwise have fun!

I hope to see you all there!

The post Reddit Ask Me Anything: Wednesday Nov 20th 2019 appeared first on Jono Bacon.

on November 05, 2019 01:30 AM

November 04, 2019

Last week I attended the Embedded Linux Conference (Europe 2019) and presented a talk on stress-ng.  The slide deck for this presentation is now available.
on November 04, 2019 09:33 AM

GitHub has seen a startling level of growth. With over 31 million developers spread across 96 million repositories, it has become the quite literal hub for how people build technology (and was recently acquired by Microsoft for $7.5 billion). Throughout this remarkable growth, GitHub has continued to evolve as a product and platform, and the fit and finish of a reliable, consistent product has been a staple of GitHub throughout the years.

Jason Warner is SVP of Technology at GitHub and is tasked with delivering this engineering consistency. Jason and I used to work on the engineering management team at Canonical before he went to Heroku and then ultimately GitHub.

In this episode of Conversations With Bacon, we get into a truly fascinating discussion about not just how GitHub builds GitHub, but also Jason’s philosophy, experience, and perspective when it comes to effective leadership.

We discuss how GitHub evaluates future features, how they gather customer/user feedback, how Jason structures his team and bridges product and engineering, what he believes truly great leadership looks like, and where the future of technology leadership is going.

Since I have started doing Conversations With Bacon, this conversation with Jason is one of my favorites: there is so much in here that I think will be interesting and insightful for you folks. If you are interested in technology and leadership, and especially if you are curious about how the GitHub machine works, this one is well worth a listen. Enjoy!



The post Jason Warner on GitHub and Leadership appeared first on Jono Bacon.

on November 04, 2019 04:55 AM

November 03, 2019

Bits and pieces

AIMS Desktop talk: On the 1st of October I gave a talk at AIMS titled “Why AIMS Desktop?” where I talked about the Debian-based system we use at AIMS, and went into some detail about what Linux is, what Debian is, and why it’s used within the AIMS network. I really enjoyed the reaction from the students and a bunch of them are interested in getting involved directly with Debian. I intend to figure out a way to guide them into being productive community members without letting it interfere with their academic program.

Catching up with Martin: On the 12th of October I had lunch with Martin Michlmayr. Earlier this year we were both running for DPL, which was an interesting experience and the last time we met neither of us had any intentions to do so. This was the first time we talked in person since then and it was good reflecting over the last year and we also talked about a bunch of non-Debian stuff.

Cover art of our band?

Activity log

2019-10-08: Upload package bundlewrap (3.7.0-1) to Debian unstable.

2019-10-08: Upload package calamares (3.2.14-1) to Debian unstable.

2019-10-08: Sponsor package python3-fastentrypoints (0.12-2) for Debian unstable (Python team request).

2019-10-08: Sponsor package python3-cheetah (3.2.4-1) for Debian unstable (Python team request).

2019-10-14: Upload package calamares (3.2.15-1) to Debian unstable.

2019-10-14: Upload package kpmcore (4.0.1-1) to Debian unstable.

2019-10-14: Upload package gnome-shell-extension-disconnect-wifi (21-1) to Debian unstable.

2019-10-15: Upload package partitionmanager (4.0.0-1) to Debian unstable.

2019-10-15: Sponsor package python-sabyenc (3.3.6-1) for Debian unstable (Python team request).

2019-10-15: Sponsor package python-jaraco.functools (2.0-1) for Debian unstable (Python team request).

2019-10-15: Sponsor package python3-gntp (1.0.3-1) for Debian unstable (Python team request).

2019-10-15: Review package python3-portend (2.5-1) (Not yet ready) (Python team request).

2019-10-15: Review package python3-tempora (1.14.1) (Not yet ready) (Python team request).

2019-10-15: Upload package python3-flask-silk (0.2-15) to Debian unstable.

2019-10-15: Upload package tuxpaint (0.9.24~git20190922-f7d30d-1~exp2) to Debian experimental.

2019-10-15: Upload package python3-flask-silk (0.2-16) to Debian unstable.

2019-10-16: Upload package gnome-shell-extension-multi-monitors (19-1) to Debian unstable.

2019-10-16: Upload package python3-flask (0.6.2-5) to Debian unstable.

2019-10-16: Sponsor package buildbot (2.4.1-1) for Debian unstable (Python team request).

2019-10-16: Signed/sent keys from DebConf19 KSP.

2019-10-17: Publish blog entry “Calamares plans for Debian 11“.

2019-10-17: Upload package kpmcore (4.0.1-2) to Debian unstable (Thanks to Alf Gaida for the merge request with fixes) (Closes: #942522, #942528, #942512).

2019-10-22: Sponsor package membernator (1.1.0-1) for Debian unstable (Python team request).

2019-10-22: Sponsor package isbg (2.1.5-1) for Debian unstable (Python team request).

2019-10-22: Sponsor package python-pluggy (0.13.0-1) for Debian unstable (Python team request).

2019-10-22: Sponsor package python-pyqt5chart (5.11.3+dfsg-2) for Debian unstable (Python team request).

2019-10-23: Upload package tetzle (2.1.4+dfsg1-3) to Debian unstable.

2019-10-23: Upload package partitionmanager (4.0.0-2) to Debian unstable.

2019-10-24: Upload package tetzle (2.1.5+dfsg1-1) to Debian unstable.

2019-10-24: Upload package xabacus (8.2.2-1) to Debian unstable.

2019-10-24: Review package fpylll (needs some more work) (Python team request).

2019-10-28: Upload package gnome-shell-extension-dash-to-dock (25-1) to Debian unstable.

on November 03, 2019 06:18 PM

With version 12.0, Gitlab has introduced an interesting new feature: Visual Reviews! You can now leave comments on Merge Requests directly from the page you are visiting in your staging environment, without having to change tab.

If you already have Continuous Integration and Continuous Delivery enabled for your websites, adding this feature is blazing fast, and it will make your reviewers’ lives easier! If you want to start with CI/CD in Gitlab, I’ve written about it in the past.

The feature

While the official documentation has a good overview of the feature, we can take a deeper look with some screenshots:

Inserting a comment We can comment directly from the staging environment! Additional metadata will be collected and published as well, making it easier to reproduce a bug.

Comment appears in the MR Our comment (plus the metadata) appears in the merge request, becoming actionable.

Implementing the system

Adding the snippet isn’t complicated; you only need some information about the MR. Basically, this is what you should add to the head of your website for every merge request:


Of course, asking your team to add the HTML snippet and fill it with the right information isn’t feasible. We will instead take advantage of Gitlab CI/CD to inject the snippet and autocomplete it with the right information for every merge request.

First we need the definition of a Gitlab CI job to build our client:

build:
  image: node:12
  stage: build
  script:
    - ./scripts/
    - npm ci
    - npm run build
  artifacts:
    paths:
      - build
  only:
    - merge_requests
  cache:
    paths:
      - .npm

The important bit of information here is only: merge_requests. When used, Gitlab injects an environment variable called CI_MERGE_REQUEST_IID into the job, with the unique ID of the merge request: we will fill it into the HTML snippet. The official documentation of Gitlab CI explains all the other keywords of the YAML in detail.

The script

The other important bit is the script that actually injects the code: it’s a simple bash script, which looks for the </title> tag in the HTML, and appends the needed snippet:


quoteSubst() {
  IFS= read -d '' -r < <(sed -e ':a' -e '$!{N;ba' -e '}' -e 's/[&/\]/\\&/g; s/\n/\\&/g' <<<"$1")
  printf %s "${REPLY%$'\n'}"
}

sed -i "s~</title>~&$(quoteSubst "${TEXT_TO_INJECT}")~" public/index.html

Thanks to the Gitlab CI environment variables, the snippet already has all the information it needs to work. Of course, you should customise the script with the right path to your index.html (or any other page you have).
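To sanity-check the substitution locally before wiring it into CI, you can run the same helper against a throwaway page; the injected text and paths below are illustrative stand-ins for the real CI values:

```shell
#!/usr/bin/env bash

# Escape arbitrary text so it is safe to use as a sed replacement string.
quoteSubst() {
  IFS= read -d '' -r < <(sed -e ':a' -e '$!{N;ba' -e '}' -e 's/[&/\]/\\&/g; s/\n/\\&/g' <<<"$1")
  printf %s "${REPLY%$'\n'}"
}

# Illustrative stand-in for the real Visual Reviews snippet.
TEXT_TO_INJECT='<script id="review-app-toolbar-script"></script>'

# A throwaway page to test against.
printf '<html><head><title>demo</title></head><body></body></html>\n' > /tmp/index.html

# Same substitution the CI script performs: append the snippet right after </title>.
sed -i "s~</title>~&$(quoteSubst "${TEXT_TO_INJECT}")~" /tmp/index.html
cat /tmp/index.html
```

After running it, the throwaway page should contain the snippet immediately after the closing </title> tag.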

Now everything is ready! Your team only needs to generate personal access tokens to log in, and they are ready to go! Store your personal access token in your password manager, so you don’t need to generate it each time.

Future features

One of the coolest things in Gitlab is that everything is always a work in progress, and each feature gains new goodies in every release. This is true for the Visual Reviews App as well. There is an epic that collects all the improvements they want to make, including removing the need for an access token, and adding the ability to take screenshots that will be inserted in the MR comments as well.

That’s all for today, I hope you found this article useful! For any comment, feedback, or criticism, write to me on Twitter (@rpadovani93) or drop an email at

I have also changed the blog theme to a custom version of Rapido.css. I think it increases the readability, but let me know what you think!


on November 03, 2019 02:35 PM

October 30, 2019

In Ubuntu’s development process, new package versions don’t get released immediately; they enter the -proposed pocket first, where they are built and tested. In addition to testing the package itself, other packages are also tested together with the updated package, to make sure the update doesn’t break them either.

The packages in the -proposed pocket are listed on the update excuses page with their testing status. When a package is successfully built and all triggered tests pass, the package can migrate to the release pocket, but when the builds or tests fail, the package is blocked from migration to preserve the quality of the release.

Sometimes packages are stuck in -proposed for a longer period because the build or test failures can’t be solved quickly. In the past, several people may have triaged the same problem without being able to easily share their observations. From now on, if you figure out something about what broke, please open a bug against the stuck package with your findings and mark the package with the update-excuse tag. The bug will be linked to from the update excuses page so the next person picking up the problem can continue from there. You can even leave a patch in the bug so a developer with upload rights can find it easily and upload it right away.

The update-excuse tag applies to the current development series only, but it does not come alone. To leave notes for a specific release’s -proposed pocket, use the update-excuse-$SERIES tag; for example, update-excuse-bionic will have the bug linked from 18.04’s (Bionic Beaver’s) update excuses page.

Fixing failures in -proposed is a big part of the integration work done by Ubuntu developers, and help is always very welcome. If you see your favourite package stuck on update excuses, please take a look at why, and maybe open an update-excuse bug. You may be the one who helps the package make it into the next Ubuntu release!

(The new tags were added by Tiago Stürmer Daitx and me during the last Canonical engineering sprint’s coding day. Fun! 🙂 )

on October 30, 2019 11:45 AM

October 29, 2019

After just over three years, my family and I are now Lawful Permanent Residents (Green Card holders) of the United States of America. It’s been a long journey.


Before anything else, I want to credit those who made it possible to reach this point. My then-manager Duncan Mak, his manager Miguel de Icaza. Amy and Alex from work for bailing us out of a pickle. Microsoft’s immigration office/HR. Gigi, the “destination services consultant” from DwellWorks. The immigration attorneys at Fragomen LLP. Lynn Community Health Center. And my family, for their unwavering support.

The kick-off

It all began in July 2016. With support from my management chain, I went through the process of applying for an L-1 intracompany transferee visa – a 3-5 year dual-intent visa, essentially a time-limited secondment from Microsoft UK to Microsoft Corp. After a lot of paperwork and an in-person interview at the US consulate in London, we were finally granted the visa (and L-2 dependent visas for the family) in April 2017. We arranged the actual move in July 2017, giving us a short window to wind up our affairs in the UK as much as possible, and run out most of my eldest child’s school year.

We sold the house, sold the car, gave to family all the electronics which wouldn’t work in the US (even with a transformer), and stashed a few more goodies in my parents’ attic. Microsoft arranged for movers to come and pack up our lives; they arranged a car for us for the final week; and a hotel for the final week too (we rejected the initial golf-spa-resort they offered and opted for a budget hotel chain in our home town, to keep sending our eldest to school with minimal disruption). And on the final day we set off at the crack of dawn to Heathrow Airport, to fly to Boston, Massachusetts, and try for a new life in the USA.

Finding our footing

I cannot complain about the provisions made by Microsoft – although they were not without snags. The 3.5 hours we spent at immigration in Logan airport on the day, due to some computer problem, did not help us relax. Neither did the cat arriving at our company-arranged temporary condo before we did (with no food, or litter, or anything). Nor did the fact that the satnav provided with the company-arranged hire car didn’t work – and that when I tried using my phone to navigate, it shot under the passenger seat the first time I had to brake, leading to a fraught commute from Logan to Third St, Cambridge.

Nevertheless, the liquor store under our condo building, and my co-workers Amy and Alex dropping off an emergency run of cat essentials, helped calm things down. We managed a good first night’s exhausted sleep, and started the following day with pancakes and syrup at a place called The Friendly Toast.

With the support of Gigi, a consultant hired to help us with early-relocation basics like social security and bank accounts, we eventually made our way to our own rental in Melrose (a small suburb north of Boston, a shortish walk from the MBTA Orange Line); with our own car (once the money from selling our house in the UK finally arrived); with my eldest enrolled in a local school. Aiming for normality.

The process

Fairly soon after settling in to office life, the emails from Microsoft Immigration started, for the process to apply for permanent residency. We were acutely aware of the time ticking on the three year visas – and we already burned 3 months of time prior to the move. Work permits; permission to leave and re-enter; Department of Labor certification. Papers, forms, papers, forms. Swearing that none of us have ever recruited child soldiers, or engaged in sex trafficking.

Tick tock.

Months at a time without hearing anything from USCIS.

Tick tock.

Work permits for all, but big delays listed on the monthly USCIS visa bulletin.

Tick tock.

We got to August 2019, and I started to really worry about the next deadline – our eldest’s passport expiring, along with the initial visas a couple of weeks later.

Tick tock.

Then my wife had a smart idea for plan B, something better than the burned out Mad Max dystopia waiting for us back in the UK: Microsoft just opened a big .NET development office in Prague, so maybe I could make a business justification for relocation to the Czech Republic?

I start teaching myself Czech.

Duolingo screenshot, Czech language, “can you see my goose”

Tick tock.

Then, a month later, out of the blue, a notice from USCIS: our Adjustment of Status interviews (in theory the final piece before being granted green cards) were scheduled, for less than a month later. Suddenly we went from too much time, to too little.



The problem with the one month of notice was that we had one crucial piece of paperwork missing – for each of us, an I-693 medical exam issued by a USCIS-certified civil surgeon. I started calling around, and got a response from an immigration clinic in Lynn, with a date in mid October. They also gave us a rough indication of the medical exams and extra vaccinations required for the I-693, which we were told to source via our normal doctors (where they would be billable to insurance, if not free entirely). Any costs at the immigration clinic can’t go via insurance or an HSA, because they’re officially immigration paperwork, not medical paperwork. The total cost ended up being over a grand.

More calling around. We got scheduled for various shots and tests, and went to our medical appointment with everything sorted.


Turns out the TB tests the kids had were no longer recognised by USCIS. And all four of us had vaccination record gaps. So not only unexpected jabs after we promised them it was all over – unexpected bloodletting too. And a follow-up appointment for results and final paperwork, only 2 days prior to the AOS interview.

By this point, I’m something of a wreck. The whole middle of October has been a barrage of non-stop, short-term, absolutely critical appointments.

Any missing paperwork, any errors, and we can kiss our new lives in the US goodbye.

Wednesday, I can’t eat, I can’t sleep, and various other physiological issues. The AOS interview is the next day. I’m as prepared as I can be, but still more terrified than I ever have been.

Any missing paperwork, any errors, and we can kiss our new lives in the US goodbye.

I was never this worried about going through a comparable process when applying for the visa, because the worst case there was the status quo. Here the worst case is having to restart our green card process, with too little time to reapply before the visas expire. Having wasted two years of my family’s comfort with nothing to show for it. The year it took my son to settle again at school. All of it riding on one meeting.


Our AOS interviews are perfectly timed to coincide with lunch, so we load the kids up on snacks, and head to the USCIS office in Lawrence.

After parking up, we head inside, and wait. We have all the paperwork we could reasonably be expected to have – birth certificates, passports, even wedding photos to prove that our marriage is legit.

To keep the kids entertained in the absence of electronics (due to a no camera rule which bars tablets and phones) we have paper and crayons. I suggest “America” as a drawing prompt for my eldest, and he produces a statue of liberty and some flags, which I guess covers the topic for a 7 year old.

Finally we’re called in to see an Immigration Support Officer, the end-boss of American bureaucracy and… It’s fine. It’s fine! She just goes through our green card forms and confirms every answer; takes our medical forms and photos; checks the passports; asks us about our (Caribbean) wedding and takes a look at our photos; and gracefully accepts the eldest’s drawing for her wall.

We’re in and out of her office in under an hour. She tells us that unless she finds an issue in our background checks, we should be fine – expect an approval notice within 3 weeks, or call up if there’s still nothing in 6. Her tone is congratulatory, but with nothing tangible, and still the “unless” lingering, it’s hard to feel much of anything. We head home, numb more than anything.


After two fraught weeks, we’re both not entirely sure how to process things. I had expected a stress headache then normality, but instead it was more… Gradual.

During the following days, little things like the colours of the leaves leave me tearing up – and as my wife and I talk, we realise the extent to which the stress has been getting to us. And, more to the point, the extent to which being adrift without having somewhere we can confidently call home has caused us to close ourselves off.

The first day back in the office after the interview, a co-worker pulls me aside and asks if I’m okay – and I realise how much the answer has been “no”. Friday is the first day where I can even begin to figure that out.

The weekend continues with emotions all over the place, but a feeling of cautious optimism alongside.

I-485 Adjustment of Status approval notifications

On Monday, 4 calendar days after the AOS interview, we receive our notifications, confirming that we can stay. I’m still not sure I’m processing it right. We can start making real, long term plans now. Buying a house, the works.

I had it easy, and don’t deserve any sympathy

I’m a white guy, who had thousands of dollars’ worth of support from a global megacorp and their army of lawyers. The immigration process was fraught enough for me that I couldn’t sleep or eat – and I went through the process in one of the easiest routes available.

Youtube video from HBO show Last Week Tonight, covering legal migration into the USA

I am acutely aware of how much more terrifying and exhausting the process might be, for anyone without my resources and support.

Never, for a second, think that migration to the US – legal or otherwise – is easy.

The subheading where I answer the inevitable question from the peanut gallery

My eldest started school in the UK in September 2015. Previously he’d been at nursery, and we’d picked him up around 6-6:30pm every work day. Once he started at school, instead he needed picking up before 3pm. But my entire team at Xamarin was on Boston time, and did not have the world’s earliest risers – meaning I couldn’t have any meaningful conversations with co-workers until I had a child underfoot and the TV blaring. It made remote working suck, when it had been fine just a few months earlier. Don’t underestimate the impact of time zones on remote workers with families. I had begun to consider, at this point, my future at Microsoft, purely for logistical reasons.

And then, in June 2016, the UK suffered from a collective case of brain worms, and voted for self immolation.

I relocated my family to the US, because I could make a business case for my employer to fund it. It was the fastest, cheapest way to move my family away from the uncertainty of life in the UK after the brain-worm-addled plan to deport 13% of NHS workers. To cut off 40% of the national food supply. To make basic medications like Metformin and Estradiol rarities, rationed by pharmacists.

I relocated my family to the US, because despite all the country’s problems, despite the last three years of bureaucracy, it still gave them a better chance at a safe, stable life than staying in the UK.

And even if time proves me wrong about Brexit, at least now we can make our new lives, permanently, regardless.

on October 29, 2019 09:41 AM

October 26, 2019

Paco Molinero, Fernando Lanero, Javier Teruelo and Marcos Costales interview Joan CiberSheep about Ubucon Europe and review the new Ubuntu 19.10 release.

Ubuntu y otras hierbas
Listen to us at:
on October 26, 2019 01:41 PM

October 24, 2019

Ubuntu 19.10 Released

Josh Powers

The next development release of Ubuntu, the Eoan Ermine, was released last week! This was the last development release before our upcoming LTS, codenamed Focal Fossa. As a result, lots of bug fixes, new features, and experience improvements have made their way into the release. Some highlights include:

- GNOME updated to version 3.34
- Further refinement to the desktop Yaru theme
- Latest upstream stable kernel, 5.3
- OpenSSL 1.1.1 support
- Experimental ZFS-on-root support in the desktop installer
- OpenStack Train support

See the release notes for more details.
on October 24, 2019 12:00 AM