January 19, 2018

Bullet-proof continuous delivery of software is crucial to the health of your community: more than a way to run tests, it also enables your early adopters to try new code and give feedback on it as soon as it lands. You may already be using build.snapcraft.io to do this for snaps, but in some cases your existing tooling needs to prevail.

Enabling Circle CI for snaps

In this tutorial, you will learn how to use Circle CI to build a snap and push it automatically to the edge channel of the Snap Store every time you make a change to your master branch in GitHub.

What you’ll learn

  • How to build your snap in Circle CI
  • How to push your snap to the store automatically from Circle CI
  • How to use snapcraft to enable all this, with a few simple commands

Take The Tutorial

on January 19, 2018 07:15 PM

This post exists as a copy of what I had on my previous blog about configuring MSMTP on Ubuntu 16.04; I’m posting it as-is for posterity, and have no idea if it’ll work on later versions. As I’m not hosting my own Ubuntu/MSMTP server anymore I can’t see any updates being made to this, but if I ever do have to set this up again I’ll create an updated post! Anyway, here’s what I had…

I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in a previous post that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you’re using Apache as the web server, but I’m sure it shouldn’t be too different if your web server of choice is something else.

I use msmtp for sending emails from this blog to notify me of comments and upgrades etc. Here I’m going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too.

To begin, we need to install 3 packages:
sudo apt-get install msmtp msmtp-mta ca-certificates
Once these are installed, a default config is required. By default msmtp will look at /etc/msmtprc, so I created that using vim, though any text editor will do the trick. This file looked something like this:

# Set defaults.
# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Setup WP account's settings.
account <ACCOUNT NAME>
host smtp.gmail.com
port 587
auth login
user <EMAIL ADDRESS>
password <PASSWORD>
from <EMAIL ADDRESS>
logfile /var/log/msmtp/msmtp.log

account default : <ACCOUNT NAME>

Any of the uppercase items (i.e. <ACCOUNT NAME>, <EMAIL ADDRESS> and <PASSWORD>) are things that need replacing with values specific to your configuration. The exception to that is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.

Once that file is saved, we’ll update the permissions on the above configuration file — msmtp won’t run if the permissions on that file are too open — and create the directory for the log file.

sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc

Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don’t get too large and to keep the log directory a little tidier. To do this, we create /etc/logrotate.d/msmtp and configure it with the following file. Note that this is optional; you may choose not to do this, or you may choose to configure the logs differently.

/var/log/msmtp/*.log {
rotate 12
}

Now that the logging is configured, we need to tell PHP to use msmtp by editing /etc/php/7.0/apache2/php.ini and updating the sendmail path from

sendmail_path =

to

sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a <ACCOUNT NAME> -t"

Here I did run into an issue where, even though I specified the account name, it wasn’t sending emails correctly when I tested it. This is why the line account default : <ACCOUNT NAME> was placed at the end of the msmtp configuration file. To test the configuration, ensure that php.ini has been saved and run sudo service apache2 restart, then run php -a and execute the following:

mail ('personal@email.com', 'Test Subject', 'Test body text');

Any errors that occur at this point will be displayed in the output, which should make diagnosing any problems after the test relatively easy. If all is successful, you should now be able to use PHP's sendmail (which at the very least WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps).
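For reference, PHP’s mail() builds an RFC 2822 message and pipes it to whatever sendmail_path is configured; msmtp’s -t flag then reads the recipient from the message headers rather than the command line. Here is a small Python sketch (purely illustrative, not part of the setup) of the kind of message that ends up on msmtp’s stdin:

```python
from email.message import EmailMessage

# Build roughly the same message the PHP mail() test above produces.
# msmtp's -t flag pulls the recipient from the To: header of this stream.
msg = EmailMessage()
msg["To"] = "personal@email.com"
msg["Subject"] = "Test Subject"
msg.set_content("Test body text")

raw = msg.as_bytes()  # this byte stream is what gets written to msmtp's stdin
print(raw.decode())
```

If delivery fails, feeding a message like this directly to msmtp on the command line is a quick way to separate PHP problems from msmtp problems.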

I make no claims that this is the most secure configuration, so if you come across this and realise it’s grossly insecure or something is drastically wrong please let me know and I’ll update it accordingly.

on January 19, 2018 10:30 AM

January 18, 2018

The digital workspace will now be available to all Linux users

London, UK – 18th January 2018 – Canonical, the company behind Ubuntu, today announced the first iteration of Slack as a snap, bringing collaboration to open source users.

Slack is an enterprise software platform that allows teams and businesses of all sizes to communicate effectively. Slack works seamlessly with other software tools within a single integrated environment, providing an accessible archive of an organisation’s communications, information and projects.

In adopting the universal Linux app packaging format, Slack will open its digital workplace up to an ever-growing community of Linux users, including those using Linux Mint, Manjaro, Debian, ArchLinux, OpenSUSE, Solus, and Ubuntu.

Designed to connect us to the people and tools we work with every day, the Slack snap will help Linux users be more efficient and streamlined in their work. And an intuitive user experience remains central to the snaps’ appeal, with automatic updates and rollback features giving developers greater control in the delivery of each offering.

“Slack is helping to transform the modern workplace, and we’re thrilled to welcome them to the snaps ecosystem”, said Jamie Bennett, VP of Engineering, Devices & IoT at Canonical. “Today’s announcement is yet another example of putting the Linux user first – Slack’s developers will now be able to push out the latest features straight to the user. By prioritising usability, and with the popularity of open source continuing to grow, the number of snaps is only set to rise in 2018.”

Snaps are containerised software packages, designed to work perfectly and securely within any Linux environment across desktop, the cloud, and IoT devices. Thousands of snaps have been launched since the first in 2016, benefiting from automated updates, added security, and rollback features that give applications the option to revert to the previous working version in the event of a bug.

Slack is available to download as a snap from the Snap Store, or by running snap install slack.


About Canonical
Canonical is the company behind Ubuntu, the leading OS for cloud operations. Most public cloud workloads use Ubuntu, as do most new smart gateways, switches, self-driving cars and advanced robots. Canonical provides enterprise support and services for commercial users of Ubuntu. Established in 2004, Canonical is a privately held company.

on January 18, 2018 04:54 PM

The wait is over. MenuLibre 2.1.4 is now available for public testing and translations! With well over 100 commits, numerous bug fixes, and a lot of polish, the best menu editing solution for Linux is ready for primetime.

What’s New?

New Features

  • New Test Launcher button to try out new changes without saving (LP: #1315875)
  • New Sort Alphabetically button to instantly sort subdirectories (LP: #1315536)
  • Added ability to create subdirectories in system-installed paths (LP: #1315872)
  • New Parsing Errors tool to review invalid launcher files
  • New layout preferences! Budgie, GNOME, and Pantheon users will find that MenuLibre uses client side decorations (CSD) by default, while other desktops will use the more traditional server side decorations with a toolbar. Users are able to set their preference via the commandline.
    • -b, --headerbar to use the headerbar layout
    • -t, --toolbar to use the toolbar layout


  • The folder icon is now used in place of applications-other for directories (LP: #1605905)
  • DBusActivatable and Hidden keys are now represented by switches instead of text entries
  • Additional non-standard but commonly used categories have been added
  • Support for the Implements key has been added
  • Cinnamon, EDE, LXQt, and Pantheon have been added to the list of supported ShowIn environments
  • All file handling has been replaced with the better-maintained GLib KeyFile library
  • The Version key has been bumped to 1.1 to comply with the latest version of the specification

Bug Fixes

  • TypeError when adding a launcher and nothing is selected in the directory view (LP: #1556664)
  • Invalid categories added to first launcher in top-level directory under Xfce (LP: #1605973)
  • Categories created by Alacarte not respected, custom launchers deleted (LP: #1315880)
  • Exit application when Ctrl-C is pressed in the terminal (LP: #1702725)
  • Some categories cannot be removed from a launcher (LP: #1307002)
  • Catch exceptions when saving and display an error (LP: #1444668)
  • Automatically replace ~ with full home directory (LP: #1732099)
  • Make hidden items italic (LP: #1310261)

Translation Updates

  • This is the first release with complete documentation for every translated string in MenuLibre. This allows translators to better understand the context of each string when they adapt MenuLibre to their language, and should lead to more and better quality translations in the future.
  • The following languages were updated since MenuLibre 2.1.3:
    • Brazilian Portuguese, Catalan, Croatian, Danish, French, Galician, German, Italian, Kazakh, Lithuanian, Polish, Russian, Slovak, Spanish, Swedish, Ukrainian



The latest version of MenuLibre can always be downloaded from the Launchpad archives. Grab version 2.1.4 from the below link.


  • SHA-256: 36a6350019e45fbd1219c19a9afce29281e806993d4911b45b371dac50064284
  • SHA-1: 498fdd0b6be671f4388b6fa77a14a7d1e127e7ce
  • MD5: 0e30f24f544f0929621046d17874ecf0
on January 18, 2018 11:50 AM

January 17, 2018

100 days to go!!
+ info: http://ubucon.eu
on January 17, 2018 07:00 PM
Ubuntu announced its 17.04 (Zesty Zapus) release almost 9 months ago, on
April 13, 2017.  As a non-LTS release, 17.04 has a 9-month support cycle
and, as such, will reach end of life on Saturday, January 13th.

At that time, Ubuntu Security Notices will no longer include information or
updated packages for Ubuntu 17.04.

The supported upgrade path from Ubuntu 17.04 is via Ubuntu 17.10.
Instructions and caveats for the upgrade may be found at:


Ubuntu 17.10 continues to be actively supported with security updates and
select high-impact bug fixes.  Announcements of security updates for Ubuntu
releases are sent to the ubuntu-security-announce mailing list, information
about which may be found at:


Development of a complete response to the highly-publicized Meltdown and
Spectre vulnerabilities is ongoing, and due to the timing with respect to
this End of Life, we will not be providing updated Linux kernel packages for
Ubuntu 17.04.  We advise users to upgrade to Ubuntu 17.10 and install the
updated kernel packages for that release when they become available.

For more information about Canonical’s response to the Meltdown and
Spectre vulnerabilities, see:


Since its launch in October 2004, Ubuntu has become one of the most highly
regarded Linux distributions with millions of users in homes, schools,
businesses and governments around the world.  Ubuntu is Open Source
software, costs nothing to download, and users are free to customise or
alter their software in order to meet their needs.


Originally posted to the ubuntu-announce mailing list on Fri Jan 5 22:23:25 UTC 2018 
by Steve Langasek, on behalf of the Ubuntu Release Team
on January 17, 2018 01:32 PM

January 16, 2018

Adventurous users, testers and developers running Artful 17.10 or our development release Bionic 18.04 can now test the beta version of Plasma 5.12 LTS.

An upgrade to the required Frameworks 5.42 is also provided.

As with previous betas, this is experimental and is only suggested for people who are prepared for possible bugs and breakages.

In addition, please be prepared to use ppa-purge to revert changes, should the need arise at some point.

Read more about the beta release at: https://www.kde.org/announcements/plasma-5.11.95.php

If you want to test then:

sudo add-apt-repository ppa:kubuntu-ppa/beta

and then update packages with

sudo apt update
sudo apt full-upgrade

A Wayland session can be made available at the SDDM login screen by installing the package plasma-workspace-wayland. Please note the information on Wayland sessions in the KDE announcement.

Note: due to Launchpad builder downtime and maintenance for the Meltdown/Spectre fixes, builds are currently limited to the amd64/i386 architectures; these builds may be superseded by a rebuild once the builders are back to normal availability.

The primary purpose of this PPA is to assist testing for bugs and quality of the upcoming final Plasma 5.12 LTS release, due for release by KDE on 6th February.

It is anticipated that Kubuntu Bionic Beaver 18.04 LTS will ship with Plasma 5.12.4, the latest point release of 5.12 LTS available at release date.

Bug reports on the beta itself should be reported to bugs.kde.org.

Packaging bugs can be reported as normal to: Kubuntu PPA bugs: https://bugs.launchpad.net/kubuntu-ppa

Should any issues occur, please provide feedback on our mailing lists [1] or IRC [2]

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net

on January 16, 2018 04:21 PM

The TP-Link Kasa app is the Android app that TP-Link distributes to control their Smart Home line of products, including IoT light bulbs, outlets, and a home hub. TP-Link describes the app as:

The Kasa app works with Android and iOS devices so you can control your home right from your smartphone or tablet. You can also use Kasa to pair TP-Link smart home products with any Amazon Echo, Dot, Tap and The Google Assistant for voice control, giving you the ability to control your home with voice commands.

For an unknown period of time prior to December 2017, this control app performed no verification of SSL certificates when talking to TP-Link’s servers. This included during user signup, authentication, and control of individual devices in a user’s home.

In September 2017, I was tooling around looking for an IoT device to play with. My wife and I had joked about getting a “smart” light bulb for our bedroom so we wouldn’t have to get out of bed and turn off the light in the evenings. (And yes, it’s definitely a 1st world problem, as are all problems solved by smart home devices.) I was at Fry’s and came across the LB100 by TP-Link: a $20 “smart” light bulb. This seemed to fit the bill perfectly: it was cheap enough that I could buy it just to play around with, and if I liked it enough, we’d use it in the never-ending quest that is so classically American: expending as little effort as possible.

I brought home the bulb and immediately put it on my network and installed the app. Actually, as happens so commonly, I brought home the bulb, put it on my “lab” network, and installed the app on my testing phone. I had recently reset the phone, so I was getting Burp Suite set up and was about to install its CA certificate on the phone when I noticed traffic already going to TP-Link’s servers!

At first I thought it must be HTTP traffic, perhaps for analytics or similar functions, but then I noticed it was indicated as HTTPS. I quickly did a double take and double-checked that the phone did not have the Burp CA certificate installed. When I confirmed it didn’t, I realized I had an interesting finding without even giving the app a try.

Thinking they might have different configurations for different parts of the app, I went through the sign-up, login, device registration, and device control flows, all the while seeing my requests in Burp. They were all encrypted, but not once did the app seem to care about the lack of a valid SSL CA. I had no idea why this was, but decided I would try to reverse engineer the app to figure it out.

Unfortunately, my Android reverse engineering skills are about as sharp as an economy class airplane knife, so I had to turn to my friend @itsc0rg1, who is, quite fortunately, well versed in Android. She pointed me at a couple of decompilers and I got into it and discovered this (somewhat modified) code:

private static void initializeSslConfig(com.tplinkra.iot.config.SSLConfig cfg) {
    if (cfg != null) {
        // getTrustAllCertificates() returns a Boolean that is never set, so
        // getDefault() falls back to the hard-coded default of true
        if (!com.tplinkra.common.utils.Utils.getDefault(cfg.getTrustAllCertificates(), true)) {
            // normal certificate validation path -- never taken in practice
        } else {
            com.tplinkra.network.transport.http.TrustAllCertificates.enable();
        }
    }
}

public static boolean getDefault(Boolean val, boolean defaultValue) {
    if (val != null) {
        defaultValue = val.booleanValue();
    }
    return defaultValue;
}

public static void enable() {
    // installs a hostname verifier that accepts any certificate
    javax.net.ssl.HttpsURLConnection.setDefaultHostnameVerifier(
            new com.tplinkra.network.transport.http.TrustAllCertificates());
}
It turns out that the default for cfg.getTrustAllCertificates() is a null value because it is not explicitly configured. The preference system being used by their application attempts to return the contents of a Boolean, and when that Boolean is null, defaults to true, which results in trusting all SSL certificates!
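The same pattern is easy to reproduce in miniature. The following Python sketch (an illustration of the pattern, not TP-Link’s actual code) shows how an unset tri-state flag silently becomes “trust everything”:

```python
def get_default(val, default):
    # Mirrors the decompiled Utils.getDefault: if the Boolean was never
    # explicitly set (None here, null in Java), the caller-supplied
    # default wins.
    if val is not None:
        default = val
    return default

# The app never sets trustAllCertificates, so it stays None -- and the
# hard-coded default of True means every certificate is trusted.
trust_all = get_default(None, True)
print(trust_all)  # True
```

Flipping the hard-coded default to False (or treating “unset” as an error) would have failed safe instead of failing open.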


The issue was reported to TP-Link on the 20th of September 2017. They promptly acknowledged the issue and asked that I not disclose it until their December release. Once the new year rolled around, I reached out to my contact there and they confirmed the issue had been fixed in the December release, which appears to have been around the 22nd of December. The release notes do not mention the security fix, so I consider this a “silent fix”.

Kudos to TP-Link for fixing the issue – many IoT companies do not stand behind their products, so although they had a flaw, they dealt with it when it was reported.

on January 16, 2018 08:00 AM

OpenSym 2017 Program Postmortem

Benjamin Mako Hill

The International Symposium on Open Collaboration (OpenSym, formerly WikiSym) is the premier academic venue exclusively focused on scholarly research into open collaboration. OpenSym is an ACM conference which means that, like conferences in computer science, it’s really more like a journal that gets published once a year than it is like most social science conferences. The “journal”, in this case, is called the Proceedings of the International Symposium on Open Collaboration and it consists of final copies of papers which are typically also presented at the conference. Like journal articles, papers that are published in the proceedings are not typically published elsewhere.

Along with Claudia Müller-Birn from the Freie Universität Berlin, I served as the Program Chair for OpenSym 2017. For the social scientists reading this, the role of program chair is similar to being an editor for a journal. My job was not to organize keynotes or logistics at the conference—that is the job of the General Chair. Indeed, in the end I didn’t even attend the conference! Along with Claudia, my role as Program Chair was to recruit submissions, recruit reviewers, coordinate and manage the review process, make final decisions on papers, and ensure that everything makes it into the published proceedings in good shape.

In OpenSym 2017, we made several changes to the way the conference has been run:

  • In previous years, OpenSym had tracks on topics like free/open source software, wikis, open innovation, open education, and so on. In 2017, we used a single track model.
  • Because we eliminated tracks, we also eliminated track-level chairs. Instead, we appointed Associate Chairs or ACs.
  • We eliminated page limits and the distinction between full papers and notes.
  • We allowed authors to write rebuttals before reviews were finalized. Reviewers and ACs were allowed to modify their reviews and decisions based on rebuttals.
  • To assist in assigning papers to ACs and reviewers, we made extensive use of bidding. This means we had to recruit the pool of reviewers before papers were submitted.

Although each of these things have been tried in other conferences, or even piloted within individual tracks in OpenSym, all were new to OpenSym in general.


Papers submitted 44
Papers accepted 20
Acceptance rate 45%
Posters submitted 2
Posters presented 9
Associate Chairs 8
PC Members 59
Authors 108
Author countries 20

The program was similar in size to the ones in the last 2-3 years in terms of the number of submissions. OpenSym is a small but mature and stable venue for research on open collaboration. This year was also similar, although slightly more competitive, in terms of the conference acceptance rate (45%—it had been slightly above 50% in previous years).

As in recent years, there were more posters presented than submitted because the PC found that some rejected work, although not ready to be published in the proceedings, was promising and advanced enough to be presented as a poster at the conference. Authors of posters submitted 4-page extended abstracts for their projects which were published in a “Companion to the Proceedings.”


Over the years, OpenSym has established a clear set of niches. Although we eliminated tracks, we asked authors to choose from a set of categories when submitting their work. These categories are similar to the tracks at OpenSym 2016. Interestingly, a number of authors selected more than one category. This would have led to difficult decisions in the old track-based system.

distribution of papers across topics with breakdown by accept/poster/reject

The figure above shows a breakdown of papers in terms of these categories as well as indicators of how many papers in each group were accepted. Papers in multiple categories are counted multiple times. Research on FLOSS and Wikimedia/Wikipedia continues to make up a sizable chunk of OpenSym’s submissions and publications. That said, these now make up a minority of total submissions. Although Wikipedia and Wikimedia research made up a smaller proportion of the submission pool, it was accepted at a higher rate. Also notable is the fact that 2017 saw an uptick in the number of papers on open innovation. I suspect this was due, at least in part, to the involvement of General Chair Lorraine Morgan (she specializes in that area). Somewhat surprisingly to me, we had a number of submissions about Bitcoin and blockchains. These are natural areas of growth for OpenSym but have never been a big part of work in our community in the past.

Scores and Reviews

As in previous years, review was single blind: reviewers’ identities were hidden but authors’ identities were not. Each paper received between 3 and 4 reviews plus a metareview by the Associate Chair assigned to the paper. All papers received 3 reviews, but ACs were encouraged to call in a 4th reviewer at any point in the process. In addition to the text of the reviews, we used a -3 to +3 scoring system where a borderline paper would be scored 0. Reviewers scored papers using full-point increments.

scores for each paper submitted to opensym 2017: average, distribution, etc

The figure above shows scores for each paper submitted. The vertical grey lines reflect the distribution of scores where the minimum and maximum scores for each paper are the ends of the lines. The colored dots show the arithmetic mean for each score (unweighted by reviewer confidence). Colors show whether the papers were accepted, rejected, or presented as a poster. It’s important to keep in mind that two papers were submitted as posters.

Although Associate Chairs made the final decisions on a case-by-case basis, every paper that had an average score of less than 0 (the horizontal orange line) was rejected or presented as a poster, and most (but not all) papers with positive average scores were accepted. Although a positive average score seemed to be a requirement for publication, negative individual scores weren’t necessarily showstoppers. We accepted 6 papers with at least one negative score. We ultimately accepted 20 papers—45% of those submitted.
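As a toy illustration of that decision rule (with made-up scores, not the real review data), the mean-score cutoff works like this:

```python
from statistics import mean

# Hypothetical reviews on the -3..+3 scale described above.
papers = {
    "paper_a": [2, 1, -1],   # one negative score, but a positive mean
    "paper_b": [-1, 0, -2],  # mean below zero
}

for name, scores in papers.items():
    avg = mean(scores)
    # Papers averaging below 0 were rejected or redirected to posters;
    # a positive average was necessary but not sufficient for acceptance.
    outcome = "candidate for acceptance" if avg >= 0 else "reject or poster"
    print(name, round(avg, 2), outcome)
```

The point of the sketch is that the cutoff applied to the mean, not to individual scores, which is why a single negative review was not automatically fatal.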


Rebuttals

This was the first time that OpenSym used a rebuttal or author response and we are thrilled with how it went. Although they were entirely optional, almost every team of authors used it! Authors of 40 of our 46 submissions (87%!) submitted rebuttals.

Lower Unchanged Higher
6 24 10

The table above shows how average scores changed after authors submitted rebuttals. The table shows that rebuttals’ effect was typically neutral or positive. Most average scores stayed the same but nearly two times as many average scores increased as decreased in the post-rebuttal period. We hope that this made the process feel more fair for authors and I feel, having read them all, that it led to improvements in the quality of final papers.

Page Lengths

In previous years, OpenSym followed most other venues in computer science by allowing submission of two kinds of papers: full papers which could be up to 10 pages long and short papers which could be up to 4. Following some other conferences, we eliminated page limits altogether. This is the text we used in the OpenSym 2017 CFP:

There is no minimum or maximum length for submitted papers. Rather, reviewers will be instructed to weigh the contribution of a paper relative to its length. Papers should report research thoroughly but succinctly: brevity is a virtue. A typical length of a “long research paper” is 10 pages (formerly the maximum length limit and the limit on OpenSym tracks), but may be shorter if the contribution can be described and supported in fewer pages— shorter, more focused papers (called “short research papers” previously) are encouraged and will be reviewed like any other paper. While we will review papers longer than 10 pages, the contribution must warrant the extra length. Reviewers will be instructed to reject papers whose length is incommensurate with the size of their contribution.

The following graph shows the distribution of page lengths across papers in our final program.

histogram of paper lengths for final accepted papers

In the end, 3 of 20 published papers (15%) were over 10 pages. More surprisingly, 11 of the accepted papers (55%) were below the old 10-page limit. Fears that some have expressed that page limits are the only thing keeping OpenSym from publishing enormous rambling manuscripts seem to be unwarranted—at least so far.


Bidding

Although I won’t post any analysis or graphs, bidding worked well. With only two exceptions, every single assigned review went to someone who had bid “yes” or “maybe” for the paper in question, and the vast majority went to people who had bid “yes.” However, this comes with one major proviso: people who did not bid at all were marked as “maybe” for every single paper.

Given a reviewer pool whose diversity of expertise matches that of your pool of authors, bidding works fantastically. But everybody needs to bid. The only problems with reviewers we had were with people who had failed to bid. It might be that reviewers who don’t bid are less committed to the conference, more overextended, more likely to drop things in general, etc. It might also be that reviewers who fail to bid get poor matches, which cause them to become less interested, willing, or able to do their reviews well and on time.

Having used bidding twice as chair or track-chair, my sense is that bidding is a fantastic thing to incorporate into any conference review process. The major limitations are that you need to build a program committee (PC) before the conference (rather than finding the perfect reviewers for specific papers) and you have to find ways to incentivize or communicate the importance of getting your PC members to bid.


The final results were a fantastic collection of published papers. Of course, none of it would have been possible without the huge collection of conference chairs, associate chairs, program committee members, external reviewers, and staff supporters.

Although we tried quite a lot of new things, my sense is that nothing we changed made things worse and many changes made things smoother or better. Although I’m not directly involved in organizing OpenSym 2018, I am on the OpenSym steering committee. My sense is that most of the changes we made are going to be carried over this year.

Finally, it’s also been announced that OpenSym 2018 will be in Paris on August 22-24. The call for papers should be out soon and the OpenSym 2018 paper deadline has already been announced as March 15, 2018. You should consider submitting! I hope to see you in Paris!

This Analysis

OpenSym used the gratis version of EasyChair to manage the conference, which doesn’t allow chairs to export data. As a result, data used in this postmortem was scraped from EasyChair using two Python scripts. Numbers and graphs were created using a knitr file that combines R visualization and analysis code with markdown to create the HTML directly from the datasets. I’ve made all the code I used to produce this analysis available in this git repository. I hope someone else finds it useful. Because the data contains sensitive information on the review process, I’m not publishing the data.

This blog post was originally posted on the Community Data Science Collective blog.

on January 16, 2018 03:38 AM

January 15, 2018

The Lubuntu Team announces that as a non-LTS release, 17.04 has a 9-month support cycle and, as such, reached end of life on Saturday, January 13, 2018. Lubuntu will no longer provide bug fixes or security updates for 17.04, and we strongly recommend that you update to 17.10, which continues to be actively supported with […]
on January 15, 2018 08:32 AM

RHL'18 was held at the Centre du Vallon in St-Cergue, the building in the very center of this photo, at the bottom of the piste:

People from various free software communities in the region attended for a series of presentations, demonstrations, socializing and skiing. This event is a lot of fun and I would highly recommend that people look out for the next edition. (Subscribe to rhl-annonces on lists.swisslinux.org for a reminder email.)

Ham radio demonstration

I previously wrote about building a simple antenna for shortwave (HF) reception with software defined radio. That article includes links to purchase all the necessary parts from various sources. Everything described in that article, together with some USB sticks running Debian Hams Live (bootable ham radio operating system), some rolls of string and my FT-60 transceiver, fits comfortably into an OSCAL tote bag like this:

It is really easy to take this kit to an event anywhere, set it up in 10 minutes and begin exploring the radio spectrum. Whether it is a technical event or a village fair, radio awakens curiosity in people of all ages and provides a starting point for many other discussions about technological freedom, distributing stickers and inviting people to future events. My previous blog contains photos of what is in the bag and a video demo.

Open Agriculture Food Computer discussion

We had a discussion about progress building an Open Agriculture (OpenAg) food computer in Switzerland. The next meeting in Zurich will be held on 30 January 2018, please subscribe to the forum topic to receive further details.

Preparing for Google Summer of Code 2018

In between eating fondue and skiing, I found time to resurrect some of my previous project ideas for Google Summer of Code. Most of them are not specific to Debian, several of them need co-mentors, please contact me if you are interested.

on January 15, 2018 08:02 AM

Ubuntu’s changed a lot in the last year, and everything is leading up to a really exciting event: the release of 18.04 LTS! This next version of Ubuntu will once again offer a stable foundation for countless humans who use computers for work, play, art, relaxation, and creation. Alongside the various visual refreshes of Ubuntu, it’s also time to go to the community and ask for the best wallpapers, and to look for a new video and music file that will be waiting for Ubuntu users in the install media’s Examples folder, to reassure them that their video and sound drivers are quite operational.

Long-term support releases like Ubuntu 18.04 LTS are very important, because they are downloaded and installed ten times more often than every single interim release combined. That means that the wallpapers, video, and music that are shipped will be seen ten times more than in other releases. So artists, select your best works. Ubuntu enthusiasts, spread the word about the contest as far and wide as you can. Everyone can help make this next LTS version of Ubuntu an amazing success.

All content must be released under a Creative Commons Attribution-Sharealike or Creative Commons Attribution license. (The Creative Commons Zero waiver is okay, too!) Each entrant must only submit content they have created themselves, and all submissions must adhere to the Ubuntu Code of Conduct.

The winners will be featured in the Ubuntu 18.04 LTS release this April!

There are a lot of details, so please see the Ubuntu Free Culture Showcase wiki page for the full rules and links to where you can submit your work from now through March 15th. Good luck!

on January 15, 2018 08:00 AM

January 14, 2018

Back in December a Linux Mint user sent a strange bug report to the darktable mailing list. Apparently the GNU C Compiler (GCC) on his system exited with an unexpected error message, breaking the build process.
on January 14, 2018 05:35 PM

January 13, 2018

This is part one of a series of blog posts that I’ll write in the next weeks, as previously announced in the GStreamer Rust bindings 0.10.0 release blog post. Since the last series of blog posts about writing GStreamer plugins in Rust ([1] [2] [3] [4]) a lot has changed, and the content of those blog posts now has only historical value, documenting the journey of experimentation that led to what exists now.

In this first part we’re going to write a plugin that contains a video filter element. The video filter can convert from RGB to grayscale, either output as 8-bit per pixel grayscale or 32-bit per pixel RGB. In addition there’s a property to invert all grayscale values, or to shift them by up to 255 values. In the end this will allow you to watch Big Buck Bunny, or anything else really that can somehow go into a GStreamer pipeline, in grayscale. Or encode the output to a new video file, send it over the network via WebRTC or something else, or basically do anything you want with it.

Big Buck Bunny – Grayscale

This will show the basics of how to write a GStreamer plugin and element in Rust: the basic setup for registering a type and implementing it in Rust, and how to use the various GStreamer APIs and APIs from the Rust standard library to do the processing.

The final code for this plugin can be found here, and it is based on the 0.1 version of the gst-plugin crate and the 0.10 version of the gstreamer crate. At least Rust 1.20 is required for all this. I’m also assuming that you have GStreamer (at least version 1.8) installed for your platform, see e.g. the GStreamer bindings installation instructions.

Table of Contents

  1. Project Structure
  2. Plugin Initialization
  3. Type Registration
  4. Type Class & Instance Initialization
  5. Caps & Pad Templates
  6. Caps Handling Part 1
  7. Caps Handling Part 2
  8. Conversion of BGRx Video Frames to Grayscale
  9. Testing the new element
  10. Properties
  11. What next?

Project Structure

We’ll create a new cargo project with cargo init --lib --name gst-plugin-tutorial. This will create a basically empty Cargo.toml and a corresponding src/lib.rs. We will use this structure: lib.rs will contain all the plugin-related code, separate modules will contain any GStreamer plugins that are added.

The empty Cargo.toml has to be updated to list all the dependencies that we need, and to define that the crate should result in a cdylib, i.e. a C library that does not contain any Rust-specific metadata. The final Cargo.toml looks as follows

[package]
name = "gst-plugin-tutorial"
version = "0.1.0"
authors = ["Sebastian Dröge <sebastian@centricular.com>"]
repository = "https://github.com/sdroege/gst-plugin-rs"
license = "MIT/Apache-2.0"

[dependencies]
glib = "0.4"
gstreamer = "0.10"
gstreamer-base = "0.10"
gstreamer-video = "0.10"
gst-plugin = "0.1"

[lib]
name = "gstrstutorial"
crate-type = ["cdylib"]
path = "src/lib.rs"

We’re depending on the gst-plugin crate, which provides all the basic infrastructure for implementing GStreamer plugins and elements. In addition we depend on the gstreamer, gstreamer-base and gstreamer-video crates for various GStreamer API that we’re going to use later, and the glib crate to be able to use some GLib API that we’ll need. GStreamer is building upon GLib, and this leaks through in various places.

With the basic project structure set up, we should be able to compile the project with cargo build now, which will download and build all dependencies and then create a file called target/debug/libgstrstutorial.so (or .dll on Windows, .dylib on macOS). This is going to be our GStreamer plugin.

To allow GStreamer to find our new plugin and make it available in every GStreamer-based application, we could install it into the system- or user-wide GStreamer plugin path or simply point the GST_PLUGIN_PATH environment variable to the directory containing it:

export GST_PLUGIN_PATH=`pwd`/target/debug

If you now run the gst-inspect-1.0 tool on libgstrstutorial.so, it will not yet print the information it can extract from the plugin; for now it just complains that this is not a valid GStreamer plugin. Which is true, since we didn’t write any code for it yet.

Plugin Initialization

Let’s start editing src/lib.rs to make this an actual GStreamer plugin. First of all, we need to add various extern crate directives to be able to use our dependencies and also mark some of them #[macro_use] because we’re going to use macros defined in some of them. This looks like the following

extern crate glib;
#[macro_use]
extern crate gstreamer as gst;
extern crate gstreamer_base as gst_base;
extern crate gstreamer_video as gst_video;
#[macro_use]
extern crate gst_plugin;

Next we make use of the plugin_define! macro from the gst-plugin crate to set up the static metadata of the plugin (and make the shared library recognizable to GStreamer as a valid plugin), and to define the name of our entry point function (plugin_init) where we will register all the elements that this plugin provides.

    b"Rust Tutorial Plugin\0",

This is unfortunately not very beautiful yet due to a) GStreamer requiring this information to be statically available in the shared library, not returned by a function (starting with GStreamer 1.14 it can be a function), and b) Rust not allowing byte strings (b"blabla") to be concatenated with a macro like the std::concat macro (so that the b and \0 parts could be hidden away). Expect this to become better in the future.

The static plugin metadata that we provide here is

  1. name of the plugin
  2. short description for the plugin
  3. name of the plugin entry point function
  4. version number of the plugin
  5. license of the plugin (only a fixed set of licenses is allowed here)
  6. source package name
  7. binary package name (only really makes sense for e.g. Linux distributions)
  8. origin of the plugin
  9. release date of this version

In addition we’re defining an empty plugin entry point function that just returns true

fn plugin_init(plugin: &gst::Plugin) -> bool {
    true
}

With all that given, gst-inspect-1.0 should print exactly this information when running on the libgstrstutorial.so file (or .dll on Windows, or .dylib on macOS)

gst-inspect-1.0 target/debug/libgstrstutorial.so

Type Registration

As a next step, we’re going to add another module rgb2gray to our project, and call a function called register from our plugin_init function.

mod rgb2gray;

fn plugin_init(plugin: &gst::Plugin) -> bool {
    rgb2gray::register(plugin);
    true
}

With that our src/lib.rs is complete, and all following code is only in src/rgb2gray.rs. At the top of the new file we first need to add various use-directives to import various types and functions we’re going to use into the current module’s scope

use glib;
use gst;
use gst::prelude::*;
use gst_video;

use gst_plugin::properties::*;
use gst_plugin::object::*;
use gst_plugin::element::*;
use gst_plugin::base_transform::*;

use std::i32;
use std::sync::Mutex;

GStreamer is based on the GLib object system (GObject). C (just like Rust) does not have built-in support for object-oriented programming, inheritance, virtual methods and related concepts, and GObject makes these features available in C as a library. Without language support this is quite a verbose endeavour in C, and the gst-plugin crate tries to expose all this in a (as much as possible) Rust-style API while hiding all the details that do not really matter.

So, as a next step we need to register a new type for our RGB to Grayscale converter GStreamer element with the GObject type system, and then register that type with GStreamer to be able to create new instances of it. We do this with the following code

struct Rgb2GrayStatic;

impl ImplTypeStatic<BaseTransform> for Rgb2GrayStatic {
    fn get_name(&self) -> &str {
        "Rgb2Gray"
    }

    fn new(&self, element: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Rgb2Gray::new(element)
    }

    fn class_init(&self, klass: &mut BaseTransformClass) {
        Rgb2Gray::class_init(klass);
    }
}

pub fn register(plugin: &gst::Plugin) {
    let type_ = register_type(Rgb2GrayStatic);
    gst::Element::register(plugin, "rsrgb2gray", 0, type_);
}

This defines a zero-sized struct Rgb2GrayStatic that is used to implement the ImplTypeStatic<BaseTransform> trait on it for providing static information about the type to the type system. In our case this is a zero-sized struct, but in other cases this struct might contain actual data (for example if the same element code is used for multiple elements, e.g. when wrapping a generic codec API that provides support for multiple decoders and then wanting to register one element per decoder). By implementing ImplTypeStatic<BaseTransform> we also declare that our element is going to be based on the GStreamer BaseTransform base class, which provides a relatively simple API for 1:1 transformation elements like ours is going to be.

ImplTypeStatic provides functions that return a name for the type, and functions for initializing/returning a new instance of our element (new) and for initializing the class metadata (class_init, more on that later). We simply let those functions proxy to associated functions on the Rgb2Gray struct that we’re going to define at a later time.

In addition, we also define a register function (the one that is already called from our plugin_init function) and in there first register the Rgb2GrayStatic type metadata with the GObject type system to retrieve a type ID, and then register this type ID to GStreamer to be able to create new instances of it with the name “rsrgb2gray” (e.g. when using gst::ElementFactory::make).

Type Class & Instance Initialization

As a next step we declare the Rgb2Gray struct and implement the new and class_init functions on it. In the first version, this struct is almost empty but we will later use it to store all state of our element.

struct Rgb2Gray {
    cat: gst::DebugCategory,
}

impl Rgb2Gray {
    fn new(_transform: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Box::new(Self {
            cat: gst::DebugCategory::new(
                "rsrgb2gray",
                gst::DebugColorFlags::empty(),
                "Rust RGB-GRAY converter",
            ),
        })
    }

    fn class_init(klass: &mut BaseTransformClass) {
        klass.set_metadata(
            "RGB-GRAY Converter",
            "Filter/Effect/Converter/Video",
            "Converts RGB to GRAY or grayscale RGB",
            "Sebastian Dröge <sebastian@centricular.com>",
        );

        klass.configure(BaseTransformMode::NeverInPlace, false, false);
    }
}

In the new function we return a boxed (i.e. heap-allocated) version of our struct, containing a newly created GStreamer debug category of name “rsrgb2gray”. We’re going to use this debug category later for making use of GStreamer’s debug logging system for logging the state and changes of our element.

In the class_init function we, again, set up some metadata for our new element. In this case these are a description, a classification of our element, a longer description and the author. The metadata can later be retrieved and made use of via the Registry and PluginFeature/ElementFactory API. We also configure the BaseTransform class and define that we will never operate in-place (producing our output in the input buffer), and that we don’t want to work in passthrough mode if the input/output formats are the same.

Additionally we need to implement various traits on the Rgb2Gray struct, which will later be used to override virtual methods of the various parent classes of our element. For now we can keep the trait implementations empty. There is one trait implementation required per parent class.

impl ObjectImpl<BaseTransform> for Rgb2Gray {}
impl ElementImpl<BaseTransform> for Rgb2Gray {}
impl BaseTransformImpl<BaseTransform> for Rgb2Gray {}

With all this defined, gst-inspect-1.0 should be able to show some more information about our element already but will still complain that it’s not complete yet.

Caps & Pad Templates

Data flow of GStreamer elements is happening via pads, which are the input(s) and output(s) (or sinks and sources) of an element. Via the pads, buffers containing actual media data, events or queries are transferred. An element can have any number of sink and source pads, but our new element will only have one of each.

To be able to declare what kinds of pads an element can create (they are not necessarily all static but could be created at runtime by the element or the application), it is necessary to install so-called pad templates during the class initialization. These pad templates contain the name (or rather “name template”, it could be something like src_%u for e.g. pad templates that declare multiple possible pads), the direction of the pad (sink or source), the availability of the pad (is it always there, sometimes added/removed by the element or to be requested by the application) and all the possible media types (called caps) that the pad can consume (sink pads) or produce (src pads).

In our case we only have always pads, one sink pad called “sink”, on which we can only accept RGB (BGRx to be exact) data with any width/height/framerate and one source pad called “src”, on which we will produce either RGB (BGRx) data or GRAY8 (8-bit grayscale) data. We do this by adding the following code to the class_init function.

let caps = gst::Caps::new_simple(
    "video/x-raw",
    &[
        ("format", &gst::List::new(&[
            &gst_video::VideoFormat::Bgrx.to_string(),
            &gst_video::VideoFormat::Gray8.to_string(),
        ])),
        ("width", &gst::IntRange::<i32>::new(0, i32::MAX)),
        ("height", &gst::IntRange::<i32>::new(0, i32::MAX)),
        ("framerate", &gst::FractionRange::new(
            gst::Fraction::new(0, 1),
            gst::Fraction::new(i32::MAX, 1),
        )),
    ],
);
let src_pad_template =
    gst::PadTemplate::new("src", gst::PadDirection::Src, gst::PadPresence::Always, &caps);
klass.add_pad_template(src_pad_template);

let caps = gst::Caps::new_simple(
    "video/x-raw",
    &[
        ("format", &gst_video::VideoFormat::Bgrx.to_string()),
        ("width", &gst::IntRange::<i32>::new(0, i32::MAX)),
        ("height", &gst::IntRange::<i32>::new(0, i32::MAX)),
        ("framerate", &gst::FractionRange::new(
            gst::Fraction::new(0, 1),
            gst::Fraction::new(i32::MAX, 1),
        )),
    ],
);
let sink_pad_template =
    gst::PadTemplate::new("sink", gst::PadDirection::Sink, gst::PadPresence::Always, &caps);
klass.add_pad_template(sink_pad_template);

The names “src” and “sink” are pre-defined by the BaseTransform class and this base-class will also create the actual pads with those names from the templates for us whenever a new element instance is created. Otherwise we would have to do that in our new function but here this is not needed.

If you now run gst-inspect-1.0 on the rsrgb2gray element, these pad templates with their caps should also show up.

Caps Handling Part 1

As a next step we will add caps handling to our new element. This involves overriding 4 virtual methods from the BaseTransformImpl trait, and actually storing the configured input and output caps inside our element struct. Let’s start with the latter

struct State {
    in_info: gst_video::VideoInfo,
    out_info: gst_video::VideoInfo,
}

struct Rgb2Gray {
    cat: gst::DebugCategory,
    state: Mutex<Option<State>>,
}

impl Rgb2Gray {
    fn new(_transform: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Box::new(Self {
            cat: gst::DebugCategory::new(
                "rsrgb2gray",
                gst::DebugColorFlags::empty(),
                "Rust RGB-GRAY converter",
            ),
            state: Mutex::new(None),
        })
    }
}

We define a new struct State that contains the input and output caps, stored in a VideoInfo. VideoInfo is a struct that contains various fields like width/height, framerate and the video format, and allows us to work conveniently with the properties of (raw) video formats. We have to store it inside a Mutex in our Rgb2Gray struct as this can (in theory) be accessed from multiple threads at the same time.
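As a plain-Rust sketch (no GStreamer required), the Mutex<Option<State>> lifecycle can be illustrated with a hypothetical Element type: the state is created when caps are configured and dropped again on stop, exactly the pattern the tutorial element uses:

```rust
use std::sync::Mutex;

// Hypothetical stand-ins for the tutorial's types: `State` holds
// stream-specific data, `Element` owns it behind a Mutex<Option<...>>.
struct State {
    width: u32,
    height: u32,
}

struct Element {
    state: Mutex<Option<State>>,
}

impl Element {
    // Equivalent of storing the parsed VideoInfo in set_caps().
    fn set_caps(&self, width: u32, height: u32) {
        *self.state.lock().unwrap() = Some(State { width, height });
    }

    // Equivalent of dropping the state in stop().
    fn stop(&self) {
        let _ = self.state.lock().unwrap().take();
    }

    // Read access: lock, then look inside the Option.
    fn dimensions(&self) -> Option<(u32, u32)> {
        self.state.lock().unwrap().as_ref().map(|s| (s.width, s.height))
    }
}

fn main() {
    let element = Element { state: Mutex::new(None) };
    assert_eq!(element.dimensions(), None);
    element.set_caps(320, 240);
    assert_eq!(element.dimensions(), Some((320, 240)));
    element.stop();
    assert_eq!(element.dimensions(), None);
    println!("ok");
}
```

Using Option inside the Mutex makes "no stream configured yet" an explicit, checkable condition instead of relying on dummy values.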

Whenever input/output caps are configured on our element, the set_caps virtual method of BaseTransform is called with both caps (i.e. in the very beginning before the data flow and whenever it changes), and all following video frames that pass through our element should be according to those caps. Once the element is shut down, the stop virtual method is called and it would make sense to release the State as it only contains stream-specific information. We’re doing this by adding the following to the BaseTransformImpl trait implementation

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn set_caps(&self, element: &BaseTransform, incaps: &gst::Caps, outcaps: &gst::Caps) -> bool {
        let in_info = match gst_video::VideoInfo::from_caps(incaps) {
            None => return false,
            Some(info) => info,
        };
        let out_info = match gst_video::VideoInfo::from_caps(outcaps) {
            None => return false,
            Some(info) => info,
        };

        gst_debug!(
            self.cat,
            obj: element,
            "Configured for caps {} to {}",
            incaps,
            outcaps
        );

        *self.state.lock().unwrap() = Some(State {
            in_info: in_info,
            out_info: out_info,
        });

        true
    }

    fn stop(&self, element: &BaseTransform) -> bool {
        // Drop state
        let _ = self.state.lock().unwrap().take();

        gst_info!(self.cat, obj: element, "Stopped");

        true
    }
}
This code should be relatively self-explanatory. In set_caps we’re parsing the two caps into a VideoInfo and then store this in our State, in stop we drop the State and replace it with None. In addition we make use of our debug category here and use the gst_info! and gst_debug! macros to output the current caps configuration to the GStreamer debug logging system. This information can later be useful for debugging any problems once the element is running.

Next we have to provide information to the BaseTransform base class about the size in bytes of a video frame with specific caps. This is needed so that the base class can allocate an appropriately sized output buffer for us, that we can then fill later. This is done with the get_unit_size virtual method, which is required to return the size of one processing unit in specific caps. In our case, one processing unit is one video frame. In the case of raw audio it would be the size of one sample multiplied by the number of channels.

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn get_unit_size(&self, _element: &BaseTransform, caps: &gst::Caps) -> Option<usize> {
        gst_video::VideoInfo::from_caps(caps).map(|info| info.size())
    }
}
We simply make use of the VideoInfo API here again, which conveniently gives us the size of one video frame already.

Instead of get_unit_size it would also be possible to implement the transform_size virtual method, which is passed one size with its corresponding caps plus a second caps, and is supposed to return the size converted to the second caps. Depending on how your element works, one or the other can be easier to implement.

Caps Handling Part 2

We’re not done yet with caps handling though. As a very last step it is required that we implement a function that is converting caps into the corresponding caps in the other direction. For example, if we receive BGRx caps with some width/height on the sinkpad, we are supposed to convert this into new caps with the same width/height but BGRx or GRAY8. That is, we can convert BGRx to BGRx or GRAY8. Similarly, if the element downstream of ours can accept GRAY8 with a specific width/height from our source pad, we have to convert this to BGRx with that very same width/height.

This has to be implemented in the transform_caps virtual method, and looks as follows

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn transform_caps(
        &self,
        element: &BaseTransform,
        direction: gst::PadDirection,
        caps: gst::Caps,
        filter: Option<&gst::Caps>,
    ) -> gst::Caps {
        let other_caps = if direction == gst::PadDirection::Src {
            let mut caps = caps.clone();

            for s in caps.make_mut().iter_mut() {
                s.set("format", &gst_video::VideoFormat::Bgrx.to_string());
            }

            caps
        } else {
            let mut gray_caps = gst::Caps::new_empty();

            {
                let gray_caps = gray_caps.get_mut().unwrap();

                for s in caps.iter() {
                    let mut s_gray = s.to_owned();
                    s_gray.set("format", &gst_video::VideoFormat::Gray8.to_string());
                    gray_caps.append_structure(s_gray);
                }
                gray_caps.append(caps.clone());
            }

            gray_caps
        };

        gst_debug!(
            self.cat,
            obj: element,
            "Transformed caps from {} to {} in direction {:?}",
            caps,
            other_caps,
            direction
        );

        if let Some(filter) = filter {
            filter.intersect_with_mode(&other_caps, gst::CapsIntersectMode::First)
        } else {
            other_caps
        }
    }
}
This caps conversion happens in 3 steps. First we check if we got caps for the source pad. In that case, the caps on the other pad (the sink pad) are going to be exactly the same caps, but no matter whether the caps contained BGRx or GRAY8, they must become BGRx as that’s the only format that our sink pad can accept. We do this by creating a clone of the input caps, making sure that those caps are actually writable (i.e. we hold the only reference to them, or a copy is created) and then iterating over all the structures inside the caps and setting their “format” field to BGRx. After this, all structures in the new caps will have the format field set to BGRx.

Similarly, if we get caps for the sink pad and are supposed to convert it to caps for the source pad, we create new caps and in there append a copy of each structure of the input caps (which are BGRx) with the format field set to GRAY8. In the end we append the original caps, giving us first all caps as GRAY8 and then the same caps as BGRx. With this ordering we signal to GStreamer that we would prefer to output GRAY8 over BGRx.

In the end the caps we created for the other pad are filtered against optional filter caps to reduce the potential size of the caps. This is done by intersecting the caps with that filter, while keeping the order (and thus preferences) of the filter caps (gst::CapsIntersectMode::First).
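The "keep the order, and thus the preferences, of the filter" behaviour of gst::CapsIntersectMode::First can be illustrated with a plain-Rust analogy. The intersect_first helper below is purely hypothetical: it operates on lists of format names instead of real caps structures:

```rust
// Hypothetical analogy for CapsIntersectMode::First: intersect two lists
// of candidate formats while keeping the ORDER (preference) of the filter.
fn intersect_first<'a>(filter: &[&'a str], other: &[&'a str]) -> Vec<&'a str> {
    filter.iter().filter(|f| other.contains(*f)).cloned().collect()
}

fn main() {
    // The element would prefer GRAY8 over BGRx...
    let other = ["GRAY8", "BGRx"];
    // ...but the downstream filter prefers BGRx, and its order wins.
    let filter = ["BGRx", "GRAY8", "I420"];
    assert_eq!(intersect_first(&filter, &other), vec!["BGRx", "GRAY8"]);
    // Formats not offered by the element are simply dropped.
    assert!(intersect_first(&["I420"], &other).is_empty());
    println!("ok");
}
```

The real caps intersection also matches fields like width/height/framerate, but the ordering rule is the same: the first argument's preferences survive.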

Conversion of BGRx Video Frames to Grayscale

Now that all the caps handling is implemented, we can finally get to the implementation of the actual video frame conversion. For this we start with defining a helper function bgrx_to_gray that converts one BGRx pixel to a grayscale value. The BGRx pixel is passed as a &[u8] slice with 4 elements and the function returns another u8 for the grayscale value.

impl Rgb2Gray {
    fn bgrx_to_gray(in_p: &[u8]) -> u8 {
        // See https://en.wikipedia.org/wiki/YUV#SDTV_with_BT.601
        const R_Y: u32 = 19595; // 0.299 * 65536
        const G_Y: u32 = 38470; // 0.587 * 65536
        const B_Y: u32 = 7471; // 0.114 * 65536

        assert_eq!(in_p.len(), 4);

        let b = u32::from(in_p[0]);
        let g = u32::from(in_p[1]);
        let r = u32::from(in_p[2]);

        let gray = ((r * R_Y) + (g * G_Y) + (b * B_Y)) / 65536;
        (gray as u8)
    }
}

This function works by extracting the blue, green and red components from each pixel (remember: we work on BGRx, so the first value is blue, the second green, the third red and the fourth unused), extending them from 8 to 32 bits for a wider value range and then converting them to the Y component of the YUV colorspace (basically what your grandparents’ black & white TV would’ve displayed). The coefficients come from the Wikipedia page about YUV and are scaled by 65536 to unsigned integers so we can keep some accuracy, don’t have to work with floating point arithmetic and stay inside the range of 32 bit integers for all our calculations. As you can see, the green component is weighted more than the others, which comes from our eyes being more sensitive to green than to other colors.
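Since this luma computation is plain arithmetic, it can be checked standalone, outside of any GStreamer pipeline. The following sketch repeats the function and compares it against a floating-point reference (the main function and its sample values are illustrative only):

```rust
// Standalone copy of the fixed-point BT.601 luma computation from the
// tutorial, with the same coefficients (value * 65536).
fn bgrx_to_gray(in_p: &[u8]) -> u8 {
    const R_Y: u32 = 19595; // 0.299 * 65536
    const G_Y: u32 = 38470; // 0.587 * 65536
    const B_Y: u32 = 7471; // 0.114 * 65536

    assert_eq!(in_p.len(), 4);

    let b = u32::from(in_p[0]);
    let g = u32::from(in_p[1]);
    let r = u32::from(in_p[2]);

    (((r * R_Y) + (g * G_Y) + (b * B_Y)) / 65536) as u8
}

fn main() {
    // The coefficients sum to exactly 65536, so white maps to white
    // and black to black without any rounding loss.
    assert_eq!(bgrx_to_gray(&[255, 255, 255, 0]), 255);
    assert_eq!(bgrx_to_gray(&[0, 0, 0, 0]), 0);

    // The truncating fixed-point result stays within 1 of the
    // floating-point reference for these sample pixels.
    for &(b, g, r) in &[(12u8, 200u8, 99u8), (255, 0, 0), (0, 255, 0), (0, 0, 255)] {
        let fixed = bgrx_to_gray(&[b, g, r, 0]) as f64;
        let float = 0.299 * r as f64 + 0.587 * g as f64 + 0.114 * b as f64;
        assert!((fixed - float).abs() <= 1.0);
    }
    println!("ok");
}
```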

Afterwards we have to actually call this function on every pixel. For this the transform virtual method is implemented, which gets an input and an output buffer passed and is supposed to read the input buffer and fill the output buffer. The implementation looks as follows, and is going to be our biggest function for this element

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn transform(
        &self,
        element: &BaseTransform,
        inbuf: &gst::Buffer,
        outbuf: &mut gst::BufferRef,
    ) -> gst::FlowReturn {
        let mut state_guard = self.state.lock().unwrap();
        let state = match *state_guard {
            None => {
                gst_element_error!(element, gst::CoreError::Negotiation, ["Have no state yet"]);
                return gst::FlowReturn::NotNegotiated;
            }
            Some(ref mut state) => state,
        };

        let in_frame = match gst_video::VideoFrameRef::from_buffer_ref_readable(
            inbuf.as_ref(),
            &state.in_info,
        ) {
            None => {
                gst_element_error!(
                    element,
                    gst::CoreError::Failed,
                    ["Failed to map input buffer readable"]
                );
                return gst::FlowReturn::Error;
            }
            Some(in_frame) => in_frame,
        };

        let mut out_frame =
            match gst_video::VideoFrameRef::from_buffer_ref_writable(outbuf, &state.out_info) {
                None => {
                    gst_element_error!(
                        element,
                        gst::CoreError::Failed,
                        ["Failed to map output buffer writable"]
                    );
                    return gst::FlowReturn::Error;
                }
                Some(out_frame) => out_frame,
            };

        let width = in_frame.width() as usize;
        let in_stride = in_frame.plane_stride()[0] as usize;
        let in_data = in_frame.plane_data(0).unwrap();
        let out_stride = out_frame.plane_stride()[0] as usize;
        let out_format = out_frame.format();
        let out_data = out_frame.plane_data_mut(0).unwrap();

        if out_format == gst_video::VideoFormat::Bgrx {
            assert_eq!(in_data.len() % 4, 0);
            assert_eq!(out_data.len() % 4, 0);
            assert_eq!(out_data.len() / out_stride, in_data.len() / in_stride);

            let in_line_bytes = width * 4;
            let out_line_bytes = width * 4;

            assert!(in_line_bytes <= in_stride);
            assert!(out_line_bytes <= out_stride);

            for (in_line, out_line) in in_data
                .chunks(in_stride)
                .zip(out_data.chunks_mut(out_stride))
            {
                for (in_p, out_p) in in_line[..in_line_bytes]
                    .chunks(4)
                    .zip(out_line[..out_line_bytes].chunks_mut(4))
                {
                    assert_eq!(out_p.len(), 4);

                    let gray = Rgb2Gray::bgrx_to_gray(in_p);
                    out_p[0] = gray;
                    out_p[1] = gray;
                    out_p[2] = gray;
                }
            }
        } else if out_format == gst_video::VideoFormat::Gray8 {
            assert_eq!(in_data.len() % 4, 0);
            assert_eq!(out_data.len() / out_stride, in_data.len() / in_stride);

            let in_line_bytes = width * 4;
            let out_line_bytes = width;

            assert!(in_line_bytes <= in_stride);
            assert!(out_line_bytes <= out_stride);

            for (in_line, out_line) in in_data
                .chunks(in_stride)
                .zip(out_data.chunks_mut(out_stride))
            {
                for (in_p, out_p) in in_line[..in_line_bytes]
                    .chunks(4)
                    .zip(out_line[..out_line_bytes].iter_mut())
                {
                    let gray = Rgb2Gray::bgrx_to_gray(in_p);
                    *out_p = gray;
                }
            }
        } else {
            unimplemented!();
        }

        gst::FlowReturn::Ok
    }
}
What happens here is that we first of all lock our state (the input/output VideoInfo) and error out if we don’t have any yet (which can’t really happen unless other elements have a bug, but better safe than sorry). After that we map the input buffer readable and the output buffer writable with the VideoFrameRef API. By mapping the buffers we get access to the underlying bytes of them, and the mapping operation could for example make GPU memory available or just do nothing and give us access to a normally allocated memory area. We have access to the bytes of the buffer until the VideoFrameRef goes out of scope.

Instead of VideoFrameRef we could’ve also used the gst::Buffer::map_readable() and gst::Buffer::map_writable() API, but unlike those, the VideoFrameRef API also extracts various metadata from the raw video buffers and makes them available. For example, we can directly access the different planes as slices without having to calculate the offsets ourselves, and we get direct access to the width and height of the video frame.

After mapping the buffers, we store various information we’re going to need later in local variables to save some typing. This is the width (same for input and output as we never changed the width in transform_caps), the input and output (row) stride (the number of bytes per row/line, which possibly includes some padding at the end of each line for alignment reasons), the output format (which can be BGRx or GRAY8 because of how we implemented transform_caps) and the pointers to the first plane of the input and output (which in this case is also the only plane; BGRx and GRAY8 both have a single plane containing all the RGB/gray components).

Then based on whether the output is BGRx or GRAY8, we iterate over all pixels. The code is basically the same in both cases, so I’m only going to explain the case where BGRx is output.

We start by iterating over each line of the input and output, and do so by using the chunks iterator to give us chunks of as many bytes as the (row-) stride of the video frame is, do the same for the other frame and then zip both iterators together. This means that on each iteration we get exactly one line as a slice from each of the frames and can then start accessing the actual pixels in each line.

To access the individual pixels in each line, we again use the chunks iterator the same way, but this time to always give us chunks of 4 bytes from each line. As BGRx uses 4 bytes for each pixel, this gives us exactly one pixel. Instead of iterating over the whole line, we only take the actual sub-slice that contains the pixels, not the whole line with stride number of bytes containing potential padding at the end. Now for each of these pixels we call our previously defined bgrx_to_gray function and then fill the B, G and R components of the output buffer with that value to get grayscale output. And that’s all.
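The same line/pixel iteration pattern can be sketched standalone on plain byte slices. The convert_bgrx_to_gray_bgrx helper below is hypothetical, not part of the tutorial element, but it shows how the chunks/zip combination skips the per-line stride padding:

```rust
// Hypothetical standalone sketch of the line/pixel iteration pattern,
// using plain byte slices instead of mapped GStreamer video frames.
fn convert_bgrx_to_gray_bgrx(
    in_data: &[u8],
    out_data: &mut [u8],
    width: usize,
    in_stride: usize,
    out_stride: usize,
) {
    let in_line_bytes = width * 4;
    let out_line_bytes = width * 4;

    // One iteration per line: chunks() yields `stride` bytes per line,
    // zip() pairs up the input and output lines.
    for (in_line, out_line) in in_data.chunks(in_stride).zip(out_data.chunks_mut(out_stride)) {
        // One iteration per pixel: 4 bytes (BGRx) at a time; slicing to
        // `..line_bytes` ignores any padding at the end of each line.
        for (in_p, out_p) in in_line[..in_line_bytes]
            .chunks(4)
            .zip(out_line[..out_line_bytes].chunks_mut(4))
        {
            // Same BT.601 coefficients as in the tutorial.
            let gray = ((u32::from(in_p[2]) * 19595
                + u32::from(in_p[1]) * 38470
                + u32::from(in_p[0]) * 7471)
                / 65536) as u8;
            out_p[0] = gray;
            out_p[1] = gray;
            out_p[2] = gray;
        }
    }
}

fn main() {
    // A 2x2 white frame with 4 padding bytes per line (stride 12, not 8).
    let in_data = vec![255u8; 2 * 12];
    let mut out_data = vec![0u8; 2 * 12];
    convert_bgrx_to_gray_bgrx(&in_data, &mut out_data, 2, 12, 12);
    // Both lines were converted; the padding bytes were never touched.
    assert_eq!(out_data[0..3], [255, 255, 255]);
    assert_eq!(out_data[12..15], [255, 255, 255]);
    println!("ok");
}
```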

Using Rust high-level abstractions like the chunks iterators and bounds-checking slice accesses might seem like it’s going to cause quite a performance penalty, but if you look at the generated assembly most of the bounds checks are completely optimized away and the resulting assembly code is close to what one would’ve written manually (especially when using the newly-added exact_chunks iterators). Here you’re getting safe and high-level looking code with low-level performance!

You might’ve also noticed the various assertions in the processing function. These are there to give further hints to the compiler about properties of the code, so that it can potentially optimize the code better, e.g. by moving bounds checks out of the inner loop once the assertion outside the loop has already checked the same condition. In Rust, adding assertions can often improve performance by allowing further optimizations to be applied, but in the end always check the resulting assembly to see if what you did made any difference.
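As an illustration of this assertion trick (sum_pairs is a hypothetical example, not from the tutorial): a single assertion before the loop states the invariant that every indexed access inside the loop relies on, so the optimizer can typically prove the per-iteration bounds checks redundant in release builds:

```rust
// Sketch: one up-front assertion instead of a bounds check per access.
fn sum_pairs(data: &[u8], n: usize) -> u32 {
    // Single check covering all indices used below; with this in place,
    // the compiler can often elide the checks inside the loop.
    assert!(data.len() >= n * 2);

    let mut sum = 0u32;
    for i in 0..n {
        sum += u32::from(data[i * 2]) + u32::from(data[i * 2 + 1]);
    }
    sum
}

fn main() {
    assert_eq!(sum_pairs(&[1, 2, 3, 4], 2), 10);
    // Extra trailing bytes beyond n * 2 are simply ignored.
    assert_eq!(sum_pairs(&[5, 5, 1, 1, 9], 2), 12);
    println!("ok");
}
```

Whether the elision actually happens depends on the compiler version and optimization level, which is why the tutorial recommends checking the generated assembly.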

Testing the new element

Now we have implemented almost all the functionality of our new element and can run it on actual video data. This can be done with the gst-launch-1.0 tool, or any application using GStreamer that allows us to insert our new element somewhere in the video part of the pipeline. With gst-launch-1.0 you could run, for example, the following pipelines

# Run on a test pattern
gst-launch-1.0 videotestsrc ! rsrgb2gray ! videoconvert ! autovideosink

# Run on some video file, also playing the audio
gst-launch-1.0 playbin uri=file:///path/to/some/file video-filter=rsrgb2gray

Note that you will likely want to compile with cargo build --release and add the target/release directory to GST_PLUGIN_PATH instead. The debug build might be too slow, and generally the release builds are multiple orders of magnitude (!) faster.


The only features missing now are the properties I mentioned in the opening paragraph: one boolean property to invert the grayscale value and one integer property to shift the value by up to 255. Implementing this on top of the previous code is not a lot of work. Let’s start by defining a struct for holding the property values and defining the property metadata.

const DEFAULT_INVERT: bool = false;
const DEFAULT_SHIFT: u32 = 0;

#[derive(Debug, Clone, Copy)]
struct Settings {
    invert: bool,
    shift: u32,
}

impl Default for Settings {
    fn default() -> Self {
        Settings {
            invert: DEFAULT_INVERT,
            shift: DEFAULT_SHIFT,
        }
    }
}

static PROPERTIES: [Property; 2] = [
    Property::Boolean(
        "invert",
        "Invert",
        "Invert grayscale output",
        DEFAULT_INVERT,
        PropertyMutability::ReadWrite,
    ),
    Property::UInt(
        "shift",
        "Shift",
        "Shift grayscale output (wrapping around)",
        (0, 255),
        DEFAULT_SHIFT,
        PropertyMutability::ReadWrite,
    ),
];

struct Rgb2Gray {
    cat: gst::DebugCategory,
    settings: Mutex<Settings>,
    state: Mutex<Option<State>>,
}

impl Rgb2Gray {
    fn new(_transform: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Box::new(Self {
            cat: gst::DebugCategory::new(
                "rsrgb2gray",
                gst::DebugColorFlags::empty(),
                "Rust RGB-GRAY converter",
            ),
            settings: Mutex::new(Default::default()),
            state: Mutex::new(None),
        })
    }
}
This should all be rather straightforward: we define a Settings struct that stores the two values, implement the Default trait for it, then define a two-element array with property metadata (names, description, ranges, default value, writability), and then store the default value of our Settings struct inside another Mutex inside the element struct.

In the next step we have to make use of these: we need to tell the GObject type system about the properties, and we need to implement functions that are called whenever a property value is set or get.

impl Rgb2Gray {
    fn class_init(klass: &mut BaseTransformClass) {
        [...]
        klass.install_properties(&PROPERTIES);
        [...]
    }
}

impl ObjectImpl<BaseTransform> for Rgb2Gray {
    fn set_property(&self, obj: &glib::Object, id: u32, value: &glib::Value) {
        let prop = &PROPERTIES[id as usize];
        let element = obj.clone().downcast::<BaseTransform>().unwrap();

        match *prop {
            Property::Boolean("invert", ..) => {
                let mut settings = self.settings.lock().unwrap();
                let invert = value.get().unwrap();
                gst_info!(
                    self.cat,
                    obj: &element,
                    "Changing invert from {} to {}",
                    settings.invert,
                    invert
                );
                settings.invert = invert;
            }
            Property::UInt("shift", ..) => {
                let mut settings = self.settings.lock().unwrap();
                let shift = value.get().unwrap();
                gst_info!(
                    self.cat,
                    obj: &element,
                    "Changing shift from {} to {}",
                    settings.shift,
                    shift
                );
                settings.shift = shift;
            }
            _ => unimplemented!(),
        }
    }

    fn get_property(&self, _obj: &glib::Object, id: u32) -> Result<glib::Value, ()> {
        let prop = &PROPERTIES[id as usize];

        match *prop {
            Property::Boolean("invert", ..) => {
                let settings = self.settings.lock().unwrap();
                Ok(settings.invert.to_value())
            }
            Property::UInt("shift", ..) => {
                let settings = self.settings.lock().unwrap();
                Ok(settings.shift.to_value())
            }
            _ => unimplemented!(),
        }
    }
}
Property values can be changed from any thread at any time, which is why the Mutex is needed here to protect our struct. And we use a separate Mutex so that it can be locked for the shortest possible amount of time: we don’t want to keep it locked for the whole duration of the transform function, otherwise applications trying to set/get values could block for up to one frame.
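
The locking pattern can be sketched in isolation like this (a minimal, GStreamer-free illustration; `process_frame` is a hypothetical stand-in for the transform function):

```rust
use std::sync::Mutex;

// Because Settings is small and Copy, we can take a snapshot while holding
// the lock only for an instant, then do the long-running per-frame work
// from the local copy without blocking property setters.

#[derive(Debug, Clone, Copy, PartialEq)]
struct Settings {
    invert: bool,
    shift: u32,
}

fn process_frame(settings: &Mutex<Settings>) -> (bool, u32) {
    // Lock, copy, unlock: the guard is dropped at the end of this statement
    let snapshot = *settings.lock().unwrap();
    // ... per-frame work uses `snapshot`, not the Mutex ...
    (snapshot.invert, snapshot.shift)
}

fn main() {
    let settings = Mutex::new(Settings { invert: true, shift: 128 });
    println!("{:?}", process_frame(&settings));
}
```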

In the property setter/getter functions we are working with a glib::Value. This is a dynamically typed value type that can contain values of any type, together with the type information of the contained value. Here we’re using it to handle an unsigned integer (u32) and a boolean for our two properties. To know which property is currently being set or retrieved, we are passed an identifier which is the index into our PROPERTIES array. We then simply match on the name of that property to decide which one was meant.
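
To illustrate the idea of such a dynamically typed container, here is a toy, pure-Rust analogue; this is only an illustration and not the actual glib-rs API:

```rust
// A tagged enum carries the type information alongside the value, and a
// typed getter returns None when the requested type does not match the
// contained one -- conceptually similar to what glib::Value provides.

#[derive(Debug, Clone)]
enum Value {
    Bool(bool),
    UInt(u32),
}

impl Value {
    fn get_bool(&self) -> Option<bool> {
        match *self {
            Value::Bool(b) => Some(b),
            _ => None,
        }
    }

    fn get_uint(&self) -> Option<u32> {
        match *self {
            Value::UInt(u) => Some(u),
            _ => None,
        }
    }
}

fn main() {
    let invert = Value::Bool(true);
    let shift = Value::UInt(128);
    println!("{:?} {:?}", invert.get_bool(), shift.get_uint());
    println!("{:?}", invert.get_uint()); // wrong type requested: None
}
```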

With this implemented, we can already compile everything, see the properties and their metadata in gst-inspect-1.0 and can also set them on gst-launch-1.0 like this

# Set invert to true and shift to 128
gst-launch-1.0 videotestsrc ! rsrgb2gray invert=true shift=128 ! videoconvert ! autovideosink

If we set GST_DEBUG=rsrgb2gray:6 in the environment before running that, we can also see the corresponding debug output when the values are changing. The only thing missing now is to actually make use of the property values for the processing. For this we add the following changes to bgrx_to_gray and the transform function

impl Rgb2Gray {
    #[inline]
    fn bgrx_to_gray(in_p: &[u8], shift: u8, invert: bool) -> u8 {
        [...]

        let gray = ((r * R_Y) + (g * G_Y) + (b * B_Y)) / 65536;
        let gray = (gray as u8).wrapping_add(shift);

        if invert {
            255 - gray
        } else {
            gray
        }
    }
}

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn transform(
        &self,
        element: &BaseTransform,
        inbuf: &gst::Buffer,
        outbuf: &mut gst::BufferRef,
    ) -> gst::FlowReturn {
        let settings = *self.settings.lock().unwrap();
        [...]
                    let gray = Rgb2Gray::bgrx_to_gray(in_p, settings.shift as u8, settings.invert);
        [...]
    }
}

And that’s all. If you run the element in gst-launch-1.0 and change the values of the properties you should also see the corresponding changes in the video output.

Note that we always take a copy of the Settings struct at the beginning of the transform function. This ensures that we hold the mutex only for the shortest possible amount of time and then have a local snapshot of the settings for each frame.

Also keep in mind that the usage of the property values in the bgrx_to_gray function is far from optimal. It adds another conditional to the calculation of each pixel, potentially slowing it down considerably. Ideally this condition would be moved outside the inner loops and the bgrx_to_gray function would be made generic over it. See for example this blog post about “branchless Rust” for ideas on how to do that; the actual implementation is left as an exercise for the reader.
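
One way to sketch the idea (hedged: this uses const generics, a Rust feature stabilized well after this post was written, and simplifies the pixel math to a single byte per pixel):

```rust
// Making the conversion generic over the invert flag moves the branch out
// of the per-pixel path: the compiler monomorphizes two specialized loops,
// each with the `if INVERT` resolved at compile time.

fn gray_px<const INVERT: bool>(px: u8, shift: u8) -> u8 {
    let g = px.wrapping_add(shift);
    if INVERT { 255 - g } else { g } // resolved at compile time
}

fn convert<const INVERT: bool>(line: &[u8], shift: u8) -> Vec<u8> {
    line.iter().map(|&p| gray_px::<INVERT>(p, shift)).collect()
}

fn convert_line(line: &[u8], shift: u8, invert: bool) -> Vec<u8> {
    // One branch per line instead of one branch per pixel
    if invert {
        convert::<true>(line, shift)
    } else {
        convert::<false>(line, shift)
    }
}

fn main() {
    println!("{:?}", convert_line(&[0, 100, 200], 10, false)); // [10, 110, 210]
    println!("{:?}", convert_line(&[0, 100, 200], 10, true));  // [245, 145, 45]
}
```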

What next?

I hope the code walkthrough above was useful to understand how to implement GStreamer plugins and elements in Rust. If you have any questions, feel free to ask them here in the comments.

The same approach also works for audio filters or anything that can be handled in some way with the API of the BaseTransform base class. You can find another filter, an audio echo filter, using the same approach here.

In the next blog post in this series I’ll show how to use another base class to implement another kind of element, but for the time being you can also check the GIT repository for various other element implementations.

on January 13, 2018 10:23 PM

January 12, 2018

Following the recent testing of a respin to deal with the BIOS bug on some Lenovo machines, Xubuntu 17.10.1 has been released. Official download sources have been updated to point to this point release, but if you’re using a mirror, be sure you are downloading the 17.10.1 version.

No changes to applications are included, however, this release does include any updates made between the original release date and now.

Note: Even with this fix, you will want to update your system to make sure you get all security fixes since the ISO respin, including the one for Meltdown, addressed in USN-3523, which you can read more about here.

on January 12, 2018 05:34 PM

On Saturday 13th January 2018, Xubuntu 17.04 goes End of Life (EOL). For more information please see the Ubuntu 17.04 EOL Notice.

We strongly recommend upgrading to the current regular release, Xubuntu 17.10.1, as soon as practical. Alternatively you can download the current Xubuntu release and install fresh.

The 17.10.1 release recently saw testing across all flavors to address the BIOS bug found after its release in October 2017. Updated and bug-free ISO files are now available.

on January 12, 2018 02:40 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, about 142 work hours were dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours remained unchanged at 183 hours per month. It would be nice if we could continue to find new sponsors, as the amount of work seems to be slowly growing too.

The security tracker currently lists 21 packages with a known CVE and the dla-needed.txt file 16 (we’re a bit behind in CVE triaging apparently). Both numbers show a significant drop compared to last month. Yet the number of DLAs released was no larger than usual (30); instead, it looks like December brought us fewer new security vulnerabilities to handle, and at the same time we used the opportunity to handle lower-priority packages that had been kept aside for multiple months.

Thanks to our sponsors

New sponsors are in bold (none this month).


on January 12, 2018 02:15 PM
Kubuntu recently had to pull our 17.10 ISOs because of the so-called Lenovo bug. Now that this bug is fixed, the ISOs have been respun, so now it's time to begin reseeding the torrents.

To speed up the process, I wanted to zsync to the original ISOs before getting the new torrent files. Simon kindly told me the easy way to do this - cd to the directory where the ISOs live, which in my case is 

cd /media/valorie/Data/ISOs/


cp kubuntu-17.10{,.1}-desktop-amd64.iso && zsync http://cdimage.ubuntu.com/kubuntu/releases/17.10.1/release/kubuntu-17.10.1-desktop-amd64.iso.zsync

Where did I get the link to zsync? At http://cdimage.ubuntu.com/kubuntu/releases/17.10.1/release/. All ISOs are found at cdimage, just as all torrents are found at http://torrent.ubuntu.com:6969/.

The final step is to download those torrent files (pro-tip: use control F) and tell Ktorrent to seed them all! I seed all the supported Ubuntu releases. The more people do this, the faster torrents are for everyone. If you have the bandwidth, go for it!

PS: you don't have to copy all the cdimage URLs. Just up-arrow and then back-arrow through your previous command once the sync has finished, edit it, hit return and you are back in business.
on January 12, 2018 08:39 AM
Lubuntu 17.10.1 has been released to fix a major problem affecting many Lenovo laptops that causes the computer to have BIOS problems after installing. You can find more details about this problem here. Please note that the Meltdown and Spectre vulnerabilities have not been fixed in this ISO, so we advise that if you install […]
on January 12, 2018 06:29 AM

January 07, 2018

Beginning 2018

Valorie Zimmerman

2017 began with the once-in-a-lifetime trip to India to speak at KDE.Conf.in. That was amazing enough, but the trip to a local village, and visiting the Kaziranga National Park were too amazing for words.

Literal highlights of last year were the eclipse and trip to see it with my son Thomas, and Christian and Hailey's wedding, and the trip to participate with my daughter Anne, while also spending some time with son Paul, his wife Tara and my grandson Oscar. This summer I was able to spend a few days in Brooklyn with Colin and Rory as well on my way to Akademy. So 2017 was definitely worth living through!

This is reality, and we can only see it during a total eclipse

2018 began wonderfully at the cabin. I'm looking forward to 2018 for a lot of reasons.

First, I'm so happy that Kubuntu will again be distributing 17.10 images next week. Right now we're in testing in preparation for that; pop into IRC if you'd like to help with the testing (#kubuntu-devel). https://kubuntu.org/getkubuntu/ next week!

Lubuntu has a nice write-up of the issues and testing procedures: http://lubuntu.me/lubuntu-17-04-eol-and-lubuntu-17-10-respins/

The other serious problems with meltdown and spectre are being handled by the Ubuntu kernel team and those updates will be rolled out as soon as testing is complete. Scary times when dealing with such a fundamental flaw in the design of our computers!

Second, in KDE we're beginning to ramp up for Google Summer of Code. Mentors are preparing the ideas page on the wiki, and Bhushan has started the organization application process. If you want to mentor or help us administer the program this year, now is the time to get in gear!

At Renton PFLAG we had our first support meeting of the year, and it was small but awesome! Our little group has had some tough times in the past, but I see us growing and thriving in this next year.

Finally, my local genealogy society is doing some great things, and I'm so happy to be involved and helping out again. My own searching is going well too. As I find more supporting evidence to the lives of my ancestors and their families, I feel my own place in the cosmos more deeply and my connection to history more strongly. I wish I could link to our website, but Rootsweb is down and until we get our new website up......

Finally, today I saw a news article about a school in India far outside the traditional education model. Called the Tamarind Tree School, it uses an open education model to offer collaborative, innovative learning solutions to rural students. They use free and open source software, and even hardware so that people can build their own devices. Read more about this: https://opensource.com/article/18/1/tamarind-tree-school-india.
on January 07, 2018 10:55 PM

January 06, 2018

You’re supposed to send cards to wish someone a happy anniversary. Well, today, my mum and dad have been married for 45 years (!), so I sent them some cards. Specifically, five playing cards, with weird symbols on them.

Joker, J♠, A♥, A♠, 5♠

So, the first question is: what order should they be in? You might need to be Irish to get this next bit.

There is a card game in Ireland called Forty-Five. It’s basically Whist, or Trumps; you each play a card, and highest card wins, except that a trump card beats a non-trump. My grandad, my mum’s dad, was an absolute demon at it. You’d sit and play a few hands and then he’d say: you reneged! And you’d say, I did what? And he’d say: you should have played your Jack of Spades there. And you’d say: how the bloody hell do you know I have the Jack of Spades? And then he’d beat you nine hundred games to nil.

Anyway, what makes Forty-Five not be Whist is that the trumps are in a weird order. Imagine that, in this hand, trump suit has been chosen as Spades. The highest trump, the best card in the pack, is the Five of Spades. Then the Jack of Spades, then the Joker, then the Ace of Hearts (regardless of which suit is trump; always the A♥ as fourth trump), then the Ace of Spades and down the other trump suit cards in sequence (K♠, Q♠, etc).

And it is their forty-fifth wedding anniversary. (See what I did there?) So if we put the cards in order:

5♠, J♠, Joker, A♥, A♠

then that’s correct. But what about the weird symbols? Well, once you’ve got the cards laid out in order as above, you can look at them from the right-hand-side and the symbols spell a vertical message:

Weird symbols spell out 'HAPPY ANNIVERSARY'


Also, I’m forty-one, so all you people who have suggested that my parents were unmarried (although by using a shorter word for it) are wrong.

Happy anniversary, mum and dad.

on January 06, 2018 12:42 PM

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h and I had two hours left but I only spent 13h. During this time, I managed the LTS frontdesk during one week, reviewing new security issues and classifying the associated CVE (18 commits to the security tracker).

I also released DLA-1205-1 on simplesamlphp fixing 6 CVE. I prepared and released DLA-1207-1 on erlang with the help of the maintainer who tested the patch that I backported. I handled tkabber but it turned out that the CVE report was wrong, I reported this to MITRE who marked the CVE as DISPUTED (see CVE-2017-17533).

During my CVE triaging work, I decided to mark mp3gain and libnet-ping-external-perl as unsupported (the latter has been removed everywhere already). I re-classified the suricata CVE as not worth an update (following the decision of the security team). I also dropped global from dla-needed as the issue was marked unimportant but I still filed #884912 about it so that it gets tracked in the BTS.

I filed #884911 on ohcount requesting new upstream (fixing CVE) and update of homepage field (that is misleading in current package). I dropped jasperreports from dla-needed.txt as issues are undetermined and upstream is uncooperative, instead I suggested to mark the package as unsupported (see #884907).

Misc Debian Work

Debian Installer. I suggested to switch to isenkram instead of discover for automatic package installation based on recognized hardware. I also filed a bug on isenkram (#883470) and asked debian-cloud for help to complete the missing mappings.

Packaging. I sponsored asciidoc 8.6.10-2 for Joseph Herlant. I uploaded new versions of live-tools and live-build fixing multiple bugs that had been reported (many with patches ready to merge). Only #882769 required a bit more work to track down and fix. I also uploaded dh-linktree 0.5 with a new feature contributed by Paul Gevers. By the way, I no longer use this package so I will happily give it over to anyone who needs it.

QA team. When I got my account on salsa.debian.org (a bit before the announce of the beta phase), I created the group for the QA team and setup a project for distro-tracker.

Bug reports. I filed #884713 on approx, requesting that systemd’s approx.socket be configured to not have any trigger limit.

Package Tracker

Following the switch to Python 3 by default, I updated the packaging provided in the git repository. I’m now also providing a systemd unit to run gunicorn3 for the website.

I merged multiple patches of Pierre-Elliott Bécue fixing bugs and adding a new feature (vcswatch support!). I fixed a bug related to the lack of a link to the experimental build logs and a bit of bug triaging.

I also filed two bugs against DAK related to bad interactions with the package tracker: #884930 because it does still use packages.qa.debian.org to send emails instead of tracker.debian.org. And #884931 because it sends removal mails to too many email addresses. And I filed a bug against the tracker (#884933) because the last issue also revealed a problem in the way the tracker handles removal mails.


See you next month for a new summary of my activities.


on January 06, 2018 10:50 AM

January 05, 2018

For up-to-date patch, package, and USN links, please refer to: https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown

This is cross-posted on Canonical's official Ubuntu Insights blog:

Unfortunately, you’ve probably already read about one of the most widespread security issues in modern computing history -- colloquially known as “Meltdown” (CVE-2017-5754) and “Spectre” (CVE-2017-5753 and CVE-2017-5715) -- affecting practically every computer built in the last 10 years, running any operating system. That includes Ubuntu.

I say “unfortunately”, in part because there was a coordinated release date of January 9, 2018, agreed upon by essentially every operating system, hardware, and cloud vendor in the world. By design, operating system updates would be available at the same time as the public disclosure of the security vulnerability. While it happens rarely, this is an industry-standard best practice, which has broken down in this case.

At its heart, this vulnerability is a CPU hardware architecture design issue. But there are billions of affected hardware devices, and replacing CPUs is simply unreasonable. As a result, operating system kernels -- Windows, MacOS, Linux, and many others -- are being patched to mitigate the critical security vulnerability.

Canonical engineers have been working on this since we were made aware under the embargoed disclosure (November 2017) and have worked through the Christmas and New Years holidays, testing and integrating an incredibly complex patch set into a broad set of Ubuntu kernels and CPU architectures.

Ubuntu users of the 64-bit x86 architecture (aka, amd64) can expect updated kernels by the original January 9, 2018 coordinated release date, and sooner if possible. Updates will be available for:

  • Ubuntu 17.10 (Artful) -- Linux 4.13 HWE
  • Ubuntu 16.04 LTS (Xenial) -- Linux 4.4 (and 4.4 HWE)
  • Ubuntu 14.04 LTS (Trusty) -- Linux 3.13
  • Ubuntu 12.04 ESM** (Precise) -- Linux 3.2
    • Note that an Ubuntu Advantage license is required for the 12.04 ESM kernel update, as Ubuntu 12.04 LTS is past its end-of-life
Ubuntu 18.04 LTS (Bionic) will release in April of 2018, and will ship a 4.15 kernel, which includes the KPTI patchset as integrated upstream.

Ubuntu optimized kernels for the Amazon, Google, and Microsoft public clouds are also covered by these updates, as well as the rest of Canonical's Certified Public Clouds including Oracle, OVH, Rackspace, IBM Cloud, Joyent, and Dimension Data.

These kernel fixes will not be Livepatch-able. The source code changes required to address this problem consist of hundreds of independent patches, touching hundreds of files and thousands of lines of code. The sheer complexity of this patchset is not compatible with the Linux kernel Livepatch mechanism. An update and a reboot will be required to activate this update.

Furthermore, you can expect Ubuntu security updates for a number of other related packages, including CPU microcode, GCC and QEMU in the coming days.

We don't have a performance analysis to share at this time, but please do stay tuned here as we'll follow up with that as soon as possible.

VP of Product
Canonical / Ubuntu
on January 05, 2018 03:20 PM

I'm the proud owner of a new Dell XPS 13 Developer Edition (9360) laptop, pre-loaded from the Dell factory with Ubuntu 16.04 LTS Desktop.

Kudos to the Dell and the Canonical teams that have engineered a truly remarkable developer desktop experience.  You should also check out the post from Dell's senior architect behind the XPS 13, Barton George.

As it happens, I'm also the proud owner of a long loved, heavily used, 1st Generation Dell XPS 13 Developer Edition laptop :-)  See this post from May 7, 2012.  You'll be happy to know that machine is still going strong.  It's now my wife's daily driver.  And I use it almost every day, for any and all hacking that I do from the couch, after hours, after I leave the office ;-)

Now, this latest XPS edition is a real dream of a machine!

From a hardware perspective, this newer XPS 13 sports an Intel i7-7660U 2.5GHz processor and 16GB of memory.  While that's mildly exciting to me (as I've long used i7's and 16GB), here's what I am excited about...

The 500GB NVME storage and a whopping 1239 MB/sec I/O throughput!

kirkland@xps13:~$ sudo hdparm -tT /dev/nvme0n1
Timing cached reads: 25230 MB in 2.00 seconds = 12627.16 MB/sec
Timing buffered disk reads: 3718 MB in 3.00 seconds = 1239.08 MB/sec

And on top of that, this is my first QHD+ touch screen laptop display, sporting a magnificent 3200x1800 resolution.  The graphics are nothing short of spectacular.  Here's nearly 4K of Hollywood hard "at work" :-)

The keyboard is super comfortable.  I like it a bit better than the 1st generation.  Unlike your Apple friends, we still have our F-keys, which is important to me as a Byobu user :-)  The placement of the PgUp, PgDn, Home, and End keys (as Fn + Up/Down/Left/Right) takes a while to get used to.

The speakers are decent for a laptop, and the microphone is excellent.  The webcam is placed in an odd location (lower left of the screen), but it has quite nice resolution and focus quality.

And Bluetooth and WiFi, well, they "just work".  I got 98.2 Mbits/sec of throughput over WiFi.

kirkland@xps:~$ iperf -c
Client connecting to, TCP port 5001
TCP window size: 85.0 KByte (default)
[ 3] local port 40568 connected with port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.1 sec 118 MBytes 98.2 Mbits/sec

There's no external display port, so you'll need something like this USB-C-to-HDMI adapter to project to a TV or monitor.

There's 1x USB-C port, 2x USB-3 ports, and an SD-Card reader.

One of the USB-3 ports can be used to charge your phone or other devices, even while your laptop is suspended.  I use this all the time, to keep my phone topped up while I'm aboard planes, trains, and cars.  To do so, you'll need to enable "USB PowerShare" in the BIOS.  Here's an article from Dell's KnowledgeBase explaining how.

Honestly, I have only one complaint...  And that's that there is no Trackstick mouse (which is available on some Dell models).  I'm not a huge fan of the Touchpad.  It's too sensitive, and my palms are always touching it inadvertently.  So I need to use an external mouse to be effective.  I'll continue to provide this feedback to the Dell team, in the hopes that one day I'll have my perfect developer laptop!  Otherwise, this machine is a beauty.  I'm sure you'll love it too.

on January 05, 2018 03:12 PM

January 04, 2018

A nice additional benefit of the recent Kernel Page Table Isolation (CONFIG_PAGE_TABLE_ISOLATION) patches (to defend against CVE-2017-5754, the speculative execution “rogue data cache load” or “Meltdown” flaw) is that the userspace page tables visible while running in kernel mode lack the executable bit. As a result, systems without the SMEP CPU feature (before Ivy Bridge) get it emulated for “free”.

Here’s a non-SMEP system with PTI disabled (booted with “pti=off”), running the EXEC_USERSPACE LKDTM test:

# grep smep /proc/cpuinfo
# dmesg -c | grep isolation
[    0.000000] Kernel/User page tables isolation: disabled on command line.
# cat <(echo EXEC_USERSPACE) > /sys/kernel/debug/provoke-crash/DIRECT
# dmesg
[   17.883754] lkdtm: Performing direct entry EXEC_USERSPACE
[   17.885149] lkdtm: attempting ok execution at ffffffff9f6293a0
[   17.886350] lkdtm: attempting bad execution at 00007f6a2f84d000

No crash! The kernel was happily executing userspace memory.

But with PTI enabled:

# grep smep /proc/cpuinfo
# dmesg -c | grep isolation
[    0.000000] Kernel/User page tables isolation: enabled
# cat <(echo EXEC_USERSPACE) > /sys/kernel/debug/provoke-crash/DIRECT
# dmesg
[   33.657695] lkdtm: Performing direct entry EXEC_USERSPACE
[   33.658800] lkdtm: attempting ok execution at ffffffff926293a0
[   33.660110] lkdtm: attempting bad execution at 00007f7c64546000
[   33.661301] BUG: unable to handle kernel paging request at 00007f7c64546000
[   33.662554] IP: 0x7f7c64546000

It should only take a little more work to leave the userspace page tables entirely unmapped while in kernel mode, and only map them in during copy_to_user()/copy_from_user() as ARM already does with ARM64_SW_TTBR0_PAN (or CONFIG_CPU_SW_DOMAIN_PAN on arm32).

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

on January 04, 2018 09:43 PM

The 5th and final bugfix update (5.11.5) of the Plasma 5.11 series is now available for users of Kubuntu Artful Aardvark 17.10 to install via our Backports PPA.

This update also includes an upgrade of KDE Frameworks to version 5.41.

To update, add the following repository to your software sources list:


or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade

Upgrade notes:

~ The Kubuntu backports PPA includes various other backported applications, so please be aware that enabling the backports PPA for the first time and doing a full upgrade will result in a substantial amount of upgraded packages in addition to Plasma 5.11.5.

~ The PPA may also continue to receive updates to Plasma when they become available, and further updated applications where practical.

~ While we believe that these packages represent a beneficial and stable update, please bear in mind that they have not been tested as comprehensively as those in the main Ubuntu archive, and are supported only on a limited and informal basis. Should any issues occur, please provide feedback on our mailing list [1], IRC [2], and/or file a bug against our PPA packages [3].

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
3. Kubuntu PPA bugs: https://bugs.launchpad.net/kubuntu-ppa

on January 04, 2018 01:28 PM

January 03, 2018

You are creating containers and you want them to be somewhat preconfigured. For example, you want them to run automatically apt update as soon as they are launched. Or, get some packages pre-installed, or run a few commands. Here is how to perform this early initialization with cloud-init through LXD to container images that support …

Continue reading

on January 03, 2018 03:39 PM

January 02, 2018

So, you’ve clicked on a link or came to check for a new release at smdavis.us, and now you’re here at bluesabre.org. Fear not! Everything is working just as it should.

To kick off 2018, I’ve started tidying up my personal brand. Since my website has consistently been about FOSS updates, I’ve transitioned to a more fitting .org domain. The .org TLD is often associated with community and open source initiatives, and the content you’ll find here is always going to fit that bill. You can continue to expect a steady stream of Xfce and Xubuntu updates.

And that’s enough of that, let’s get started with the new year. 2018 is going to be one of the best yet!

on January 02, 2018 03:16 AM

December 31, 2017

A year ends, a new year begins

Julian Andres Klode

2017 is ending. It’s been a rather uneventful year, I’d say. About 6 months ago I started working on my master’s thesis – it plays with adding linear types to Go – and I handed that in about 1.5 weeks ago. It’s not really complete, though – you cannot actually use it on a complete Go program. The source code is of course available on GitHub, it’s a bunch of Go code for the implementation and a bunch of Markdown and LaTeX for the document. I’m happy about the code coverage, though: As a properly developed software project, it achieves about 96% code coverage – the missing parts happening at the end, when time ran out 😉

I released apt 1.5 this year, and started 1.6 with seccomp sandboxing for methods.

I went to DebConf17 in Montreal. I unfortunately did not make it to DebCamp, nor the first day, but I at least made the rest of the conference. There, I gave a talk about APT development in the past year, and had a few interesting discussions. One thing that directly resulted from such a discussion was a new proposal for delta upgrades, with a very simple delta format based on a variant of bsdiff (with external compression, streamable patches, and constant memory use rather than linear). I hope we can implement this – the savings are enormous with practically no slowdown (there is no reconstruction phase, upgrades are streamed directly to the file system), which is especially relevant for people with slow or data capped connections.

This month, I’ve been buying a few “toys”: I got a pair of speakers (JBL LSR 305), and a pair of noise-cancelling headphones (Sony WH-1000XM2). Nice stuff. Been wearing the headphones most of today, and they’re quite comfortable and really make things quiet, except for their own noise 😉 Well, both the headphones and the speakers have a white noise issue, but oh well, the prices were good.

This time of the year is not only a time to look back at the past year, but also to look forward to the year ahead. In one week, I’ll be joining Canonical to work on Ubuntu foundation stuff. It’s going to be interesting. I’ll also be moving places shortly: having partially lived in student housing for 6 years (one room, and a shared kitchen), I’ll be moving to a proper apartment.

On the APT front, I plan to introduce a few interesting changes. One of them involves automatic removal of unused packages: This should be happening automatically during install, upgrade, and whatever. Maybe not for all packages, though – we might have a list of “safe” autoremovals. I’d also be interested in adding metadata for transitions: Like if libfoo1 replaces libfoo0, we can safely remove libfoo0 if nothing depends on it anymore. Maybe not for all “garbage” either. It might make sense to restrict it to new garbage – that is packages that become unused as part of the operation. This is important for safe handling of existing setups with automatically removable packages: We don’t suddenly want to remove them all when you run upgrade.

The other change is about sandboxing. You might have noticed that sometimes, sandboxing is disabled with a warning because the method would not be able to access the source or the target. The goal is to open these files in the main program and send file descriptors to the methods via a socket. This way, we can avoid permission problems, and we can also make the sandbox stronger – for example, by not giving it access to the partial/ directory anymore.

Another change we need to work on is standardising the “Important” field, which is sort of like Essential – it marks an installed package as extra-hard to remove (but unlike Essential, does not cause apt to install it automatically). The latest draft calls it “Protected”, but I don’t think we have a consensus on that yet.

I also need to get Happy Eyeballs done – fast fallback from IPv6 to IPv4. I had a completely working solution some months ago, but it did not pass CI, so I decided to start from scratch with a cleaner design to figure out if I went wrong somewhere. Testing this is kind of hard, as it basically requires a broken IPv6 setup (well, unreachable IPv6 servers).

Oh well, 2018 has begun, so I’m going to stop now. Let’s all do our best to make it awesome!

Filed under: Debian, General, Ubuntu
on December 31, 2017 11:01 PM

December 30, 2017

instead of connecting to the DeepLens with HDMI micro cable, monitor, keyboard, mouse

Credit for this excellent idea goes to Ernie Kim. Thank you!

Instructions without ssh

The standard AWS DeepLens instructions recommend connecting the device to a monitor, keyboard, and mouse. The instructions provide information on how to view the video streams in this mode:

If you are connected to the DeepLens using a monitor, you can view the unprocessed device stream (raw camera video before being processed by the model) using this command on the DeepLens device:

mplayer -demuxer h264es /opt/awscam/out/ch1_out.h264

If you are connected to the DeepLens using a monitor, you can view the project stream (video after being processed by the model on the DeepLens) using this command on the DeepLens device:

mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 /tmp/ssd_results.mjpeg

Instructions with ssh

You can also view the DeepLens video streams over ssh, without having a monitor connected to the device. To make this possible, you need to enable ssh access on your DeepLens. This is available as a checkbox option in the initial setup of the device. I’m working to get instructions on how to enable ssh access afterwards and will update once this is available.

To view the video streams over ssh, we take the same mplayer command options above and the same source stream files, but send the stream over ssh, and feed the result to the stdin of an mplayer process running on the local system, presumably a laptop.

All of the following commands are run on your local laptop (not on the DeepLens device).

You need to know the IP address of your DeepLens device on your local network:

ip_address=[IP ADDRESS OF DeepLens]

You will need to install the mplayer software on your local laptop. This varies with your OS, but for Ubuntu:

sudo apt-get install mplayer

You can view the unprocessed device stream (raw camera video before being processed by the model) over ssh using the command:

ssh aws_cam@$ip_address cat /opt/awscam/out/ch1_out.h264 |
  mplayer -demuxer h264es -

You can view the project stream (video after being processed by the model on the DeepLens) over ssh with the command:

ssh aws_cam@$ip_address cat /tmp/ssd_results.mjpeg |
  mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 -

Benefits of using ssh to view the video streams include:

  • You don’t need to have an extra monitor, keyboard, mouse, and micro-HDMI adapter cable.

  • You don’t need to locate the DeepLens close to a monitor, keyboard, mouse.

  • You don’t need to be physically close to the DeepLens when you are viewing the video streams.

For those of us sitting on a couch with a laptop, a DeepLens across the room, and no extra micro-HDMI cable, this is great news!


To protect the security of your sensitive DeepLens video feeds:

  • Use a long, randomly generated password for ssh on your DeepLens, even if you are only using it inside a private network.

  • I would recommend setting up .ssh/authorized_keys on the DeepLens so you can ssh in with your personal ssh key, test it, then disable password access for ssh on the DeepLens device. Don’t forget the password, because it is still needed for sudo.

  • Enable automatic updates on your DeepLens so that Ubuntu security patches are applied quickly. This is available as an option in the initial setup, and should be possible to do afterwards using the standard Ubuntu unattended-upgrades package.

Unrelated side note: It’s kind of nice having the DeepLens run a standard Ubuntu LTS release. Excellent choice!

Original article and comments: https://alestic.com/2017/12/aws-deeplens-video-stream-ssh/

on December 30, 2017 05:00 AM

December 29, 2017

GTD tools

Serge Hallyn

I’ve been using GTD to organize projects for a long time. The “tickler file” in particular is a crucial part of how I handle scheduling of upcoming and recurring tasks. I’ve blogged about some of the scripts I’ve written to help me do so in the past at https://s3hh.wordpress.com/2013/04/19/gtd-managing-projects/ and https://s3hh.wordpress.com/2011/12/10/tickler/. This week I’ve combined these tools, slightly updated them, and added an install script, and put them on github at http://github.com/hallyn/gtdtools.


The opinions expressed in this blog are my own views and not those of Cisco.

on December 29, 2017 09:23 PM

Installing retdec on Ubuntu

Simos Xenitellis

retdec (RETargetable DECompiler) is a decompiler, and it is the one that was released recently as open-source software by Avast Software. retdec can take an executable and work backwards to recreate the initial source code (with limitations). An example with retdec Let’s see first an example. Here is the initial source code, that was compiled …

Continue reading

on December 29, 2017 08:45 PM

I wanted a bench power supply for powering small projects and devices I’m testing. I ended up with a DIY approach for around $30 and am very happy with the outcome. It’s a simple project that almost anyone can do and is a great introductory power supply for any home lab.

I had a few requirements when I set out:

  • Variable voltage (up to ~12V)
  • Current limiting (to protect against stupid mistakes)
  • Small footprint (my electronics work area is only about 8 square feet)
  • Relatively cheap

Initially, I considered buying an off-the-shelf bench power supply, but most of those are either very expensive, very large, or both. I also toyed with the idea of using an ATX power supply as a bench supply, but those don’t offer current limiting (and are capable of delivering enough current to destroy any project I’m careless with).

I had seen a few DC-DC buck converter modules floating around, but most had pretty bad reviews, until the Ruidong DPS series came out. These have quickly become quite popular modules, with support for up to 50V at 5A – a 250W power supply! Because of the buck topology, they require a DC input at a higher voltage than the output, but that’s easily provided with another power supply. In my case, I decided to use cheap power supplies from electronic devices (commonly called “wall warts”). (I’m actually reusing one from an old router.)

I’m far from the first to do such a project, but I still wanted to share as well as describe what I’d like to do in the future.

power supply

This particular unit consists of a DPS3005 that I got for about $25 from AliExpress. (The DPS5005 is now available on Amazon with Prime. Had that been the case at the time I built this, I likely would have gone with that option.)

I placed the power supply in a plastic enclosure and added a barrel jack for input power and 5-way binding posts for the output. This allows me to connect banana plugs, breadboard leads, or spade lugs to the power supply.

power supply inside

Internally, I connected the parts with some 18 AWG red/black zip cord using crimped ring connectors on the binding posts, the screw terminals on the power supply, and solder on the barrel jack. Where possible, the connections were covered with heat shrink tubing.

I used this power supply in developing my Christmas Ornament, and it worked a treat. It allowed me to simulate behavior at lower battery voltages (though note that it is not a battery replacement – it does not simulate the internal resistance of a run down battery) and figure out how long my ornament was likely to run, and how bright it would be as the battery ran down.

I’ve also used it to power a few embedded devices that I’ve been using for security research, and I think it would make a great tool for voltage glitching in the future. (In fact, I saw Dr. Dmitry Nedospasov demonstrate a voltage glitching attack using a similar module at hardwaresecurity.training.)

In the future, I’d like to build a larger version with an internal AC to DC power supply (maybe a repurposed ATX supply) and either two or three of the DPS power modules to provide output. Note that, due to the single AC to DC supply, they would not be isolated channels – both would have the same ground reference, so it would not be possible to reference them to each other. For most use cases, this wouldn’t be a problem, and both channels would be isolated from mains earth if an isolated switching supply is used as the first stage power supply.

on December 29, 2017 08:00 AM

December 28, 2017

Forgotten FOSS Games: Boson

Simon Raffeiner

In 1999 "Boson, our attempt to make a Real Time Strategy game (RTS) for the KDE project" was announced on the kde-announce mailing list. You don't remember KDE having a full 3D RTS? Here's why.
on December 28, 2017 11:01 PM

OwnTracks and a map

Stuart Langridge

Every year we do a bit of a pub crawl in Birmingham between Christmas and New Year; a chance to get away from the turkey risotto, and hang out with people and talk about techie things after a few days away with family and so on. It’s all rather loosely organised — I tried putting exact times on every pub once and it didn’t work out very well. So this year, 2017, I wanted a map which showed where we were so people can come and find us — it’s a twelve-hour all-day-and-evening thing but nobody does the whole thing1 so the idea is that you can drop in at some point, have a couple of drinks, and then head off again. For that, you need to know where we all are.

Clearly, the solution here is technology; I carry a device in my pocket2 which knows where I am and can display that on a map. There are a few services that do this, or used to — Google Latitude, FB Messenger, Apple Find My Friends — but they’re all “only people with the Magic Software can see this”, and “you have to use our servers”, and that’s not very web-ish, is it? What I wanted was a thing which sat there in the background on my phone and reported my location to my server when I moved around, and didn’t eat battery. That wouldn’t be tricky to write but I bet there’s a load of annoying corner cases, which is why I was very glad to discover that OwnTracks have done it for me.

You install their mobile app (for Android or iOS) and then configure it with the URL of your server and every now and again it reports your location by posting JSON to that URL saying what your location is. Only one word for that: magic darts. Exactly what I wanted.

It’s a little tricky because of that “don’t use lots of battery” requirement. Apple heavily restrict background location sniffing, for lots of good reasons. If your app is the active app and the screen’s unlocked, it can read your location as often as it wants, but that’s impractical. If you want to get notified of location changes in the background on iOS then you only get told if you’ve moved more than 500 metres in less than five minutes3 which is fine if you’re on the motorway but less fine if you’re walking around town and won’t move that far. However, you can nominate certain locations as “waypoints” and then the app gets notified whenever it enters or leaves a waypoint, even if it’s in the background and set to “manual mode”. So, I added all the pubs we’re planning on going to as waypoints, which is a bit annoying to do manually but works fine.

OwnTracks then posts my location to a tiny PHP file which just dumps it in a big JSON list. The #brumtechxmas 2017 map then reads that JSON file and plots the walk on the map (or it will do once we’re doing it; as I write this, the event isn’t until tomorrow, Friday 29th December, but I have tested it out).
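The server side really can be tiny. The endpoint here is a PHP script, but the same idea, appending each posted OwnTracks report to a JSON list on disk, can be sketched in Python (the file name and field checks below are assumptions for illustration, not the post's actual code):

```python
import json
from pathlib import Path

def record_location(payload: str, path: str = "locations.json") -> list:
    """Append one OwnTracks report to a growing JSON list on disk.

    OwnTracks posts JSON like {"_type": "location", "lat": ..., "lon": ..., "tst": ...};
    anything that isn't a location message (e.g. lwt status reports) is ignored.
    """
    p = Path(path)
    points = json.loads(p.read_text()) if p.exists() else []
    report = json.loads(payload)
    if report.get("_type") == "location":
        points.append(report)
        p.write_text(json.dumps(points))
    return points
```

The map page can then fetch that file and plot each point in order.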

The map is an SVG, embedded in the page. This has the nice property that I can change it with CSS. In particular, the page looks at the list of locations we’ve been in and works out whether any of them were close enough to a pub on the map that we probably went in there… and then uses CSS to colour the pub we’re in green, and ones we’ve been in grey. So it’s dynamic! Nice and easy to find us wherever we are. If it works, which is a bit handwavy at this point.
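That “close enough to a pub” check is just a distance threshold between each reported point and the pub’s coordinates. The post’s page does this client-side; this Python version (the 50-metre threshold is a made-up value) just shows the calculation using the haversine formula:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius of 6371 km

def visited(pub_lat, pub_lon, points, threshold_m=50):
    """True if any reported location came within threshold_m of the pub."""
    return any(haversine_m(pub_lat, pub_lon, p["lat"], p["lon"]) < threshold_m
               for p in points)
```

Once a pub’s `visited()` flips to true, colouring it is just toggling a CSS class on its SVG element.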

If you’re coming, see you tomorrow. If you’re not coming: you should come. :-)

A static version of the map: you'll want the website for the real dynamic clever one

  1. well, except me. And hero of the revolution Andy Yates.
  2. and you do too
  3. the OwnTracks docs explain this in more detail
on December 28, 2017 11:33 AM

Working on a proposal

Bryan Quigley

Draft of a proposal I'm working on.. Feedback/improvements welcome

on December 28, 2017 04:56 AM

December 27, 2017

Over the Christmas period I had a need to watch some videos from my laptop on my TV via Chromecast.  I once again tried my faithful old VLC player which, according to the website, should support casting in the latest release.  But alas, Chromecast is disabled:

  * No change rebuild to add some information about why we disable chromecast
    support: it fails to build from source due to protobuf/mir:
    - https://trac.videolan.org/vlc/ticket/18329
    - https://github.com/google/protobuf/issues/206

Source : https://launchpad.net/ubuntu/+source/vlc/3.0.0~rc2-2ubuntu2

Then I came across ‘castnow’, a CLI-based app to stream an mp4 file to your Chromecast device.  You can see the code here – https://github.com/xat/castnow

To install it, I needed the node package manager (npm); to get that on my system I ran:

sudo apt install npm

Then use npm to install it globally, so the castnow command ends up on your PATH:

sudo npm install -g castnow

This will install the tool. Instructions for use are here – https://github.com/xat/castnow/blob/master/README.md

Now, if you are like me and use the Plasma Desktop, there is an add-on for the Dolphin menu which allows you to start the cast directly from Dolphin 🙂

In a dolphin window go to Settings > Configure Dolphin.  In the Services pane click the “Download New Services” button.  In the search box look for “cast” and install “Send to Chromecast” by Shaddar.

Now all you have to do is browse your collection of mp4 videos and use the Dolphin menu to play them on your Chromecast device, pretty handy!  I will certainly enjoy the holidays with this feature, watching my favourite movies on a full-size HD screen.


on December 27, 2017 04:21 PM

December 24, 2017

I’ve been meaning to start a video channel for years. This is more of a test video than anything else, but if you have any ideas or suggestions, then don’t hesitate to comment.

on December 24, 2017 06:13 PM

December 22, 2017

Today I’ve released version 0.10.0 of the Rust GStreamer bindings, and after a journey of more than 1½ years the first release of the GStreamer plugin writing infrastructure crate “gst-plugin”.

Check the repositories of both for more details, the code, and various examples.

GStreamer Bindings

Some of the changes since the 0.9.0 release were already outlined in the previous blog post, and most of the other changes were also things I found while writing GStreamer plugins. For the full changelog, take a look at the CHANGELOG.md in the repository.

Other changes include

  • I went over the whole API in the last days, added any missing things I found, simplified API as it made sense, changed functions to take Option<_> if allowed, etc.
  • Bindings for using and writing typefinders. Typefinders are the part of GStreamer that try to guess what kind of media is to be handled based on looking at the bytes. Especially writing those in Rust seems worthwhile, considering that basically all of the GIT log of the existing typefinders consists of fixes for various kinds of memory-safety problems.
  • Bindings for the Registry and PluginFeature were added, as well as fixing the relevant API that works with paths/filenames to actually work on Paths
  • Bindings for the GStreamer Net library were added, allowing you to build applications that synchronize their media over the network by using PTP, NTP or a custom GStreamer protocol (for which there also exists a server). This could be used for building video walls, systems recording the same scene from multiple cameras, etc., and provides (depending on network conditions) synchronization to better than 1 ms between devices.

Generally, this is something like a “1.0” release for me now (due to depending on too many pre-1.0 crates this is not going to be 1.0 anytime soon). The basic API is all there and nicely usable now and hopefully without any bugs, the known-missing APIs are not too important for now and can easily be added at a later time when needed. At this point I don’t expect many API changes anymore.

GStreamer Plugins

The other important part of this announcement is the first release of the “gst-plugin” crate. This provides the basic infrastructure for writing GStreamer plugins and elements in Rust, without having to write any unsafe code.

I started experimenting with using Rust for this more than 1½ years ago, and while a lot of things have changed in that time, this release is a nice milestone. In the beginning there were no GStreamer bindings and I was writing everything manually, and there were also still quite a few pieces of code written in C. Nowadays everything is in Rust and using the automatically generated GStreamer bindings.

Unfortunately there is no real documentation for any of this yet; there’s only the autogenerated rustdoc documentation available from here, and various example GStreamer plugins inside the GIT repository that can be used as a starting point. Various people have already written their GStreamer plugins in Rust based on this.

The basic idea of the API is however that everything is as Rust-y as possible. Which might not be too much due to having to map subtyping, virtual methods and the like to something reasonable in Rust, but I believe it’s nice to use now. You basically only have to implement one or more traits on your structs, and that’s it. There’s still quite some boilerplate required, but it’s far less than what would be required in C. The best example at this point might be the audioecho element.

Over the next days (or weeks?) I’m not going to write any documentation yet, but instead will write a couple of very simple, minimal elements that do basically nothing and can be used as starting points to learn how all this works together. And will write another blog post or two about the different parts of writing a GStreamer plugin and element in Rust, so that all of you can get started with that.

Let’s hope that the number of new GStreamer plugins written in C is going to decrease in the future, and maybe even new people who would’ve never done that in C, with all the footguns everywhere, can get started with writing GStreamer plugins in Rust now.

on December 22, 2017 04:52 PM

December 21, 2017

S10E42 – Tangy Orange Chairs - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

This week we get comfy in a new chair, conduct our Perennial Podcast Prophecy Petition Point and go over your feedback. This is the final show of the season and we’ll now be taking a couple of months break to eat curry, have a chat and decide if we’ll be returning for Season 11.

It’s Season Ten Episode Forty-Two of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:

We review our 2017 predictions:


  • Multiple devices from Tier one vendors will ship with snappy by default (like Dell, HP, Cisco) “top line big name vendors will ship hardware with Ubuntu snappy as a default OS”
    • No
  • GitHub will downsize their 600 workforce to a much smaller number and may also do something controversial to raise funds
    • No – 723 according to Wikipedia
  • Microsoft will provide a Linux build of a significant application – possibly exchange or sharepoint
    • No?
  • Donald Trump will not last a year as president
    • Sadly not.


  • There will be no new Ubuntu phone on sale in 2017
    • Yes
  • The UK government will lose a court case related to the Investigatory Powers Act
  • This time next year, one of the top 5 distros on Distrowatch will be a distro that isn’t currently in the top 20.
    • No


  • Ubuntu 17.10 will be able to run Mir using the proprietary nvidia drivers and Steam will work reliably via XMir. It will also be possible to run Mir in Virtualbox.
    • No
  • A high profile individual (or individuals) will fall victim to one of the many privacy threats introduced as a result of the Investigatory Powers Bill. Intimate details of their online life will be exposed to the world, compiled from one or more databases storing Internet Connection Records. The disclosure will possibly have serious consequences for the individuals concerned, such as losing their job or being professionally discredited.
    • No
  • The hype surrounding VR will build during 2017 but Virtual Reality will continue to lack adoption. Sales figures will be well below market projections.

We make our prediction for 2018:


  • A large gaming hardware vendor from the past will produce new hardware. Someone of the size/significance of Sega. Original hardware, not just re-using the brand-name, but official product.
  • Valve will rev the steamlink, perhaps making it more powerful for 4K gaming, and maybe a minor bump to the steam controller too
  • A large government UK body will accidentally leak a significant body of data. Could be laptop/USB stick on a train or website hack.


  • Either the UK or US government will collapse
  • A major hardware manufacturer (not a crowd funder) will release a device in the form factor of a GPD pocket
  • I will specifically buy (i.e. not in a Humble Bundle) and play through a native Linux game that is initially released in 2018.
  • Canonical will go public and suffer a hostile takeover by the shuffling corpse of SCO. (bonus prediction)


  • Give or take a couple of thousand dollars, BitCoin will have the same US dollar value in December 2018 as it does today.
    • 17205.63 US Dollar per btc at the time of recording.
  • A well established PC OEM, not currently supporting Linux, will offer a pre-installed Linux distro option for their flagship products.
  • Four smart phones will launch in 2018 that cost $1000 or more, thanks to Apple normalising this ludicrous price tag in 2017.

Ubuntu Podcast listeners share their predictions for 2018:

  • Simon Butcher – The Queen piles into bitcoin and loses her fortune when bitcoin collapses to 10p
  • Jezra – Someone considers open sourcing a graphics driver for chip that works with ARM, and then doesn’t
  • Ian – Canonical will be bought out by Ubuntu Mate.
  • Mattias Wernér – I predict a new push for SteamOS and Steam Machines with a serious marketing effort behind it.
  • Jon Spriggs – I think we’ll see Ethereum value exceeding £2,000 before 1st December 2018 (Currently £476 on Coinbase.com). Litecoin will cross £1,000 before 1st Dec 2018 (currently £286)
  • Eddie Figgie‏ – Bitcoin falls below $1k US.
  • McPhail – I saw the call for 2018 predictions. I predict that command line snaps will run natively in Windows and some graphical snaps will run too
  • Leo Arias – Costa Rica wins the FIFA world cup.
  • Ivan Pejić‏ – India will ship RISC-V based Ubuntu netbook/tablet/phone.
  • Sachin Saini* – Solus takes over the world.
  • Laura Czajkowski and Joel J – Year of the (mainstream) Linux desktop 😁
  • Adam Eveleigh‏ – snappy/Flatpak/AppImage(Update/d) will gain more traction as people realize that it solves the stable-for-noobs vs rolling dilemma once and for all. Which of the three will go furthest? Despite being in the snappy camp, I bet Flatpak
  • Marius Gripsgard – Ubuntu touch world domination
  • Jan Sprinz – Ubuntu Touch will rebase to 16.04
  • Ian – Canonical will IPO
  • Simon Butcher – Bitcoins go to £500,000 and the whole brexit divorce bill is funded by a stash of bc found on Gordon Brown’s old laptop
  • Conor Murphy – Linux Steam Integration snap will get wide adoption. Over 30% of all steam installs on linux
  • Jon Spriggs – RPi 4 with either with MOAR MEMORY or Gig Ethernet.
  • Mattias Wernér – I’ll predict that bitcoin will hit six figures in 2018. To be more specific, the six figures will be in dollars.
  • Jon Spriggs – I predict there will be an OggCamp ’18 😉
  • Laura Czajkowski – Microsoft will buy Canonical
  • Mortiz – Pipewire will be included in at least two major distros.
  • Daniel Llewelyn – Snaps will become the defacto standard and appimages and flatpaks will continue to be ignored
  • Jezra – Samsung ports Tizen to another device that is not a Samsung Phone.
  • Badger – Sound will finally work on cherry trail processors
  • Justin – Ubuntu Podcast to return for an eleventh season 🙂

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This weeks cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on December 21, 2017 03:30 PM