November 27, 2015

screen config victory!

Sebastian Kügler

kscreen wayland backend in action

That moment when the application “just works” after all your unit tests pass…

A really nice experience after working on these low-level bits was firing up the kscreen systemsettings module configured to use my wayland test server. I hadn’t done so in a while, so I didn’t expect much at all. The whole thing just worked right out of the box, however. Every single change I’ve tried had exactly the expected effect.
This screenshot shows Plasma’s screen configuration settings (“kscreen”). The settings module uses the new kwayland backend to communicate with a wayland server (which you can see “running” on the left hand side). That means that another big chunk of getting Plasma Wayland-ready for multi-display use-cases is falling nicely into place.


I’m working on this part of the stack using test-driven development methods, so I write unit tests for every bit of functionality, and then implement and polish the library parts. Something is done when all unit tests pass reliably, when others have reviewed the code, when everything works on the application side, and when I am happy with it.
The unit tests stay in place and are from then on compiled and run through our continuous integration system automatically on every code change. This system yells at us as soon as any of the unit tests breaks or shows problems, so we can fix it right away.

Interestingly, we run the unit tests live against a real wayland server. This test server is implemented using the KWayland library. The server runs headless, so it doesn’t do any rendering of windows, and it just implements the bits interesting for screen management. It’s sort of a mini kwin_wayland; the real kwin will use this exact same library on the server side, so our tests are not entirely synthetic. This wasn’t really possible for X11-based systems, because you can’t just fire up an X server that supports XRandR in automated tests — the machine running the test may not allow you to use its graphics card, if it even has one. It’s very easy to do, however, when using wayland.
Our autotests fire up a wayland server from one of many example configurations. We have a whole set of example configurations that we run tests against, and it’s easy to add more that we want to make sure work correctly. (I’m also thinking about user support, where we can ask users to send us a problematic configuration written out to a JSON file, which we can then add to our unit tests, fix, and ensure never breaks again.)
The wayland test server is only about 500 lines of relatively simple code, but it provides full functionality for setting up screens using the wayland protocol.

Next steps…

The real kwin_wayland will use the exact same library on the server side as we do in our tests, but instead of using “virtual screens”, it actually interacts with the hardware, for example through libdrm on more sensible systems, or through libhybris on less sensible ones.
KWin takes a more central role in our wayland story: as we move initial mode-setting there, it just makes sense to have it do run-time mode-setting as well.

The next steps are to hook the server side of the protocol up in kwin_wayland’s hardware backends.

In the back of my head are a few new features, which so far had a lower priority — first the core feature set needed to be made stable. There are three things which I’d like to see us doing:

  • per-display scaling — This is an interesting one. I’d love to be able to specify a floating-point scaling factor. Wayland’s wl_output interface, which represents displays to application clients, only provides integer precision. I think that sucks, since there is a lot of hardware around where a scaling factor of 1 is too small and 2 is too high. That’s pretty much everything between 140 and 190 DPI according to my eyesight; your mileage may vary here. I’m wondering if I should go ahead and add the necessary APIs, at least on our end of the stack, to allow better-than-integer precision.
    Also, of course we want the scaling to be controlled per display (and not globally for all displays, as it is on X11), but that’s in fact already solved by just using wayland semantics — it needs to be fixed on the rendering side now.
  • pre-apply checks — at least the drm backend will allow us to ask it whether it will be able to apply a new configuration to the hardware. I’d love to hook that up to the UI, so we can do things like enable or disable the apply button, and warn the user about something the hardware is not going to like.
    The low-level bits have arrived with the new drm infrastructure in the kernel, so we can hook it up in the libraries and the user interface.
  • configuration profiles — it would make sense to allow the user to save configurations for different situations and pick between them. It would be quite easy to let the user switch between setups not just through the systemsettings UI, but also, for example, when connecting or disabling a screen. I can imagine that this could be presented very nicely, in tune with graphical effects that get their timing juuuuust right when switching between graphics setups. Let’s see how glitch-free we can make it.
on November 27, 2015 03:29 AM

November 26, 2015

Hello everybody,

the Community Council has been elected and the results can be viewed here:

Serving on the CC for the next two years will be:

  • Daniel Holbach
  • Laura Czajkowski
  • Svetlana Belkin
  • Michael Hall
  • Scarlett Clark
  • C de-Avillez
  • Marco Ceppi

Thanks to all the nominees and all the voters. Thanks a lot also to everyone who served on the CC the last two years.

Originally posted to the community-announce mailing list on Thu Nov 26 16:48:22 UTC 2015 by Daniel Holbach

on November 26, 2015 05:02 PM

Linux Australia has suffered a second leak of data from its servers, according to a message sent to its main mailing list by president Joshua Hesketh.
The umbrella organisation for Linux user groups in the country suffered a data breach in March this year.
Hesketh said a limited amount of personal information had been leaked as a result of the breach. He said it was not related to the earlier breach.
The complete details of how the March breach was effected have yet to be released.
Hesketh said the breach affected Linux Australia’s legacy wiki system which was being used by a small number — 0.5 per cent — of current and non-current members.
A community member had alerted the organisation to the leak. The website has now been taken offline.

Submitted by: Arnfried Walbrecht

on November 26, 2015 03:33 PM

Dojo-Labs announced a Linux-based “Dojo” home security gateway that notifies users of security threats via a mobile app and a glowing orb.

An Israeli startup called Dojo-Labs has launched $99 presales on its Dojo security device, with shipments due March 8. After the first year, a subscription costs an additional $99 per year. CEO Yossi Atias has confirmed to LinuxGizmos that the device runs on a Linux operating system based on a Broadcom distribution.
Like the $49 Cujo device, which successfully completed its Indiegogo funding on Nov. 13, the Dojo is a Linux-based unified threat management (UTM) security device that sits between your Internet source and Internet router. Other similarities include a soft, consumer friendly design and, weirdly enough, a four-letter name that ends in “jo.”

Submitted by: Arnfried Walbrecht

on November 26, 2015 03:30 PM

An expanded device mono icon set

Canonical Design Team

We will soon push an update of the Suru icon theme that includes more device icons in order to support the Ubuntu convergence story.

Because the existing icon set was focused on mobile, many of the new icons are very specific to the desktop, as we were missing icons such as hard disk, optical drive, printer or touchpad.

When designing new mono icons, we need to make sure that they are consistent with the graphic style of the existing set (thin outlines, rare solid shapes, etc).

A device, like a printer or a hard disk, can be quite complex if you want to make it look realistic, so it’s important to add a level of abstraction to the design. However, the icon still has to be recognisable within the right context.

At the moment, if you compare the Suru icon theme to the symbolic mono icons in Gnome, or to the Humanity devices icons, a few icons are missing, so you should expect to see this set expand at some point in the future — but the most common devices are covered.

In the meantime, here is the current full set:

Device icon set

on November 26, 2015 01:42 PM

S08E38 – Santa with Muscles - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Thirty-eight of Season Eight of the Ubuntu Podcast! With Mark Johnson, Laura Cowen, Martin Wimpress, and Alan Pope recording as normal over the internets which are suffering slightly from the storms outside…

In this week’s show:

  • We talk about Laura’s recent experience 3D printing Christmas tree-shaped Christmas tree decorations:

That’s all for this week, please send your comments and suggestions to:

on November 26, 2015 12:28 PM

November 25, 2015

Dropped AAAA record from DNS

Marcin Juszkiewicz

I host my blog on a small machine somewhere at OVH. As part of the package I got an IPv6 address for it. Five minutes ago I decided to no longer use it.

My home Internet provider (UPC) does not offer IPv6 addresses, so testing whether my blog (or the other pages/services I host) is reachable via IPv6 was always problematic. OK, I have a tunnel set up on one of the routers at home, but it is not fun when your browser (and other tools) decides to use IPv6 instead of IPv4 and slows down from 250/20 Mbps to tunnel speed.

So when I was told today that something is not reachable via IPv6, I decided to just drop the use of it on the server. I will fix the configs, but I do not want to hear that something else broke the next day.

on November 25, 2015 02:11 PM

I recently had a problem with a program behaving badly. As a developer familiar with open source, my normal strategy in this case would be to find the source and debug or patch it. Although I was familiar with the source code, I didn't have it on hand and would have faced significant inconvenience having it patched, recompiled and introduced to the runtime environment.

Conveniently, the program had not been stripped of symbol names, and it was running on Solaris. This made it possible for me to whip up a quick dtrace script to print a log message as each function was entered and exited, along with the return values. This gives a precise record of the runtime code path. Within a few minutes, I could see that just changing the return values of a couple of function calls would resolve the problem.

On the x86 platform, functions set their return value by putting the value in the EAX register. This is a trivial thing to express in assembly language and there are many web-based x86 assemblers that will allow you to enter the instructions in a web-form and get back hexadecimal code instantly. I used the bvi utility to cut and paste the hex code into a copy of the binary and verify the solution.
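As a small illustration of how little machine code such a patch can take (hypothetical bytes, not the actual patch from this incident), here is the six-byte x86 sequence that forces a function to return 0:

```python
# x86 machine code for "mov eax, 0; ret":
#   B8 00 00 00 00   mov eax, 0   ; the return value lives in EAX
#   C3               ret
FORCE_RETURN_ZERO = bytes([0xB8, 0x00, 0x00, 0x00, 0x00, 0xC3])

print(FORCE_RETURN_ZERO.hex())  # b800000000c3
```

Overwriting the start of a function with this sequence makes it return zero immediately; a different return value would go in the four little-endian immediate bytes following 0xB8.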

All I needed was a convenient way to apply these changes to all the related binary files, with a low risk of error. Furthermore, it needed to be clear for a third-party to inspect the way the code was being changed and verify that it was done correctly and that no other unintended changes were introduced at the same time.

Finding or writing a script to apply the changes seemed like the obvious solution. A quick search found many libraries and scripts for reading ELF binary files, but none offered a patching capability. Tools like objdump on Linux and elfedit on Solaris show the raw ELF data, such as virtual addresses, which must be converted manually into file offsets, which can be quite tedious if many binaries need to be patched.
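The virtual-address-to-file-offset conversion those tools make you do by hand is simple arithmetic once you know the containing section's load address and file offset. A minimal sketch in Python (a hypothetical helper, not elfpatch's actual code):

```python
def vaddr_to_file_offset(vaddr, sh_addr, sh_offset):
    """Map a symbol's virtual address to its offset within the ELF file,
    given the containing section's load address (sh_addr) and the
    section's own offset in the file (sh_offset)."""
    if vaddr < sh_addr:
        raise ValueError("address lies below the section start")
    return sh_offset + (vaddr - sh_addr)

# A symbol at virtual address 0x401234, inside a .text section loaded
# at 0x401000 that begins at file offset 0x1000:
offset = vaddr_to_file_offset(0x401234, 0x401000, 0x1000)
print(hex(offset))  # 0x1234
```

The section values come straight from the ELF section headers, which is exactly the data objdump and elfedit display but do not convert for you.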

My initial thought was to develop a concise C/C++ program using libelf to parse the ELF headers and then calculate locations for the patches. While searching for an example, I came across pyelftools, and it occurred to me that a Python solution might be quicker to write and more concise to review.

elfpatch (on github) was born. As input, it takes a text file with a list of symbols and hexadecimal representations of the patch for each symbol. It then reads one or more binary files and either checks for the presence of the symbols (read-only mode) or writes out the patches. It can optionally backup each binary before changing it.
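The write-out step with an optional backup can be sketched like this (a simplified illustration of the idea, not elfpatch's actual code, demonstrated on a throwaway file rather than a real ELF binary):

```python
import os
import shutil
import tempfile

def apply_patch(path, offset, patch, backup=True):
    """Overwrite len(patch) bytes at the given file offset,
    optionally saving a backup copy of the file first."""
    if backup:
        shutil.copy2(path, path + ".orig")
    with open(path, "r+b") as f:  # read/write binary, no truncation
        f.seek(offset)
        f.write(patch)

# Demo: patch 6 bytes at offset 4 of a 16-byte zero-filled file.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"\x00" * 16)
tmp.close()
apply_patch(tmp.name, 4, bytes([0xB8, 0x00, 0x00, 0x00, 0x00, 0xC3]))
with open(tmp.name, "rb") as f:
    data = f.read()
print(data[4:10].hex())  # b800000000c3
os.unlink(tmp.name)
os.unlink(tmp.name + ".orig")
```

Opening with "r+b" is the important detail: it patches bytes in place without truncating the rest of the binary.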

on November 25, 2015 10:30 AM

The St Denis siege last week and the Brussels lockdown this week provide all of us in Europe with an opportunity to reflect on why over ten thousand refugees per day have been coming here from the middle east, especially Syria.

At this moment, French warplanes and American drones are striking cities and villages in Syria, killing whole families in their effort to shortcut the justice system and execute a small number of very bad people without putting them on trial. Some observers estimate air strikes and drones kill twenty innocent people for every one bad guy. Women, children, the sick, elderly and even pets are most vulnerable. The leak of the collateral murder video simultaneously brought Wikileaks into the public eye and demonstrated how the crew of a US attack helicopter had butchered unarmed civilians and journalists like they were playing a video game.

Just imagine that the French president had sent the fighter jets to St Denis and Molenbeek instead of using law enforcement. After all, how are the terrorists there any better or worse than those in Syria, don't they deserve the same fate? Or what if Obama had offered to help out with a few drone strikes on suburban Brussels? After all, if the drones are such a credible solution for Syria's future, why won't they solve Brussels' (perceived) problems too?

If the aerial bombing "solution" had been attempted in a western country, it would have led to chaos. Half the population of Paris and Brussels would find themselves camping at the migrant camps in Calais, hoping to sneak into the UK in the back of a truck.

Over a hundred years ago, Russian leaders proposed a treaty agreeing never to drop bombs from balloons and the US and UK happily signed it. Sadly, the treaty wasn't updated after the invention of fighter jets, attack helicopters, rockets, inter-continental ballistic missiles, satellites and drones.

The reality is that asymmetric warfare hasn't worked and never will work in the middle east and as long as it is continued, experts warn that Europe may continue to face the consequences of refugees, terrorists and those who sympathize with their methods. By definition, these people can easily move from place to place and it is ordinary citizens and small businesses who will suffer a lot more under lockdowns and other security measures.

In our modern world, people often look to technology for shortcuts. The use of drones in the middle east is a shortcut from a country that spent enormous money on ground invasions of Iraq and Afghanistan and doesn't want to do it again. Unfortunately, technological shortcuts can't always replace the role played by real human beings, whether it is bringing law and order to the streets or in any other domain.

Aerial bombardment - by warplane or by drone - carries an implicitly racist message, that the people abused by these drone attacks are not equivalent to the rest of us, they can't benefit from the normal procedures of justice, they don't have rights, they are not innocent until proven guilty and they are expendable.

The French police deserve significant credit for the relatively low loss of life in the St Denis siege. If their methods and results were replicated in Syria and other middle eastern hotspots, would it be more likely to improve the situation in the long term than drone strikes?

on November 25, 2015 07:28 AM

There are a number of important organizations in the Open Source and Free Software world that do tremendously valuable work. This includes groups such as the Linux Foundation, Free Software Foundation, Electronic Frontier Foundation, Apache Software Foundation, and others.

One such group is the Software Freedom Conservancy. To get a sense of what they do, they explain it best:

Software Freedom Conservancy is a not-for-profit organization that helps promote, improve, develop, and defend Free, Libre, and Open Source Software (FLOSS) projects. Conservancy provides a non-profit home and infrastructure for FLOSS projects. This allows FLOSS developers to focus on what they do best — writing and improving FLOSS for the general public — while Conservancy takes care of the projects’ needs that do not relate directly to software development and documentation.

Conservancy performs some important work. Examples include bringing projects under their protection, providing input on and driving policy that relates to open software/standards, funding developers to do work, helping refine IP policies, protecting GPL compliance, and more.

This work comes at a cost. The team need to hire staff, cover travel/expenses, and more. I support their work by contributing, and I would like to encourage you to do so too. It isn’t a lot of money but it goes a long way.

They just kicked off a fundraiser at and I would like to recommend you all take a look. They provide an important public service, they operate in a financially responsible way, and their work is well intended and executed.

on November 25, 2015 03:09 AM

November 24, 2015

A clockwork carrot

Rohan Garg

This weekend I had the opportunity to travel to the yearly LiMux sprint to spend some time with my fellow kubuntu devs and talk about the potential issues we’re facing with the CI system and improving the Debian CI system to be more robust.

Some of the more important issues that were discussed included figuring out a way to improve file tracking in packages, so that the CI can detect file conflicts without having to actually install all the packages. Another important topic that was brought up was using PackageKit and AppStream with Muon. This is apparently being held back on account of Ubuntu Touch, but is slated to be resolved soon. Once the necessary packagekit packages are updated, we can play around with the idea of perhaps shipping Muon with the PackageKit backend in the next Kubuntu release.

As usual, the LiMux folks are a great bunch to hang out with, and I happened to notice something on the wall of their office while lunching with them. It was a clock. Not just a regular clock though, a timey wimey clock. I’ll let a picture do more of the talking here:


Timey Wimey clock


Told you. Timey Wimey.

I got quite the headache looking at the clock, but my fascination with it stuck. So once I was back home, I hacked up the regular Plasma 5 analog clock and made it timey-wimey too ;)

Timey Wimey clock

You can download and install the clock from here. Clocks and Carrots, a weekend well spent I say. As usual, you can find me and the other kubuntu devs in #kubuntu-devel on IRC or on in case you want to reach out to us about Kubuntu, Clocks or Carrots.

on November 24, 2015 10:32 PM
(Cough... This should probably be called the "I-almost-forgot-I-had-a-blog release"! :-)

Development has now moved to github:

So you can grab the release files here:

The move to github brings a few advantages, including Travis-CI integration:

Even more awesome is that, thanks to the wonder of webhooks, procenv is now building for lots of distros. Take a look at:

So you get to see the environment those builds run in and OBS is also providing procenv packages!

Here's the funky download page (just click your distro):

Caveat emptor: those packages are building off the tip, so are not necessarily releases - they are built from the latest commit! That said, since procenv runs a lot of tests at build-time, you should be reasonably safe.

If you'd rather opt for official releases, the new version should be in Debian and Ubuntu Xenial soon. It should also arrive in Clear Linux tomorrow.

As for what has changed since the last blog-post, just take a look at the NEWS file:

on November 24, 2015 07:29 PM

It's already time for a third release of Ubuntu Make this month! Thanks to the help of existing and new contributors, here is what's noticeable in this release.

JetBrains' excellent C/C++ IDE, CLion, is now available! A simple umake ide clion will put it at your disposal!


The non-linear game editor Twine (which our community team is also using for other QA purposes) also entered this release and is just a umake games twine away!


Our ZSH users will be pleased to know that the advanced shell completion that we have in bash is now available to them. We refreshed and fixed some translations, especially in Russian, Portuguese and French, for this release. A lot of opportunities in terms of translations are available! Do not hesitate to jump in. :)

A bunch of work on tests and the testing infrastructure (cutting the testing time approximately in half!) has been done. Speaking of tests, we spotted and fixed the upstream-renamed icon in Visual Studio Code thanks to one of them failing (nice to be at that level of quality granularity)! We also worked on ensuring that people using our PPA with previous Ubuntu releases only download the minimal requirements and not our testing dependencies (by shifting those to another PPA containing only them). Of course, the contributor guide has been updated to match all of this.

You will thus understand that we got a lot of other small fixes and enhancements into this new package. If you want to read the full and detailed list of what's in this release, have a read here!

As usual, you can get this latest version directly through its PPA for the 14.04 LTS, 15.04 and 15.10 Ubuntu releases. The Xenial version is available directly in the Xenial Ubuntu archive. This wouldn't be possible without our awesome contributor community; thanks to them again!

Our issue tracker is full of ideas and opportunities, and pull requests remain open for any issues or suggestions! If you want to be the next featured contributor and want to lend a hand, you can refer to this post with useful links!

on November 24, 2015 03:30 PM

After some releases bringing updates, bug fixes, refactoring, test improvements and more minor features and automation, it is time again for a noticeable feature release!

Thanks to Fabio Colella, we now have NetBeans support in Ubuntu Make! Installing it is just a umake ide netbeans away; just relax while Ubuntu Make does the hard work so that you can enjoy this IDE.


Another new feature is Rust support, by Jared Ravetch. umake rust will do all the necessary steps so that you get a good Rust development experience on your favorite Ubuntu distro!

Eldar Khayrullin (welcome to him for his first contribution!) updated the Unity 3D game engine support to point to the latest released beta version, and Sebastian Schuberth fixed an Android NDK environment variable to use a more widespread one.

Other noticeable changes, following the upstream WebStorm IDE, are updates to get their latest available icons (thanks to our test granularity, we were able to detect this small change!), fixes for the version option, a global -r working as the new global --remove, some fixes for zsh users, and a bunch of new translations thanks to our awesome translator community (new languages: fa, pt_BR; updated: de, en_AU, en_CA, en_GB, eu, hr, it, pl, ru, te, zh_CN, zh_HK). There is of course more refactoring and other test changes. The full gory details are available here.

As usual, all of those modifications and new features are backed by a number of small, medium and large tests! We are currently running about 850 tests in our Jenkins infrastructure (which runs all the tests). All commits and pull requests are tested for pep8 and with the small tests using Travis CI, and the health status is of course reported in the file.

As usual, you can get this latest version directly through its PPA for the 14.04 LTS, 15.04 and 15.10 Ubuntu releases. The Xenial version is available directly in the Xenial Ubuntu archive. Thanks again to our awesome contributor community! A lot more is still in the pipe, but that will be for the next release!

Our issue tracker is full of ideas and opportunities, and pull requests remain open for any issues or suggestions! If you want to be the next featured contributor and want to lend a hand, you can refer to this post with useful links!

on November 24, 2015 12:09 PM

Salut Salon

Rhonda D'Vine

I don't really remember where or how I stumbled upon these four women, so I'm sorry that I can't give credit where credit is due, and I do believe that I already started writing a blog entry about them somewhere. Anyway, I want to present to you today Salut Salon. They might play classical instruments, but not in a classical way. But see and hear for yourself:

  • Wettstreit zu viert: This is the first that I stumbled upon that did catch my attention. Lovely interpretation of classic tunes and sweet mixup.
  • Ievan Polkka: I love the catchy tune—and their interpretation of the song.
  • We'll Meet Again: While the history of the song might not be so laughable, their giggling is just contagious. :)

So like always, enjoy!

/music | permanent link | Comments: 0 | Flattr this

on November 24, 2015 08:26 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #443 for the week November 16-22, 2015, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Simon Quigley (tsimonq2)
  • Naeil Zoueidi (Na3iL)
  • Chris Guiver
  • Paul White
  • Aaron Honeycutt
  • Jim Connett
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on November 24, 2015 01:36 AM

November 23, 2015

Junkyard Jam Band

Matthew Helmke

As I opened Junkyard Jam Band, the first thing I thought of was a couple of books I read in the mid-1990s by a man named Craig Anderton. I still have his books covering electronic projects for musicians and do-it-yourself projects for guitarists on my shelf, but they are a bit outdated. The connection is a positive one. I have played guitar for more than 25 years, built my own effects, and even built my own full-on tube amplifier.

Junkyard Jam Band is a worthy heir to the maker-musician throne that Anderton’s books sat on for me. David Erik Nelson does a great job of mixing practical and easy projects with inspiring ideas. Here you will learn how to make some instruments in less than 5 minutes, provided you already have all of the tools and have collected the supplies you need. You will also discover some projects that will take longer, but which are useful as building blocks for larger musical ventures.

I was thrilled to discover a chapter dedicated to a project that I tried more than 15 years ago, just at random. At the time, I learned that piezo pickups and piezo speakers were very similar, so I bought a $1.99 piezo speaker from Radio Shack, cut it out of its plastic case, soldered the leads to a guitar plug jack, and mounted it inside a guitar. It worked! I wish I had potted the project at the time, as it was (still is) a bit noisy, but that procedure is covered in Junkyard Jam Band with the use of Plastidip. Cool idea!

No more spoilers. If you understood any of the contents of the last three paragraphs, take a look at this book. You may find it as fun and enjoyable as I have. It has also given me some ideas for projects that I must make time for soon.

Disclosure: I was given my copy of this book by the publisher as a review copy. See also: Are All Book Reviews Positive?

on November 23, 2015 09:48 PM

The Xubuntu team hears stories about how it is used in organizations all over the world. In this “Xubuntu at..” series of interviews, we seek to interview organizations who wish to share their stories. If your organization is using Xubuntu and you want to share what you’re doing with us please contact Elizabeth K. Joseph at to discuss details about your organization.

Several months ago we learned from Evelyn Lopez at FreeGeek Chicago that they’d been deploying Xubuntu on computers they sell through their efforts to recycle used computers and parts to provide functional computers, education, internet access and job skills training to those who want them. Evelyn took some time out of her schedule to talk to us about the work that FreeGeek Chicago does and some of the tools they use around Xubuntu and flavors being used.

Can you tell us a bit about your role at FreeGeek Chicago and work that FreeGeek Chicago does?

I’m currently the Communications Coordinator at FreeGeek Chicago. My job is to manage FreeGeek’s social networks, create content and occasionally serve as photographer for their events and volunteer days. Our organization, FreeGeek Chicago, has a mission to reduce e-waste and to properly recycle computer electronics. The general public and our volunteers donate their old electronics with the intention that they be recycled and/or re-purposed. Our volunteers seek out, test and build new computers out of the donated working parts, which in turn are sold to customers with a Linux system at a reduced price. Through volunteering, our volunteers learn current computer-building skills and open source software.


What influenced your decision to use Open Source Software at FreeGeek Chicago?

In order to use the FreeGeek name we must adhere to certain articles, one of them being using exclusively open source software. We as an organization firmly believe in the use of open source software as a main alternative to the high prices of proprietary software. Also, we understand the positive aspects of giving the user access to modify [the software], as this can lead to a better function and understanding of the software.

What made you select Xubuntu for your deployments?

At the time we chose Xubuntu for a variety of reasons. First of all, we believed that this platform best suited the needs of our organization. We also thought that it was the most compatible with the computers that were being donated to us. Lastly, we believed that it was an easier platform to teach our volunteers. At this time we also use two other distros (Ubuntu and Kubuntu).


Can you tell us a bit about your Xubuntu setup?

We try to keep it as simple as we can, since our computers will be going out to the sales floor or to donation. Currently, we install LibreOffice, Krita, Inkscape, VLC Player, Firefox, Chromium and GIMP, among others. Installs are done by our volunteers as part of their hands-on learning. They load the operating system from our network and use the command line to install the rest of the programs. After it is installed, our QA team certifies the installation and the computer goes to our sales floor.


Is there anything else you wish to share with us about FreeGeek Chicago?

FreeGeek Chicago is a community organization that refurbishes used computers and parts to provide functional computers to volunteers and the general public. We provide education in practical computing, hands-on job skills training, and an outlet for community service, and we recycle non-reusable materials in an ethical, safe, and environmentally responsible manner.

on November 23, 2015 07:20 PM

Vanilla: theme wrapping

Canonical Design Team

If you’ve been following our Vanilla framework series of blog posts you’ll be aware of why we needed this new framework and have a little insight into how we’ve set about using it. With this post we’ll be delving a little further into how we include and customise the framework for our Ubuntu family of websites.

Let’s get started with how Vanilla fits into a site’s structure with an introduction to the concept of Vanilla theming.

Robin touched on how to use the framework in his recent post ‘Vanilla: Creating a modular Sass library’. To recap, in case you didn’t get that far, here’s how you get started.

In this example, we’ll start by installing Vanilla directly into your project. We’ll add the latest version of vanilla-framework to your project’s package.json as follows:

or, if you don’t have a package.json file go ahead and run the following:
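The commands shown at this point in the original post have been lost; a sketch of both routes (assuming the package is published on npm as vanilla-framework, as the text implies):

```
# With an existing package.json: add and install the latest version
npm install --save vanilla-framework

# Without one: create a package.json first, then install
npm init
npm install --save vanilla-framework
```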

Once you have Vanilla installed, your project will have a new folder called ‘node_modules’. An example of your code structure could be:
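The original post illustrated this with a directory listing; a representative sketch (file and folder names are illustrative) might be:

```
my-project/
├── package.json
├── node_modules/
│   └── vanilla-framework/
│       └── scss/
│           ├── _global-settings.scss
│           └── ...
└── static/
    └── css/
        └── styles.scss
```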

Some of our larger sites are served using Django, and we like to keep the folder structure as close to a standard setup as possible; therefore, all our static files are kept in a static folder.

To include Vanilla and all its essence (see what I did there?) you need to add the following to your site’s main scss file:
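The snippet shown here in the original follows this shape (the import path depends on where node_modules sits relative to your scss file, and the entry-point file and mixin names are assumptions, so treat this as a sketch):

```scss
// styles.scss — pull in the whole framework
@import "../node_modules/vanilla-framework/scss/vanilla"; // path assumed
@include vanilla; // assumed entry-point mixin
```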

When you compile styles.scss it will include the framework in the compiled CSS. “But what if I want to change the default colours?” I hear you ask. Well, if you have a look in the vanilla-framework/scss/ folder you’ll see a file entitled _global-settings.scss. In this file you’ll discover the default theme settings, such as brand colour and the maximum width of your site; overrides of these settings need to be imported above the framework. However, if you are likely to use these styles on more than one site, you might want to consider building your own theme.

Building a theme

To take, as an example, a site using our ubuntu-vanilla-theme, the file structure looks like this:
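The listing from the original post has been lost; a sketch consistent with the description that follows (nested node_modules, with the theme's own scss folder) would be:

```
node_modules/
└── ubuntu-vanilla-theme/
    ├── node_modules/
    │   └── vanilla-framework/
    │       └── scss/
    └── scss/
        ├── _theme.scss
        ├── _global-settings.scss
        ├── build.scss
        └── modules/
```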

Both the theme and framework are in nested folders called node_modules, as Vanilla framework is a dependency of the theme. You’ll see in the ubuntu-vanilla-theme/scss folder there’s a file called _theme.scss, a theme-specific _global-settings.scss file, and a modules folder where any overrides of the vanilla-framework will live.

Here is an example of what _theme.scss could look like:
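The example file itself has been lost from this copy of the post; a hypothetical sketch matching the description below (paths, module names and the framework's entry-point mixin are all assumptions):

```scss
// _theme.scss — illustrative sketch, not the actual Ubuntu theme file.
// Theme settings go above the framework imports so the overrides take effect.
@import "global-settings";                                // theme-specific overrides
@import "node_modules/vanilla-framework/scss/vanilla";    // the framework itself
@import "modules/example-module";                         // a local module override

@mixin ubuntu-vanilla-theme {
  @include vanilla;          // everything from vanilla-framework (mixin name assumed)
  @include example-module;   // plus the theme's own modules
}
```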

There’s a fair bit going on here, but basically what we’re doing is pulling in our theme-specific _global-settings.scss file, with overrides and any new variables we may want to include, above the framework imports so that the overrides take effect. You can then add new modules by creating files in /modules/, then adding an @import and @include in your _theme.scss file.

You’ll also see that there is a build.scss file; this imports the theme file and includes the ubuntu-vanilla-theme mixin. We use Gulp as our build system to automate common tasks in the development of our websites. Coming up next in our Vanilla-flavoured posts, Karl will be showing you how to build a theme.
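For reference, the build.scss described above could be as small as this sketch (mixin name taken from the text):

```scss
// build.scss — compile entry point for the theme
@import "theme";
@include ubuntu-vanilla-theme;
```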

on November 23, 2015 10:12 AM

November 22, 2015

The prctl() system call provides a rather useful PR_SET_PDEATHSIG option to allow a signal to be sent to child processes when the parent unexpectedly dies. A quick and dirty mechanism is to trigger the SIGHUP or SIGKILL signal to kill the child immediately, or, perhaps more elegantly, to invoke a resource tidy-up before exiting.

In the trivial example below, we use the SIGUSR1 signal to inform the child that the parent has died. I know printf() should not be used in a signal handler, it just makes the example simpler.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
#include <sys/prctl.h>
#include <err.h>

void sigusr1_handler(int dummy)
{
	printf("Parent died, child now exiting\n");
	exit(0);
}

int main(void)
{
	pid_t pid;

	pid = fork();
	if (pid < 0)
		err(1, "fork failed");
	if (pid == 0) {
		/* Child */
		if (signal(SIGUSR1, sigusr1_handler) == SIG_ERR)
			err(1, "signal failed");
		if (prctl(PR_SET_PDEATHSIG, SIGUSR1) < 0)
			err(1, "prctl failed");
		for (;;)
			sleep(60);
	}
	if (pid > 0) {
		/* Parent */
		sleep(5);
		printf("Parent exiting...\n");
	}

	return 0;
}

When run, the child process sits in an infinite loop, performing 60 second sleeps.  The parent sleeps for 5 seconds and then exits.  The child is then sent a SIGUSR1 signal and the handler exits.  In practice the signal handler would be used to trigger a more sophisticated clean-up of resources if required.

Anyhow, this is a useful Linux feature that seems to be overlooked.
on November 22, 2015 11:44 PM

Muon 5.5 and Carrots

Harald Sitter


Jonathan Riddell, Leader of Flies, kept holding me until I write a blog post, so here is one.

After 2 days of obscenely unsubsidized drinking and vicious discussions about carrots, the KDE and Kubuntu developers here at the developer sprint in Munich decided to release the Debian package manager Muon in version 5.5.0.

A very prominent thing to take away from this sprint is “oops”. I am not sure that is good, but oh well.

Hearts and kisses!

on November 22, 2015 04:11 PM

Links to articles not necessarily written today, but still interesting.

The post Links of the Day – 2015-11-22 appeared first on Milo Casagrande.

on November 22, 2015 01:16 PM

Gruss vom Krampus!

Lubuntu Blog

Dedicated to all our friends in Finland, Germany, Austria, etc. Have a nice celebration on the 5th of December with the Krampuslauf fest. Christmas begins!
on November 22, 2015 11:55 AM

November 21, 2015

Openly Thankful

Benjamin Kerensa

So next week has a certain meaning for millions of Americans that we relate to a story of Indians and Pilgrims gathering to have a meal together. While that story may be distorted from the historical truth, I do think the symbolic holiday we celebrate is important.

That said, I want to name some individuals I am thankful for….



Lukas Blakk

I’m thankful for Lukas for being an excellent mentor to me during her last two years at Mozilla. Lukas helped me learn skills and gave me opportunities that many Mozillians never get. I’m very grateful for her mentoring, her teaching, and her passion to help others, especially those who have less opportunity.

Jeff Beatty

I’m especially thankful for Jeff. This year, out of the blue, he came to me and offered to have his university students support an open source project I launched, and this has helped us grow our l10n community. I’m also grateful for Jeff’s overall thoughtfulness and for being able to go to him over the last couple of years for advice and feedback.

Majken Connor

I’m thankful for Majken. She is always a very friendly person who is there to welcome people to the Mozilla Community, but I also appreciate how outspoken she is. She is willing to share opinions and beliefs she has that add value to conversations and help us think outside the box. No matter how busy she is, she has been a constant in the Mozilla Project, always there to lend advice or listen.

Emma Irwin

I’m thankful for Emma. She does something much different than teaching us how to lead or build community, she teaches us how to participate better and build better participation into open source projects. I appreciate her efforts in teaching future generations the open web and being such a great advocate for participation.

Stormy Peters

I’m thankful for Stormy. She has always been a great leader and it’s been great to work with her on evangelism and event stuff at Mozilla. But even more important than all the work she did at Mozilla, I appreciate all the work she does with various open source nonprofits and the committees and boards she serves on or advises, which you do not hear about because she does it for the impact.


Jonathan Riddell

I’m thankful for Jonathan. He has done a lot for Ubuntu, Kubuntu, KDE and the great open source ecosystem over the years. Jonathan has been a devout open source advocate always standing for what is right and unafraid to share his opinion even if it meant disappointment from others.

Elizabeth Krumbach Joseph

I’m thankful for Elizabeth. She has been a good friend, mentor and listener for years now and does so much more than she gets credit for. Elizabeth is welcoming in the multiple open source projects she is involved in and if you contribute to any of those projects you know who she is because of the work she does.


Paolo Rotolo

I’m thankful for our lead Android developer, who leads our Android development efforts and is a driving force in moving forward the vision behind Glucosio and helping people around the world. I enjoy near-daily, if not multiple-times-a-day, conversations with him about the technical bits and the big picture.

The Core Team + Contributors

I’m very thankful for everyone on the core team and all of our contributors at Glucosio. Without all of you, we would not be what we are today, which is a growing open source project doing amazing work to bring positive change to diabetes care.


Leslie Hawthorne

I’m thankful for Leslie. She is always very helpful for advice on all things open source and especially open source non-profits. I think she helps us all be better human beings. She really is a force of good and perhaps the best friend you can have in open source.

Jono Bacon

I’m thankful for Jono. While we often disagree on things, he always has very useful feedback and has an ocean of community management and leadership experience. I also appreciate Jono’s no bullshit approach to discussions. While it can be rough for some, the cut to the chase approach is sometimes a good thing.

Christie Koehler

I’m thankful for Christie. She has been a great listener over the years I have known her and has been very supportive of community at Mozilla and also inclusion & diversity efforts. Christie is a teacher but also an organizer and in addition to all the things I am thankful for that she did at Mozilla, I also appreciate her efforts locally with Stumptown Syndicate.

on November 21, 2015 01:58 AM

November 20, 2015

Ubuntu Community Appreciation Day

Elizabeth K. Joseph

Often times, Ubuntu Community Appreciation Day sneaks up on me and I don’t have an opportunity to do a full blog post. This time I was able to spend several days reflecting on who has had an impact on my experience this year, and while the list is longer than I can include here (thanks everyone), there are some key people who I do need to thank.

José Antonio Rey

If you’ve been involved with Ubuntu for any length of time, you know José. He’s done extraordinary work as a volunteer across various areas in Ubuntu, but this year I got to know him just a little bit better. He and his father picked me up from the airport in Lima, Peru when I visited his home country for UbuCon Latinoamérica back in August. In the midst of preparing for a conference, he also played tour guide my first day as we traveled the city to pick up shirts for the conference and then took time to have lunch at one of the best ceviche places in town. I felt incredibly welcome as he introduced me to staff and volunteers and checked on me throughout the conference to make sure I had what I needed. Excellent conference with incredible support, thank you José!

Naudy Urquiola

I met Naudy at UbuCon Latinoamérica, and I’m so glad I did. He made the trip from Venezuela to join us all, and I quickly learned how passionate and dedicated to Ubuntu he was. When he introduced himself he handed me a Venezuelan flag, which hung off my backpack for the rest of the conference. Throughout the event he took photos and has been sharing them since, along with other great Ubuntu tidbits that he’s excited about, a constant reminder of the great time we all had. Thanks for being such an inspirational volunteer, Naudy!

Naudy, me, Jose

Richard Gaskin

For the past several years Richard has led UbuCon at the Southern California Linux Expo, rounding up a great list of speakers for each event and making sure everything goes smoothly. This year I’m proud to say it’s turning into an even bigger event, as the UbuCon Summit. He’s also got a great Google+ feed. But for this post, I want to call out that he reminds me why we’re all here. It can become easy to get burnt out as a volunteer on open source, feel uninspired and tired. During my last one-on-one call with Richard, his enthusiasm around Ubuntu for enabling us to accomplish great things brought back my energy. Thanks to Ubuntu I’m able to work with Partimus and Computer Reach to bring computers to people at home and around the world. Passion for bringing technology to people who lack access is one of the reasons I wake up in the morning. Thanks to Richard for reminding me of this.

Laura Czajkowski, Michael Hall, David Planella and Jono Bacon

What happens when you lock 5 community managers in a convention center for three days to discuss hard problems in our community? We laugh, we cry, we come up with solid plans moving forward! I wrote about the outcome of our discussions from the Community Leadership Summit in July here, but beyond the raw data dump provided there, I was able to connect on a very personal level with each of them. Whether it was over a conference table or over a beer, we were able to be honest with each other to discuss hard problems and still come out friends. No blame, no accusations, just listening, talking and more listening. Thank you all, it’s an honor to work with you.

Laura, David, Michael and me (Jono took the picture!)

Paul White

For the past several years, Paul White has been my right hand man with the Ubuntu Weekly Newsletter. If you enjoy reading the newsletter, you should thank him as well. As I’ve traveled a lot this year and worked on my next book, he’s been keeping the newsletter going, from writing summaries to collecting links, with me just swinging in to review, make sure all the ducks are lined up and that the release goes out on time. It’s often thankless work with only a small team (obligatory reminder that we always need more help, see here and/or email to learn more). Thank you Paul for your work this year.

Matthew Miller

Matthew Miller is the Fedora Project Leader; we were introduced last week at LISA15 by Ben Cotton in an amusing Twitter exchange. He may seem like an odd choice for an Ubuntu appreciation blog post, but this is your annual reminder that as members of Linux distribution communities, we’re all in this together. In the 20 or so minutes we spoke during a break between sessions, we were able to dive right into discussing leadership and community, understanding each other’s jokes and pain points. I appreciate him today because his ability to listen and his insights have enriched my experience in Ubuntu by bringing in a valuable outside perspective and making me feel like we’re not in this alone. Thanks mattdm!

Matt holds my very X/Ubuntu laptop, I hold a Fedora sticker


If you’re reading this, you probably care about Ubuntu. Thank you for caring. I’d like to send you a holiday card!

on November 20, 2015 05:15 PM
For this one (my first thank-you) I know exactly who to thank: thank you, Planella! :)
For always being there, ready to listen, advise and help at any moment. A true example of the Ubuntu spirit :)
A hug, my friend.
on November 20, 2015 03:02 PM
  • rbasak listed areas that he thinks need looking at before the Xenial feature freeze on 18 Feb. hallyn pointed out that this should be in a blueprint, so rbasak agreed to take an action to create one. Some work item assignments were made for the blueprint.
  • No other discussion was required for the other standing agenda items.
  • Meeting actions assigned:
    • rbasak look at and
    • rbasak to create blueprint for Xenial feature work
    • rbasak to find kickinz1 a merge to do
  • The next meeting will be on Tue Nov 24 16:00:00 UTC 2015 in #ubuntu-meeting.

Full agenda and log

on November 20, 2015 02:31 PM

I have so many people to thank this time around that I’m just going to post them throughout the next few days. The first is Merlijn Sebrechts, one of the new breed of experts collaborating around Big Data.

Merlijn is bringing the Tengu Platform to users; it is a platform for big data experimentation, and you can find more about it here.

You can find Merlijn’s work here.

on November 20, 2015 02:20 PM
This is a burst of notes that I wrote in an e-mail in June when asked about it, and I won't have any better steps now since I don't remember even that much any more. I figured it's better to have it out than not.

So... if you want to use LUKS In-Place Conversion Tool, the notes below on converting a shipped-with-Ubuntu Dell XPS 13 Developer Edition (2015 Intel Broadwell model) may help you. There were a couple of small learnings to be had...
The page itself is good and without errors, although it funnily uses reiserfs as an example. It was only a bit unclear why I saved the initial_keyfile.bin since it was then removed in the next step (I guess it's for the case where you want to have a recovery file hidden somewhere in case you forget the passphrase).

For using the tool I booted from a 14.04.2 LTS USB live image and operated there, including downloading and compiling luksipc in the live session. The exact reason of resizing before luksipc was a bit unclear to me at first so I simply indeed resized the main rootfs partition and left unallocated space in the partition table.

Then finally I ran ./luksipc -d /dev/sda4 etc.

I realized I want /boot to be on an unencrypted partition to be able to load the kernel + initrd from grub before entering into LUKS unlocking. I couldn't resize the luks partition anymore since it was encrypted... So I resized what I think was the empty small DIAGS partition (maybe used for some system diagnostic or something, I don't know), or possibly the next one that is the actual recovery partition one can reinstall the pre-installed Ubuntu from. And naturally I had some problems because it seems vfatresize tool didn't do what I wanted it to do and gparted simply crashed when I tried to use it first to do the same. Anyway, when done with getting some extra free space somewhere, I used the remaining 350MB for /boot where I copied the rootfs's /boot contents to.

After adding the passphrase in luks I had everything encrypted etc and decryptable, but obviously I could only access it from a live session by manual cryptsetup luksOpen + mount /dev/mapper/myroot commands. I needed to configure GRUB, and I needed to do it with the grub-efi-amd64 which was a bit unfamiliar to me. There's also grub-efi-amd64-signed I have installed now but I'm not sure if it was required for the configuration. Secure boot is not enabled by default in BIOS so maybe it isn't needed.

I did GRUB installation – I think inside rootfs chroot where I also mounted /dev/sda6 as /boot (inside the rootfs chroot), ie mounted dev, sys with -o bind to under the chroot (from outside chroot) and mount -t proc proc proc too. I did a lot of trial and effort so I surely also tried from outside the chroot, in the live session, using some parameters to point to the mounted rootfs's directories...

I needed to definitely install cryptsetup etc inside the encrypted rootfs with apt, and I remember debugging for some time if they went to the initrd correctly after I executed mkinitramfs/update-initramfs inside the chroot.

At the end I had grub asking for the password correctly at bootup. Obviously I had edited the rootfs's /etc/fstab to include the new /boot partition, changed the / entry to mount /dev/mapper/myroot as ext4 with errors=remount-ro, kept /boot/efi as coming from /dev/sda1 and so on. I had also added "myroot /dev/sda4 none luks" to /etc/crypttab. I seem to also have GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda4:myroot root=/dev/mapper/myroot" in /etc/default/grub.
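Pulling those pieces together, the configuration fragments looked roughly like this (device names are specific to this particular machine; the fstab pass numbers and /boot mount options are my assumption, so adapt to your setup):

```
# /etc/crypttab
myroot  /dev/sda4  none  luks

# /etc/fstab (root and boot entries)
/dev/mapper/myroot  /      ext4  errors=remount-ro  0  1
/dev/sda6           /boot  ext4  defaults           0  2

# /etc/default/grub
GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda4:myroot root=/dev/mapper/myroot"
```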

The only thing I did save from the live session was the original partition table if I want to revert.

So the original was:

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 500118192 sectors, 238.5 GiB
Logical sector size: 512 bytes
First usable sector is 34, last usable sector is 500118158
Partitions will be aligned on 2048-sector boundaries
Total free space is 6765 sectors (3.3 MiB)
Number  Start (sector)    End (sector)  Size       Code  Name
1            2048         1026047   500.0 MiB   EF00  EFI system partition
2         1026048         1107967   40.0 MiB    FFFF  Basic data partition
3         1107968         7399423   3.0 GiB     0700  Basic data partition
4         7399424       467013631   219.2 GiB   8300
5       467017728       500117503   15.8 GiB    8200

And I now have:

Number  Start (sector)    End (sector)  Size       Code  Name

1            2048         1026047   500.0 MiB   EF00  EFI system partition
2         1026048         1107967   40.0 MiB    FFFF  Basic data partition
3         1832960         7399423   2.7 GiB     0700  Basic data partition
4         7399424       467013631   219.2 GiB   8300
5       467017728       500117503   15.8 GiB    8200
6         1107968         1832959   354.0 MiB   8300

So it seems I did not edit DIAGS (and it was also originally just 40MB) but did something with the recovery partition while preserving its contents. It's a FAT partition so maybe I was able to somehow resize it after all.

The 16GB partition is the default swap partition. I did not encrypt it at least yet, I tend to not run into swap anyway ever in my normal use with the 8GB RAM.

If you go this route, good luck! :D
on November 20, 2015 02:14 PM

At first I requested a review copy of The Smart Girl’s Guide to Privacy: Practical Tips for Staying Safe Online because I wanted to see whether it would be helpful for my middle school aged daughters. As I got into the book, I found tips and tricks that even an internet-savvy long-timer like me could use.

The book covers topics like controlling what you share and where, keeping your personally identifying information safe, how to mitigate data loss and cracked passwords, dealing with harassment as a female, and more. The topics range from middle school appropriate up to things that I wish more adults knew and thought about. I found the text easy to read and the tips intelligent and clear. One caveat for the more squeamish adults considering this book for their kids: read it first and make sure you are comfortable with the author’s discussion of things like dating and sexytime activities. I think the discussions are right in line with and perfect for today’s societal norms, but they may be uncomfortable for the more conservative among us.

The book is a quick read, filled with useful links and information alongside the advice. I am passing it on to my girls and will probably recommend that my son read it as well, when he is a little older.

Disclosure: I was given my copy of this book by the publisher as a review copy. See also: Are All Book Reviews Positive?

on November 20, 2015 02:11 PM

The Ubuntu Community Appreciation Day is a really nice tradition, and it’s always good to think of somebody I could thank (thanks Ahmed Shams for setting it up in the first place!). Narrowing down my list of thank-yous to just one or two for a blog post is much harder for me. :-)

First I’d like to thank Elizabeth Krumbach. Liz has been all over the place in the Ubuntu world for ages and has helped out in many many forms. She does all this on top of a demanding full-time job, speaker engagements, involvement in other communities and much more. I really liked working with her on the Community Council where she stayed calm even when the CC was under pressure. She stayed focused and her main goal was always to get the best out of the situation for everyone. Liz remained committed to helping people, no matter how busy she was and how trivial their request was – she sets a true example. Thanks a lot Liz!

I’d also like to thank Sergio Schvezov. I’ve worked together with him on phone bits and on snappy and snapcraft things as well, and I’m always amazed by how many balls he keeps in the air and how thoughtful he is, while staying pragmatic and cheerful. With him working on snapcraft, I have no doubt that the next generation of software maintainers in Snappy land will have a great time. Thanks a lot Sergio!

There are many more to thank, you all, the Ubuntu Community, make it very easy to still be part of this fantastic group of individuals and look forward to more! Big hugs everyone! :-)

on November 20, 2015 12:43 PM

It’s Nov, 20th again, and as every year it’s time for the Ubuntu Community Appreciation Day :-)


The spirit of UCA

And, as the wiki page of the event says, Ubuntu is not just an Operating System; it is a whole community in which everybody collaborates with everybody to bring to life a wonderful human experience. When you download the ISO, burn it, install it and start to enjoy it, you know that a lot of people made magnificent efforts to deliver the best Ubuntu OS possible.

For all the effort exerted, for every minute of the year, someone gave of their time, talent or treasure. That is why there’s the Ubuntu Community Appreciation Day, when everybody (whether user, developer, or non-developer contributor: anyone who gives a hand making Ubuntu what it is today) takes a moment to thank someone for their contribution. Every contribution counts! Take time to say, “Thank you!”

The words “Thank you” inspire and give a huge amount of motivation, encouraging people to become even better contributors, thus making Ubuntu and the community even better.

Choose someone to thank

While a global thanks to all contributors is a must, I prefer to thank someone in particular, to show them all my support.

Now, choosing someone is very difficult, because I met a lot of amazing people in the Ubuntu Community, and a lot more contribute to Ubuntu in some way.

So it’s a very hard task to choose someone.

Last year I said Thank you to mzanetti. I’m sure you know who he is, and he totally deserves our gratitude.

Like him, a lot of other Canonical employees deserve it too. I was lucky enough to contribute side by side with popey and oSoMoN and dpm and dholbach and a lot of other guys, from a lot of different teams. To all of you, and you know who you are, THANKS.

And then there are a lot of guys from the community, and I should thank each one of them.

But I want, for a day, turn the spotlight to someone who is not so well known in the community.

I chose that guy because I think he’s the perfect example of the perfect contributor: someone who works hard, day by day, without seeking fame and glory.

And like him there are a lot of contributors: people who work hard to make Ubuntu better every day. You never hear of them, but they are essential to create this dream called Ubuntu.

To all of you unknown hard-working contributors, thanks.

And my biggest thanks goes to Bartosz!

Who’s Bartosz

I (virtually) met Bartosz working on the calculator app for Ubuntu for Phones. Together we crafted the calculator reboot. But since the end of the summer I haven’t had much time to contribute to Ubuntu, and he’s keeping the calculator updated and bug-free.

He’s a pleasure to work with: he’s talented and very patient. Sometimes he waits weeks before I review his code (shame on me!), but he never complains.

I know he also contributes to the clock app and the weather app, and reports bugs about the experience on the phone.

So THANKS Bartosz, and keep up the awesome work :-)

And now it’s your turn: choose someone in the Ubuntu Community, write a mail, a tweet, a G+ post, and say thanks. Volunteers do it for free, and your gratitude is what’s needed to make them happy!


on November 20, 2015 08:00 AM

Community Appreciation Day

Charles Profitt

Today is Ubuntu Community Appreciation Day, but this year I am going to expand my appreciation beyond the boundaries of the Ubuntu Community to include anyone in open source that has impacted my journey in open source.


Mark Shuttleworth
For his assistance in helping me stay calm and focused over the last two years despite the cacophony that arose from a multitude of issues. Mark provided me, and others, with friendly advice at several times when the pressure was peaking. Mark does an excellent job of balancing the needs of Canonical and the Ubuntu Community. Every time I speak with Mark I gain a new perspective on the issue we are discussing.


Elizabeth Krumbach Joseph
Lyz continues to be an inspiration for me with regards to what a dedicated person can achieve in the world of open source. Despite her success I constantly see her reaching out to help others as well. She is devoted to open source and the Ubuntu Community.


Landon Jurgens
Landon and I met early along my adventure into open source as we endeavoured to build an Ubuntu group in Syracuse, NY. Landon is another person who has proven that it is possible to succeed in the open source world and serves as an inspiration for me. Landon currently works for Rackspace.


Remy DeCausemaker
Remy and I have known each other for a long time and both of us are very active in the Rochester, NY open source community. Remy helped grow the FOSS movement at RIT and continues to be active to this day. The man is a legend in his own time. Remy is currently employed by Red Hat and serves as the Community Action and Impact Lead for the Fedora Project. Remy plays an integral role in helping universities include open source in their academic programs. Remy is also the co-founder of a not-for-profit organization that focuses on access, openness and transparency of public information.

Joe Anderson
Joe and I have known each other since 2008, when I first got involved with the New York State Ubuntu LoCo. Joe has a fantastically intelligent, witty and dry sense of humor that comes out when discussing open source topics that normally devolve into holy wars (Emacs vs Vim). Joe constantly inspires me to think more deeply about the open source movement.

on November 20, 2015 05:00 AM

November 19, 2015

S08E37 – Code Name: K.O.Z. - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Thirty-seven of Season Eight of the Ubuntu Podcast! With Mark Johnson, Laura Cowen, Martin Wimpress, and Alan Pope recording as normal over the internets which are suffering slightly from the storms outside…

In this week’s show:

We look at what’s been going on in the news:

We also take a look at what’s been going on in the community:

There are even events:

That’s all for this week, please send your comments and suggestions to:
Join us on IRC in #ubuntu-podcast on Freenode
Follow us on Twitter
Follow us on Facebook
Follow us on Google+
Discuss us on Reddit

on November 19, 2015 12:14 PM
The Intel Platform Shared Resource Monitoring features were introduced in the Intel Xeon E5v3 processor family. These new features provide a mechanism to measure platform shared resources, such as L3 cache occupancy via Cache Monitoring Technology (CMT) and memory bandwidth utilisation via Memory Bandwidth Monitoring (MBM).

Intel have written a Platform Quality of Service Tool (pqos) to use these monitoring features and I've packaged this up for Ubuntu 16.04 Xenial Xerus.

To install, use:

sudo apt-get install intel-cmt-cat

The tool requires access to the Intel MSRs, so one also has to load the msr module if it is not already loaded:

sudo modprobe msr

To see the Last Level Cache (llc) utilisation on a system, listing the most used first, use:

sudo pqos -T

pqos running on a 48 thread Xeon based server

The -p option allows one to specify specific monitoring events for specific process IDs. Event types can be Last Level Cache (llc), Local Memory Bandwidth (mbl) and Remote Memory Bandwidth (mbr).  For example, on a Xeon E5-2680 I have just Last Level Cache monitoring capability, so let's view the llc for stress-ng while running some VM stressor tests:

sudo pqos -T -p llc:$(pidof stress-ng | tr ' ' ',')

pqos showing equally shared cache between two stressor processes

Cache and Memory Bandwidth monitoring is especially useful to examine the impact of memory/cache hogging processes (such as VM instances).  pqos allows one to identify these processes simply and effectively.

Future Intel Xeon processors will provide capabilities to configure cache resources to specific classes of service using Intel Cache Allocation Technology (CAT).  The pqos tool allows one to modify the CAT settings; however, not having access to a CPU with these capabilities, I was unable to experiment with this feature.  I refer you to the pqos manual for more details on this useful feature.  The beauty of CAT is that it allows one to tweak and fine tune the cache allocation for specific demanding use cases.  Given that the cache is a shared resource that can be impacted by badly behaving processes, the ability to tune the cache behaviour is potentially a big performance win.

For more details of these features, see the Intel 64 and IA-32 Architectures Software Developer's Manual, section 17.15 "Platform Shared Resource Monitoring: Cache Monitoring Technology" and 17.16 "Platform Shared Resource Control: Cache Allocation Technology".
on November 19, 2015 11:17 AM

No UI is some UI

Stuart Langridge

Tony Aubé writes interestingly about how No UI is the New UI:

Out of all the possible forms of input, digital text is the most direct one. Text is constant, it doesn’t carry all the ambiguous information that other forms of communication do, such as voice or gestures. Furthermore, messaging makes for a better user experience than traditional apps because it feels natural and familiar. When messaging becomes the UI, you don’t need to deal with a constant stream of new interfaces all filled with different menus, buttons and labels. This explains the current rise in popularity of invisible and conversational apps, but the reason you should care about them goes beyond that.

He’s talking here about “invisible apps”: Magic and Operator and to some extent Google Now and Siri; apps that aren’t on a screen. Voice or messaging or text control. And he’s wholly right. Point and click has benefits — it’s a lot easier to find a thing you want to do, if you don’t know what it’s called — but it throws away all the nuance and skill of language and reduces us to cavemen jabbing a finger at a fire and grunting. We’ve spent thousands of years refining words as a way to do things; they are good at communicating intent1. On balance, they’re better than pictures, although obviously some sort of harmony of the two is better still. Ikea do a reasonable job of providing build instructions for Billy bookcases without using any words at all, but I don’t think I’d like to see their drawings of what “honour” is, or how to run a conference.

The problem is that, until very recently, and honestly pretty much still, a computer can’t understand the nuance of language. So “use language to control computers” meant “learn the computer’s language”, not “the computer learns yours”. Echo, Cortana, Siri, Google Now, Mycroft are all steps in a direction of improving that; Soli is a step in a different direction, but still a valuable one. But we’re still at the stage of “understand the computer’s language”, although the computer’s language has got better. I can happily ask Google Now “what’s this song?”, or “what am I listening to?”, but if I ask it “who sang this?” then my result is a search rather than music identification. Interactive fiction went from “OPEN DOOR” to being able to understand a bewildering variety of more complex statements, but you still have to speak in “textadventurese”: “push over the large jewelled idol” is fine, but “gently push it over” generally isn’t. And tellingly IF still tends to avoid conversations, replacing them with conversation menus or “tell Eric about (topic)”.

“User interface” doesn’t just mean “pixels on a screen”, though. “In a world where computer can see, listen, talk, understand and reply to you, what is the purpose of a user interface?”, asks Aubé. The computer seeing you, listening to you, talking to you, understanding you, and replying to you is the user interface.

In that list, currently:

  • seeing you is hard and not very reliable (obvious example: Kinect)
  • listening to you is either easy (if “listening” and “hearing” are the same thing) or very difficult (if “listening” implies active interest rather than just passively recording everything said around it)
  • talking to you is easy, although (as with humans) working out what to say is not, and it’s still entirely obvious that a voice is a computer
  • understanding you is laughably incomplete and is obviously the core of the problem, although explaining one’s ideas and being understood by people is also the core problem of civilisation and we haven’t cracked that one yet either
  • replying to you requires listening to you, talking to you, and understanding you.

Replying by having listened, talked, and understood works fine if you’re asking “what’s this song?” But “Should I eat this chocolate bar?” is a harder question to answer. The main reason it’s hard is because of an important thing that isn’t even on that list: knowing you. Which is not the same thing as “knowing a huge and rather invasive list of things about your preferences”, and is also not something a computer is good at. In fact, if a computer were to actually know you then it wouldn’t collect the huge list of trivia about your preferences because it would know that you find it a little bit disquieting. If a friend of mine asks “should I eat this chocolate bar?”, what do I consider in my answer? Do I like that particular one myself? Do I know if they like it? Do lots of other people like it? Are they diabetic? Are they on a diet? Do they generally eat too much chocolate? Did they ask the question excitedly or resignedly? Have they had a bad day and need a pick-me-up? Do I care?

That list of questions I might ask myself before replying starts off with things computers are good at knowing — did the experts rate Fry’s Turkish Delight on MSN? And ends up with things we’re still a million, million miles away from being able to analyse. Does the computer care? What does it even mean to ask that question? But we can do the first half, so we do do it… and that leads inevitably to the disquieting database collection, the replacement of understanding with a weighted search over all knowledge. Like making a chess champion by just being able to analyse all possible games. Fun technical problem, certainly. Advancement in our understanding of chess? Not so much.

“When I was fifteen years old, I missed a period. I was terrified. Our family dog started treating me differently - supposedly, they can smell a pregnant woman. My mother was clueless. My boyfriend was worse than clueless. Anyway, my grandmother came to visit. And then she figured out the whole situation in, maybe, ten minutes, just by watching my face across the dinner table. I didn’t say more than ten words — ‘Pass the tortillas.’ I don’t know how my face conveyed that information, or what kind of internal wiring in my grandmother’s mind enabled her to accomplish this incredible feat. To condense fact from the vapor of nuance.”

That’s understanding, and thank you Neal Stephenson’s Snow Crash for the definition. Hell, we can’t do that, most of us, most of the time. Until we can… are apps controlled with words doomed to failure? I don’t know. I will say that point-and-grunt is not a very sophisticated way of communicating, but it may be all that technology can currently understand. Let’s hope Mycroft and Siri and Echo and Magic and Operator and Cortana and Google Now are the next step. Aubé’s right when he says this: “It will push us to leave our comfort zone and look at the bigger picture, bringing our focus on the design of the experience rather than the actual screen. And that is an exciting future for designers.” Exciting future for people generally, I think.

  1. and I leave completely aside here that French is not English is not Kiswahili, although this is indeed a problem for communication too
on November 19, 2015 09:45 AM

In the last couple of weeks, we had to completely rework the packaging for the SDK tools and jump through hoops to bring the same experience to everyone, regardless of whether they are on the LTS or the development version of Ubuntu. It was not easy, but we are finally ready to put this beauty into developers’ hands.

The two new packages are called “ubuntu-sdk-ide” and “ubuntu-sdk-dev” (applause now please).

The official way to get the Ubuntu SDK installed is from now on by using the Ubuntu SDK Team release PPA:

Releasing from the archive with this new way of packaging is sadly not possible yet: in Debian and Ubuntu, Qt libraries are installed into a standard location that does not allow multiple minor versions to be installed next to each other. But since both the new QtCreator and the Ubuntu UI Toolkit require a more recent version of Qt than the one the last LTS has to offer, we had to improvise and ship our own Qt version. Unfortunately, that also blocks us from using the archive as a release path.

If you have the old SDK installed, the default QtCreator from the archive will be replaced with a more recent version. However, apt refuses to automatically remove the packages from the archive, so that is something that needs to be done manually, ideally before the upgrade:

sudo apt-get remove qtcreator qtcreator-plugin*
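If you are unsure what that glob will match, a harmless check first is to list only the qtcreator packages that are actually installed:

```shell
# Print just the installed (state "ii") qtcreator packages;
# dpkg -l also lists removed-but-not-purged ("rc") entries, which
# the awk filter skips.
dpkg -l 'qtcreator*' | awk '/^ii/ {print $2}'
```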

The next step is to add the PPA and get the package installed.

sudo add-apt-repository ppa:ubuntu-sdk-team/ppa \
    && sudo apt update \
    && sudo apt dist-upgrade \
    && sudo apt install ubuntu-sdk

That was easy, wasn’t it :).

Starting the SDK IDE is just as before: either by running qtcreator or ubuntu-sdk directly, or by launching it from the Dash. We tried not to break old habits and simply reused the old commands.

However, there is something completely new: an automatically registered kit called the “Ubuntu SDK Desktop Kit”. It consists of the most recent UITK and Qt used on the phone images, which means it offers a way to develop and run apps easily even on an LTS Ubuntu release. Awesome, isn’t it Stuart?

The old qtcreator-plugin-ubuntu package is going to be deprecated and will most likely be removed in one of the next Ubuntu versions. Please make sure to migrate to the new release path to always get the most recent versions.

on November 19, 2015 09:43 AM

Sometimes it is just quicker to type a few commands on the CLI than to open your browser window, go to the charm store, and type out a search term to see what is available to you. So I wrote a tiny plugin to speed that up a bit.

Install the plugin

Currently supports Trusty, Vivid, and Wily

$ sudo apt-add-repository ppa:adam-stokes/juju-query
$ sudo apt-get update
$ sudo apt-get install juju-query

Searching the charmstore

If you know the exact name of the charm:

$ juju search ghost

Results in

 juju deploy cs:precise/ghost-3

Get additional information:

 juju info cs:precise/ghost-3

Or part of a charm name

$ juju search nova-cloud\*

Gives us

 juju deploy cs:~landscape/trusty/nova-cloud-controller-6

Get additional information:

 juju info cs:~landscape/trusty/nova-cloud-controller-6

Getting more information

This will give you the output of the charm's README:

$ juju info ghost|less

And the README is output right to your screen \o/


Ghost is a simple, powerful publishing platform.


# Overview

Ghost is an Open Source application which allows you to write and publish your  
own blog, giving you the tools to make it easy and even fun to do. It's simple,  
elegant, and designed so that you can spend less time making your blog work and  
more time blogging.

This is an updated charm originally written by Jeff Pihach and ported over to  
the charms.reactive framework and updated for Trusty and the latest Ghost release.

# Quick Start

After you have a Juju cloud environment running:

    $ juju deploy ghost
    $ juju expose ghost

To access your newly installed blog you'll need to get the IP of the instance.

    $ juju status ghost

Visit `<your URL>:2368/ghost/` to create your username and password.  
Continue setting up Ghost by following the  
[usage documentation](

You will want to change the URL that Ghost uses to generate links internally to  
the URL you will be using for your blog.

    $ juju set ghost url=<your url>

# Configuration

To view the configuration options for this charm open the `config.yaml` file or:

    $ juju get ghost

This plugin utilizes the theblues Python library for interfacing with the charm store's API. Check out the project on their GitHub page.
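As an aside, since the plugin is a thin wrapper around that library, one can also assemble a query against the charm store's search endpoint by hand. Treat this as a sketch: the base URL below is an assumption based on the v4 API that theblues targeted around this time, not something taken from the plugin's source.

```shell
# Build a charm store search URL by hand (base URL is an assumption)
base='https://api.jujucharms.com/v4'
term='ghost'
url="${base}/search?text=${term}&limit=5"
echo "$url"
# ...and fetch it, e.g.:
# curl -s "$url"
```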

If you want to contribute to the plugin, you can find it on my GitHub page. Some other features I'd like to add are getting the configuration options, searching bundles, showing which relations are provided/required, etc.

on November 19, 2015 04:58 AM

November 18, 2015

Google Code In 2015

Nicholas Skaggs

As you may have heard, ubuntu has been selected as a mentoring organization for Google Code In (GCI). GCI is an opportunity for high school students to learn about and participate in open source communities. As a mentoring organization, we'll create tasks and review the students' work. Google recruits the students and provides rewards for those who do the best work. The 2015 contest runs from December 7, 2015 to January 25, 2016.

Are you excited?
On December 7th, we'll be gaining a whole slew of potential contributors. Interested students will select from the tasks we as a community have put forth and start working on them. That means we need your help both to create those tasks and to mentor incoming students.

I know, I know, it sounds like work. And it is a bit of work, but not as much as you think. Mentors need to provide a task description and be available for questions if needed. Once the task is complete, check the work and mark the task complete. You can be a mentor for as little as a single task. The full details and FAQ can be found on the wiki. Volunteering to be a mentor means you get to create tasks to be worked, and you agree to review them as well. You aren't expected to teach someone how to code, write documentation, translate, do QA, etc, in a few weeks. Breathe easy.

You can help!
I know there are plenty of potential tasks lying in wait for someone to come along and help out. This is a great opportunity for us as a community both to gain potential contributors and to get work done. I trust you will consider being a part of the process.

I'm still not sure
Please, do have a look at the FAQ, as well as the mentor guide. If that's not enough to convince you of the merits of the program, I'd invite you to read one student's feedback about his experience participating last year. Being a mentor is a great way to give back to ubuntu, get involved and potentially gain new members.

I'm in, what should I do?
Contact me, popey, or José, who can add you as a mentor for the organization. This will allow you to add tasks and participate in the process. Here's to a great GCI!
on November 18, 2015 10:07 PM

Unbricking APM Mustang

Marcin Juszkiewicz

Firmware updates usually end well. The previous firmware (1.15.19) failed to boot on some of the Mustangs at Red Hat but worked fine on the one under my desk. Yesterday I got 1.15.22 plus a slimpro update and managed to get my machine into a non-bootable state (the firmware works fine on other machines).

So how to get APM Mustang back into working state?

  • Get an SD card and connect it to a Linux PC with a card reader.
  • Download the Mustang software from MyAPM (1.15.19 was the latest available there).
  • Unpack “mustang_sq_1.15.19.tar.xz” and then “mustang_binaries_1.15.19.tar.xz” tarballs.
  • Write the boot loader firmware to the SD card: “dd if=tianocore_media.img of=/dev/SDCARD“.
  • Take a FAT-formatted USB drive and copy these files from the “mustang_binaries_1.15.19.tar.xz” archive onto it (all into the root directory):
    • potenza/apm_upgrade_tianocore.cmd
    • potenza/tianocore_media.img
    • potenza/UpgradeFirmware.efi
  • Power off your Mustang
  • Configure the Mustang to boot from the SD card by changing these jumpers:
    • Find HDR9 (close to HDR8, which is next to PCIe port)
    • Locate pins 11-12 and 17-18.
    • Connect 11-12 and 17-18 with jumpers
  • Insert SD card to Mustang SD port
  • Connect serial cable to Mustang and your PC.
  • Run minicom/picocom/screen/other-preferred-serial-terminal and connect to Mustang serial port
  • Power up Mustang and you should boot with SD UEFI firmware:
X-Gene Mustang Board
Boot firmware (version 1.1.0 built at 12:25:21 on Jun 22 2015)
TianoCore 1.1.0 UEFI 2.4.0 Jun 22 2015 12:24:25
CPU: APM ARM 64-bit Potenza Rev A3 1400MHz PCP 1400MHz
     SOC 2000MHz IOBAXI 400MHz AXI 250MHz AHB 200MHz GFC 125MHz
Board: X-Gene Mustang Board
The default boot selection will start in   2 second 
  • Press any key to get into UEFI menu.
  • Select the “Shell” option and you will be greeted with a list of recognized block devices and filesystems. Check which one is the USB drive (“FS6” in my case).
Shell> fs6:
FS6:\> ls  
Directory of: FS6:\
08/04/2015  00:28              39,328  UpgradeFirmware.efi
08/27/2015  19:20                  56  apm_upgrade_tianocore.cmd
08/27/2015  19:20           2,098,176  tianocore_media.img
  • Flash firmware using “UpgradeFirmware.efi apm_upgrade_tianocore.cmd” command.
  • Power off
  • Change the jumpers back to normal (open 11-12 and 17-18).
  • Eject SD card from Mustang
  • Power on

And your Mustang should be working again. You can of course also try writing other firmware versions, or grab the files from the internal hard drive.

on November 18, 2015 06:03 PM