February 06, 2016

Exactly one year ago today, I got an Ubuntu Phone :)) That first pre-release for insiders, where I met some wonderful colleagues, marked the world debut of the first device running Ubuntu Touch.

The launch in London, one year ago

It is impossible not to look back, digging through the news archives, trying to remember and compare what Ubuntu was like before and after that launch.

Taking photos of the phone with Fernando Lanero
BQ's commitment was clear, later releasing the E5 (and, next month, the first convergent tablet). It also sells its own cases, runs its own support forums, and this is the first time BQ has sold a phone worldwide.

BQ E4.5

Meizu, with a great phone in the MX4, has not released any further models, although there have been rumours pointing to the launch of an MX5. Here's hoping :D


And after a year, what are Ubuntu Phone's strengths? In my opinion, mainly convergence, privacy and software freedom.

Privacy :) Yeah!

Yes, the word said a thousand times, almost to the point of being a curse: WhatsApp. No, that application still doesn't exist for the phone, and it is possibly the platform's only dead weight.

Nothing more to add
And yes :) We can play games too

There aren't millions upon millions of applications as on Android or iOS (do we really need 300 different apps that all do the same thing?), but the ones that exist are free and of very high quality. And what matters (to me) is exactly that: a completely open operating system and enough applications to use my phone day to day.

Because anyone attracted to an Ubuntu Phone is a user looking for a device governed by free software that respects their privacy. That is our starting point. And that is where Ubuntu delivers in spades. Ubuntu has its niche, and it really isn't a small one.

I assure you that having a phone governed by a real GNU/Linux, a genuine Ubuntu in your pocket, is priceless.

The brains of the beast :)

Add a mouse + keyboard + monitor and you have an Ubuntu desktop. Photo by Marius Quabeck

The other big asset is convergence, where the competition still hasn't stepped up: with the exception of Windows Phone, nobody offers what Canonical is about to deliver.
Ubuntu has been weaving its web with firm, determined steps, and now it is time to reap the rewards.
The new era, in which your phone is your desktop's CPU, has arrived. The same Ubuntu on phone, tablet and desktop. The ecosystem is complete :))

Connect a mouse + keyboard and you have a mini PC, perfect for travelling

I don't want to finish without thanking everyone who has made Ubuntu Phone what it is today :) Thank you!

Photos: by Fernando Lanero, David Castañón and myself.
on February 06, 2016 10:00 AM

People = People

Joe Liau

Technology made *for* people. Trapped in technology? (source)

“Ubuntu is about people.”
“Ubuntu is for human beings.”

We have heard these phrases as good reminders as to “why” we are making Ubuntu. However, there is a growing sense of disconnect from the definition of “what” we are doing for the people. The “what” has to come back to the “why”. So, we need to clarify and simplify what we are doing.

Ubuntu = Ubuntu (oo-boon-too) — A free operating system inspired by an African philosophy that says that we all are one.

Ubuntu = People. When we are people-focused, then we are making Ubuntu. Anyone can make a product that people use. Anyone can create convergence of people’s devices. But, Ubuntu brings it all back to the people, and for the people. We don’t get trapped in the technology.

People = People. Ubuntu is about people. But, everyone is unique. We are not all technology-focused, and we don’t all have the freedom to enjoy technology without advanced knowledge. When we create Ubuntu we think of the humans before the technology. When we come together to celebrate Ubuntu, we celebrate the humans who are involved in the project. Our events and attention focus on the people and not just software. This means that we establish environments that allow and encourage people to be people. We don’t get Ubuntu by simply having people there. We get Ubuntu by acknowledging that those people are human beings who are part of the bigger picture. The things that we create are great, but Ubuntu is about people, so it always comes full circle, back to the people.


on February 06, 2016 05:11 AM

February 05, 2016

Do services like Facebook and Twitter really help worthwhile participation in democracy, or are they the most sinister and efficient mechanism ever invented to control people while giving the illusion that they empower us?

Over the last few years, groups on the left and right of the political spectrum have spoken more and more loudly about the problems in the European Union. Some advocate breaking up the EU, while behind the scenes milking it for every handout they can get. Others seek to reform it from within.

Yanis Varoufakis on motorbike

Most recently, former Greek finance minister Yanis Varoufakis has announced plans to found a movement (not a political party) that claims to "democratise" the EU by 2025. Ironically, one of his first steps has been to create a web site directing supporters to Facebook and Twitter. A groundbreaking effort to put citizens back in charge? Or further entangling activism in the false hope of platforms that are run for profit by their Silicon Valley overlords? A Greek tragedy indeed, in the classical sense.

Varoufakis rails against authoritarian establishment figures who don't put the citizens' interests first. Ironically, big data and the cloud are a far bigger threat than Brussels. The privacy and independence of each citizen is fundamental to a healthy democracy. Companies like Facebook are obliged - by law and by contract - to service the needs of their shareholders and advertisers paying to study and influence the poor user. If "Facebook privacy" settings were actually credible, who would want to buy their shares any more?

Facebook is more akin to an activism placebo: people sitting in their armchair clicking to "Like" whales or trees are having hardly any impact at all. Maintaining democracy requires a sufficient number of people to be actively involved, whether it is raising funds for worthwhile causes, scrutinizing the work of our public institutions or even writing blogs like this. Keeping them busy on Facebook and Twitter renders them impotent in the real world (but please feel free to alert your friends).

Big data is one of the areas that requires the greatest scrutiny. Many of the professionals working in the field are actually selling out their own friends and neighbours, their own families and even themselves. The general public and the policy makers who claim to represent us are oblivious or reckless about the consequences of this all-you-can-eat feeding frenzy on humanity.

Pretending to be democratic is all part of the illusion. Facebook's recent announcement to deviate from their real-name policy is about as effective as using sunscreen to treat HIV. By subjecting themselves to the laws of Facebook, activists have simply given Facebook more status and power.

Data means power. Those who are accumulating it from us, collecting billions of tiny details about our behavior, every hour of every day, are fortifying a position of great strength with which they can personalize messages to condition anybody, anywhere, to think the way they want us to. Does that sound like the route to democracy?

I would encourage Mr Varoufakis to get up to speed with Free Software and come down to Zurich next week to hear Richard Stallman explain it the day before launching his DiEM25 project in Berlin.

Will the DiEM25 movement invite participation from experts on big data and digital freedom and make these issues a core element of their promised manifesto? Is there any credible way they can achieve their goal of democracy by 2025 without addressing such issues head-on?

Or put that the other way around: what will be left of democracy in 2025 if big data continues to run rampant? Will it be as distant as the gods of Greek mythology?

on February 05, 2016 10:07 PM

Have We Converged Yet?

Randall Ross

Apologies for the long period with no updates. I'll be bringing back this blog with a fresh look and exciting new, original topics soon. I wanted to get this article out without further delay though because it captures an important and timely idea that has been missed by the tech news sites... again.

Convergence is not about a unified computing experience across all your devices. Although that's an important goal, convergence is more about that point in time where your philosophy that technology should respect people converges with that of a group or company that believes the same.

Recently, my friend Wayne (a long-time Ubuntu Vancouverite) shared his thoughts on Ubuntu's convergence announcement.

Here's a teaser from Wayne's blog:

    "... it became even more apparent to me that the ‘battle for the operating system’ will eventually be won by Ubuntu in numbers (it is already won in principle)"

    "You see, Ubuntu cares about you, because it’s built by people who care about things other than shareholders’ dividends."

Please read Wayne's full article: http://wayneoutthere.com/race-or-marathon-to-convergence/ It's a quick read and will make you say "Hmmm..."

Like Wayne, I hope you will reject those in the tech industry that insist on keeping you focused on what's unimportant. It's *never* about widget this, or kernel that.

It's about the agenda that is behind the technology.

The friendly folks who make Ubuntu are charting a course in computing that respects people. The Ubuntu Tablet is another way to deliver that goal. That's the real news.

Image "Happy Boys" by https://www.flickr.com/photos/deepblue66/ cc-by-nc-sa

on February 05, 2016 08:43 PM


Thomas Ward

The NGINX PPAs have had some cleanup done to them today.

Previously, the PPAs kept the ‘older’ package versions in them for now-EOL releases (this included keeping ancient versions for Maverick, Natty, Oneiric, Quantal, Raring, Saucy, and Utopic). This was decided upon in order to prevent people from seeing 404 errors on PPA checking. We also included a large list of “Final Version” items for each Ubuntu release, stating there would be no more updates for that release, but keeping the ancient packages in place for installation.

Looking back on this, it was a bad thing for multiple reasons. Firstly, it meant people on older releases could still use the PPA for that release, so versions of NGINX with known security holes could still be installed. Secondly, it implied that we still 'support' the use of older releases of Ubuntu in the PPAs, which carries the security connotation that we are OK with people using no-longer-updated releases, which in turn have their own security holes.

So, today, in an effort to discourage the use of ancient Ubuntu versions which get no security updates or support anymore, I’ve made changes to the way that the PPAs will operate going forward: Unless a release recently went End of Life, versions of the nginx package in the PPAs for older Ubuntu releases are no longer going to be kept, and will be deleted a week after the version goes End of Life.
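As a rough sketch of that retention policy (the function name is hypothetical, and Utopic's EOL date of 23 July 2015 is assumed for the example):

```python
from datetime import date, timedelta

# Hypothetical helper illustrating the policy above: packages for a
# release are deleted one week after that release goes End of Life.
def should_delete_packages(eol_date: date, today: date) -> bool:
    """Return True once the one-week grace period after EOL has passed."""
    return today > eol_date + timedelta(weeks=1)

# Utopic went EOL on 2015-07-23 (assumed date), so by February 2016
# its packages are well past the grace period.
print(should_delete_packages(date(2015, 7, 23), date(2016, 2, 5)))   # True
# Two days after EOL is still within the grace period.
print(should_delete_packages(date(2015, 7, 23), date(2015, 7, 25)))  # False
```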

Therefore, as of today, I have deleted all the packages in the NGINX PPAs (both Stable and Mainline, in both staging and release PPAs) for the following releases of Ubuntu:

  • Maverick (10.10)
  • Natty (11.04)
  • Oneiric (11.10)
  • Quantal (12.10)
  • Raring (13.04)
  • Saucy (13.10)
  • Utopic (14.10)

People still using ancient versions of NGINX or Ubuntu are strongly recommended to upgrade to get continued support and security/bug fixes.

on February 05, 2016 05:20 PM

In the last two days, I’ve installed two Android apps (names redacted because it’s not their fault!) which, on install, have popped up a custom notification saying that the app “requests Sensitive Permissions”.

UPDATE: this is not these apps' fault. It is ES File Explorer's fault. Uninstall ES File Explorer. And everything below applies to the ES File Explorer people.

Tapping this notification pops up a thing named “Apps Analyze” which pretends to be analysing the stuff on your phone and then shows you a bunch of irrelevant information about your phone and weather and Facebook info, which have nothing whatsoever to do with the app you installed.

Let me be clear. This is bullshit. This is nothing more than malware. I wanted to dim my screen, or buy a sandwich. I did not want to have my phone “analysed”; I did not want “sensitive permissions”. I don’t think this thing needs permissions at all; at the very best it’s a completely unwanted bundled thing, like Oracle bundling adware with their Java installer. At worst, it’s some sort of unpleasant malware which harvests data from my phone and ships it off somewhere. I don’t know what it does; it’s certainly bloatware at the very least; there’s a Reddit thread about it.

I originally didn’t know where this was coming from; since it showed up in two separate apps, I presumed it was some sort of third-party component, and that its authors pay app developers to include it. I now know where it’s coming from; it’s from ES File Explorer. If you are an Android app developer and you are using this thing, fucking pack it in. This is a hysterical betrayal of your users’ trust. I know it’s hard work to monetise software that you write. I know it’s tempting to scrape the barrel like this. But if you are using this, you are a terrible person and you should sit down and have a bloody word with yourself. Stop it. You’re pissing in the waterhole and ruining things for everyone. Do you really want to be part of this race to the bottom?

It’s possible that this is an official Android thing, since it’s also showing up in Google Sheets and so on. If so, Android people, what the hell are you thinking of?

on February 05, 2016 12:52 PM

This is a follow-up to the End of Life warning sent last month to confirm that as of today (February 4, 2016), Ubuntu 15.04 is no longer supported. No more package updates will be accepted to 15.04, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The original End of Life warning follows, with upgrade instructions:

Ubuntu announced its 15.04 (Vivid Vervet) release almost 9 months ago, on April 23, 2015. As a non-LTS release, 15.04 has a 9-month support cycle and, as such, the support period is now nearing its end and Ubuntu 15.04 will reach end of life on Thursday, February 4th. At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 15.04.

The supported upgrade path from Ubuntu 15.04 is via Ubuntu 15.10. Instructions and caveats for the upgrade may be found at:


Ubuntu 15.10 continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:


Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-security-announce mailing list on Fri Feb 5 03:54:55 UTC 2016 by Adam Conrad, on behalf of the Ubuntu Release Team

on February 05, 2016 04:22 AM

February 04, 2016

People of earth, waving at Saturn, courtesy of NASA.
“It Doesn't Look Like Ubuntu Reached Its Goal Of 200 Million Users This Year”, says Michael Larabel of Phoronix, in a post he seems to have been itching to publish for months.

Why the negativity?!? Are you sure? Did you count all of them?

No one has.

How many people in the world use Ubuntu?

Actually, no one can count all of the Ubuntu users in the world!

Canonical, unlike Apple, Microsoft, Red Hat, or Google, does not require each user to register their installation of Ubuntu.

Of course, you can buy laptops preloaded with Ubuntu from Dell, HP, Lenovo, and Asus.  And there are millions of them out there.  And you can buy servers powered by Ubuntu from IBM, Dell, HP, Cisco, Lenovo, Quanta, and compatible with the OpenCompute Project.

In 2011, hardware sales might have been how Mark Shuttleworth hoped to reach 200M Ubuntu users by 2015.

But in reality, hundreds of millions of PCs, servers, devices, virtual machines, and containers have booted Ubuntu to date!

Let's look at some facts...
  • Docker users have launched Ubuntu images over 35.5 million times.
  • HashiCorp's Vagrant images of Ubuntu 14.04 LTS 64-bit have been downloaded 10 million times.
  • At least 20 million unique instances of Ubuntu have launched in public clouds, private clouds, and bare metal in 2015 alone.
    • That's Ubuntu in clouds like AWS, Microsoft Azure, Google Compute Engine, Rackspace, Oracle Cloud, VMware, and others.
    • And that's Ubuntu in private clouds like OpenStack.
    • And Ubuntu at scale on bare metal with MAAS, often managed with Chef.
  • In fact, over 2 million new Ubuntu cloud instances launched in November 2015.
    • That's 67,000 new Ubuntu cloud instances launched per day.
    • That's 2,800 new Ubuntu cloud instances launched every hour.
    • That's 46 new Ubuntu cloud instances launched every minute.
    • That's nearly one new Ubuntu cloud instance launched every single second of every single day in November 2015.
  • And then there are Ubuntu phones from Meizu.
  • And more Ubuntu phones from BQ.
  • Of course, anyone can install Ubuntu on their Google Nexus tablet or phone.
  • Or buy a converged tablet/desktop preinstalled with Ubuntu from BQ.
  • Oh, and the Tesla entertainment system?  All electric Ubuntu.
  • Google's self-driving cars?  They're self-driven by Ubuntu.
  • George Hotz's home-made self-driving car?  It's a homebrewed Ubuntu autopilot.
  • Snappy Ubuntu downloads and updates for Raspberry Pis and BeagleBone Blacks -- the response has been tremendous.  Download numbers are astounding.
  • Drones, robots, network switches, smart devices, the Internet of Things.  More Snappy Ubuntu.
  • How about Walmart?  Everyday low prices.  Everyday Ubuntu.  Lots and lots of Ubuntu.
  • Are you orchestrating containers with Kubernetes or Apache Mesos?  There's plenty of Ubuntu in there.
  • Kicking PaaS with Cloud Foundry?  App instances are Ubuntu LXC containers.  Pivotal has lots of serious users.
  • And Heroku?  You bet your PaaS those hosted application containers are Ubuntu.  Plenty of serious users here too.
  • Tianhe-2, the world's fastest supercomputer.  Merely 80,000 Xeons, 1.4 PB of memory, 12.4 PB of disk, all number crunching on Ubuntu.
  • Ever watch a movie on Netflix?  You were served by Ubuntu.
  • Ever hitch a ride with Uber or Lyft?  Your mobile app is talking to Ubuntu servers on the backend.
  • Did you enjoy watching The Hobbit?  Hunger Games?  Avengers?  Avatar?  All rendered on Ubuntu at WETA Digital.  Among many others.
  • Do you use Instagram?  Say cheese!
  • Listen to Spotify?  Music to my ears...
  • Doing a deal on Wall Street?  Ubuntu is serious business for Bloomberg.
  • Paypal, Dropbox, Snapchat, Pinterest, Reddit. Airbnb.  Yep.  More Ubuntu.
  • Wikipedia and Wikimedia, among the busiest sites on the Internet with 8 - 18 billion page views per month, are hosted on Ubuntu.
How many "users" of Ubuntu are there ultimately?  I bet there are over a billion people today, using Ubuntu -- both directly and indirectly.  Without a doubt, there are over a billion people on the planet benefiting from the services, security, and availability of Ubuntu today.
  • More people use Ubuntu than we know.
  • More people use Ubuntu than you know.
  • More people use Ubuntu than they know.
More people use Ubuntu than anyone actually knows.

Because of who we all are.

on February 04, 2016 08:08 PM

A new look for tablet

Canonical Design Team

Today we launched a new and redesigned tablet section on ubuntu.com that introduces all the cool features of the upcoming BQ Aquaris M10 Ubuntu Edition tablet.

Breaking out of the box

In this redesign, we have broken out of the box, removing the container that previously held the content of the pages. This makes each page feel more spacious, giving the text and the images plenty of room to shine.

This is something we’ve wanted to do for a while across the entire site, so we thought that having the beautiful, large tablet photos to work with gave us a good excuse to try out this new approach.


The overview page of the tablet section of ubuntu.com, before (left) and after


For most of the section, we’ve used existing patterns from our design framework, but the removal of the container box allowed us to play with how the images behave across different screen sizes. You will notice that if you look at the tablet pages on a medium to small screen, some of the images will be cropped by the edge of the viewport, but if you see the same image in a large screen, you can see it in its entirety.


From the top: the same row on a large, medium and small screen


How we did it

This project was a concerted effort across the design, marketing, and product management teams.

To understand the key goals for this redesign, we collected the requirements and messaging from the key stakeholders of the project. We then translated all this information into wireframes that guide the reader through what Ubuntu Tablet is. These went through a few rounds of testing and iteration with both users and stakeholders. Finally, we worked with a copywriter to refine the words of each section of the tablet pages.


Some of the wireframes


To design the pages, we started with exploring the flow of each page in large and small screens in flat mockups, which were quickly built into a fully functioning prototype that we could keep experimenting and testing on.


Some of the flat mockups created for the redesign


This design process, where we start with flat mockups and move swiftly into a real prototype, is how we design and develop most of our projects, and it is made easier by the existence of a flexible framework and design patterns, that we use (and sometimes break!) as needed.


Testing the new tablet section on real devices


To showcase the beautiful tablet screen designs on the new BQ tablet, we coordinated with professional photographers to deliver the stunning images of the real device that you can enjoy along every new page of the section.


One of the many beautiful device photos used across the new tablet section of ubuntu.com


Many people were involved in this project, making it possible to deliver a redesign that looks great and was completed on time, which is always a good thing :)

In the future

In the near future, we want to remove the container box from the other sections of ubuntu.com, although you may see this change being done gradually, section by section, rather than all in one go. We will also be looking at redesigning our navigation, so lots to look forward to.

Now go experience tablet for yourself and let us know what you think!

on February 04, 2016 04:15 PM

Embeddable cards for Juju

Canonical Design Team

Juju is a cloud orchestration tool with a lot of unique terminology. This is not so much of a problem when discussing or explaining terms or features within the site or the GUI, but, when it comes to external sources, the context is sometimes lost and everything can start to get a little confusing.

So a project was started to create embeddable widgets of information to not only give context to blog posts mentioning features of Juju, but also to help user adoption by providing direct access to the information on jujucharms.com.

This project was started by Anthony Dillon, one of the developers, to create embeddable information cards for three topics in particular: charms, bundles and user profiles. These cards function similarly to embedded YouTube videos, or embedding a song from Soundcloud on your own site, as seen below:



Multiple breakpoints were established for the cards (small: 300px and below; medium: 301px to 625px; large: 626px and up) so that they work responsively across a breadth of different situations and complement the user’s content referring to a charm, bundle or user profile without any additional effort on their part.
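The breakpoint sizing amounts to a simple width check; here is a minimal sketch (the function name is illustrative, not part of the project):

```python
def card_size(width_px: int) -> str:
    """Map a container width to a card breakpoint:
    small: 300px and below; medium: 301-625px; large: 626px and up."""
    if width_px <= 300:
        return "small"
    if width_px <= 625:
        return "medium"
    return "large"

print(card_size(300))  # small
print(card_size(480))  # medium
print(card_size(626))  # large
```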

We started the process by determining what information we would want to include within the card and then refining that information as we went through the different breakpoints. Here are some of the initial ideas that we put together:

charm  bundle  profile

We wrote down all the information there could be related to each type of card and then discussed how that might carry down to smaller card sizes, removing the unnecessary information as we went. For the profile cards, we felt there was not enough information to display a profile card above the 625px breakpoint, so we limited that card to the medium size.

Just enter the bundle or the charm name and the card will be generated for you to copy the code snippet to embed into your own content.

embed card thing

You can create your own here: http://www.jujugui.org/community/cards

Below are some examples of the responsive cards at different widths:


on February 04, 2016 02:36 PM

13 ways to PulseAudio

David Henningsson

All roads lead to Rome, but PulseAudio is not far behind! In fact, how the PulseAudio client library determines how to try to connect to the PulseAudio server has no less than 13 different steps. Here they are, in priority order:

1) As an application developer, you can specify a server string in your call to pa_context_connect. If you do that, that’s the server string used, nothing else.

2) If the PULSE_SERVER environment variable is set, that’s the server string used, and nothing else.

3) Next, it goes to X to check if there is an X11 property named PULSE_SERVER. If there is, that’s the server string, nothing else. (There is also a PulseAudio module called module-x11-publish that sets this property. It is loaded by the start-pulseaudio-x11 script.)

4) It also checks client.conf, if such a file is found, for the default-server key. If that’s present, that’s the server string.

So, if none of the four methods above gives any result, several items will be merged and tried in order.

First up is trying to connect to a user-level PulseAudio, which means finding the right path where the UNIX socket exists. That in turn has several steps, in priority order:

5) If the PULSE_RUNTIME_PATH environment variable is set, that’s the path.

6) Otherwise, if the XDG_RUNTIME_DIR environment variable is set, the path is the “pulse” subdirectory below the directory specified in XDG_RUNTIME_DIR.

7) If not, and the “.pulse” directory exists in the current user’s home directory, that’s the path. (This is for historical reasons – a few years ago PulseAudio switched from “.pulse” to using XDG compliant directories, but ignoring “.pulse” would throw away some settings on upgrade.)

8) Failing that, if XDG_CONFIG_HOME environment variable is set, the path is the “pulse” subdirectory to the directory specified in XDG_CONFIG_HOME.

9) Still no path? Then fall back to using the “.config/pulse” subdirectory below the current user’s home directory.

Okay, so maybe we can connect to the UNIX socket inside that user-level PulseAudio path. But if it does not work, there are still a few more things to try:

10) Using a path of a system-level PulseAudio server. This directory is /var/run/pulse on Ubuntu (and probably most other distributions), or /usr/local/var/run/pulse in case you compiled PulseAudio from source yourself.

11) By checking client.conf for the key “auto-connect-localhost”. If so, also try connecting to tcp4:…

12) …and tcp6:[::1], too. Of course we cannot leave IPv6-only systems behind.

13) As the last straw of hope, the library checks client.conf for the key “auto-connect-display”. If it’s set, it checks the DISPLAY environment variable, and if it finds a hostname (i.e., something before the “:”), then that host will be tried too.

To summarise, first the client library checks for a server string in step 1-4, if there is none, it makes a server string – out of one item from steps 5-9, and then up to four more items from steps 10-13.
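Steps 5 to 9 (finding the user-level socket directory) can be sketched as follows; this is a simplified model for illustration, not the actual libpulse code:

```python
import os

def pulse_runtime_path(environ, home, dot_pulse_exists):
    """Resolve the directory holding the user-level PulseAudio UNIX
    socket, in the priority order of steps 5 to 9 above."""
    if "PULSE_RUNTIME_PATH" in environ:                       # step 5
        return environ["PULSE_RUNTIME_PATH"]
    if "XDG_RUNTIME_DIR" in environ:                          # step 6
        return os.path.join(environ["XDG_RUNTIME_DIR"], "pulse")
    if dot_pulse_exists:                                      # step 7
        return os.path.join(home, ".pulse")
    if "XDG_CONFIG_HOME" in environ:                          # step 8
        return os.path.join(environ["XDG_CONFIG_HOME"], "pulse")
    return os.path.join(home, ".config", "pulse")             # step 9

print(pulse_runtime_path({"XDG_RUNTIME_DIR": "/run/user/1000"},
                         "/home/me", False))  # /run/user/1000/pulse
```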

And that’s all. If you ever want to customize how you connect to a PulseAudio server, you have a smorgasbord of options to choose from!

on February 04, 2016 12:51 PM
One issue when running parallel processes is contention for shared resources such as the Last Level Cache (aka LLC or L3 cache).  For example, a server may be running a set of Virtual Machines with processes that are memory and cache intensive, hence producing a large amount of cache activity. This can impact the other VMs and is known as the "Noisy Neighbour" problem.

Fortunately the next generation Intel processors allow one to monitor and also fine tune cache allocation using Intel Cache Monitoring Technology (CMT) and Cache Allocation Technology (CAT).

Intel kindly loaned me a 12-thread development machine with CMT and CAT support to experiment with this technology using the Intel pqos tool.   For my experiment, I installed Ubuntu Xenial Server on the machine. I then installed KVM and a VM instance of Ubuntu Xenial Server.   I then loaded the instance using stress-ng running a memory bandwidth stressor:

 stress-ng --stream 1 -v --stream-l3-size 16M  
...which allocates 16MB in 4 buffers and performs various reads, computes and writes on these, hence causing a "noisy neighbour".

Using pqos,  one can monitor and see the cache/memory activity:
sudo apt-get install intel-cmt-cat
sudo modprobe msr
sudo pqos -r
TIME 2016-02-04 10:25:06
CORE   IPC   MISSES   LLC[KB]   MBL[MB/s]   MBR[MB/s]
0 0.59 168259k 9144.0 12195.0 0.0
1 1.33 107k 0.0 3.3 0.0
2 0.20 2k 0.0 0.0 0.0
3 0.70 104k 0.0 2.0 0.0
4 0.86 23k 0.0 0.7 0.0
5 0.38 42k 24.0 1.5 0.0
6 0.12 2k 0.0 0.0 0.0
7 0.24 48k 0.0 3.0 0.0
8 0.61 26k 0.0 1.6 0.0
9 0.37 11k 144.0 0.9 0.0
10 0.48 1k 0.0 0.0 0.0
11 0.45 2k 0.0 0.0 0.0
Now to run a stress-ng stream stressor on the host and see the performance while the noisy neighbour is also running:
stress-ng --stream 4 --stream-l3-size 2M --perf --metrics-brief -t 60
stress-ng: info: [2195] dispatching hogs: 4 stream
stress-ng: info: [2196] stress-ng-stream: stressor loosely based on a variant of the STREAM benchmark code
stress-ng: info: [2196] stress-ng-stream: do NOT submit any of these results to the STREAM benchmark results
stress-ng: info: [2196] stress-ng-stream: Using L3 CPU cache size of 2048K
stress-ng: info: [2196] stress-ng-stream: memory rate: 1842.22 MB/sec, 736.89 Mflop/sec (instance 0)
stress-ng: info: [2198] stress-ng-stream: memory rate: 1847.88 MB/sec, 739.15 Mflop/sec (instance 2)
stress-ng: info: [2199] stress-ng-stream: memory rate: 1833.89 MB/sec, 733.56 Mflop/sec (instance 3)
stress-ng: info: [2197] stress-ng-stream: memory rate: 1847.16 MB/sec, 738.86 Mflop/sec (instance 1)
stress-ng: info: [2195] successful run completed in 60.01s (1 min, 0.01 secs)
stress-ng: info: [2195] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s
stress-ng: info: [2195] (secs) (secs) (secs) (real time) (usr+sys time)
stress-ng: info: [2195] stream 22101 60.01 239.93 0.04 368.31 92.10
stress-ng: info: [2195] stream:
stress-ng: info: [2195] 547,520,600,744 CPU Cycles 9.12 B/sec
stress-ng: info: [2195] 69,959,954,760 Instructions 1.17 B/sec (0.128 instr. per cycle)
stress-ng: info: [2195] 11,066,905,620 Cache References 0.18 B/sec
stress-ng: info: [2195] 11,065,068,064 Cache Misses 0.18 B/sec (99.98%)
stress-ng: info: [2195] 8,759,154,716 Branch Instructions 0.15 B/sec
stress-ng: info: [2195] 2,205,904 Branch Misses 36.76 K/sec ( 0.03%)
stress-ng: info: [2195] 23,856,890,232 Bus Cycles 0.40 B/sec
stress-ng: info: [2195] 477,143,689,444 Total Cycles 7.95 B/sec
stress-ng: info: [2195] 36 Page Faults Minor 0.60 sec
stress-ng: info: [2195] 0 Page Faults Major 0.00 sec
stress-ng: info: [2195] 96 Context Switches 1.60 sec
stress-ng: info: [2195] 0 CPU Migrations 0.00 sec
stress-ng: info: [2195] 0 Alignment Faults 0.00 sec
.. so about 1842 MB/sec memory rate and 736 Mflop/sec per CPU across 4 CPUs.  And pqos shows the cache/memory activity as:
sudo pqos -r
TIME 2016-02-04 10:35:27
    CORE     IPC   MISSES     LLC[KB]   MBL[MB/s]   MBR[MB/s]
0 0.14 43060k 1104.0 2487.9 0.0
1 0.12 3981523k 2616.0 2893.8 0.0
2 0.26 320k 48.0 18.0 0.0
3 0.12 3980489k 1800.0 2572.2 0.0
4 0.12 3979094k 1728.0 2870.3 0.0
5 0.12 3970996k 2112.0 2734.5 0.0
6 0.04 20k 0.0 0.3 0.0
7 0.04 29k 0.0 1.9 0.0
8 0.09 143k 0.0 5.9 0.0
9 0.15 0k 0.0 0.0 0.0
10 0.07 2k 0.0 0.0 0.0
11 0.13 0k 0.0 0.0 0.0
Using pqos again, we can find out how much LLC cache the processor has:
sudo pqos -v
NOTE: Mixed use of MSR and kernel interfaces to manage
CAT or CMT & MBM may lead to unexpected behavior.
INFO: Monitoring capability detected
INFO: CPUID.0x7.0: CAT supported
INFO: CAT details: CDP support=0, CDP on=0, #COS=16, #ways=12, ways contention bit-mask 0xc00
INFO: LLC cache size 9437184 bytes, 12 ways
INFO: LLC cache way size 786432 bytes
INFO: L3CA capability detected
INFO: Detected PID API (perf) support for LLC Occupancy
INFO: Detected PID API (perf) support for Instructions/Cycle
INFO: Detected PID API (perf) support for LLC Misses
ERROR: IPC and/or LLC miss performance counters already in use!
Use -r option to start monitoring anyway.
Monitoring start error on core(s) 5, status 6
So this CPU has 12 cache "ways", each of 786432 bytes (768K).  One or more  "Class of Service" (COS)  types can be defined that can use one or more of these ways.  One uses a bitmap with each bit representing a way to indicate how the ways are to be used by a COS.  For example, to use all the 12 ways on my example machine, the bit map is 0xfff  (111111111111).   A way can be exclusively mapped to a COS or shared, or not used at all.   Note that the ways in the bitmap must be contiguously allocated, so a mask such as 0xf3f (111100111111) is invalid and cannot be used.
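Since invalid masks such as 0xf3f are rejected, it can be handy to sanity-check a mask before handing it to pqos. The following is a hypothetical shell helper (not part of intel-cmt-cat), using the trick that adding the lowest set bit to a contiguous mask yields a power of two:

```sh
# Hypothetical helper: a CAT way mask is valid only if its set bits
# form one contiguous run. Adding the lowest set bit to such a mask
# yields a power of two (e.g. 0b0110 + 0b0010 = 0b1000).
is_contiguous() {
    local m=$(( $1 ))
    [ "$m" -ne 0 ] || return 1
    local lsb=$(( m & -m ))            # lowest set bit
    local sum=$(( m + lsb ))
    [ $(( sum & (sum - 1) )) -eq 0 ]   # power-of-two check
}

is_contiguous 0xfff && echo "0xfff is a valid mask"
is_contiguous 0xf3f || echo "0xf3f has a gap and would be rejected"
```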

In my experiment, I want to create two COS types. The first COS will have just 1 cache way assigned to it; CPU 0 will be bound to this COS, and the VM instance will be pinned to CPU 0. The second COS will have the other 11 cache ways assigned to it, and all the other CPUs can use this COS.

So, create COS #1 with just 1 way of cache, and bind CPU 0 to this COS, and pin the VM to CPU 0:
sudo pqos -e llc:1=0x0001
sudo pqos -a llc:1=0
sudo taskset -apc 0 $(pidof qemu-system-x86_64)
And create COS #2, with 11 ways of cache and bind CPUs 1-11 to this COS:
sudo pqos -e "llc:2=0x0ffe"
sudo pqos -a "llc:2=1-11"
And let's see the new configuration:
sudo pqos  -s
NOTE: Mixed use of MSR and kernel interfaces to manage
CAT or CMT & MBM may lead to unexpected behavior.
L3CA COS definitions for Socket 0:
L3CA COS0 => MASK 0xfff
L3CA COS1 => MASK 0x1
L3CA COS2 => MASK 0xffe
L3CA COS3 => MASK 0xfff
L3CA COS4 => MASK 0xfff
L3CA COS5 => MASK 0xfff
L3CA COS6 => MASK 0xfff
L3CA COS7 => MASK 0xfff
L3CA COS8 => MASK 0xfff
L3CA COS9 => MASK 0xfff
L3CA COS10 => MASK 0xfff
L3CA COS11 => MASK 0xfff
L3CA COS12 => MASK 0xfff
L3CA COS13 => MASK 0xfff
L3CA COS14 => MASK 0xfff
L3CA COS15 => MASK 0xfff
Core information for socket 0:
Core 0 => COS1, RMID0
Core 1 => COS2, RMID0
Core 2 => COS2, RMID0
Core 3 => COS2, RMID0
Core 4 => COS2, RMID0
Core 5 => COS2, RMID0
Core 6 => COS2, RMID0
Core 7 => COS2, RMID0
Core 8 => COS2, RMID0
Core 9 => COS2, RMID0
Core 10 => COS2, RMID0
Core 11 => COS2, RMID0
..showing Core 0 bound to COS1 and Cores 1-11 bound to COS2, with COS1 having one cache way and COS2 the remaining 11 cache ways.
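Given the way size reported by pqos -v above (786432 bytes), a quick bit of shell arithmetic shows how the 9 MiB LLC is split between the two classes:

```sh
# Way size from `pqos -v` above; 12 ways of 786432 bytes = 9 MiB total.
way_bytes=786432
cos1_kib=$(( 1 * way_bytes / 1024 ))    # the noisy VM's single way
cos2_kib=$(( 11 * way_bytes / 1024 ))   # everything else
echo "COS1: ${cos1_kib} KiB, COS2: ${cos2_kib} KiB"
# -> COS1: 768 KiB, COS2: 8448 KiB
```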
Now re-run the stream stressor and see if the VM has less impact on the L3 cache:
stress-ng --stream 4 --stream-l3-size 1M --perf --metrics-brief -t 60
stress-ng: info: [2232] dispatching hogs: 4 stream
stress-ng: info: [2233] stress-ng-stream: stressor loosely based on a variant of the STREAM benchmark code
stress-ng: info: [2233] stress-ng-stream: do NOT submit any of these results to the STREAM benchmark results
stress-ng: info: [2233] stress-ng-stream: Using L3 CPU cache size of 1024K
stress-ng: info: [2235] stress-ng-stream: memory rate: 2616.90 MB/sec, 1046.76 Mflop/sec (instance 2)
stress-ng: info: [2233] stress-ng-stream: memory rate: 2562.97 MB/sec, 1025.19 Mflop/sec (instance 0)
stress-ng: info: [2234] stress-ng-stream: memory rate: 2541.10 MB/sec, 1016.44 Mflop/sec (instance 1)
stress-ng: info: [2236] stress-ng-stream: memory rate: 2652.02 MB/sec, 1060.81 Mflop/sec (instance 3)
stress-ng: info: [2232] successful run completed in 60.00s (1 min, 0.00 secs)
stress-ng: info: [2232] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s
stress-ng: info: [2232] (secs) (secs) (secs) (real time) (usr+sys time)
stress-ng: info: [2232] stream 62223 60.00 239.97 0.00 1037.01 259.29
stress-ng: info: [2232] stream:
stress-ng: info: [2232] 547,364,185,528 CPU Cycles 9.12 B/sec
stress-ng: info: [2232] 97,037,047,444 Instructions 1.62 B/sec (0.177 instr. per cycle)
stress-ng: info: [2232] 14,396,274,512 Cache References 0.24 B/sec
stress-ng: info: [2232] 14,390,808,440 Cache Misses 0.24 B/sec (99.96%)
stress-ng: info: [2232] 12,144,372,800 Branch Instructions 0.20 B/sec
stress-ng: info: [2232] 1,732,264 Branch Misses 28.87 K/sec ( 0.01%)
stress-ng: info: [2232] 23,856,388,872 Bus Cycles 0.40 B/sec
stress-ng: info: [2232] 477,136,188,248 Total Cycles 7.95 B/sec
stress-ng: info: [2232] 44 Page Faults Minor 0.73 sec
stress-ng: info: [2232] 0 Page Faults Major 0.00 sec
stress-ng: info: [2232] 72 Context Switches 1.20 sec
stress-ng: info: [2232] 0 CPU Migrations 0.00 sec
stress-ng: info: [2232] 0 Alignment Faults 0.00 sec
Now with the noisy neighbour VM constrained to use just 1 way of L3 cache, the stream stressor on the host can achieve about 2592 MB/sec and about 1030 Mflop/sec per CPU across 4 CPUs.
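A little arithmetic on the per-instance rates from the two runs above (about 1842 MB/sec before, about 2592 MB/sec after) quantifies the gain:

```sh
before=1842   # MB/sec per instance with an unconstrained neighbour
after=2592    # MB/sec per instance with the neighbour held to 1 way
echo "throughput is now $(( after * 100 / before ))% of the baseline"
# -> throughput is now 140% of the baseline
```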

This is a relatively simple example.  With the ability to monitor cache and memory bandwidth activity, one can carefully tune a system to make best use of the limited L3 cache resource and maximise throughput where needed.

There are many applications where Intel CMT/CAT can be useful, for example fine tuning containers or VM instances, or pinning user space networking buffers to cache ways in DPDK for improved throughput.

on February 04, 2016 12:03 PM

Two Australians have achieved prominence (or notoriety, depending on your perspective) for the difficulty in questioning them about their knowledge of alleged sex crimes.

One is Julian Assange, holed up in the embassy of Ecuador in London. He is back in the news again today thanks to a UN panel finding that the UK is effectively detaining him, unlawfully, in the Ecuadorian embassy. The effort made to discredit and pursue Assange and other disruptive technologists, such as Aaron Swartz, has an eerie resemblance to the way the Inquisition hunted witches in the middle ages and beyond.

The other Australian stuck abroad is Cardinal George Pell, the most senior figure in the Catholic Church in Australia. The Royal Commission into child sex abuse by priests has heard serious allegations claiming the Cardinal knew about and covered up abuse. This would appear far more sinister than anything Mr Assange is accused of. Like Mr Assange, the Cardinal has been unable to travel to attend questioning in person. News reports suggest he is ill and can't leave Rome, although he is being accommodated in significantly more comfort than Mr Assange.

If you had to choose, which would you prefer to leave your child alone with?

on February 04, 2016 10:30 AM

Welcome Back Poster

Benjamin Mako Hill

My office door is on the second floor in front of the main staircase in my building. I work with my door open so that my colleagues and my students know when I’m in. The only time I consider deviating from this policy is the first week of the quarter, when I’m faced with a stream of students who are usually lost on their way to class and whom, embarrassingly, I am usually unable to help.

I made this poster so that these conversations can, in a way, continue even when I am not in the office.



on February 04, 2016 06:25 AM

February 03, 2016

Scale and Ubucon

Philip Ballew

Just a little over a week ago I had the privilege of both attending and helping out at the Southern California Linux Expo. As someone who semi-frequently travels to faraway lands for Linux-related software events, it is nice to be able to visit a place so close to home and see the impact of a conference like this first hand.

The first portion of the conference involved me assisting with and attending UbuCon at SCALE. What was different about this UbuCon was that it was a cooperative effort between the Ubuntu community and Canonical. All of the talks were amazing. One of the talks was about building a Juju Charm. Another was about getting started in Free software as a career. I saw a keynote by Mark Shuttleworth, and also a talk by a member of my local community, Nathan Haines; pictures of both are right below.
(Mark Shuttleworth giving an opening Keynote to the conference)

(Nathan Haines talking at UbuCon)

Once the Ubuntu mini conference ended, I was able to work with my favorite part of SCALE, The Next Generation, which is the part of SCALE I work on. Here, we have children from all over the country and beyond come and speak about the amazing things they are doing with Free Software. This year we had a child speak about statistics with R who made me feel like I should know more about statistics!

Below is a picture from a local high school that came out to present on some of the work they have been doing.

Needless to say, it was a great time and I cannot wait until next year!

on February 03, 2016 11:30 PM

[T]he next Ubuntu Online Summit is going to be from 3rd – 5th May 2016, which is going to be two weeks after the 16.04 release.

Summit and related pages will be updated in due time.

Originally posted to the community-announce mailing list on Wed Feb 3 10:07:47 UTC 2016 by Daniel Holbach

on February 03, 2016 10:42 AM

February 02, 2016


Rhonda D'Vine

Today is one of these moods. And sometimes one needs certain artists/music to foster it. Music is powerful. There are certain bands I know that I have to stay away from when feeling down, so as not to get too deep into it. Knowing that already helps a lot. The following is an artist that is not completely in that area, but he has powerful songs and powerful messages nevertheless; and there was this situation today where one of his songs came to my mind. That's the reason why I present Moby to you today. These are the songs:

  • Why Does My Heart Feel So Bad?: The song for certain moods. And lovely at that, not dragging me too much down. Hope you like the song too. :)
  • Extreme Ways: The ending tune from the movie The Bourne Ultimatum, and I fell immediately in love with the song. I used it for a while as morning alarm, a good start into the day.
  • Disco Lies: If you consider the video disturbing, you might be shutting your eyes to what animals face on a daily basis.

Hope you like the selection; and like always: enjoy!


on February 02, 2016 11:08 PM

It’s time once again for the Ubuntu Free Culture Showcase!

The Ubuntu Free Culture Showcase is a way to celebrate the Free Culture movement, where talented artists across the globe create media and release it under licenses that encourage sharing and adaptation. We're looking for content which shows off the skill and talent of these amazing artists and will greet Ubuntu 16.04 LTS users.

Not only will the chosen content be featured on the next set of pressed Ubuntu discs shared worldwide over the next two years, it will also serve a joint purpose: providing a perfect test for new users trying Ubuntu’s live session or new installations, and celebrating the fantastic talents of artists who embrace Free content licenses.

While we hope to see contributions from the video, audio, and photographic realms, I also want to thank the artists who have provided wallpapers for Ubuntu release after release. Ubuntu 15.10 shipped with wallpapers from the following contributors:

I'm looking forward to seeing the next round of entrants, and to the difficult time I'll have picking final choices to ship with Ubuntu 16.04 LTS.

For more information, please visit the Ubuntu Free Culture Showcase page on the Ubuntu wiki.

on February 02, 2016 11:33 AM

February 01, 2016

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community, because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

I did not ask for any paid hours this month and won’t be requesting paid hours for the next 5 months as I have a big project to handle with a deadline in June. That said I still did a few LTS related tasks:

  • I uploaded a new version of debian-security-support (2016.01.07) to make it official that virtualbox-ose is no longer supported in Squeeze and that redmine was not really supportable ever since we dropped support for rails.
  • Made a summary of the discussion about what to support in wheezy and started a new round of discussions with some open questions. I invited contributors to try to pickup one topic, study it and bring the discussion to some conclusion.
  • I wrote a blog post to recruit new paid contributors. Brian May, Markus Koschany and Damyan Ivanov applied and will do their first paid hours over February.

Distro Tracker

Due to many nights spent on playing Splatoon (I’m at level 33, rank B+, anyone else playing it?), I did not do much work on Distro Tracker.

After having received the bug report #809211, I investigated the reasons why SQLite was no longer working satisfactorily in Django 1.9 and I opened the upstream ticket 26063 and I had a long discussion with two upstream developers to find out the best fix. The next point release (1.9.2) will fix that annoying regression.

I also merged a couple of contributions (two patches from Christophe Siraut, one adding descriptions to keywords, cf #754413, one making it more obvious that chevrons in action items are actionable to show more data, a patch from Balasankar C in #810226 fixing a bad URL in an action item).

I fixed a small bug in the “unsubscribe” command of the mail bot, it was not properly recognizing source packages.

I updated the task notifying of new upstream versions to use the data generated by UDD (instead of the data generated by Christoph Berg’s mole-based implementation which was suffering from a few bugs). 

Debian Packaging

Testing experimental sbuild. While following the work of Johannes Schauer on sbuild, I installed the version from experimental to support his work and give him some feedback. In the process I uncovered #810248.

Python sponsorship. I reviewed and uploaded many packages for Daniel Stender who keeps doing great work maintaining prospector and all its recursive dependencies: pylint-common, python-requirements-detector, sphinx-argparse, pylint-django, prospector. He also prepared an upload of python-bcrypt which I requested last month for Django.

Django packaging. I uploaded Django 1.8.8 to jessie-backports.
My stable update for Django 1.7.11 was not handled before the release of Debian 8.3, even though it was filed more than 1.5 months earlier.

Misc stuff. My stable update for debian-handbook was accepted fairly shortly after my last monthly report (thank you Adam!), so I uploaded the package once acked by a release manager. I also sponsored a backports upload of zim prepared by Joerg Desch.

Kali related work

Kernel work. The switch to Linux 4.3 in Kali resulted in a few bug reports that I investigated with the help of #debian-kernel and where I reported my findings back so that the Debian kernel could also benefit from the fixes I uploaded to Kali: first we included a patch for a regression in the vmwgfx video driver used by VMWare virtual machines (which broke the gdm login screen), then we fixed the input-modules udeb to fix support of some Logitech keyboards in debian-installer (see #796096).

Misc work. I made a non-maintainer upload of python-maxminddb to fix #805689 which had been removed from stretch and that we needed in Kali. I also had to NMU libmaxminddb since it was no longer available on armel and we actually support armel in Kali. During that NMU, it occurred to me that dh-exec could offer a feature of “optional install”, that is installing a file that exists but not failing if it doesn’t exist. I filed this as #811064 and it stirred up quite some debate.


See you next month for a new summary of my activities.


on February 01, 2016 07:31 PM



on February 01, 2016 04:51 PM

UI Toolkit for OTA9

Ubuntu App Developer Blog

Hello folks, it’s been a while since the last update came from our busy toolkit ants. As OTA9 came out recently, it is time for a refreshment from our side to show you the latest and greatest cocktail of features our barmen have prepared. Besides the bugfixes we’ve provided, here is a list of the big changes we’ve introduced in OTA9. Enjoy!


One of the most awaited components is the PageHeader. This now makes it possible to have a detached header component which then can be used in a Page, a Rectangle, an Item, wherever you wish. It is composed of a base, plain Header component, which does not have any layout, but handles the default behavior like showing, hiding the header and dealing with the auto-hiding when an attached Flickable is moved. Some part of that API has been introduced in OTA8, but because it wasn’t yet polished enough, we decided not to announce it there and provide more distilled functionality now.

The PageHeader then adds the navigation and the trailing actions through the - hopefully - well known ActionBar component.


Yes, it’s back. Voldemort is back! But this time it is back as a detached component :) The API is pretty similar to PageHeader (it contains a leading and trailing ActionBar), and you can place it wherever you wish. The only restriction so far is that its layout only supports horizontal orientation.

Facelifted Scrollbar

Yes, finally we got a loan headcount to help us out in creating some nice facelift for the Scrollbar. The design follows the same principles we have for the upcoming 16.04 desktop, with the scroll handler residing inside the bar, and having two pointers to drive page up/down scrolling.

This guy also convinced us that we need a Scrollview, like in QtQuick Controls v1, so we can handle the “buddy” scrollbars, the situation when horizontal and vertical scrollbars are needed at the same time and their overlapping should be dealt with. So, we have that one too :) And let's name the barman: Andrea Bernabei aka faenil is the one!

The unified BottomEdge experience

Finally we got a complete design pattern ready for the bottom edge behavior, so it was about time to get a component around the pattern. It can be placed within any component, and its content can be staged, meaning it can be changed while the content is dragged. The content is always loaded asynchronously for now; we will add support to force synchronous loading in upcoming releases.

Focus handling in CheckBox, Switch, Button and ActionBar

Starting now, pressing Tab and Shift+Tab on a keyboard will show a focus ring on components that support it. CheckBox, Switch, Button and ActionBar have this right now, others will follow soon.

Action mnemonics

As we are heading towards the implementation of contextual menus, we are preparing a few features as prerequisite work for the menus. One of these is adding mnemonic handling to Action.

So far there was only one way to define shortcuts for an Action, through the shortcut property. This now can be achieved by specifying the mnemonic in the text property of the Action using the ‘&’ character. This character will then be converted into a shortcut and, if there is a hardware keyboard attached, it will underline the mnemonic.

on February 01, 2016 10:21 AM

I was about to title this “Injecting code, for fun and profit”, until I realized that this may give a different sense than I originally intended… :P

I won’t cover the reasons behind doing such, because I’m pretty sure that if you landed on this article, you would already have a pretty good sense of why you want to do this …. for fun, profit, or both ;)

Anyway, after trying various programs and reading on how to do it manually (not easy!), I came across linux-inject, a program that injects a .so into a running application, similar to how LD_PRELOAD works, except that it can be done while a program is running… and it also doesn’t actually replace any functions either (but see the P.S. at the bottom of this post for a way to do that). In other words, maybe ignore the LD_PRELOAD simile :P

The documentation of it (and a few other programs I tried) was pretty lacking though. And for good reason, the developers probably expect that most users who would be using these kinds of programs wouldn’t be newbies in this field, and would know exactly what to do. Sadly, however, I am not part of this target audience :P It took me a rather long time to figure out what to do, so in hopes that it may help someone else, I’m writing this post! :D

Let’s start by quickly cloning and building it:

git clone https://github.com/gaffe23/linux-inject.git
cd linux-inject

Once that’s done, let’s try the sample example bundled in with the program. Open another terminal (so that you have two free ones), cd to the directory you cloned linux-inject to (e.g. cd ~/workspace/linux-inject), and run ./sample-target.

Back in the first terminal, run sudo ./inject -n sample-target sample-library.so

What this does is that it injects the library sample-library.so to a process by the -name of sample-target. If instead, you want to choose your victim target by their PID, simply use the -p option instead of -n.

But … this might or might not work. Since Linux 3.4, there’s a security module named Yama that can disable ptrace-based code injections (or code injections period, I doubt there is any other way). To allow this to work, you’ll have to run either one of these commands (I prefer the second, for security reasons):

echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope # Allows any process to inject code into any other process started by the same user. Root can access all processes
echo 2 | sudo tee /proc/sys/kernel/yama/ptrace_scope # Only allows root to inject code
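The meaning of each ptrace_scope value can be decoded with a small helper; this is my own sketch based on the kernel's Yama documentation, not something shipped with linux-inject:

```sh
# My own helper (not part of linux-inject) to decode the Yama setting.
describe_ptrace_scope() {
    case "$1" in
        0) echo "classic: any same-user process may be traced" ;;
        1) echo "restricted: only descendants may be traced (default)" ;;
        2) echo "admin-only: CAP_SYS_PTRACE (e.g. sudo) required" ;;
        3) echo "off: no ptrace attach allowed at all" ;;
        *) echo "unknown value: $1" ;;
    esac
}

# Falls back to 1 (the usual default) if the file is absent.
describe_ptrace_scope "$(cat /proc/sys/kernel/yama/ptrace_scope 2>/dev/null || echo 1)"
```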

Try it again, and you will hopefully see “I just got loaded” in-between the “sleeping…” messages.

Before I get to the part about writing your own code to inject, I have to warn you: Some applications (such as VLC) will segfault if you inject code into them (via linux-inject, I don’t know about other programs, this is the first injection program that I managed to get working, period :P). Make sure that you are okay with the possibility of the program crashing when you inject the code.

With that (possibly ominous) warning out of the way, let’s get to writing some code!

#include <stdio.h>

/* The constructor attribute makes this run as soon as the library loads */
__attribute__((constructor))
void hello() {
    puts("Hello world!");
}

If you know C, most of this should be pretty easy to understand. The part that confused me was __attribute__((constructor)). All it does is tell the loader to run this function as soon as the library is loaded. In other words, this is the function that will be run when the code is injected. As you may imagine, the name of the function (in this case, hello) can be whatever you wish.

Compiling is pretty straightforward, nothing out of the ordinary required:

gcc -shared -fPIC -o libhello.so hello.c

Assuming that sample-target is running, let’s try it!

sudo ./inject -n sample-target libhello.so

Amongst the wall of “sleeping…”, you should see “Hello world!” pop up!

There’s a problem with this though: the code interrupts the program flow. If you try looping puts("Hello world!");, it will continually print “Hello world!” (as expected), but the main program will not resume until the injected library has finished running. In other words, you will not see “sleeping…” pop up.

The answer is to run it in a separate thread! So if you change the code to this …

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

void* thread(void* a) {
    while (1) {
        puts("Hello world!");
        sleep(1);    /* don't spam stdout too hard */
    }
    return NULL;
}

__attribute__((constructor))
void hello() {
    pthread_t t;
    pthread_create(&t, NULL, thread, NULL);
}

… it should work, right? Not if you inject it to sample-target. sample-target is not linked to libpthread, and therefore, any function that uses pthread functions will simply not work. Of course, if you link it to libpthread (by adding -lpthread to the linking arguments), it will work fine.

However, let’s keep it as-is, and instead, use a function that linux-inject depends on: __libc_dlopen_mode(). Why not dlopen()? dlopen() requires the program to be linked to libdl, while __libc_dlopen_mode() is included in the standard C library! (glibc’s version of it, anyways)

Here’s the code:

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <dlfcn.h>

/* Forward declare these glibc-internal functions */
void* __libc_dlopen_mode(const char*, int);
void* __libc_dlsym(void*, const char*);
int   __libc_dlclose(void*);

void* thread(void* a) {
    while (1) {
        puts("Hello world!");
        sleep(1);
    }
    return NULL;
}

__attribute__((constructor))
void hello() {
    /* Note libpthread.so.0. For some reason,
       using the symbolic link (libpthread.so) will not work */
    void* pthread_lib = __libc_dlopen_mode("libpthread.so.0", RTLD_LAZY);
    pthread_t t;

    /* pthread_create resolved at runtime through __libc_dlsym */
    int (*pthread_lib_create)(pthread_t*, const pthread_attr_t*,
                              void* (*)(void*), void*);
    *(void**)(&pthread_lib_create) = __libc_dlsym(pthread_lib, "pthread_create");
    pthread_lib_create(&t, NULL, thread, NULL);
}


If you haven’t used the dl* functions before, this code probably looks absolutely crazy. I would try to explain it, but the man pages are quite readable, and do a way better job of explaining than I could ever hope to try.

And on that note, you should (hopefully) be well off to injecting your own code into other processes!

If anything doesn’t make sense, or you need help, or just even to give a thank you (they are really appreciated!!), feel more than free to leave a comment or send me an email! :D And if you enjoy using linux-inject, make sure to thank the author of it as well!!

P.S. What if you want to change a function inside the host process? This tutorial was getting a little long, so instead, I’ll leave you with this: http://www.ars-informatica.com/Root/Code/2010_04_18/LinuxPTrace.aspx and specifically http://www.ars-informatica.com/Root/Code/2010_04_18/Examples/linkerex.c . I’ll try to make a tutorial on this later if someone wants :)

on February 01, 2016 05:44 AM
let’s start charming

This is the basic blueprint of my system for juju charming. I found that this was the quickest and least problematic setup for myself.

Below is the guide to working with this setup. Most of this will apply for working with a local server as well.

While I was working on my first juju charm, I found that the documentation was quite helpful, but I also ran into some recurring issues. As a result, I curated a lot of content, and created notes, which are now in the form of a supplementary guide for those heading down a similar path.


  • Install and setup juju on a single, local desktop system for creating and testing charms
  • Give you basic terminal commands for working with juju
  • Give some tips for troubleshooting the juju environment

Follow the guide on the LEFT, and refer to the RIGHT when necessary.

Terminal commands in this guide will look like this.

Juju guide Troubleshooting

sudo add-apt-repository ppa:juju/stable
sudo apt-get update


Install juju for local environment use:

sudo apt-get install juju-core juju-local
juju generate-config
juju switch local


juju bootstrap

The juju environment should be ready for working in now.


This guide will assume that you are working from the home directory, so please setup in home:

cd ~
mkdir -p charms/trusty
(swap “trusty” for “precise” if necessary)

You can now put charms that you are working on into ~/charms/trusty and deploy them via the local repository method (see below). Each charm will have its own unique directory that should match the charm name.

Install charm-tools for creating new charms, or testing existing ones:

sudo apt-get install charm-tools

Bootstrap Errors:
If you get any errors during bootstrap, then the environment is probably already bootstrapped. You may need to restart the juju db and agent services. This might happen if you reboot the computer (you will notice that the juju commands just hang).

sudo service juju-db-$USER-local start
sudo service juju-agent-$USER-local start

Wait a few minutes for the agent to reconnect.

Destroying environment:

If the whole environment becomes messy or faulty, you can start over.

juju destroy-environment local
You will probably have to enter the super user password. And re-bootstrap.

In the worst case you might have to purge the juju installation and start again:
sudo apt-get purge juju*
sudo rm -rf ~/.juju

Some other errors might require a juju package upgrade.


juju status

This will give you the details of what your current juju environment is doing.
Pay attention to public-address (IP), and current state of your charm. Don’t interact with it until it is “started”.

juju debug-log

A running log of whatever juju is doing. It will show you where charms are at, if there is an error, and when hooks are “completed.”
(You must CTRL C to get out of it.)

Status Checks:
It is important to be patient when checking on the status of charms. Some issues are resolved by waiting. You can check juju status periodically to see changes.

DEPLOYING SERVICES (i.e. “installing” charms)

a) You can deploy any “recommended” charm with:

juju deploy charmName

e.g. juju deploy juju-gui

You can deploy multiple charms without waiting for the previous one to finish.
Just don’t add relations until they are BOTH “started.”

b) If you want to deploy a charm that you are working on locally (one-line command):

juju deploy --repository=/home/$USER/charms/ local:trusty/charmname
e.g. juju deploy --repository=/home/$USER/charms/ local:trusty/diaspora

Replace “trusty” with “precise” if necessary.

c) You can also deploy from personal trunks that haven’t yet been recommended:

juju deploy cs:~launchpadUserId/trusty/charmname

e.g. juju deploy cs:~joe/trusty/ethercalc-6

d) Deploying from the GUI (see GUI section below)

Destroy services (i.e. charm installations):

Maybe you installed the wrong one, or it “failed” to install or configure.
(You should probably destroy relations first.)

juju destroy-service serviceName
e.g. juju destroy-service suitecrm

“Un-Dead” Services ( Can’t Destroy )

Sometimes things are “dying” forever, but don’t actually die because they are in an “error state.”

If a charm/relation is in an “error” state, it will hang indefinitely at each error. You can’t even destroy it.

You can “resolve” the errors until all the hooks have gone through the cycle at which point the thing may die.

juju resolved serviceName/0

e.g. juju resolved suitecrm/0

if you have more than one of the same service, the /# will indicate which one.

*Be sure to spell “resolved” correctly. I never get it right the first try :(

ssh into a service (remember a service is running inside its own “machine” by default)

If you want to go into the virtual machine that your service is running on to fix/break things more:

juju ssh serviceName/0

e.g. juju ssh suitecrm/0

No username or password needed. You have root access with “sudo”. Keep track of where you are (purple circle vs orange circle). Type “exit” to return to the local terminal.


Add relations:

You can link/relate charms to each other if they are compatible. Commonly a database and another service.

juju add-relation charmName1 charmName2

WAIT again. Check the status to see the “x-relation-changed” hook running, etc.

Can’t add relation:

Check the charm’s readme to see if a special syntax is required for the relation.

Generally I like to wait for both services/charms to be in a ready state before adding a relation between them.

Destroy relations:

relation-changed hook failed, or the charms don’t like each other anymore:

juju destroy-relation charm1 charm2

e.g. juju destroy-relation suitecrm mysql

SAMPLE WORKFLOW: Deploying Services
Let’s see if that suitecrm charm is working for you.

Deploy database:

juju deploy mysql

Deploy suitecrm:

juju deploy suitecrm

Run juju status or juju debug-log to see when BOTH charms are done.

Just because it has a public-address does not mean that it’s ready to be used.

Add relation:

juju add-relation suitecrm mysql

Check status…… WAIT!

While you’re waiting… why not check out the readme document.

Access the service:

juju status and get the public-address of suitecrm, then visit in your browser. You should see the login page.

User: Admin
Pass: thisisaTEST!
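The whole sample workflow above, gathered into one sketch script (dry-run, since it needs a bootstrapped environment; the wait step is a comment because “started” is something you watch for in `juju status`):

```shell
# Dry-run: commands are echoed, not executed.
# Set JUJU="juju" to execute for real.
JUJU="echo juju"

$JUJU deploy mysql
$JUJU deploy suitecrm

# ...wait here until 'juju status' shows BOTH units as started...

$JUJU add-relation suitecrm mysql
$JUJU status    # then browse to suitecrm's public-address
```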


JUJU GUI

This will give you a graphical way of working with juju. It is quite magical, but requires manual installation if you are not using juju-quickstart.

juju deploy juju-gui

WAIT. Do a “juju status” to see what stage of deployment it is in.

Run this in your terminal again to get the admin password

cat ~/.juju/environments/local.jenv | grep password

Once started, copy the public-address. Usually 10.0.#.### and visit that in a browser. It will likely complain about an insecure connection. For our purposes you can add the exception.

Login with: admin and the password from the above command.

The GUI is the simplest way to deploy and manage services. It does not provide much debugging information at this time. Most of the usage is pretty self-explanatory.

When you deploy a service it will show the icon with colours to indicate its status (these have been varying lately):

Yellow = wait
Red = Stop. error…

You do have to “commit” the changes to the canvas.

Your first charm (source)

Hopefully this guide has provided you with an acceptable environment for working on your charm(s). Further documentation exists for starting a charm, but I also recommend finding an existing charm that is similar to the one that you want to create so that you can model the structure. More on this process can be seen in an earlier post.

Remember that you are not alone in this project: juju add-relation me community 😉

on February 01, 2016 05:34 AM

The Hybrid Desktop

Jono Bacon

OK, folks, I want to share a random idea that cropped up after a long conversation with Langridge a few weeks back. This is merely food for thought and designed to trigger some discussion.

Today my computing experience is comprised of Ubuntu and Mac OS X. On Ubuntu I am still playing with GNOME Shell and on Mac I am using the standard desktop experience.

I like both. Both have benefits and disadvantages. My Mac has beautiful hardware and anything I plug into it just works out of the box (or has drivers). While I spend most of my life in Chrome and Atom, I use some apps that are not available on Ubuntu (e.g. Bluejeans and Evernote clients). I also find multimedia is just easier and more reliable on my Mac.

My heart will always be with Linux though. I love how slick and simple Shell is and I depend on the huge developer toolchain available to me in Ubuntu. I like how customizable my desktop is and that I can be part of a community that makes the software I use. There is something hugely fulfilling about hanging out with the people who make the tools you use.

So, I have two platforms and use the best of both. The problem is, they feel like two different boxes of things sat on the same shelf. I want to jumble the contents of those boxes together and spread them across the very same shelf.

The Idea

So, imagine this (this is total fantasy, I have no idea if this would be technically feasible.)

You want the very best computing experience, so you first go out and buy a Mac. They have arguably the nicest overall hardware combo (looks, usability, battery etc) out there.

You then download a distribution from the Internet. This is shipped as a .dmg and you install it. It then proceeds to install a bunch of software on your computer. This includes things such as:

  • GNOME Shell
  • All the GNOME 3 apps
  • Various command line tools commonly used on Linux
  • An ability to install Linux packages (e.g. Debian packages, RPMs, snaps) natively

When you fire up the distribution, GNOME Shell appears (or Unity, KDE, Elementary etc) and it is running natively on the Mac, full screen like you would see on Linux. For all intents and purposes it looks and feels like a Linux box, but it is running on top of Mac OS X. This means hardware issues (particularly hardware that needs specific drivers) go away.

Because shell is native it integrates with the Mac side of the fence. All the Mac applications can be browsed and started from Shell. Nautilus shows your Mac filesystem.

If you want to install more software you can use something such as apt-get, snappy, or another service. Everything is pulled in and available natively.

Of course, there will be some integration points where this may not work (e.g. alt-tab might not be able to display Shell apps as well as Mac apps), but importantly you can use your favorite Linux desktop as your main desktop yet still use your favorite Mac apps and features.

I think this could bring a number of benefits:

  • It would open up a huge userbase as a potential audience. Switching to Linux is a big deal for most people. Why not bring the goodness to the Mac userbase?
  • It could be a great opportunity for smaller desktops to differentiate (e.g. Elementary).
  • It could be a great way to introduce people to open source in a more accessible way (it doesn’t require a new OS).
  • It could potentially bring lots of new developers to projects such as GNOME, Unity, KDE, or Elementary.
  • It could significantly increase the level of testing, translations and other supplemental services due to more people being able to play with it.

Of course, from a purely Free Software perspective it could be seen as a step back. Then again, with Darwin being open source and the desktop and apps you install in the distribution being open source, it would be a mostly free platform. It wouldn’t be free in the eyes of the FSF, but then again, neither is Ubuntu. 😉

So, again, just wanted to throw the idea out there to spur some discussion. I think it could be a great project to see. It wouldn’t replace any of the existing Linux distros, but I think it could bring an influx of additional folks over to the open source desktops.

So, two questions for you all to respond to:

  1. What do you think? Could it be an interesting project?
  2. If so, technically how do you think this could be accomplished?
on February 01, 2016 03:17 AM
With a slew of updates and a new build system, Catfish 1.3.4 is now available! This update fixes a number of bugs, adds initial support for PolicyKit, and introduces a new PPA for Ubuntu users. What’s New? New Features: Initial PolicyKit integration for requesting administrative rights to update the search database. Bug Fixes: Fixes for […]
on February 01, 2016 02:41 AM

Today, I added a new wiki page in the Ubuntu Membership board area called Best Practices. This page will hold guides on how to apply, what to expect, etc., from those who have applied in the past. Right now, it only has the blog post that I wrote about the lessons I learned when I applied around this time last year.

Hopefully this page can help the new applicants.

P.S. Thank you wxl for this idea.

on February 01, 2016 12:41 AM

January 31, 2016

The Mythbuntu team would like some feedback on our current MythTV theme. We would appreciate it if you filled out this survey (no personal data is collected) whether you use the theme or not.

on January 31, 2016 06:27 PM


Jonathan Riddell

KDE people getting to know our Gnome friends. The Gnome chap gave me a big hug just after, so it must have gone well, whatever they were talking about.

Ruphy on WikiToLearn one of the more stylish speakers of the day

Rasterman gave a talk on Enlightenment and how it’s being ported to Wayland for use in Tizen projects and more. Turns out Rasterman is a real person called Carsten, good speaker too.

Hallway track

Paul holds court to discuss Project Kobra. No, I’ve no idea.

Stephen Kelly on his CMake addon CMakeDaemon which lets IDEs understand CMake files for code completion and highlighting goodness.

It’s the KDE neon launch party, what a happy bunch.

on January 31, 2016 05:45 PM
Over the past month I've been finding the odd moments [1] to add some small improvements and fix a few bugs to pagemon (a tool to monitor process memory).  The original code went from a sketchy proof of concept prototype to a somewhat more usable tool in a few weeks, so my main concern recently was to clean up the code and make it more efficient.

With the use of tools such as valgrind's cachegrind and perf I was able to work on some of the code hot-spots [2] and reduce it from ~50-60% CPU down to 5-9% CPU utilisation on my laptop, so it's definitely more machine friendly now.  In addition I've added the following small features:
  • Now one can specify the name of a process to monitor as well as the PID.  This also allows one to run pagemon on itself(!), which is a bit meta.
  • Perf events showing Page Faults and Kernel Page Allocates and Frees, toggled on/off with the 'p' key.
  • Improved and snappier clean up and exit when a monitored process exits.
  • Far more efficient page map reading and rendering.
  • Out of Memory (OOM) scores added to VM statistics window.
  • Process activity (busy, sleeping, etc.) added to the VM statistics window.
  • Zoom mode min/max with '[' (min) and ']' (max) keys.
  • Close pop-up windows with key 'c'.
  • Improved handling of rapid map expansion and shrinking.
  • Jump to end of map using 'End' key.
  • Improve the man page.
I've tried to keep the tool small and focused and I don't want feature bloat to make it unwieldy and overly complex.  "Do one job, and do it well" is the philosophy behind pagemon. At just 1500 lines of C, it is as complex as I want it to be for now.

Version 0.01.08 should be hitting the Ubuntu 16.04 Xenial Xerus archive in the next 24 hours or so.  The latest version is also in my PPA (ppa:colin-king/pagemon), built for Trusty, Vivid, Wily and Xenial.
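The usual PPA install sequence would apply here (the PPA name is from the post; the commands are the standard Ubuntu pattern, shown dry-run so nothing is executed by accident):

```shell
# Dry-run: commands are echoed with their "sudo" prefix, not executed.
# Set RUN="sudo" to actually install on an Ubuntu system.
RUN="echo sudo"

$RUN add-apt-repository ppa:colin-king/pagemon
$RUN apt-get update
$RUN apt-get install pagemon
```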

Pagemon is useful for spotting unexpected memory activity, and it is just interesting watching the behaviour of memory-hungry processes such as web browsers and virtual machines.

[1] Mainly very late at night when I can't sleep (but that's another story...).  The git log says it all.
[2] Reading in /proc/$PID/maps and efficiently reading per page data from /proc/$PID/pagemap
on January 31, 2016 05:09 PM

Well yes, we don't only review and test products like computers: today we are also in the car industry, testing the new Opel Astra Elective, which supports Android Auto.

First of all, before I start, I want to thank my friend Davide, who works in an Opel workshop, for all the patience and availability he gave me. Really, thank you, because he made this review possible.
Of course, I tested all of this with his smartphone (a OnePlus Two) and mine (a Nexus 5). Screenshots were made by him :)

Details about new Opel Astra Elective

Unfortunately, in this first “episode” I will focus on the use of Android Auto but, if you want more detailed information about the car itself, you can go to the Opel website: http://www.opel.com/

What is Android Auto?

I don't quite know how to put it.. Android Auto is everything and nothing…

What I mean, informally, is that Android Auto is what we see on our car's display..
It looks like an ordinary operating system where, by default, separate applications are installed, which you might think you have to update to fix bugs, perhaps with a trip to your trusted mechanic.

No, you don't have to do any of that. It's only a sort of “screen mirroring”, or better: you see on your car's display what you would see on your smartphone's screen. You may ask, “what about apps?”.
No problem with apps: they run directly on your smartphone! It sounds strange, doesn't it?

So ok, you might still not fully grasp what I'm saying and, of course, what you can do with Android Auto.
To give an example, think of bringing a modern technology system (including the functionality of your smartphone) into any car that has an Android Auto receiver.
After installing the official application from the Play Store, you will be able to open, through the display, supported apps such as Spotify and WhatsApp, standard functions like messages and calls, Google Maps, Google Play Music and many more.
Of course, to get going you have to connect your smartphone to your car using a USB cable.

Exploring Android Auto

If we disable the automatic entry into Android Auto when we connect the phone, or we exit this mode without disconnecting the phone from USB, the “Transmission” icon becomes the Android Auto one.

1_Android_auto_opelastra(Click image to enlarge)

After connecting, you see a short, fast tutorial that explains how to use Android Auto.
Of course, if you haven't installed the “Android Auto” application, you will be asked to do so through the Play Store.

The first thing that appears after the connection is this:

2_Android_auto_opelastra(Click image to enlarge)

Here is the small tutorial where the app explains the first steps.. On this screen, Android Auto says that we can press a button (available on the steering wheel) to activate voice commands (this is where the Google Now functionality comes in).

3_Android_auto_opelastra(Click image to enlarge)

Don't like the button, or did it stop working because we broke it? No problem: we can also use voice commands by pressing the microphone icon at the top right.

4_Android_auto_opelastra(Click image to enlarge)

And here the app shows us the menu that plays the role of a launcher. It gives us access to the various applications: navigation, phone, music, etc.
For navigation, Google Maps is used by default, Google Play Music for our music, and whatever default app we have for everything else.

Of course we can use different messaging apps like WhatsApp, Telegram and so on, but only if they are compatible with Android Auto. (More will be soon.)


(Click image to enlarge)

When everything has loaded (and the warnings are accepted), we are greeted by the first screen, which is what we see in Google Now; so what we see on our phone, we will see on the car's display..

In my case, I see that I can start navigation to drive to Modena, or choose the last address I entered in Google Maps. Other information displayed includes traffic, and how long the trip would take.

7_Android_auto_opelastra(Click image to enlarge)

Don't like Google Play Music, or have no internet connection? No problem, we can use the default Opel radio system. Here's how it looks.
Note that we can also see the weather forecast.. It's definitely Google Now!

8_Android_auto_opelastra(Click image to enlarge)

Touching the “Phone” icon, we can see our address book. If we want, we can also type in a phone number and start a call.
Naturally, when available, we can use voice commands by pressing the button on the steering wheel or on the display.

9_Android_auto_opelastra(Click image to enlarge)

Starting navigation, as said, we get the 3D view of Google Maps, and here too we can enter an address by talking (through voice commands) or type it on the touchscreen.

10_Android_auto_opelastra(Click image to enlarge)

We were talking about choosing a different application instead of the default..
When we press a button in the menu, if more apps are available for selection, we will be asked which one we want to use before the default opens.
For example, pressing the music icon, we will be asked whether to open Google Play Music or Spotify.

11_Android_auto_opelastra(Click image to enlarge)

Yes, Spotify is available (we can see it among the choices) and we choose it!
What can I say: we have convergence across devices. In fact, by installing a single application, we get compatibility with both the phone and Android Auto.
Spotify, of course, looks really similar to how it appears on our phone.


(Click image to enlarge)

WhatsApp is also available with Android Auto, and you can see how a small notification appears when we receive a message.
Touching the notification (on the display), the system reads the message to us via TTS (text-to-speech), and we can obviously reply thanks to the voice system.

Alternatively, we can also send a message by saying “Send message” followed by the name of the contact.

14_Android_auto_opelastra(Click image to enlarge)

Received a message and need to reply without pressing keys on the touchscreen? If available, we can answer with a quick standard reply such as “I'm driving, sorry!”, as WhatsApp does.

15_Android_auto_opelastra(Click image to enlarge)

Don't like Android Auto, or don't want to use it anymore? You're free to exit (even with the smartphone still connected).

16_Android_auto_opelastra(Click image to enlarge)


What can I say: in my honest opinion it's really intriguing and nice.. If the maps in Google Maps were kept as up to date as TomTom's, it would be truly perfect.
On top of that, the possibility of developing your own applications for Android Auto, or using existing ones for music, messages, and so on.. It's something really nice.

As for me, I like it, and it deserves all the best. Really, congratulations to Google!

Of course, if you're interested in developing apps, having more details, or seeing who supports Android Auto, you can visit this website: https://www.android.com/intl/en/auto/.

Note: if you find something incorrect in what I said, or in my less-than-perfect English, please write in the comments.. Thank you!

The article Test Android Auto – New Opel Astra [ENG] appeared first on Mirko Pizii | Grab The Penguin.

on January 31, 2016 04:18 PM
Before we begin, thank you. Thank you, all, thank you, thank you…
Dr Hook, The Millionaire

Bad Voltage on stage

This has been a busy few weeks. Culminating in me becoming forty years of age, of which more later.

I went to the US to see Jono and Erica. Watched The Martian on the plane on the way out. It is a very excellent film indeed, and if you have not seen it, go and see it. And I got to hang out in Walnut Creek for a few days; I can recommend Sasa, Ike’s Sandwiches1, and Library on Main2 if you find yourself in town. And see my friends, of course. Dr3 Matthew Garrett introduced me to Longitude, the best cocktail bar in Oakland. And I scored a new laptop.

Aside: amusing story about the laptop. A few months ago I mentioned to Jono that my dad’s phone (a Moto G) was dying, and he said, hey, I’ve got a Samsung Galaxy S5 you can have to give to him if you want. Cool, said I, and handed over fifty euros for it4. Sadly, on returning to the UK, I discovered that the phone was locked to its US T-Mobile SIM and so my dad couldn’t use it. So this trip saw its return to the US so Jono could get it unlocked. He rings up T-Mobile USA, and the conversation went something like this5:

Jono: I would like you to give me the unlock code for this here Galaxy S5 that I bought from you
Helpful T-Mobile USA person: No, sir, we can’t do that
J: This is ridiculous. You phone operators are all terrible and try to lock in your customers. I know that phones are unlockable, and you’re just keeping this secret in a further attempt to deny me my rights over my purchased hardware. I can’t believe you’d keep lying about this; give me the unlock code, which I’m technical enough to know exists.
TM: No, sir, we’re not refusing to unlock the phone because we’re oppressing your hardware rights. We’re refusing to unlock the phone because you haven’t finished paying for it yet.
J: Really?
TM: Yup. You still have twelve months to go on the contract.
J: … oh. (turns to me) Do you want that laptop you borrowed instead?

So, result. Every one’s a winner. And it meant I had a machine that didn’t have to be plugged in all the time; my poor Dell M1330 had finally given up the ghost. Nice one, Jono.6

The real purpose for being in the US7 was to travel to SCaLE 14x. Did a couple of talks about Ubuntu phone stuff at the colocated “Ubucon”; one on adding analytics and advertising to Ubuntu phone apps (SCaLE video8) and one with Alan Pope about Marvin, our cloud testing service for phone apps (SCaLE video) (footnote ditto). I was one of the panelists on The Weakest Geek, a “quiz show” where as far as I can tell the rules are that quizmaster extraordinaire Gareth Greenaway asks various people increasingly hard questions about tech and sci-fi, and then Ruth Suehle wins. And there was the main purpose for my travel to SCaLE: Bad Voltage Live, our third live show and second at SCaLE. Video will be out tomorrow. It was a fun show to do, although rather dogged by problems with the AV. Still, we soldiered on, and at the after party a number of people said that they thought that our struggles with the audio and a Mac9 added to the comedy, so that’s OK. I’d like to say a special thank you to Linode for flying me out to LA10, the other sponsors for making the show possible, and to Tara for putting up with the dodgy software I wrote to run the Family Feud/Family Fortunes scoreboard for the show.

And the SCaLE team gave us some jerseys with our names on!

The four Bad Voltage presenters, being presented with branded SCaLE jerseys with names on the back

I feel a bit guilty about that. You see, we love SCaLE. The team try really hard to support Bad Voltage, and in return we use them shamelessly. We pressed Ilan into service as our audience fluffer, and made him wear lederhosen11, but we showed our gratitude with a decent bottle of bourbon in return for all his hard work. And we pressed Gareth into service as our Family Fortunes quizmaster, and made him wear a spangly gold quizmaster suit, and then showed our gratitude by getting him… a pink Hello Kitty stepstool so he can be even taller, and a pink Hello Kitty hat with movable ears. Sorry, Gareth. We love ya, buddy. And thank you both for everything.

But the real event for me was a bit at the end of the live show. You see, as of yesterday, as I post this, I am forty years old.

Forty. Cool, eh?

That makes all of this the 2016 iteration of the famous once-a-year birthday post (now in its 12th great year!), but my birthday this year was rather special. It appears to be what Cristian referred to as a “birthday week”, and I am perfectly happy with that. It started during the live show, where Jono had obviously done a whole bunch of behind the scenes hassling of lots of people to have them wish me happy birthday on video. I managed to resist the urge to actually cry on stage, but… not by much. Then a whole bunch of us went out on the Saturday night and drank yards of ale and then hit some sort of all-night pie shop12. The day before my birthday13 a whole bunch of us went out to Rub Smokehouse14 and then had celebratory beers15. Niamh and my parents and I went to Amantia for tapas16 and Niamh and I are going to Gordon Ramsay’s maze for sushi. I have a gorgeous new watch (a Roamer Ceraline Saphira, which I have not stopped constantly looking at since the moment it was strapped to my wrist, nor have I stopped telling people the time when they don’t want to know it). A copy of Watchmen which is signed by Dave Gibbons!17 A little model of Ron Weasley!18 A potato!

So I would like to say thank you. To all the people on Facebook, because once a year I get a million emails of people wishing me many happy returns. To Sam and Andrew. To the people on my birthday video: Rob McQueen, Matthew Walster, Jorge Castro, Rikki Endsley19, Ted Haeger, Adam Sweet20, Ron Wellsted, Bill21, Jono and Erica’s parents, Tarus Balog22, Ronnie Trommer, Jessi Hustace and the OpenNMS team, Erica Bacon23, Michael Hall24, Christian Heilmann, Cristian Parrino25, Alan Pope, Bruce Lawson, and Niamh. To the Bad Voltage team: Jono, Jeremy, and Bryan. To the Saturday night partiers26: Jono, Jeremy, Tara, Ilan Maru, Hannah Anderson, Ian Santopietro, popey, mhall, Pete and Amber Graner. To the Birmingham crew: Dan, Ebz, Kev, Matt Somerville27, Matt Machell, Charles, and Rich. To Mike, who is skiing. To Andy and Tom, who I’m seeing next weekend.28 To Jono, for everything, including a rather lovely blog post. To mum and dad. And to Niamh.

I have the best friends. I really do.

Rather enjoying being forty.

It’s 11.27, by the way.

my gorgeous new watch showing the time

  1. stupid sandwich names, great actual sandwiches
  2. used to be Eleve
  3. this is important
  4. technically, I paid for half his lederhosen instead
  5. a certain amount of artistic licence is, I admit, taken here
  6. more on the ‘nice one Jono’ front later, too
  7. sandwiches and cocktails aside
  8. with rather dodgy sound; more on the AV issues later
  9. followed by brutal berating from aforementioned Dr Garrett during Wrong in 60 Seconds
  10. never did get to go to JPL in Pasadena. Or Buffalo Wild Wings
  11. we find lederhosen way funnier than I think we should
  12. where I may have left my hat
  13. we skip over here a week of me suffering from the most brutal jet lag I have ever experienced. It was not a pretty sight.
  14. not actually recommended, unless you’re a professional stodge appreciator
  15. and Jura whisky. And Sambuca. And some other things
  16. also not actually recommended, it turns out. Go to La Tasca instead.
  17. nice one Charles!
  18. nice one Dan and Ebz!
  19. who, I am told, got stuck in that cemetery and couldn’t get out; cheers, Rikki
  20. I am now officially “the big ginger web lothario”
  21. sorry we haven’t managed to get together for beers, pal. It will happen. Promise. Plus, you’ve only got six months to go now…
  22. who is fifty! congrats!
  23. the “nearly crying on stage” thing? that was you, Erica
  24. I promise to look at the JS scopes stuff once I’m running 16.04
  25. I like the “decade of wisdom” thing. That sounds a lot like me, that
  26. hope you all enjoyed the gorilla joke
  27. who fixed traintimes.org.uk as a birthday present!
  28. as the birthday week turns into a birthday fortnight
on January 31, 2016 03:30 PM
Meet Walter Lapchynski, aka wxl! How did you first get started using Linux? What distros, software or resources did you use while learning? I started rolling my own kernels in Slackware on an old ThinkPad when there was really only one page on the Internet *briefly* dealing with the subject of running Linux on laptops. […]
on January 31, 2016 02:52 PM
I usually don't code while travelling :P That way I don't need to carry my laptop.
But if I can, I like to write a journal on my blog: upload a few pictures and share thoughts, mainly with myself.

The issue with a phone is that I type over ~450 keystrokes per minute on a real keyboard, and I really hate typing on a phone screen with only one finger.
So I bought a Bluetooth keyboard (€7.70) and a mouse (€7.90). They arrived this week and I discovered a new Ubuntu Phone :O

Yes, I had watched videos and pictures of convergence on the web, but when I tried it myself on my BQ E4.5, everything changed :O :O

Terminal maximized in the background, Twitter & Music as the foreground windows, and the mouse cursor revealing the Unity launcher

From now on, I'll travel with the keyboard, the mouse and the phone. Now my phone is my real, small and portable PC :))
on January 31, 2016 10:34 AM

An intro to git gui

Aurélien Gâteau

I have been using git for years now, I think I can say I know the tool quite well, yet I do all my commits with git gui. This often surprises my coworkers because a) it looks a bit ugly and b) it's a graphical application! The horror!

This is what it looks like:

git gui screenshot

Yes, it's indeed a bit ugly, thanks to it using tcl-tk, just like its most widely known brother, gitk.

On the left side you can see two lists: the top list contains all your unstaged changes, the bottom list contains all your staged changes (ie: files which have been added with git add, or removed with git rm).

The right side contains a large view showing the changes of the currently selected file, and at the bottom a text area where you can enter your commit message, as well as a few widgets to trigger different actions.

How does one use it? Easy: to stage a file for commit, click on the icon of the file in the top-left list: the file disappears from the top list and appears in the bottom one. If you click on the name of the file, it gets selected, and you can see its changes in the main area.

Why use git gui instead of the command line?

For a few reasons: first it provides an easy way to review your commits before they get in. I have often caught a debug line I forgot to remove or some added trailing spaces while going through my changes this way.

Second, and most importantly, it is much easier to do partial commits with git gui. Partial commits, if you are not familiar with them, are the ability to commit only parts of a file. This (slightly controversial) feature is useful to clean up a commit or to break a set of unrelated changes into separate commits. Often necessary when I land back on Earth after a frenzied coding session. It's also useful to split commits when doing an interactive rebase.

The command-line way to do this is git add -p, but that is really tedious because it shows one hunk at a time: you don't get a global view of all the changes. With git gui you can scroll through the diff, right-click on a change and select "Stage Hunk For Commit". If you change your mind, select the file in the Staged list, right-click the staged hunk and select "Unstage Hunk From Commit".

It's even better when you want to do finer grained commits and stage only lines: with git add -p you have to edit diffs. That is really not efficient and very error prone. This is where git gui really shines: select the lines you want to commit (either additions or removals), right click and select "Stage Lines For Commit". Done.

In this little animation I create two commits from my current changes:

Creating partial commits

It works the other way as well: stage the whole file or a few hunks, then right-click on that debug line or that extra blank line and select "Unstage Line From Commit".

Here I remove a debug line after staging all changes:

Removing a debug line

"But it's a graphical application, it can't be as fast as the command line!"

It turns out that, at least for me, git gui is fast enough. It starts up instantly and has a set of shortcuts which makes it possible to do many operations without using the mouse. Here is the list of shortcuts I use most often:

  • Ctrl+T/Ctrl+U: Stage/unstage selected file
  • Ctrl+I: Stage all files (asks if you want to add new files if there are any)
  • Ctrl+J: Revert changes
  • Ctrl+Enter: Commit
  • Ctrl+P: Push

What about other frontends?

I must confess I haven't tried many other frontends. I played a bit with git cola a few years ago but did not feel as productive as with git gui. There are probably nicer alternatives out there, but one of the main advantages of git gui is that it is an official part of Git, so it is available wherever Git is: I have used git gui on Windows and Mac OS X, and it works just like on Linux.

on January 31, 2016 06:54 AM

January 30, 2016

Ubuntu at SCALE14x

Elizabeth K. Joseph

I spent a long weekend in Pasadena from January 21-24th to participate in the 14th Annual Southern California Linux Expo (SCALE14x). As I mentioned previously, a major part of my attendance was focused on the Ubuntu-related activities. Wednesday evening I joined a whole crowd of my Ubuntu friends at a pre-UbuCon meet-and-greet at a wine bar (all ages were welcome) near the venue.

It was at this meet-and-greet where I first got to see several folks I hadn’t seen since the last Ubuntu Developer Summit (UDS) back in Copenhagen in 2012. Others I had seen recently at other open source conferences and still more I was meeting for the first time, amazing contributors to our community who I’d only had the opportunity to get to know online. It was at that event that the excitement and energy I used to get from UDS came rushing back to me. I knew this was going to be a great event.

The official start of this first UbuCon Summit began Thursday morning. I arrived bright and early to say hello to everyone, and finally got to meet Scarlett Clark of the Kubuntu development team. If you aren’t familiar with her blog and are interested in the latest updates to Kubuntu, I highly recommend it. She’s also one of the newly elected members of the Ubuntu Community Council.

Me and Scarlett Clark

After morning introductions, we filed into the ballroom where the keynote and plenaries would take place. It was the biggest ballroom of the conference venue! The SCALE crew really came through with their support of this event; it was quite impressive. Plus, the room was quite full for the opening and Mark Shuttleworth’s keynote, particularly when you consider that it was a Thursday morning. Richard Gaskin and Nathan Haines, familiar names to anyone who has been to previous UbuCon events at SCALE, opened the conference with a welcome and details about how the event had grown this year. After handling logistics and other housekeeping details, they quickly went through how the event would work, with a keynote, a series of plenaries, and then split User and Developer tracks in the afternoon. They concluded by thanking the sponsors and the various volunteers and Canonical staff who made the UbuCon Summit a reality.

UbuCon Summit introduction by Richard Gaskin and Nathan Haines

The welcome, Mark’s keynote and the morning plenaries are available on YouTube, starting here and continuing here.

Mark’s keynote began by acknowledging the technical and preference diversity in our community, from desktop environments to devices. He then reflected upon his own history in Linux and open source, starting in university when he first installed Linux from a pile of floppies. It’s been an interesting progression to see where things were twenty years ago, and how many of the major tech headlines today are driven by Linux and Ubuntu, from advancements in cloud technology to self-driving cars. He continued by talking about success on a variety of platforms: from the tiny Raspberry Pi 2 to supercomputers and the cloud, Ubuntu has really made it.

With this success story, he leapt into the theme of the rest of his talk: “Great, let’s change.” He dove into the idea that today’s complex, multi-system infrastructure software is “too big for apt-get” as you consider relationships and dependencies between services. Juju is what he called “apt-get for the cloud/cluster” and explained how LXD, the next evolution of LXC running as a daemon, gives developers the ability to run a series of containers to test deployments of some of these complex systems. This means that just like the developers and systems engineers of the 90s and 00s were able to use open source software to deploy demonstrations of standalone software on our laptops, containers allow the students of today to deploy complex systems locally.

He then talked about Snappy, the new software packaging tooling. His premise was that even a six-month release cycle is too long now that many people continuously deliver software from sources like GitHub. Many systems have a solid foundation of packages we rely upon, plus a handful of newer tools that can be packaged quickly with Snappy rather than going through the traditional Debian packaging route, which is considerably more complicated. It was interesting to listen to this; as a former Debian package maintainer myself, I always wanted to believe that we could teach everyone to do software packaging. However, watching these efforts play out in the community’s work with app developers, it became clear that between developers’ reluctance and the backlog felt by the App Review Board, it really wasn’t working. Snappy moves us away from PyPI, PPAs and the like toward an easier, but still packaged and managed, way to handle software on our systems. It’ll be fascinating to see how this goes.

Mark Shuttleworth on Snappy

He concluded by talking about the popular Internet of Things (IoT) and how Ubuntu Core with Snappy is so important here. DJI, “the market leader in easy-to-fly drones and aerial photography systems,” now offers an Ubuntu-driven drone. The Open Source Robotics Institute uses Ubuntu. GE is designing smart kitchen appliances powered by Ubuntu, and many (all?) of the known self-driving cars use Ubuntu somewhere inside them. There was also a business model here: a company produces the hardware with a minimal feature set, sells a more advanced version itself, and industry-expert third parties build further upon it to sell industry-specific software.

After Mark’s talk there were a series of plenaries that took place in the same room.

First up was Sergio Schvezov, who followed Mark’s keynote nicely with a demo of Snapcraft, the tool used to turn software into a .snap package for Ubuntu Core.

Next up was Jorge Castro, who gave a great talk about the state of gaming on Ubuntu, which he summed up as “not bad.” Having just had this discussion with my sister, the timing was great for me. On the day of his talk, there were 1,516 games on Steam that would run natively on Linux, a nice selection of which are modern games that are new and exciting across multiple platforms today. He acknowledged the pre-made Steam Boxes but also made the case for homebrewed Steam systems with graphics card recommendations: Intel does fine, AMD’s open source drivers still lag behind in high-performance use, and several models of NVidia cards do very well today (from low to high quality and cost: 750Ti, 950, 960, 970, 980, 980Ti). He also passed a Linux-compatible controller around the audience. He concluded by talking about some issues remaining with Linux gaming, including driver regressions that cause degraded performance, the general performance gap when compared to some other gaming systems, and the lingering stigma that there are “no games” on Linux, which talks like this seek to reverse.

Plenaries continued with Didier Roche introducing Ubuntu Make, a project that makes turning Ubuntu into a developer platform with several SDKs much easier, reducing developers’ bootstrapping time. His blog has a lot of great posts on the tooling.

The last talk of the morning was by Scarlett Clark, who gave us a quick update on Kubuntu development, explaining that the team had recently joined forces with the KDE packagers in Debian to share resources in their work more effectively.

It was then time for the group photo! It included my xerus, and I had a nice chat (and selfie!) with Carla Sella as we settled in for the picture.

Me and Carla Sella

In the afternoon I attended the User track, starting off with Nathan Haines on The Future of Ubuntu. In this talk he discussed what convergence of devices means for Ubuntu and warded off concerns that the work on the phone was done in isolation and wouldn’t help the traditional (desktop, server) Ubuntu products. With Ubuntu Core and Snappy, he explained, all the work done on phones is being rolled back into progress on the other systems, and even the IoT devices, that will use them in the future. Following Nathan was the Ubuntu Redux talk by Jono Bacon. His talk could largely be divided into two parts: the history of Ubuntu and how we got here, and five recommendations for the Ubuntu community. He had lots of great stories and photos, including one of a very young Mark, and moved right along to today with Unity 8 and the convergence story. His five recommendations were interesting, so I’ll repeat them here:

  1. Focus on core opportunities. Ubuntu can run anywhere, but should it? We have finite resources, focus efforts accordingly.
  2. Rethink what community in Ubuntu is. We didn’t always have Juju charmers and app developers, but they are now a major part of our community. Understand that our community has changed and adjust our vision as to where we can find new contributors.
  3. Get together more in person. The Ubuntu Online Summit works for technical work, but we’ve missed out on the human component. In person interactions are not just a “nice to have” in communities, they’re essential.
  4. Reduce ambiguity. In a trend that would continue in our leadership panel the next day, some folks (including Jono) argue that there is still ambiguity around Intellectual Property and licensing in the Ubuntu community (Mark disagrees).
  5. Understand people who are not us.

Nathan Haines on The Future of Ubuntu

The next presentation was my own, on Building a career with Ubuntu and FOSS, where I drew upon examples from my own career and the careers of others I’ve worked with in the Ubuntu community to share recommendations for folks looking to contribute to Ubuntu and FOSS as a way to develop skills and tools for their careers. Slides here (PDF). David Planella on The Ubuntu phone and the road to convergence followed my talk. He walked audience members through the launch plan for the phone, covering the device launch with BQ for Ubuntu enthusiasts, the second phase for “innovators and early adopters” in which they released the Meizu devices in Europe and China, and how they’re tackling phase three: general customer availability. He talked about the Ubuntu Phone Insiders, a group of 30 early-access individuals from a diverse crowd who provided early feedback and shared details (via blog posts and social media) with others. He then gave a tour of the phones themselves, including how scopes (“like mini search engines on your phone”) change how people interact with their device. He concluded with a note about the availability of the SDK for phones at developer.ubuntu.com, and that they’re working to make it easy for developers to upload and distribute their applications.

Video from the User track can be found here, and video from the concurrent Developer track can be found here. If you’re scanning through these to find a specific talk, note that each is one hour long.

Presentations for the first day concluded with a Q&A with Richard Gaskin and Nathan Haines back in the main ballroom. Then it was off to the Thursday evening drinks and appetizers at Porto Alegre Churrascaria! Once again, a great opportunity to catch up with friends old and new in the community. It was great running into Amber Graner and getting to talk about our respective paid roles these days, and even touched upon key things we worked on in the Ubuntu community that helped us get there.

The UbuCon Summit activities continued after a SCALE keynote with an Ubuntu Leadership panel which I participated in along with Oliver Ries, David Planella, Daniel Holbach, Michael Hall, Nathan Haines and José Antonio Rey with Jono Bacon as a moderator. Jono had prepared a great set of questions, exploring the strengths and weaknesses in our community, things we’re excited about and eager to work on and more. We also took questions from the audience. Video for this panel and the plenaries that followed, which I had to miss in order to give a talk elsewhere, are available here. The link takes you to 1hr 50min in, where the Leadership panel begins.

The afternoon took us off into unconference mode, which allowed us to direct our own conference setup. Due to the aforementioned talk I was giving elsewhere, I wasn’t able to participate in the scheduling, but I did attend a couple of sessions in the afternoon. The first, proposed by Brendan Perrine, covered strategies for keeping the Ubuntu documentation up to date, and also touched on the status of the Community Help wiki, which has been locked down due to spam for nearly a month(!). I then joined cm-t arudy to chat about an idea the French team is floating around to have people quickly share stories and photos about Ubuntu in some kind of community forum. The conversation was a bit tool-heavy, but everyone was also conscious of how it would need to be moderated. I hope to see something come of this; it sounds like a great project.

With the UbuCon Summit coming to a close, the booth was the next great task for the team. I couldn’t make time to participate this year, but the booth featured lots of great goodies and a fleet of contributors working the booth who were doing a fantastic job of talking to people as the crowds continued to flow through each day.

Huge thanks to everyone who spent months preparing for the UbuCon Summit and the booth in the SCALE14x expo hall. It was a really amazing event that I was proud to be a part of. I’m already looking forward to the next one!

Finally, I took responsibility for the @ubuntu_us_ca Twitter account throughout the weekend. It was the first time I’ve done such a comprehensive live-tweeting of an event from a team/project account. I recommend a browse through the tweets if you’re interested in hearing more from other great people live-tweeting the event. It was a lot of fun, but also surprisingly exhausting!

More photos from my time at SCALE14x (including lots of Ubuntu ones!) here: https://www.flickr.com/photos/pleia2/albums/72157663821501532

on January 30, 2016 11:40 PM

Master Spotlight: Na3iL

Linux Padawan

Today we interview Na3iL.   1) How did you first get started using Linux? What distros, software or resources did you use while learning? When I was 12 years old, I was very interested in learning about security and hacking. Thus, after a while spent searching the web for a good OS that protects you while […]
on January 30, 2016 11:05 PM

About 15 years ago I met Stuart ‘Aq’ Langridge when he walked into the new Wolverhampton Linux Users Group I had just started with his trademark bombastic personality and humor. Ever since those first interactions we have become really close friends.

Today Stuart turns 40 and I just wanted to share a few words about how remarkable a human being he is.

Many of you who have listened to Stuart on Bad Voltage, seen him speak, worked with him, or socialized with him will know him for his larger than life personality. He is funny, warm, and passionate about his family, friends, and technology. He is opinionated, and many of you will know him for the amusing, insightful, and tremendously articulate way in which he expresses his views.

He is remarkably talented and has an incredible level of insight and perspective. He is not just a brilliant programmer and software architect, but he has a deft knowledge and understanding of people, how they work together, and the driving forces behind human interaction. What I have always admired is that while bombastic in his views, he is always open to fresh ideas and new perspectives. For him life is a journey and new ways of looking at the road are truly thrilling for him.

As I have grown as a person in my career, with my family, and particularly when moving to America, he has always supported yet challenged me. He is one of those rare friends that can enthusiastically validate great steps forward yet, with the same enthusiasm, illustrate mistakes too. I love the fact that we have a relationship that can be so open and honest, yet underlined with respect. It is his personality, understanding, humor, thoughtfulness, care, and mentorship that will always make him one of my favorite people in the world.

Stuart, I love you, pal. Have an awesome birthday, and may we all continue to cherish your friendship for many years to come.

on January 30, 2016 09:05 PM

KDE neon Website Now Live

Jonathan Riddell

The KDE neon website is now live.

It serves the freshest packages of KDE software. The developers’ archive, with packages built from KDE Git, is available now; a stable archive with packages built from released tarballs is coming soon.

Launch party tonight in La Paon, Grand Place, Brussels


(Under a .uk domain name until we finish the KDE incubation process.)

on January 30, 2016 02:57 PM

Four gunmen outside

Dimitri John Ledkov

There are four gunmen outside of my hotel. They are armed with automatic rifles and pistols. I am scared for my life having sneaked past them inside. Everyone else is acting as if everything is normal. Nobody is scared or running for cover. Nobody called the police. I've asked the reception to talk to the gunmen and ask them to leave. They looked at me as if I am mad. Maybe I am. Is this what schizophrenia feels like?! Can you see them in the picture?! Please help. There are four gunmen outside of my hotel. I am not in central Beirut, I am in central Brussels.

on January 30, 2016 01:39 AM

January 29, 2016

Xenial Xerus alpha 2

Lubuntu Blog

The second alpha of Xenial Xerus (to become 16.04) has now been released! As usual, you are asked to read the release notes so that you are aware of known issues and save yourself the time of filing them again. But please do test it as widely as possible. Features: LXQt is still in development, as such […]
on January 29, 2016 08:41 PM