October 01, 2014

The XDA Developer community had its second conference last weekend, this time in Manchester, UK. We were asked to sponsor the event and were happy to do so. I went along with Daniel Holbach from the Community Team and Ondrej Kubik from the Phone Delivery Team at Canonical.

This was my first non-Ubuntu conference for a while, so it was interesting for me to meet people from so many different projects. As well as us representing Ubuntu Phone, there were guys from the Jolla project showing off SailfishOS and their handset and ports. Asa Dotzler was also there to represent Mozilla & FirefoxOS.

Daniel ran a small Ubuntu app development workshop, which taught us a lot about our materials and process for App Dev Schools; we’ll feed that back into later sessions. Ondrej gave a talk to a packed room about hardware bring-up and porting Ubuntu to other devices. It was well received and explained the platform nicely. I talked about the history of Ubuntu phone and what the future might hold.

There were other sponsor booths including big names like nVidia showing off the Shield tablet and Sony demonstrating their rather bizarre Smart EyeGlass technology. Oppo and OnePlus had plenty of devices to lust after too, including giant phones with beautiful displays. I enjoyed a bunch of the talks, including MediaTek making a big announcement and demonstrating their new LinkIt ONE platform.

The ~200 attendees were mostly pretty geeky guys whose ages ranged from 15 to 50. There were Android developers, ROM maintainers, hardware hackers and tech enthusiasts who all seemed very friendly and open to discuss all kinds of tech subjects at every opportunity.

One thing I’d not seen at other conferences which was big at XDA:DevCon was the hardware give-aways. The organisers had obtained a lot of tech from the sponsors to give away. This ranged from phone covers and Bluetooth speakers, through mobile printers and hardware hacking kits, to phones, smart watches & tablets, including an Oppo Find 7, Pebble watch and nVidia Shield & controller. These were often handed out as a ‘reward’ for attendees asking good questions, or as (free) raffle prizes. It certainly kept everyone on their toes and happy! I was delighted to see an Ubuntu community member get the Oppo Find 7 :) I was rewarded with an Anker MP141 Portable Bluetooth Speaker during one talk for some reason :)

On the whole I found the conference to be an incredibly friendly, well organised event. There was plenty of food and drink at break times and coffee and snacks in between with relaxing beers in the evening. A great conference which I’d certainly go to again.

on October 01, 2014 10:09 AM

Over the last week, I started to think about how to improve collaboration between Open Science groups and researchers, and also between the groups themselves. One idea is to use the simple tools that are common in other Open * communities (mainly Open Source/Linux distros). These tools are forums (Discourse and others), Planet feeds, and wikis. Using these creates a meta community where members can start out and then get involved in one or more groups. Open Science seems to lack this meta community.

Even though I think this meta community is not present yet, I do think there is one group that could maintain it: the Open Knowledge Foundation Network (OKFN). They have a working group for Open Science, so if they put in the time and resources, it could happen; otherwise, some other group could be created for this.

What this meta community tool-wise needs:

Planet Feeds

Since I’m an official Ubuntu Member, I’m allowed to add my blog’s feed to Planet Ubuntu.  Planet Ubuntu lets anyone read blog posts from many Ubuntu Members because it’s one giant feed reader.  Open Science sorely needs something similar, as Reddit doesn’t work for academia.  I asked on the Open Science OKFN mailing list and five people e-mailed me saying that they are interested in seeing one.  My next goal is to ask the folks of Open Science OKFN for help building a Planet for Open Science.
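At its core, a Planet is just a merged, reverse-chronological view of many members' feeds. Here is a minimal, self-contained sketch of that idea using only the Python standard library; the feed contents are inline samples for illustration, whereas a real planet would fetch each member's RSS/Atom URL over HTTP on a schedule.

```python
# Minimal sketch of what a "Planet" does: merge members' RSS feeds into
# one reverse-chronological stream. Feeds are inline samples here; a
# real planet would download them (e.g. with urllib) periodically.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

FEEDS = {
    "alice": """<rss><channel>
        <item><title>Open data workshop</title>
              <pubDate>Tue, 30 Sep 2014 10:00:00 +0000</pubDate></item>
    </channel></rss>""",
    "bob": """<rss><channel>
        <item><title>New lab forum</title>
              <pubDate>Wed, 01 Oct 2014 09:00:00 +0000</pubDate></item>
    </channel></rss>""",
}

def aggregate(feeds):
    entries = []
    for member, xml_text in feeds.items():
        root = ET.fromstring(xml_text)
        for item in root.iter("item"):
            entries.append({
                "member": member,
                "title": item.findtext("title"),
                "date": parsedate_to_datetime(item.findtext("pubDate")),
            })
    # Newest first, like Planet Ubuntu's front page.
    return sorted(entries, key=lambda e: e["date"], reverse=True)

for e in aggregate(FEEDS):
    print(e["date"].date(), e["member"], "-", e["title"])
```

The real work in running a planet is mostly curation (who is on the member list) rather than code, which is why an existing group like OKFN hosting it makes sense.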


I can only think of one forum, the Mozilla Science Lab one, which I wrote about a few hours ago.  Having a general forum lets users discuss everything from various projects to job postings for their groups.  I don’t know if Discourse would be the right platform for the forums.  To me, its dynamic nature is a bit too much at times.


I have no idea if a wiki would work for this meta Open Science community, but at least having a guide that introduces newcomers to the groups would be worthwhile.  There is a plan for such a guide.

I hope some group within the Open Science community can use these ideas and help it grow.

on October 01, 2014 01:30 AM

September 30, 2014

I am pleased to announce that the Mozilla Science Lab now has a forum that anyone can use.  Anyone can introduce themselves in this topic or the category.

on September 30, 2014 10:08 PM


  • Review ACTION points from previous meeting
  • rbasak to review mysql-5.6 transition plans with ABI breaks with infinity
  • blueprint updating
  • U Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (psivaa)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair
  • ACTION: meeting chair (of this meeting, not the next one) to carry out post-meeting procedure (minutes, etc) documented at https://wiki.ubuntu.com/ServerTeam/KnowledgeBase


    • re: the mysql-5.6 transition / ABI infinity action, rbasak noted that we decided to defer the 5.6 move for this cycle, as we felt it was too late given the ABI concerns.
    • LINK: https://wiki.ubuntu.com/UtopicUnicorn/ReleaseSchedule
    • LINK: http://reqorts.qa.ubuntu.com/reports/rls-mgr/rls-u-tracking-bug-tasks.html#ubuntu-server
    • LINK: http://status.ubuntu.com/ubuntu-u/group/topic-u-server.html
    • LINK: https://blueprints.launchpad.net/ubuntu/+spec/topic-u-server
    • Nothing to report.
    • Nothing to report.
    • smb reports that he is digging into a potential race between libvirt and xen init
    • None to report.
    • Pretty quiet. Not even any bad jokes. Back to crunch time!
    • next meeting will be : Tue Oct 7 16:00:00 UTC 2014 chair will be lutostag
    • ACTION: all to review blueprint work items before next week's meeting.

People present (lines said)

  • beisner (54)
  • smb (8)
  • meetingology (4)
  • smoser (3)
  • rbasak (3)
  • kickinz1 (3)
  • caribou (2)
  • gnuoy (1)
  • matsubara (1)
  • jamespage (1)
  • arges (1)
  • hallyn (1)


on September 30, 2014 07:24 PM

The sos team is pleased to announce the release of sos-3.2. This release includes a large number of enhancements and fixes, including:

  • Profiles for plugin selection
  • Improved log size limiting
  • File archiving enhancements and robustness improvements
  • Global plugin options:
    • --verify, --log-size, --all-logs
  • Better plugin descriptions
  • Improved journalctl log capture
  • PEP8 compliant code base
  • oVirt support improvements
  • New and updated plugins: hpasm, ctdb, dbus, oVirt engine hosted, MongoDB, ActiveMQ, OpenShift 2.0, MegaCLI, FCoE, NUMA, Team network driver, Juju, MAAS, OpenStack


on September 30, 2014 05:55 PM

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20140930 Meeting Agenda

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt

Status: Utopic Development Kernel

The Utopic kernel remains rebased on the v3.16.3 upstream stable
kernel. The latest upload to the archive is 3.16.0-19.26. Please
test and let us know your results.
Also, Utopic Kernel Freeze is next week on Thurs Oct 9. Any patches
submitted after kernel freeze are subject to our Ubuntu kernel SRU policy.
Important upcoming dates:
Thurs Oct 9 – Utopic Kernel Freeze (~1 week away)
Thurs Oct 16 – Utopic Final Freeze (~2 weeks away)
Thurs Oct 23 – Utopic 14.10 Release (~3 weeks away)

Status: CVEs

The current CVE status can be reviewed at the following link:


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 30):

  • Lucid – Verification and Testing
  • Precise – Verification and Testing
  • Trusty – Verification and Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html


    cycle: 19-Sep through 11-Oct
    19-Sep Last day for kernel commits for this cycle
    21-Sep – 27-Sep Kernel prep week.
    28-Sep – 04-Oct Bug verification & Regression testing.
    05-Oct – 11-Oct Regression testing & Release to -updates.

Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

on September 30, 2014 05:15 PM

“The Internet sees censorship as damage and routes around it” was a very motivating tagline during my early forays into the internet. Having grown up in Apartheid-era South Africa, where government control suppressed the free flow of ideas and information, I was inspired by the idea of connecting with people all over the world to explore the cutting edge of science and technology. Today, people connect with peers and fellow explorers all over the world not just for science but also for arts, culture, friendship, relationships and more. The Internet is the glue that is turning us into a super-organism, for better or worse. And yes, there are dark sides to that easy exchange – internet comments alone will make you cry. But we should remember that the brain is smart even if individual brain cells are dumb, and negative, nasty elements on the Internet are just part of a healthy whole. There’s no Department of Morals I would trust to weed ‘em out or protect me or mine from them.

Today, the pendulum is swinging back to government control of speech, most notably on the net. First, it became clear that total surveillance is the norm even amongst Western democratic governments (the “total information act” reborn).  Now we hear the UK government wants to be able to ban organisations without any evidence of involvement in illegal activities because they might “poison young minds”. Well, nonsense. Frustrated young minds will go off to Syria precisely BECAUSE they feel their avenues for discourse and debate are being shut down by an unfair and unrepresentative government – you couldn’t ask for a more compelling motivation for the next generation of home-grown anti-Western jihadists than to clamp down on discussion without recourse to due process. And yet, at the same time this is happening in the UK, protesters in Hong Kong are moving to peer-to-peer mechanisms to organise their protests precisely because of central control of the flow of information.

One of the reasons I picked the certificate and security business back in the 1990s was because I wanted to be part of letting people communicate privately and securely, for business and pleasure. I’m saddened now at the extent to which the promise of that security has been undermined by state pressure and bad actors in the business of trust.

So I think it’s time that those of us who invest time, effort and money in the underpinnings of technology focus attention on the defensibility of the core freedoms at the heart of the internet.

There are many efforts to fix this under way. The IETF is slowly becoming more conscious of the ways in which ideals can be undermined, and of the central role it can play in setting standards which are robust in the face of such inevitable pressure. But we can do more, and I’m writing now to invite applications for Fellowships at the Shuttleworth Foundation by leaders who are focused on these problems. TSF already has Fellows working on privacy in personal communications; we are interested in generalising that to the foundations of all communications. We already have a range of applications in this regard, and I would welcome more. And I’d like to call attention to the Edgenet effort (distributing network capabilities, based on zero-mq) which is holding a sprint in Brussels October 30-31.

20 years ago, “Clipper” (a proposed mandatory US government back door, supported by the NSA) died on the vine thanks to a concerted effort by industry to show the risks inherent to such schemes. For two decades we’ve had the tide on the side of those who believe it’s more important for individuals and companies to be able to protect information than it is for security agencies to be able to monitor it. I’m glad that today, you are more likely to get into trouble if you don’t encrypt sensitive information in transit on your laptop than if you do. I believe that’s the right side to fight for and the right side for all of our security in the long term, too. But with mandatory back doors back on the table we can take nothing for granted – regulatory regimes can and do change, as often for the worse as for the better. If you care about these issues, please take action of one form or another.

Law enforcement is important. There are huge dividends to a society in which people can make long-term plans, which depends on their confidence in security and safety as much as their confidence in economic fairness and opportunity. But the agencies in whom we place this authority are human and tend over time, like any institution, to be more forceful in defending their own existence and privileges than they are in providing for the needs of others. There has never been an institution in history which has managed to avoid this cycle. For that reason, it’s important to ensure that law enforcement is done by due process; there are no short cuts which will not be abused sooner rather than later. Checks and balances are more important than knee-jerk responses to the last attack. Every society, even today’s modern Western society, is prone to abusive governance. We should fear our own darknesses more than we fear others.

A fair society is one where laws are clear and crimes are punished in a way that is deemed fair. It is not one where thinking about crime is criminal, or one where talking about things that are unpalatable is criminal, or one where everybody is notionally protected from the arbitrary and the capricious. Over the past 20 years life has become safer, not more risky, for people living in an Internet-connected West. That’s no thanks to the listeners; it’s thanks to living in a period when the youth (the source of most trouble in the world) feel they have access to opportunity and ideas on a world-wide basis. We are pretty much certain to have hard challenges ahead in that regard. So for all the scaremongering about Chinese cyber-espionage and Russian cyber-warfare and criminal activity in darknets, we are better off keeping the Internet as a free-flowing and confidential medium than we are entrusting an agency with the job of monitoring us for inappropriate and dangerous ideas. And that’s something we’ll have to work for.

on September 30, 2014 02:24 PM
A StackExchange question back in February of this year inspired a new feature in Byobu that I had been thinking about for quite some time:
Wouldn't it be nice to have a hot key in Byobu that would send a command to multiple splits (or windows)?
This feature was added and is available in Byobu 5.73 and newer (in Ubuntu 14.04 and newer, and available in the Byobu PPA for older Ubuntu releases).

I actually use this feature all the time, to update packages across multiple computers.  Of course, Landscape is a fantastic way to do this as well.  But if you don't have access to Landscape, you can always do this very simply with Byobu!

Create some splits, using Ctrl-F2 and Shift-F2, and in each split, ssh into a target Ubuntu (or Debian) machine.

Now, use Shift-F9 to open up the purple prompt at the bottom of your screen.  Here, you enter the command you want to run on each split.  First, you might want to run:

sudo true

This will prompt you for your password, if you don't already have root or sudo access.  You might need to use Shift-Up, Shift-Down, Shift-Left, Shift-Right to move around your splits, and enter passwords.

Now, update your package lists:

sudo apt-get update

And now, apply your updates:

sudo apt-get dist-upgrade
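If you'd rather script the same update run than type into splits interactively, a plain loop over hosts is a rough equivalent (the hostnames below are placeholders, and the ssh line is commented out so the sketch runs without real machines):

```shell
# Placeholder hostnames; substitute your own Ubuntu/Debian machines.
for host in host1 host2 host3; do
    echo "=== $host ==="
    # ssh "$host" 'sudo apt-get update && sudo apt-get dist-upgrade -y'
done
```

The Byobu approach has the advantage of keeping each session interactive, so you can answer sudo and dpkg prompts per machine as they appear.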

Here's a video to demonstrate!

In a related note, another user-requested feature has been added, to simultaneously synchronize this behavior among all splits.  You'll need the latest version of Byobu, 5.87, which will be in Ubuntu 14.10 (Utopic).  Here, you'll press Alt-F9 and just start typing!  Another demonstration video here...

on September 30, 2014 01:44 PM

Thanks to the sponsorship of multiple companies, I have been paid to work 11 hours on Debian LTS this month.

CVE triaging

I started by doing lots of triage in the security tracker (if you want to help, instructions are here) because I noticed that the dla-needed.txt list (which contains the list of packages that must be taken care of via an LTS security update) was missing quite a few packages that had open vulnerabilities in oldstable.

In the end, I pushed 23 commits to the security tracker. I won’t list the details each time but for once, it’s interesting to let you know the kind of things that this work entailed:

  • I reviewed the patches for CVE-2014-0231, CVE-2014-0226, CVE-2014-0118, CVE-2013-5704 and confirmed that they all affected the version of apache2 that we have in Squeeze. I thus added apache2 to dla-needed.txt.
  • I reviewed CVE-2014-6610 concerning asterisk and marked the version in Squeeze as not affected since the file with the vulnerability doesn’t exist in that version (this entails some checking that the specific feature is not implemented in some other file due to file reorganization or similar internal changes).
  • I reviewed CVE-2014-3596 and corrected the entry that said it was fixed in unstable. I confirmed that the version in squeeze was affected and added it to dla-needed.txt.
  • Same story for CVE-2012-6153 affecting commons-httpclient.
  • I reviewed CVE-2012-5351 and added a link to the upstream ticket.
  • I reviewed CVE-2014-4946 and CVE-2014-4945 for php-horde-imp/horde3, added links to upstream patches and marked the version in squeeze as unaffected since those concern javascript files that are not in the version in squeeze.
  • I reviewed CVE-2012-3155 affecting glassfish and was really annoyed by the lack of detailed information. I thus started a discussion on debian-lts to see whether this package should not be marked as unsupported security wise. It looks like we’re going to mark a single binary package as unsupported… the one containing the application server with the vulnerabilities; the rest is still needed to build multiple java packages.
  • I reviewed many CVEs on dbus, drupal6, eglibc, kde4libs, libplack-perl, mysql-5.1, ppp, squid and fckeditor and added those packages to dla-needed.txt.
  • I reviewed CVE-2011-5244 and CVE-2011-0433 concerning evince and came to the conclusion that those had already been fixed in the upload 2.30.3-2+squeeze1. I marked them as fixed.
  • I dropped graphicsmagick from dla-needed.txt because the only CVE affecting it had been marked as no-dsa (meaning that we don’t estimate that a security update is needed, usually because the problem is minor and/or fixing it is more likely to introduce a regression than to help).
  • I filed a few bugs when those were missing: #762789 on ppp, #762444 on axis.
  • I marked a bunch of CVEs concerning qemu-kvm and xen as end-of-life in Squeeze since those packages are not currently supported in Debian LTS.
  • I reviewed CVE-2012-3541 and since the whole report is not very clear I mailed the upstream author. This discussion led me to mark the bug as no-dsa as the impact seems to be limited to some information disclosure. I invited the upstream author to continue the discussion on RedHat’s bugzilla entry.

And when I say “I reviewed” it’s a simplification for this kind of process:

  • Look up for a clear explanation of the security issue, for a list of vulnerable versions, and for patches for the versions we have in Debian in the following places:
    • The Debian security tracker CVE page.
    • The associated Debian bug tracker entry (if any).
    • The description of the CVE on cve.mitre.org and the pages linked from there.
    • RedHat’s bugzilla entry for the CVE (which often implies downloading source RPM from CentOS to extract the patch they used).
    • The upstream git repository and sometimes the dedicated security pages on the upstream website.
  • When that was not enough to be conclusive for the version we have in Debian (and unfortunately, it’s often the case), download the Debian source package and look at the source code to verify if the problematic code (assuming that we can identify it based on the patch we have for newer versions) is also present in the old version that we are shipping.

CVE triaging is often almost half the work in the general process: once you know that you are affected and that you have a patch, the process to release an update is relatively straightforward (sometimes there’s still work to do to backport the patch).
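The last verification step above — checking whether the problematic code is present in the old version we ship — can be sketched with standard tools. The source tree and symbol name below are made up for illustration; in practice the tree would come from running `apt-get source <package>` with squeeze entries in sources.list:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
# Simulate an unpacked squeeze source tree; in reality this comes from
# "apt-get source apache2" (package and function names are illustrative).
mkdir -p apache2-2.2.16/server
printf 'static int ap_example_handler(void) { return 0; }\n' \
    > apache2-2.2.16/server/core.c
# Does the old tree contain the code the upstream patch modifies?
if grep -rq 'ap_example_handler' apache2-2.2.16/; then
    result="affected"
else
    result="not-affected"
fi
echo "$result"
```

Of course, the presence of the symbol is only a first signal; as noted above, the feature may have moved to another file, so a negative grep still needs a manual check for code reorganization.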

Once I was over that first pass of triaging, I had already spent more than the 11 hours paid but I still took care of preparing the security update for python-django. Thorsten Alteholz had started the work but got stuck in the process of backporting the patches. Since I’m co-maintainer of the package, I took over and finished the work to release it as DLA-65-1.


on September 30, 2014 01:24 PM

Welcome to the Ubuntu Weekly Newsletter. This is issue #385 for the week September 22 – 28, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on September 30, 2014 03:16 AM

September 29, 2014

The way you beat an incumbent is by coming up with a thing that people want, that you do, and that your competitors can’t do.

Not won’t. Can’t.

How did Apple beat Microsoft? Not by making a better desktop OS. They did it by shifting the goalposts. By creating a whole new field of competition where Microsoft’s massive entrenched advantage didn’t exist: mobile. How did Microsoft beat Digital and the mainframe pushers? By inventing the idea that every desktop should have a real computer on it, not a terminal.

How do you beat Google and Facebook? By inventing a thing that they can’t compete against. By making privacy your core goal. Because companies who have built their whole business model on monetising your personal information cannot compete against that. They’d have to give up on everything that they are, which they can’t do. Facebook altering itself to ensure privacy for its users… wouldn’t exist. Can’t exist. That’s how you win.

If you ask actual people whether they want privacy, they say, yes. Always. But if you then ask, are they, are we, prepared to give that privacy up to get things? They say yes again. They, we, want privacy, but not as much as we want stuff. Not as much as we want to talk to one another. Giving up our personal data to enable that, that’s a reasonable cost to pay, because we don’t value our personal data. Some of that’s because there’s no alternative, and some of that’s because nobody’s properly articulated the alternative.

Privacy will define the next major change in computing.

We saw the change to mobile. The change to social. These things fundamentally redefined the way technology looked to the mainstream. The next thing will be privacy. The issue here is that nobody has worked out a way of articulating the importance of privacy which convinces actual ordinary people. There are products and firms trying to do that right now. Look at Blackphone. Look at the recent fertile ground for instant messaging with privacy included from Telegram and Threema and Whisper Systems’ TextSecure. They’re all currently basically for geeks. They’re doing the right thing, but they haven’t worked out how to convince real people that they are the right thing.

The company who work out how to convince people that privacy is important will define the next five years of technology.

Privacy, historically the concern of super-geeks, is beginning to poke its head above the parapet. Tim Berners-Lee calls for a “digital Magna Carta”. The EFF tries to fix it and gets their app banned because it’s threatening Google’s business model to have people defend their own data. The desire for privacy is becoming mainstream enough that the Daily Mash are prepared to make jokes about it. Apple declare to the world that they can’t unlock your iPhone, and Google are at pains to insist that they’re the same. We’re seeing the birth of a movement; the early days before the concern of the geeks becomes the concern of the populace.

So what about the ind.ie project?

The ind.ie project will tell you that this is what they’re for, and so you need to get on board with them right now. That’s what they’ll tell you.

The ind.ie project is to open source as Brewdog are to CAMRA. Those of you who are not English may not follow this analogy.

CAMRA is the Campaign for Real Ale: a British society created in the 1970s and still existing today who fight to preserve traditionally made beer in the UK, which they name “real ale” and have a detailed description of what “real ale” is. Brewdog are a brewer of real ale who were founded in 2007. You’d think that Brewdog were exactly what CAMRA want, but it is not so. Brewdog, and a bunch of similar modern breweries, have discovered the same hatred that new approaches in other fields also discovered. In particular, Brewdog have done a superb job at bringing a formerly exclusive insular community into the mainstream. But that insular community feel resentful because people are making the right decisions, but not because they’ve embraced the insular community. That is: people drink Brewdog beer because they like it, and Brewdog themselves have put that beer into the market in such a way that it’s now trendy to drink real ale again. But those drinking it are not doing it because they’ve bought into CAMRA’s reasoning. They like real ale, but they don’t like it for the same reasons that CAMRA do. As Daniel Davies said, every subculture has this complicated relationship with its “trendy” element. From the point of view of CAMRA nerds, who believe that beer isn’t real unless it has moss floating in it, there is a risk that many new joiners are fair-weather friends just jumping on a trendy bandwagon and the Brewdog popularity may be a flash in the pan. The important point here is that the new people are honestly committed to the underlying goals of the old guard (real ale is good!) but not the old guard’s way of articulating that message. And while that should get applause, what it gets is resentment.

Ind.ie is the same. They have, rather excellently, found a way of describing the underlying message of open source software without bringing along the existing open source community. That is, they’ve articulated the value of being open, and of your data being yours without it being sold to others or kept as commercial advantage, but have not done so by pushing the existing open source message, which is full of people who start petty fights over precisely which OS you use and what distribution A did to distribution B back in the mists of prehistory. This is a deft and smart move; people in general tend to agree with the open source movement’s goals, but are hugely turned off by interacting with that existing open source movement, and ind.ie have found a way to have that cake and eat it.

Complaints from open source people about ind.ie are at least partially justified, though. It is not reasonable to sneer at existing open source projects for knowing nothing about users and at the same time take advantage of their work. It is not at all clear how ind.ie will handle a bunch of essential features — reading an SD card, reformatting a drive, categorising applications, storing images, sandboxing apps from one another, connecting to a computer, talking to the cloud — without using existing open source software. The ind.ie project seem confident that they can overlay a user experience on this essential substrate and make that user experience relevant to real people rather than techies; but it is at best disingenuous and at worst frankly offensive to simultaneously mock open source projects for knowing nothing about users and then also depend on their work to make your own project successful. Worse, it ignores the time and effort that companies such as Canonical have put in to user testing with actual people. It’s blackboard economics of the worst sort, and it will have serious repercussions down the line when the ind.ie project approaches one of its underlying open source projects and says “we need this change made because users care” and the project says “but you called us morons who don’t care about users” and so ignores the request. Canonical have suffered this problem with upstream projects, and they were nowhere near as smugly, sneeringly dismissive as ind.ie have been of the open source substrate on which they vitally depend.

However, they, ind.ie, are doing the right thing. The company who work out how to convince people that privacy is important will define the next five years of technology. This is not an idle prediction. The next big wave in technology will be privacy.

There are plenty of companies right now who would say that they’re already all over that. As mentioned above, there’s Blackphone and Threema and Telegram and ello and diaspora. All of them are contributors and that’s it. They’re not the herald who usher in the next big wave. They’re ICQ, or Friends Reunited: when someone writes the History Of Tech In The Late 2010s, Blackphone and ello and Diaspora will be footnotes, with the remark that they were early adopters of privacy-based technology. There were mp3 players before the iPod. There were social networks before Facebook. All the existing players who are pushing privacy as their raison d’etre and writing manifestos are creating an environment which is ripe for someone to do it right, but they aren’t themselves the agent of change; they’re the Diamond Rio who come before the iPod, the ICQ who come before WhatsApp. Privacy hasn’t yet found its Facebook. When it does, that Facebook of privacy will change the world so that we hardly understand that there was a time when we didn’t care about it. They’ll take over and destroy all the old business models and make a new tech universe which is better for us and better for them too.

I hope it comes soon.

on September 29, 2014 11:41 PM

Cloud Images and Bash Vulnerabilities

The Ubuntu Cloud Image team has been monitoring the bash vulnerabilities. Due to the scope, impact and high profile nature of these vulnerabilities, we have published new images. New cloud images to address the latest bash USN-2364-1 [1, 8, 9] are being released with a build serial of 20140927. These images include code to address all prior CVEs, including CVE-2014-6271 [6] and CVE-2014-7169 [7], and supersede images published in the past week which addressed those CVEs.
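For reference, the widely circulated one-liner for checking whether a shell is still exposed to the original issue (CVE-2014-6271) looks like this; a patched bash prints only "test":

```shell
# On a vulnerable bash, the function definition in the environment is
# mis-parsed and "vulnerable" is printed before "test"; a patched bash
# ignores the trailing command and prints only "test".
env x='() { :;}; echo vulnerable' bash -c "echo test"
```

Note that later CVEs in the series (e.g. CVE-2014-7169) have their own test cases, so passing this check alone does not mean an instance is fully up to date; applying the USN-2364-1 update is the reliable fix.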

Please note: securing Ubuntu Cloud Images requires users to regularly apply updates[5]; using the latest Cloud Images alone is insufficient.
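To check whether a running instance still carries a vulnerable bash, the widely circulated probe for CVE-2014-6271 (the original Shellshock bug; it does not cover the follow-up CVEs) is a quick sanity test:

```shell
# A patched bash refuses to execute code smuggled in after an exported
# function definition, so a fixed system prints only "probe complete".
env x='() { :;}; echo vulnerable' bash -c 'echo probe complete'
```

On an unpatched bash the word "vulnerable" is printed first; either way, applying the pending security updates is the actual fix.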

Addressing the full scope of the bash vulnerability has been an iterative process. The security team has worked with the upstream bash community to address multiple aspects of the issue. As fixes have become available, the Cloud Image team has published daily builds[2]. New released images[3] have been made available at the request of the Ubuntu Security team.

Canonical has been in contact with our public Cloud Partners to make these new builds available as soon as possible.

Cloud image update timeline

Daily image builds are automatically triggered when new package versions become available in the public archives. New releases for Cloud Images are triggered automatically when a new kernel becomes available. The Cloud Image team will manually trigger new released images when requested by the Ubuntu Security team or when a significant defect requires it.

Please note: securing Ubuntu cloud images requires that security updates be applied regularly[5]; using the latest available cloud image is not sufficient in itself. Cloud Images are built only after updated packages are made available in the public archives. Since it takes time to build, test/QA and finally promote the images, there is a delay (sometimes considerable) between public availability of a package and updated Cloud Images. Users should take this timing into account in their update strategy.

[1] http://www.ubuntu.com/usn/usn-2364-1/
[2] http://cloud-images.ubuntu.com/daily/server/
[3] http://cloud-images.ubuntu.com/releases/
[4] https://help.ubuntu.com/community/Repositories/Ubuntu/
[5] https://wiki.ubuntu.com/Security/Upgrades/
[6] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-6271.html
[7] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7169.html
[8] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7187.html
[9] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7186.html

on September 29, 2014 05:45 PM

S07E26 – The One Where Underdog Gets Away

Ubuntu Podcast from the UK LoCo

We’re back with Season Seven, Episode Twenty-Six of the Ubuntu Podcast! Just Alan Pope and Laura Cowen with a set of interviews from Mark Johnson this week.

In this week’s show:

Python 2.x:
python -m SimpleHTTPServer [port]

Python 3.x:
python3 -m http.server [port]
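The same built-in server can also be started from code, which is handy when you want the OS to pick a free port; a small Python 3 sketch (an illustration, not from the show):

```python
import http.server
import socketserver
import threading
import urllib.request

# Bind to port 0 so the OS chooses a free port; serve the current directory.
httpd = socketserver.TCPServer(("127.0.0.1", 0),
                               http.server.SimpleHTTPRequestHandler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# Fetch the directory listing to confirm the server is up.
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
httpd.shutdown()
httpd.server_close()
print(status)  # → 200
```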

  • And we read your feedback. Thanks for sending it in!

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on September 29, 2014 03:25 PM

Quick question – we have Cloud Foundry in private beta now, is there anyone in the Ubuntu community who would like to use a Cloud Foundry instance if we were to operate that for Ubuntu members?

on September 29, 2014 03:17 PM

On September 26th I undertook the rather daunting task of trialing something I strongly believe in, one that really took me out of my comfort zone and put me front and center of an audience's attention, not only for my talents but also for the technical implementation of their experience.

The back story

I've been amateur DJing on Second Life for about the last 7 months, and recently left the metaverse to pursue a podcast format for my show(s). What I found was I really missed the live interaction with people during the recording of a set. It was great to get feedback and audience participation, and I could really gauge the flow of energy I was broadcasting. To some this may sound strange, but when your primary interaction is over text, and you see a feed erupt with actions as you put on higher-energy music, it just 'clicks' and makes sense.

The second aspect to this was I wanted to showcase how you can get moving with Juju in less than a week to bring a production-ready app online and ready for scale (depending on the complexity of the app, of course). It's been a short while since I've pushed a charm from scratch into the charm store, and this will definitely get me re-acquainted with the process our new users go through on their Juju journey.

So, I've got a habit of mixing my passions in life. If you know me very well, you know that I am deeply passionate about what I'm working on, my hobbies, and the people I surround myself with that I consider my support network. How can I leverage this to showcase and run a 'Juju lab' study?

The Shoutcast charm is born

I spent a sleepless night hacking away at a charm for a SHOUTcast DNAS server. They offer several PaaS scaling solutions that might work for people who are making money off of their hobby, but I myself prefer to remain an enthusiast and not turn a profit from mine. Juju is a perfect fit for deploying pretty much anything, and for making sure that all the components work together in a distributed service environment. It's getting better every day; proof of this is the Juju GUI's just-announced machine view, where you can easily co-locate services on the same server and get a deep-dive look at how your deployment is composed of machines vs. services.

Observations & Lessons

Testing what you expect never yields the unexpected

Some definite changes to just the shoutcast charm itself are in order.

  • Change the default stream MIME from AAC to MP3 so it's cross-compatible on every OS without installing QuickTime.
  • Test EVERY OS before you jam out to production (skipping this may seem like a rookie mistake). I tested on Mac OS X and Ubuntu Linux (default configuration for 14.04) and everything was in order. Windows users, however, who are not savvy with tech that stems from back in the 90's, were left out in the cold and prompted to install QuickTime when they connected. This is not ideal.
  • The 'automatic' failover that I touted in the readme depends on the client consuming the playlist. If the client doesn't support multiple streams in the playlist, it's not really automatic failover load balancing, just polling failure cases with spare resources.

Machine Metrics tell most of the story

I deployed this setup on Digital Ocean to run my 'lab test', as the machines are cheap, performant, and you get 1TB of unmetered transfer before you have to jump up a pricing tier. This is a great mixture for testing the setup. But how well did the VPS perform?

I consumed 2 of the 'tiny' VPS servers for this, and the metrics of the transcoders were light enough that they barely touched the CPU. As a matter of fact, I saw more activity out of supporting infra services such as LogStash than I did out of the SHOUTcast charm. Excellent work on the implementation, SHOUTcast devs. This was a pleasant surprise!

Pre-scaling was the winner

Having a relay setup out of the gate really helped to mitigate issues as I saw people get temporary hiccups in their network. I saw several go from the primary stream to the relay and finish out the duration of the broadcast connected there.

The fact that the clients supported this tells me that any time I do this live, I need at bare minimum 2 hosts online transmitting the broadcast.

Had this been a single host - every blip in the network would yield dead airspace before they realized something had gone wrong.

Juju Scaled Shoutcast Service

Supportive people are amazing, and make what you do worthwhile

Those who tuned in genuinely enjoyed that I had the foresight to pre-record segments of the show to interact with them. This was mostly so I could investigate the server(s), watch htop metrics, refresh SHOUTcast, etc. However, the fan interaction was genuinely empowering. I found myself wanting to turn around and see what was said next during the live-mixing segments.

The Future for Radio Bundle Development

Putting the auto in automation

I've found a GREAT service that I want to consume and deploy to handle the station automation side of this deployment. SourceFabric produces Airtime, which makes setting up radio automation very simple and supports advanced configurations such as mixing live DJs into your lineup on a schedule. How awesome is this? It's open source to boot!

I'm also well on my way to having revision 1 of this bundle completed: I started the blog post on Friday, hacked on the bundle through the weekend, and landed here on Monday.

I'll be talking more about this after it's officially unveiled in Brussels.

Where to find the 'goods'

The Shoutcast Juju Charm can be found on Launchpad: lp:~lazypower/charms/trusty/shoutcast/trunk or github

The up-coming Airtime Radio Automation Charm can be found on github

Actual metrics and charts to be uploaded at a later date, once I've sussed out how I want to parse these and present them.

on September 29, 2014 01:36 PM

September 28, 2014

A few weeks ago, I decided to run an experiment and completely rework the global shortcuts of my KDE desktop. I wanted them to make a bit more sense instead of being the agglomerated result of inspirations from other systems, and was ready to pay the cost of brain retraining.

My current shortcut setup relies on a few "design" decisions:

  • All workspace-level shortcuts must use the Windows (also known as Meta) modifier key, application shortcuts are not allowed to use this modifier.

  • There is a logical link between a shortcut and its meaning. For example, the shortcut to maximize a window is Win + M.

  • The Shift modifier is used to provide a variant of a shortcut. For example the shortcut to minimize a window is Win + Shift + M.

I am still playing with it, but it is stabilizing these days, so I thought I'd write a summary of what I came up with:

Window management

  • Maximize: Win + M.

  • Minimize: Win + Shift + M.

  • Close: Win + Escape. This is somewhat consistent with the current Win + Shift + Escape to kill a window.

  • Always on top: Win + T.

  • Shade: Win + S.

  • Switch between windows: Win + Tab and Win + Shift + Tab (yes, this took some work to retrain myself, and yes, it means I no longer have shortcuts to switch between activities).

  • Maximize left, Maximize right: Win + :, Win + !. This is very localized: ':' and '!' are the keys under 'M' on my French keyboard. Definitely not a reusable solution. I used to use Win + '(' and Win + ')' but it made more sense to me to have the maximize variants close to the full Maximize shortcut.

  • Inner window modifier key: Win. I actually changed this from Alt a long time ago: it is necessary to be able to use Inkscape, as it uses Alt + Click to select shapes under others.

Virtual desktop

  • Win + Left, Win + Right: Go to previous desktop, go to next desktop.

  • Win + Shift + Left, Win + Shift + Right: Bring the current window to the previous desktop, bring the current window to the next desktop.

  • Win + F1, Win + F2, Win + F3: Switch to desktop 1, 2 or 3.

Application launch

  • Win + Space: KRunner.

  • Win + Shift + Space: Homerun.


  • Win + L: Lock desktop.

How does it feel?

I was a bit worried about the muscle-memory retraining, but it went quite well. Of course I am a bit lost nowadays whenever I use another computer, but that was to be expected.

One nice side-effect I did not foresee is that this change turned the Win modifier into a sort of quasimode: all global workspace operations are done by holding the Win key. I say "sort of" because some operations require you to release the Win key before they complete: for example, when switching from one window to another, no shortcuts work as long as the window switcher is visible, so one needs to release the Win key after switching and press it again to do something else. I notice this most often when maximizing left or right.

Another good point of this approach is that almost no shortcuts use function keys. This is a good thing because: a) it can be quite a stretch for small hands to hold the Win or Alt modifier together with a function key, and b) many laptops these days come with the function keys mapped to multimedia controls and need another modifier held to become real function keys; some laptops do not even have function keys at all! (Heresy, I know, but such is the world we live in...)

What about you, do you have unusual shortcut setups?


on September 28, 2014 06:09 PM

If you follow FCM on Google+, Facebook, or Twitter (if not, why not?) then you’ll have seen the post I (Ronnie) made showing our current Google Play stats.

This time I’d like to share with you our Issuu stats:

(this is as of Saturday 27th Sept 2014 – click image to enlarge)


on September 28, 2014 10:30 AM

Earlier this year, I helped plan and run the Community Data Science Workshops: a series of three (and a half) day-long workshops designed to help people learn basic programming and data science tools in order to ask and answer questions about online communities like Wikipedia and Twitter. You can read our initial announcement for more about the vision.

The workshops were organized by myself, Jonathan Morgan from the Wikimedia Foundation, long-time Software Carpentry teacher Tommy Guy, and a group of 15 volunteer “mentors” who taught project-based afternoon sessions and worked one-on-one with more than 50 participants. Interest was overwhelming, and we were ultimately constrained by the number of mentors who volunteered; unfortunately, this meant that we had to turn away most of the people who applied. Although it was not emphasized in recruiting or used as a selection criterion, a majority of the participants were women.

The workshops were all free of charge and sponsored by the UW Department of Communication, who provided space, and the eScience Institute, who provided food.

The curriculum for all four sessions is online:

The workshops were designed for people with no previous programming experience. Although most of our participants were from the University of Washington, we had non-UW participants from as far away as Vancouver, BC.

Feedback we collected suggests that the sessions were a huge success, that participants learned enormously, and that the workshops filled a real need in the Seattle community. Between workshops, participants organized meet-ups to practice their programming skills.

Most excitingly, just as we based our curriculum for the first session on the Boston Python Workshop’s, others have been building off our curriculum. Elana Hashman, who was a mentor at the CDSW, is coordinating a set of Python Workshops for Beginners with a group at the University of Waterloo and with sponsorship from the Python Software Foundation using curriculum based on ours. I also know of two university classes that are tentatively being planned around the curriculum.

Because a growing number of groups have been contacting us about running their own events based on the CDSW — and because we are currently making plans to run another round of workshops in Seattle late this fall — I coordinated with a number of other mentors to go over participant feedback and to put together a long write-up of our reflections in the form of a post-mortem. Although our emphasis is on things we might do differently, we provide a broad range of information that might be useful to people running a CDSW (e.g., our budget). Please let me know if you are planning to run an event so we can coordinate going forward.

on September 28, 2014 05:02 AM

September 27, 2014

I heard a really interesting little show on the radio tonight, about the man who explained 'bands of nothing.' "Astronomer Daniel Kirkwood... is best known for explaining gaps in the asteroid belt and the rings of Saturn — zones that are clear of the normal debris." http://stardate.org/radio/program/daniel-kirkwood. He taught himself algebra, and used his math background to analyze the work of others, rather than making his own observations. The segment is only 5 minutes; give it a listen.

This reminded me of how much progress I used to make when I did genealogy research by looking over the documents I had gotten long ago, in light of facts I more recently uncovered. All of a sudden, I made new discoveries in those old docs. So that has become part of my regular research routine.

And perhaps all of these thoughts were triggered by the BASH bug which I keep hearing about on the news in very vague terms, and in quite specific discussion in IRC and mail lists. Old, stable code can yield new, interesting bugs! Maybe even dangerous vulnerabilities. So it's always worth raking over old ground, to see what comes to the surface.
on September 27, 2014 09:54 AM

I know the program ended almost a month ago, but I haven't had the opportunity to share my thoughts on GSoC 2014. This summer, I coded for the BeagleBoard.org organization. It was a great experience. It was my third time applying to GSoC, and I was finally accepted.

The main idea of the project is a platform for viewing and creating tutorials. You can see it here. Right now I'm working on migrating it to Jekyll; this is the next step the BeagleBoard community is taking.

After the program finished, I convinced Jason Kridner, co-founder of BeagleBoard.org, to do a small hangout about what BeagleBoard.org is, the BeagleBone Black, and his view of the organization.

Why did I ask Jason to give a talk? To motivate more Honduran students to get involved in the open source movement. I was the first Honduran student to be part of the Google Summer of Code.

Hope this motivates more Honduran students.

on September 27, 2014 12:10 AM

September 26, 2014

Fiddling around with LXC Containers

Nekhelesh Ramananthan

In my previous post, I explained my personal need for a Utopic environment to run the test suites, since they require the latest ubuntu-ui-toolkit, which is not available for Trusty 14.04, which my main laptop runs. For quite some time I used VirtualBox VMs to get around this issue. But anyone who has used VirtualBox VMs will agree when I say they are too resource-intensive and slow, making them a bit frustrating to use.

I am thankful to Sergio Schvezov for introducing me to this cool concept of Linux Containers (LXC). It took me some time to get acquainted with the concept and use it on a daily basis. I can now share my experiences and also show how I set it up to provide an Ubuntu Touch app development environment.

Brief Introduction to LXC

Linux Containers (LXC) is a novel concept developed by Stéphane Graber and Serge Hallyn. One could describe Linux containers as,

LXC is a lightweight virtualization technology. It is more akin to an enhanced chroot than to full virtualization like VMware or QEMU. Containers do not emulate hardware and share the same operating system as the host.

I think that to fully appreciate LXC, it would be best to compare it with VirtualBox VMs, as shown below.


An LXC container uses the host machine's kernel and sits somewhere in the middle between a schroot and a full-fledged virtual machine. Each of course has its advantages and disadvantages. For instance, since LXC containers use the host machine's kernel, they are limited to Linux and cannot be used to create Windows or other OS containers. However, they have very little overhead since they only run the most essential services needed for your use case.

They perfectly fit my use case of providing a Utopic environment inside which I can run my test suites. In fact, in this post I will show you some tricks that I learnt which provide seamless integration between LXC and your host machine, to the point where you would be unable to tell the difference between a native app and a container app.

Getting Started with LXC

Stéphane Graber's original post provides an excellent tutorial on getting started with LXC. If you are stuck at any step, I highly recommend talking to Stéphane Graber on IRC in #ubuntu-devel; his nick is stgraber. The instructions below are a quick way of getting started with LXC containers for Ubuntu Touch app development, and as such I have avoided detailed explanations of why we run each command.

Without further ado, let's get started!

Installing LXC

LXC is available to install directly from the Ubuntu archives. You can install it with:

sudo apt-get install lxc systemd-services uidmap

Prerequisite configuration (One-Time Configuration)

Linux containers are run as root by default. However, this can be a little inconvenient for our use case, since our containers will essentially be used to launch common applications like Qt Creator, a terminal, etc. So we will first perform some prerequisite steps for creating unprivileged containers (run by a normal user).

Note: The steps below are required only if you want to create unprivileged containers (required for our use case).

sudo usermod --add-subuids 100000-165536 $USER
sudo usermod --add-subgids 100000-165536 $USER
sudo chmod +x $HOME

Create ~/.config/lxc/default.conf with the following contents,

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

And then run,

echo "$USER veth lxcbr0 10" | sudo tee -a /etc/lxc/lxc-usernet

Unprivileged Containers

Now, with the prerequisite steps complete, we can proceed to create the Linux container itself. We are going to create a generic Utopic container with:

lxc-create --template download --name qmldevel -- --dist ubuntu --release utopic --arch amd64

This should create an LXC container with an Ubuntu Utopic environment for the amd64 architecture. If instead you want to see a list of the various distros, releases and architectures supported and choose interactively, run:

lxc-create -t download -n qmldevel

Once the container has finished downloading, you should be provided with a default user "ubuntu" with password "ubuntu". You will find the container files at ~/.local/share/lxc/qmldevel.

Then make uid 1000 the owner of the container user's home directory (with the id mapping added below, uid 1000 inside the container is your own user on the host):

sudo chown -R 1000:1000 ~/.local/share/lxc/qmldevel/rootfs/home/ubuntu

Add the following to your container config file found at ~/.local/share/lxc/qmldevel/config,

# Container specific configuration
lxc.id_map = u 0 100000 1000
lxc.id_map = g 0 100000 1000
lxc.id_map = u 1000 1000 1
lxc.id_map = g 1000 1000 1
lxc.id_map = u 1001 101001 64535
lxc.id_map = g 1001 101001 64535

# Custom Mounts
lxc.mount.entry = /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry = /dev/snd dev/snd none bind,optional,create=dir
lxc.mount.entry = /tmp/.X11-unix tmp/.X11-unix none bind,optional,create=dir
lxc.mount.entry = /dev/video0 dev/video0 none bind,optional,create=file
lxc.mount.entry = /home/krnekhelesh/Documents/Ubuntu-Projects home/ubuntu none bind,create=dir

Notice the line lxc.mount.entry = /home/krnekhelesh/Documents/Ubuntu-Projects home/ubuntu none bind,create=dir, which basically maps (mounts) your host machine's folder to a location in the container. So if you go to /home/ubuntu in the container, you will see the contents of /home/krnekhelesh/.../Ubuntu-Projects. Isn't that nifty? We are seamlessly sharing data between the host and the container.
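The three lxc.id_map ranges in the config above implement a piecewise uid translation: container uids 0-999 map to host uids 100000-100999, container uid 1000 maps straight to host uid 1000 (your own user, which is why the bind-mounted files stay writable), and container uids 1001-65535 shift up to 101001-165535. As a quick illustrative sketch (plain Python, not an LXC tool), the lookup works like this:

```python
# Each lxc.id_map line is (container_start, host_start, count),
# mirroring the three "u" lines in the container config above.
ID_MAP = [(0, 100000, 1000), (1000, 1000, 1), (1001, 101001, 64535)]

def host_uid(container_uid):
    """Translate a uid inside the container to the host uid it maps to."""
    for c_start, h_start, count in ID_MAP:
        if c_start <= container_uid < c_start + count:
            return h_start + (container_uid - c_start)
    raise ValueError(f"uid {container_uid} is unmapped")

print(host_uid(0))     # → 100000 (container root is unprivileged on the host)
print(host_uid(1000))  # → 1000   (the container's "ubuntu" user is your own uid)
```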

Shelling into our container

So, yay, we created this awesome container. How about accessing it and installing some of the applications we want? That's quite easy:

lxc-start -n qmldevel -d
lxc-attach -n qmldevel

At this point, your command-line prompt should show that you are in the container. Here you can run any command you wish. We are going to install the ubuntu-sdk and also terminator, which is now my favourite terminal.

sudo apt-get install ubuntu-sdk terminator

Type exit to exit out of the container.

At this point our container configuration is complete. This was the hardest and longest part. If you are past this, then only one final step is left: creating shortcuts for the applications we would like to launch from within our container. Onward to the next section!

Application shortcuts

So basically here we create a few scripts and .desktop files to launch the applications we just installed in the previous section. First, let's create those scripts; I will explain in a moment why we need them.

Create a script called start-qtcreator with the following contents,

#!/bin/sh
CONTAINER=qmldevel
CMD_LINE="qtcreator $*"

STARTED=false

if ! lxc-wait -n $CONTAINER -s RUNNING -t 0; then
    lxc-start -n $CONTAINER -d
    lxc-wait -n $CONTAINER -s RUNNING
    STARTED=true
fi

# Forward the host X display so graphical apps can render
# (add any other environment variables your setup needs here).
lxc-attach --clear-env -n $CONTAINER -- sudo -u ubuntu -i \
    env DISPLAY=$DISPLAY $CMD_LINE

if [ "$STARTED" = "true" ]; then
    lxc-stop -n $CONTAINER -t 10
fi

Make the script executable with chmod +x start-qtcreator. What the script essentially does is start the container (if it is not already running) and then launch Qt Creator while ensuring the proper environment variables are set.

We are going to create a similar script for launching terminator as well called start-terminator and make it executable.

#!/bin/sh
CONTAINER=qmldevel
CMD_LINE="terminator $*"

STARTED=false

if ! lxc-wait -n $CONTAINER -s RUNNING -t 0; then
    lxc-start -n $CONTAINER -d
    lxc-wait -n $CONTAINER -s RUNNING
    STARTED=true
fi

# Forward the host X display so graphical apps can render
# (add any other environment variables your setup needs here).
lxc-attach --clear-env -n $CONTAINER -- sudo -u ubuntu -i \
    env DISPLAY=$DISPLAY $CMD_LINE

if [ "$STARTED" = "true" ]; then
    lxc-stop -n $CONTAINER -t 10
fi

Now for the very last bit, the .desktop files. For Qt Creator and Terminator, I created the following .desktop files:

[Desktop Entry]
Type=Application
Exec=/home/krnekhelesh/.local/share/lxc/qmldevel/start-qtcreator %F
Name=Ubuntu SDK (LXC)
GenericName=Integrated Development Environment
Keywords=Ubuntu SDK;SDK;Ubuntu Touch;Qt Creator;Qt

Make sure to replace the Exec path with your path. Save the .desktop file as ubuntusdklxc.desktop in ~/.local/share/applications. Do the same for the terminal desktop file,

[Desktop Entry]
Type=Application
# Point Exec at wherever you saved the start-terminator script
Exec=/home/krnekhelesh/.local/share/lxc/qmldevel/start-terminator
Name=Terminator (LXC)
Comment=Multiple terminals in one window

[NewWindow Shortcut Group]
Name=Open a New Window

That's it! When you go to the Unity Dash and search for "Terminator", you should see the entry "Terminator (LXC)" appear. When you launch it, it will seamlessly start the Linux container and then launch Terminator from within it. The best part is that you won't even notice the difference between a native app and the container app.

Check out the screenshot below as proof!


What I usually do is keep my clock app files in /home/krnekhelesh/Documents/Ubuntu-Projects. I do my coding and testing on the host machine. Then, to run the test suite, I quickly open Terminator (LXC) and run the tests, since it already points at the correct folder.

I hope you found this useful.

on September 26, 2014 10:38 PM

VPN in containers

Stéphane Graber

I often have to deal with VPNs, either to connect to the company network, my own network when I’m abroad or to various other places where I’ve got servers I manage.

All of those VPNs use OpenVPN, all with a similar configuration and unfortunately quite a lot of them with overlapping networks. That means that when I connect to them, parts of my own network are no longer reachable or it means that I can’t connect to more than one of them at once.

Those I suspect are all pretty common issues with VPN users, especially those working with or for companies who over the years ended up using most of the rfc1918 subnets.

So I thought: I'm working with containers every day, and nowadays we have those cool namespaces in the kernel which let you run crazy things as a regular user, including getting your own, empty network stack, so why not use that?

Well, that’s what I ended up doing and so far, that’s all done in less than 100 lines of good old POSIX shell script :)

That gives me, fully unprivileged non-overlapping VPNs! OpenVPN and everything else run as my own user and nobody other than the user spawning the container can possibly get access to the resources behind the VPN.

The code is available at: git clone git://github.com/stgraber/vpn-container

Then it’s as simple as: ./start-vpn VPN-NAME CONFIG

What happens next is that the script calls socat to proxy the VPN TCP socket to a UNIX socket; then a user namespace, network namespace, mount namespace and uts namespace are all created for the container. Your user is root in that namespace and so can start openvpn and create network interfaces and routes. With careful use of some bind-mounts, resolvconf and byobu are also made to work, so DNS resolution is functional and byobu can easily give you as many shells as you want in there.

In the end it looks like this:

stgraber@dakara:~/vpn$ ./start-vpn stgraber.net ../stgraber-vpn/stgraber.conf 
WARN: could not reopen tty: No such file or directory
lxc: call to cgmanager_move_pid_abs_sync(name=systemd) failed: invalid request
Fri Sep 26 17:48:07 2014 OpenVPN 2.3.2 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [eurephia] [MH] [IPv6] built on Feb  4 2014
Fri Sep 26 17:48:07 2014 WARNING: No server certificate verification method has been enabled.  See http://openvpn.net/howto.html#mitm for more info.
Fri Sep 26 17:48:07 2014 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Fri Sep 26 17:48:07 2014 Attempting to establish TCP connection with [AF_INET] [nonblock]
Fri Sep 26 17:48:07 2014 TCP connection established with [AF_INET]
Fri Sep 26 17:48:07 2014 TCPv4_CLIENT link local: [undef]
Fri Sep 26 17:48:07 2014 TCPv4_CLIENT link remote: [AF_INET]
Fri Sep 26 17:48:09 2014 [vorash.stgraber.org] Peer Connection Initiated with [AF_INET]
Fri Sep 26 17:48:12 2014 TUN/TAP device tun0 opened
Fri Sep 26 17:48:12 2014 Note: Cannot set tx queue length on tun0: Operation not permitted (errno=1)
Fri Sep 26 17:48:12 2014 do_ifconfig, tt->ipv6=1, tt->did_ifconfig_ipv6_setup=1
Fri Sep 26 17:48:12 2014 /sbin/ip link set dev tun0 up mtu 1500
Fri Sep 26 17:48:12 2014 /sbin/ip addr add dev tun0 broadcast
Fri Sep 26 17:48:12 2014 /sbin/ip -6 addr add 2001:470:b368:1035::50/64 dev tun0
Fri Sep 26 17:48:12 2014 /etc/openvpn/update-resolv-conf tun0 1500 1544 init
dhcp-option DNS
dhcp-option DNS
dhcp-option DNS 2001:470:b368:1020:216:3eff:fe24:5827
dhcp-option DNS nameserver
dhcp-option DOMAIN stgraber.net
Fri Sep 26 17:48:12 2014 add_route_ipv6(2607:f2c0:f00f:2700::/56 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 add_route_ipv6(2001:470:714b::/48 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 add_route_ipv6(2001:470:b368::/48 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 add_route_ipv6(2001:470:b511::/48 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 add_route_ipv6(2001:470:b512::/48 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 Initialization Sequence Completed

To attach to this VPN, use: byobu -S /home/stgraber/vpn/stgraber.net.byobu
To kill this VPN, do: byobu -S /home/stgraber/vpn/stgraber.net.byobu kill-server
or from inside byobu: byobu kill-server

After that, just copy/paste the byobu command and you’ll get a shell inside the container. Don’t be alarmed by the fact that you’re root in there. root is mapped to your user’s uid and gid outside the container so it’s actually just your usual user but with a different name and with privileges against the resources owned by the container.

You can now use the VPN as you want without any possible overlap or conflict with any route or VPN you may be running on that system and with absolutely no possibility that a user sharing your machine may access your running VPN.

This has so far been tested with 5 different VPNs, on a regular Ubuntu 14.04 LTS system with all VPNs being TCP based. UDP based VPNs would probably just need a couple of tweaks to the socat unix-socket proxy.


on September 26, 2014 10:00 PM

The Ubuntu team is pleased to announce the final beta release of Ubuntu 14.10 Desktop, Server, Cloud, and Core products.

Codenamed "Utopic Unicorn", 14.10 continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.

This beta release includes images from not only the Ubuntu Desktop, Server, Cloud, and Core products, but also the Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu Studio and Xubuntu flavours.

The beta images are known to be reasonably free of showstopper CD build or installer bugs, while providing a very recent snapshot of 14.10 that should be representative of the features intended to ship with the final release, expected on October 23rd, 2014.

Ubuntu, Ubuntu Server, Ubuntu Core, Cloud Images

Utopic Final Beta includes updated versions of most of our core set of packages, including a current 3.16.2 kernel, apparmor improvements, and many more.

To upgrade to Ubuntu 14.10 Final Beta from Ubuntu 14.04, follow these instructions:
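The link to the upgrade instructions did not survive in this copy of the announcement; the usual route on an up-to-date 14.04 system is the release upgrader with the development-release flag (a sketch, not the official instructions):

```shell
# Make sure the current system is fully up to date first
sudo apt-get update && sudo apt-get upgrade

# Then upgrade to the development release (14.10 Final Beta)
sudo do-release-upgrade -d
```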


The Ubuntu 14.10 Final Beta images can be downloaded at:

http://releases.ubuntu.com/14.10/ (Ubuntu and Ubuntu Server)

Additional images can be found at the following links:

http://cloud-images.ubuntu.com/releases/14.10/beta-2/ (Cloud Images)
http://cdimage.ubuntu.com/releases/14.10/beta-2/ (Community Supported)
http://cdimage.ubuntu.com/ubuntu-core/releases/14.10/beta-2/ (Core)
http://cdimage.ubuntu.com/netboot/14.10/ (Netboot)

The full release notes for Ubuntu 14.10 Final Beta can be found at:



Kubuntu is the KDE based flavour of Ubuntu. It uses the Plasma desktop and includes a wide selection of tools from the KDE project.

The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/kubuntu/releases/14.10/beta-2/

More information on Kubuntu Final Beta can be found here: https://wiki.ubuntu.com/UtopicUnicorn/Beta2/Kubuntu


Lubuntu is a flavor of Ubuntu that aims to be lighter, less resource-hungry and more energy-efficient, using lightweight applications and LXDE, the Lightweight X11 Desktop Environment, as its default GUI.

The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/lubuntu/releases/14.10/beta-2/

Ubuntu GNOME

Ubuntu GNOME is a flavor of Ubuntu featuring the GNOME desktop environment.

The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/ubuntu-gnome/releases/14.10/beta-2/

More information on Ubuntu GNOME Final Beta can be found here: https://wiki.ubuntu.com/UtopicUnicorn/Beta2/UbuntuGNOME


UbuntuKylin is a flavor of Ubuntu that is more suitable for Chinese users.

The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/ubuntukylin/releases/14.10/beta-2/

Ubuntu Studio

Ubuntu Studio is a flavor of Ubuntu that provides a full range of multimedia content creation applications for each key workflow: audio, graphics, video, photography and publishing.

The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/ubuntustudio/releases/14.10/beta-2/


Xubuntu is a flavor of Ubuntu that comes with Xfce, which is a stable, light and configurable desktop environment.

The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/xubuntu/releases/14.10/beta-2/

Regular daily images for Ubuntu can be found at: http://cdimage.ubuntu.com

Ubuntu is a full-featured Linux distribution for clients, servers and clouds, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional technical support is available from Canonical Limited and hundreds of other companies around the world. For more information about support, visit http://www.ubuntu.com/support

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at: http://www.ubuntu.com/community/participate

Your comments, bug reports, patches and suggestions really help us to improve this and future releases of Ubuntu. Instructions can be found at: https://help.ubuntu.com/community/ReportingBugs

You can find out more about Ubuntu and about this beta release on our website, IRC channel and wiki.

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:


Originally posted to the ubuntu-announce mailing list on Fri Sep 26 02:30:26 UTC 2014 by Adam Conrad

on September 26, 2014 06:29 PM

This month:
* Command & Conquer
* How-To : Install Oracle, LibreOffice, and dmc4che.
* Graphics : GIMP Perspective Clone Tool and Inkscape.
* Linux Labs: Kodi/XBMC, and Compiling a Kernel Pt.2
* Arduino
plus: News, Q&A, Ubuntu Games, and soooo much more.

Grab it while it’s hot


on September 26, 2014 06:06 PM

LEGO. There, now I have your attention.

The LEGO Neighborhood Book is another addition to the series of cool LEGO books published by No Starch Press. In it, you find a set of instructions for building anything from small features like furniture or traffic lights to large things like buildings to populate an entire neighborhood. Unlike the creations of my youth, these buildings are detailed structures. Gone are the standard, boxy things I used to make. Replacing them are fancy window frames, building mouldings, and seriously beautiful architectural touches. In fact, many of those features are discussed and described, giving a context for the builder to understand a little bit about them. Also included are instructions for creating different types of features to put in those buildings. Everything from artwork to plants to kitchen appliances is in there.

I’ve said so much about the books in this series, and it all holds true here, too. Part of me feels bad for the short review here, but the other part of me hates to repeat myself. In this instance, the praise of the past still applies. If you are a LEGO enthusiast, this is worthy of your consideration. Pick it up and take a look.

Disclosure: I was given my copy of this book by the publisher as a review copy. See also: Are All Book Reviews Positive?

on September 26, 2014 02:24 PM

A Core App Dev's Workflow

Nekhelesh Ramananthan


Many times I get asked which version of Ubuntu I use to develop and test Ubuntu Touch apps, or even which device I run my stuff on. So I figured it would be interesting to share how I go about doing what I do, and at the same time share some tips that might help you set up your own workflow.

I am going to start off with my needs, which are:

  1. Develop core apps like Clock, Calendar and be able to test them on a phone form factor (amongst others) to ensure they work as expected.
  2. Develop test suites (Autopilot, QML, Manual Tests) which need to be run on the device before every merge proposal to prevent regressions.

1. Developing and running Core Apps

My primary machine runs Trusty 14.04, period. It is my main machine that I use for development and also for other important purposes like university and personal use cases, and I am not a big fan of updating it every 6 months. To be honest, it has served me quite well so far and I don't want to give that up.


When I heard that the Ubuntu SDK wouldn't be updated in Trusty, I was shocked! I was so fixated on keeping Trusty that I decided to look for alternative ways of developing core apps while still keeping Trusty. So I naturally created a Utopic Virtualbox VM and used that for a while.

Disclaimer: You have to understand, though, that backporting newer versions of the SDK to Trusty is a legitimate challenge, since it would require backporting the entire Qt 5.3 stack, which is a massive undertaking.

That's when I talked to Zoltán Balogh and he explained things to me. There is a distinction between the development environment and the testing environment. So while it is necessary for an application developer to test their application in an environment that best simulates the real device, be that a phone, tablet or anything else, the development environment can very well be any ordinary system (without the latest ubuntu-ui-toolkit and other packages).

This is done by integrating the test environment (Ubuntu Emulator) closely with the Ubuntu SDK IDE. In recent times, it has been a breeze getting core apps like Clock and Calendar running on the phone and the emulator. The i386 emulator starts up rather quickly (around 20-40 seconds) and running your app on the emulator takes about 4-5 seconds. The SDK devs also ensure that the test environment tools, like the ubuntu emulator runtime package and qtcreator-ubuntu-plugin, are up to date on Trusty.

David Callé has done a brilliant job in adding some much needed tutorials to get started with the Ubuntu Emulator and the SDK that you can find here.

And as such I use Trusty 14.04 to develop and run all core apps.

2. Test suites for Core Apps

This one is a bit tricky and is part of the reason why I cannot have one universal golden device to work with. Test suites are an important part of the core apps development process. If your merge proposal doesn't pass the tests, then it certainly will not be accepted. As a result it is important that your testing environment is able to run the test suite to verify that you aren't introducing any regressions.

With Autopilot tests this isn't so much of an issue since with the help of autopkgtests, running tests on the device is quite simple. However as of now, I haven't found a way to run QML tests on the device or emulator despite my best attempts at it. If you do find a way please do answer it here and you would be my hero :D. As a result the next best environment is the development environment. However since Trusty isn't getting the latest SDK which is required for running the tests, I was rather stuck with a Virtualbox VM (which I hate since they are awfully slow and heavy).

As usual, I did what I do best which is to go and complain about that on IRC :P. That's when Sergio Schvezov introduced me to LXC Containers. I had absolutely no idea about them at the time. If I were to describe LXC Containers in a few words it would be,

"LXC Containers are schroots on steroids. They allow you to have any distro's environment without the unnecessary overhead of the desktop shell, linux kernel etc.."

So they are somewhat like the smarter cousins of Virtualbox VMs, which require a hefty amount of resources to run. If you are interested in reading more about LXC then I highly recommend that you take a look at this. If you want a shorter version of how to apply that to Ubuntu Touch development, you will have to wait for my next post :-) which will be about setting up LXC containers and installing the Ubuntu SDK in them.
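For the impatient, creating such a container on a Trusty host looks roughly like this (a sketch only; the container name "sdk" is illustrative, and the full walkthrough belongs to the next post):

```shell
# Create an Utopic container from the pre-built download template images
sudo lxc-create -t download -n sdk -- --dist ubuntu --release utopic --arch amd64

# Start it in the background, then get a shell inside it
sudo lxc-start -n sdk -d
sudo lxc-attach -n sdk
```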


As a 3rd party app dev, you should be able to do pretty much everything related to developing Ubuntu Touch apps on Trusty 14.04 LTS. Don't let anyone convince you that you need the latest release of Ubuntu for that. If you are having issues getting your emulator up and running after reading through the tutorials here, please bring it up on the mailing list or on IRC at #ubuntu-app-devel, #ubuntu-touch.

on September 26, 2014 12:57 PM

Back in April, I upstreamed a bug (that is, reported it to Debian) regarding the `nginx-naxsi` packages. The initial bug I upstreamed was about the outdated naxsi version in the naxsi packages. (see this bug in Ubuntu and the related bug in Debian)

The last update on the Debian bug is on September 10, 2014. That update says the following, and was made by Christos Trochalakis:

After discussing it with the fellow maintainers we have decided that it is
better to remove the nginx-naxsi package before jessie is freezed.

Packaging naxsi is not trivial and, unfortunately, none of the maintainers uses
it. That’s the reason nginx-naxsi is not in a good shape and we are not feeling
comfortable to release and support it.

We are sorry for any inconvenience caused.

I asked what the expected timeline was for the packages being dropped. In a response from Christos today, September 15, 2014, it was said:

It ‘ll get merged and released (1.6.1-3) by the end of the month.

In Ubuntu, these changes will likely not make it into 14.10, but future versions of Ubuntu beyond 14.10 (such as 15.04) will likely have this change.

In the PPAs, the naxsi packages will be dropped with stable 1.6.2-2+precise0 +trusty0 +utopic0 and mainline 1.7.5-2+precise0 +trusty0 +utopic0.

In Debian, these changes have been applied as 1.6.2-2.

on September 26, 2014 12:34 PM

If you are interested in the eBook, take a look. Valid only on 26 September 2014.

InformIT eBook Deal of the Day

on September 26, 2014 11:43 AM

14.10 Beta 2


Kubuntu 14.10 beta 2 is out now for testing by early adopters. This release comes with the stable Plasma 4 we know and love. It also adds another flavour - Kubuntu Plasma 5 Tech Preview.
on September 26, 2014 11:37 AM

Overcoming fear

Valorie Zimmerman

In the last few posts, I've been exploring ideas expressed by Ed Catmull in Creativity, Inc. Everyone likes good ideas! But putting them into practice can be both difficult and frightening. Change is work, and creating something which has never existed before is creating the future. The unknown is daunting.

In meetings with the Braintrust, where new film ideas are viewed and judged, Catmull says,
It is natural for people to fear that such an inherently critical environment will feel threatening and unpleasant, like a trip to the dentist. The key is to look at the viewpoints being offered, in any successful feedback group, as additive, not competitive. A competitive approach measures other ideas against your own, turning the discussion into a debate to be won or lost. An additive approach, on the other hand, starts with the understanding that each participant contributes something (even if it's only an idea that fuels the discussion--and ultimately doesn't work). The Braintrust is valuable because it broadens your perspective, allowing you to peer--at least briefly--through other eyes.[101]
Catmull presents an example where the Braintrust found a problem in The Incredibles film. In this case, they knew something was wrong, but failed to correctly diagnose it. Even so, the director was able, with the help of his peers, to ultimately fix the scene. The problem turned out not to be the voices, but the physical scale of the characters on the screen!

This could happen because the director and the team let go of fear and defensiveness, and trust that everyone is working for the greater good. I often see us doing this in KDE, but in the Community Working Group cases which come before us, I see this breaking down sometimes. It is human nature to be defensive. It takes healthy community to build trust so we can overcome that fear.
on September 26, 2014 09:21 AM

Utopic Unicorn Beta 2

Ubuntu GNOME


Ubuntu GNOME Team is pleased to announce the release of Ubuntu GNOME Utopic Unicorn Beta 2 (Final Beta).

Please do read the release notes.


This is the Beta 2 release. Ubuntu GNOME beta releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Ubuntu GNOME Beta Releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Ubuntu GNOME developers

For those who wish to use the latest releases, please remember to do an upgrade test from Trusty Tahr (Ubuntu GNOME 14.04 LTS) to Utopic Unicorn Beta 2. Needless to say, Ubuntu GNOME 14.04 is an LTS release that is supported for 3 years, so this test is for those who seek the latest system/packages and don't mind giving up the LTS (Long Term Support) release.

To help with testing Ubuntu GNOME:
Please see Testing Ubuntu GNOME Wiki Page.

To contact Ubuntu GNOME:
Please see our full list of contact channels.

Thank you for choosing and testing Ubuntu GNOME!

See Also:
Ubuntu 14.10 (Utopic Unicorn) Final Beta Released – Official Announcement

on September 26, 2014 04:29 AM

The Xubuntu team is pleased to announce the immediate release of Xubuntu 14.10 Beta 2. This is the final beta towards the release in October. Before this beta we have landed various enhancements and some new features. Now it's time to polish the last rough edges and improve stability.
The Beta 2 release is available for download by torrents and direct downloads from

Highlights and known issues

To celebrate the 14.10 codename “Utopic Unicorn” and to demonstrate the easy customisability of Xubuntu, highlight colors have been turned pink for this release. You can easily revert this change by using the theme configuration application (gtk-theme-config) under the Settings Manager; simply turn Custom Highlight Colors “Off” and click “Apply”. Of course, if you wish, you can change the highlight color to something you like better than the default blue!

Known Issues

  • com32r error on boot with usb (1325801)
  • Installation into some virtual machines fails to boot (1371651)
  • Failure to configure wifi in live-session (1351590)
  • Black background to Try/Install dialogue (1365815)

Workarounds for issues in virtual machines

  • Move to TTY1 (with VirtualBox, Right-Ctrl+F1), login and then start lightdm with “sudo service lightdm start”
  • Some people have been able to boot successfully after editing grub and removing the “quiet” and “splash” options
  • Install appears to start OK when systemd is enabled; append “init=/lib/systemd/systemd” to the “linux” line in grub
on September 26, 2014 03:44 AM

September 25, 2014

reduce the risk of losing control of your AWS account by not knowing the root account password

As Amazon states, one of the best practices for using AWS is

Don’t use your AWS root account credentials to access AWS […] Create an IAM user for yourself […], give that IAM user administrative privileges, and use that IAM user for all your work.

The root account credentials are the email address and password that you used when you first registered for AWS. These credentials have the ultimate authority to create and delete IAM users, change billing, close the account, and perform all other actions on your AWS account.

You can create a separate IAM user with near-full permissions for use when you need to perform admin tasks, instead of using the AWS root account. If the credentials for the admin IAM user are compromised, you can use the AWS root account to disable those credentials to prevent further harm, and create new credentials for ongoing use.

However, if the credentials for your AWS root account are compromised, the person who stole them can take over complete control of your account, change the associated email address, and lock you out.

I have consulted for companies that lost control of the AWS root account holding their assets. You want to avoid this.



  • The AWS root account is not required for regular use as long as you have created an IAM user with admin privileges

  • Amazon recommends not using your AWS root account

  • You can’t accidentally expose your AWS root account password if you don’t know it and haven’t saved it anywhere

  • You can always reset your AWS root account password as long as you have access to the email address associated with the account

Consider this approach to improving security:

  1. Create an IAM user with full admin privileges. Use this when you need to do administrative tasks. Activate IAM user access to account billing information for the IAM user to have access to read and modify billing, payment, and account information.

  2. Change the AWS root account password to a long, randomly generated string. Do not save the password. Do not try to remember the password. On Ubuntu, you can use a command like the following to generate a random password for copy/paste into the change password form:

    pwgen -s 24 1
  3. If you need access to the AWS root account at some point in the future, use the “Forgot Password” function on the signin form.
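The pwgen command above isn't installed everywhere; a similar throwaway password can be generated with the Python standard library (my own sketch, not part of Amazon's guidance):

```python
import string
from random import SystemRandom

def random_password(length=24):
    """Generate a long random password, similar in spirit to `pwgen -s 24 1`."""
    # SystemRandom draws from os.urandom, so it is suitable for secrets,
    # unlike the default Mersenne Twister generator.
    alphabet = string.ascii_letters + string.digits
    rng = SystemRandom()
    return "".join(rng.choice(alphabet) for _ in range(length))

print(random_password())
```

Copy/paste the output into the change-password form, then discard it.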

It should be clear from this that protecting access to your email account is critical to your overall AWS security, as that is all that is needed to change your password, but that has been true for many online services for many years.


You currently need to use the AWS root account in the following situations:

  • to change the email address and password associated with the AWS root account

  • to deactivate IAM user access to account billing information

  • to cancel AWS services (e.g., support)

  • to close the AWS account

  • to buy stuff on Amazon.com, Audible.com, etc. if you are using the same account (not recommended)

  • anything else? Let folks know in the comments.


For completeness, I should also reiterate Amazon’s constant and strong recommendation to use MFA (multi-factor authentication) on your root AWS account. Consider buying the hardware MFA device, associating it with your root account, then storing it in a lock box with your other important things.

You should also add MFA to your IAM accounts that have AWS console access. For this, I like to use Google Authenticator software running on a locked down mobile phone.

MFA adds a second layer of protection beyond just knowing the password or having access to your email account.

Original article: http://alestic.com/2014/09/aws-root-password

on September 25, 2014 10:04 PM

Amazon Web Services recently announced an AWS Community Heroes Program where they are starting to recognize publicly some of the many individuals around the world who contribute in so many ways to the community that has grown up around the services and products provided by AWS.

It is fun to be part of this community and to share the excitement that so many have experienced as they discover and promote new ways of working and more efficient ways of building projects and companies.

Here are some technologies I have gotten the most excited about over the decades. Each of these changed my life in a significant way as I invested serious time and effort learning and using the technology. The year represents when I started sharing the “good news” of the technology with people around me, who at the time usually couldn’t have cared less.

  • 1980: Computers and Programming - “You can write instructions and the computer does what you tell it to! This is going to be huge!”

  • 1987: The Internet - “You can talk to people around the world, access information that others make available, and publish information for others to access! This is going to be huge!”

  • 1993: The World Wide Web - “You can view remote documents by clicking on hyperlinks, making it super-easy to access information, and publishing is simple! This is going to be huge!”

  • 2007: Amazon Web Services - “You can provision on-demand disposable compute infrastructure from the command line and only pay for what you use! This is going to be huge!”

I feel privileged to have witnessed amazing growth in each of these and look forward to more productive use on all fronts.

There are a ton of local AWS meetups and AWS user groups where you can make contact with other AWS users. AWS often sends employees to speak and share with these groups.

A great way to meet thousands of people in the AWS community (and to spend a few days in intense learning about AWS no matter your current expertise level) is to attend the AWS re:Invent conference in Las Vegas this November. Perhaps I’ll see you there!

Original article: http://alestic.com/2014/09/aws-community-heroes

on September 25, 2014 10:03 PM
Cubieboard2 (CPU A20)

I bought a Cubieboard2 and I made a Lubuntu 14.04 image! Now it's really fast and easy to deploy that image on a Cubieboard2 with a 4GB NAND.

Download the Lubuntu 14.04 image for CubieBoard2 here.

Write a live distro, for example Cubian, to a microSD card (>8GB) following these steps.

Copy the downloaded Lubuntu image to the root of the microSD card.

Boot the Cubieboard2 with Cubian from the microSD.

Open a Terminal (Menu / Accesories / LXTerminal) and run:
sudo su -
[password is "cubie"]
cd /
gunzip lubuntu-14.04-cubieboard2-nand.img.gz
dd if=/lubuntu-14.04-cubieboard2-nand.img conv=sync,noerror bs=64K of=/dev/nand

It's done! Reboot :) You should now have Lubuntu 14.04.1 running on the 4GB NAND partition. User: linaro, password: linaro.

    Become root for the next steps:
    sudo su -
    • Add your new user (replace 'username' with your new username):
    useradd -m username -G adm,dialout,cdrom,audio,dip,video,plugdev,admin,inet -s /bin/bash ; passwd username

    • Set your keyboard layout at login (for example, Spanish):
    echo 'setxkbmap -layout "es"' >> /etc/xdg/lxsession/Lubuntu/autostart

    • Set the local time (for example, for Spain: Europe/Madrid); otherwise the browser will have problems with https web pages:
    rm /etc/localtime ; ln -s /usr/share/zoneinfo/Europe/Madrid /etc/localtime ; ntpdate ntp.ubuntu.com

    • Change the password of the linaro user, or remove that user (logout required); it has sudo rights and everyone knows the default password, so do it ;):
    userdel -r linaro

    • Install an ssh client to connect via ssh, or pulseaudio and pavucontrol for audio.

    For this image I installed an official Lubuntu 13.04 image from here, and made these changes:
    - Resized the NAND to 4GB (Ubuntu will use 1.5GB; 2GB free). You can use a microSD or SATA HD as external storage.
    - Updated to 13.10 and then to 14.04 LTS (updated the lxde* packages to the latest versions).
    - Installed ntp, firefox, audacious, sylpheed, pidgin, gpicview, lxappearance and ufw (not enabled)
    - Fixed write permissions and group ownership of /etc, / and /lib to avoid ufw warnings
    - Removed chromium-browser, gnome-network-manager and gnome-disk-utility
    - Removed passwordless sudo for admin users (edited /etc/sudoers)
    - Created this dd image

    To make your own backup image of the NAND (insert a microSD card or other storage in your current OS):
    sudo su -
    dd if=/dev/nand conv=sync,noerror bs=64K | gzip -c -9 > /nand.img.gz

    To restore that image later:
    cd /
    gunzip nand.img.gz
    dd if=/nand.img conv=sync,noerror bs=64K of=/dev/nand
    on September 25, 2014 05:31 PM

    Just a quick post to help those who might be running older/unsupported distributions of Linux, mainly Ubuntu 8.04, who need to patch their version of bash due to the recent exploit here:


    I found this post and can confirm it works:


    Here are the steps (make a backup of /bin/bash just in case):

    #assume that your sources are in /src
    cd /src
    wget http://ftp.gnu.org/gnu/bash/bash-4.3.tar.gz
    #download all patches (bash43-001 through bash43-025)
    for i in $(seq -f "%03g" 1 25); do wget http://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-$i; done
    tar zxvf bash-4.3.tar.gz
    cd bash-4.3
    #apply all patches
    for i in $(seq -f "%03g" 1 25); do patch -p0 < ../bash43-$i; done
    #build and install
    ./configure && make && make install

    on September 25, 2014 05:03 PM
    Kubuntu KDE Plasma 5.0.2

    Kubuntu 14.10 with KDE Plasma 5.0.2

    KDE Frameworks 5.2.0 has been released to the Utopic archive!
    (Actually a few days ago, we are playing catch up since Akademy)

    Also, I have finished packaging Plasma 5.0.2, it looks and runs great!
    We desperately need more testers! If you would like to help us test,
    please join us in IRC in #kubuntu-devel thanks!

    on September 25, 2014 05:00 PM
    KDE Akademy 2014

    KDE Akademy 2014 in Brno, Czech Republic

    A few weeks ago I was blessed with the opportunity to attend KDE’s Akademy Conference for the first time. (Thank you Ubuntu Donors for sponsoring me!).
    Akademy is a week long conference that begins with a weekend of keynote speakers, informative lectures, and many hacking groups scattered about.
    This Akademy also had a great pre-release party held by Red Hat.

    I have not traveled such a distance since I was a child, so I was not prepared for the adventures to come. Hint: Pack lightly! I still have nightmares of the giant suitcase I thought I would need! I was lucky to have a travel buddy / roommate (Thank you Valorie Zimmerman!) to assist me in my travels, and most importantly, introducing me to my peers at KDE/Kubuntu that I had never met in person. It was wonderful to finally put a face to the names.

    My first few days were rather difficult. I was fighting my urge to stand in a corner and be shy. Luckily, some friendly folks dragged me out of the corner and introduced me to more and more people. With each introduction and conversation it became easier. I also volunteered at the registration desk, which gave me an opportunity to meet new people. As the days went on and many great conversations later, I forgot I was shy! In the end I made many friends during Akademy, turning this event into one of the most memorable moments of my life.

    The weekend brought Keynote speakers and many informative lectures. Unfortunately, I could not be in several places at once, so I missed a few that I wanted to see.
    Thankfully, you can see them here: https://conf.kde.org/en/Akademy2014/public/schedule/2014-09-06

    Due to circumstances out of their control, the audio is not great. The rest of the week was filled with BoF sessions / workshops / hacking / collaboration / anything we could think of that needed to get done. In the BoF sessions we covered a lot of ground and hashed out ways to resolve problems we were facing. All that I attended were extremely productive. Yet another case where I wish I could split into multiple people so I could attend everything I wanted to!

    Kubuntu Day @ Akademy 2014

    Kubuntu Day @ Akademy 2014

    On Thursday we got an entire Kubuntu Day! We accomplished many things including working with Debian’s Sune and Pino to move some of our packaging to Debian git to reduce duplicate packaging work. We discussed the details of going to continuous packaging which includes Jenkins CI. We also had the pleasure of München’s Limux project joining us to update us with the progress of Kubuntu in Munich, Germany!

    While there was a lot of work accomplished during Akademy, there was also plenty of play as well! In the evenings many of us would go out on the town for dinner and drinks.
    On Wednesday, on the day trip, we visited (what a hike!) an old castle via a nice ferry ride. Unfortunately I forgot my camera in the hostel.. :( The hackroom in the hostel was always bustling with activity. We even had the pleasure of very tasty home-cooked meals by Jos Poortvliet in the tiny hostel kitchen a couple of nights; that took some creative thinking! In the end, there was never a moment of boredom and always moments of learning, discussion, hacking and laughing.

    If you ever have the opportunity to attend Akademy, do not pass it up!

    on September 25, 2014 04:37 PM

    Today I not only submitted my bachelor thesis to the printing company, I also released a new version of hardlink, my file deduplication tool.

    hardlink 0.3 now features support for extended attributes (xattrs), contributed by Tom Keel at Intel. If this does not work correctly, please blame him.

    I also added support for a --minimum-size option.

    Most of the other code has been tested since the upload of RC1 to experimental in September 2012.

    The next major version will split the code into multiple files and clean it up a bit. It's getting a bit long for a single file.

    Filed under: Uncategorized
    on September 25, 2014 12:42 PM

    September 24, 2014

    Today, we worked, with the help of ioerror on IRC, on reducing the attack surface in our fetcher methods.

    There are three things that we looked at:

    1. Reducing privileges by setting a new user and group
    2. chroot()
    3. seccomp-bpf sandbox

    Today, we implemented the first of them. Starting with 1.1~exp3, the APT directories /var/cache/apt/archives and /var/lib/apt/lists are owned by the “_apt” user (username suggested by pabs). The methods switch to that user shortly after the start. The only methods doing this right now are: copy, ftp, gpgv, gzip, http, https.

    If privileges cannot be dropped, the methods will fail to start. No fetching will be possible at all.

    Known issues:

    • We drop all groups except the primary gid of the user
    • copy breaks if that group has no read access to the files

    We plan to also add chroot() and seccomp sandboxing later on, to reduce the attack surface of untrusted file handling and protocol parsing.

    Filed under: Uncategorized
    on September 24, 2014 09:06 PM