April 25, 2017

Welcome to the Ubuntu Weekly Newsletter. This is issue #505 for the weeks April 10 – 23, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • Jim Connett
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on April 25, 2017 02:31 AM

April 24, 2017

Fragmentation is the nature of the beast in the IoT space: a variety of non-interoperable protocols, devices and vendors is the natural result of years of evolution, especially in the industrial space. Traditional standardisation processes and proprietary implementations have been the norm, but the slow pace of their progress makes them a liability for the burgeoning future of IoT. For these reasons, many organisations are taking action to change the legacy IoT mode of operation in the quest for accelerated innovation and improved efficiency.

To aid this progress, today, the Linux Foundation has announced a new open source software project called the EdgeX Foundry. The aim is to create an open framework and unify the marketplace to build an ecosystem of companies offering plug and play components on IoT edge solutions. The Linux Foundation has gathered over 50 companies to be the founding members of this project and Canonical is proud to be one of these.
Here at Canonical, we have been pushing for open source approaches to IoT fragmentation. Last year’s introduction of snaps is one example of this – the creation of a universal Linux packaging format to make it easy for developers to manage the distribution of their applications across devices, distros and releases. They are also safer to run and faster to install. Looking forward, we want to see snaps as the default format across the board to work on any distribution or device from IoT to desktops and beyond.

Just like snaps, the EdgeX framework is designed to run on any operating system or hardware. It can quickly and easily deliver interoperability between connected devices, applications and services across a wide range of use cases. Fellow founding member, Dell, is seeding EdgeX Foundry with its FUSE source code base consisting of more than a dozen microservices and over 125,000 lines of code.

Adopting an open source edge software platform benefits the entire IoT ecosystem, including the system integrators, hardware manufacturers, independent software vendors and the end customers deploying IoT edge solutions. The project is also collaborating with other relevant open source projects and industry alliances to further ensure consistency and interoperability across IoT. These include the Cloud Foundry Foundation, EnOcean Alliance and ULE Alliance.

The EdgeX platform will be on display at the Hannover Messe in Germany from April 24th-28th 2017. Head to the Dell Technologies booth in Hall 8, Stand C24 to see the main demo.


on April 24, 2017 02:09 PM

  • City Network joins the Ubuntu Certified Public Cloud (CPC) programme
  • First major CPC Partner in the Nordics

City Network, a leading European provider of OpenStack infrastructure-as-a-service (IaaS) today joined the Ubuntu Certified Public Cloud programme. Through its public cloud service ‘City Cloud’, companies across the globe can purchase server and storage capacity as needed, paying for the capacity they use and leveraging the flexibility and scalability of the OpenStack-platform.

With dedicated and OpenStack-based City Cloud nodes in the US, Europe and Asia, City Network recently launched in Dubai. As such they are now the first official Ubuntu Certified Public Cloud in the Middle East offering a pure OpenStack-based platform running on Ubuntu OpenStack. Dubai has recently become the co-location and data center location of choice for the Middle East, as Cloud, IoT, and Digitization see massive uptake and market need from public sector, enterprise and SMEs in the region.

City Network provides public, private and hybrid cloud solutions based on OpenStack from 27 data centers around the world. Through its industry specific IaaS, City Network can ensure that their customers can comply with demands originating from specific laws and regulations concerning auditing, reputability, data handling and data security such as Basel and Solvency.

City Cloud Ubuntu lovers—from Stockholm to Dubai to Tokyo—will now be able to use official Ubuntu images, always stable and with the latest OpenStack release included, to run VMs and servers on their favourite cloud provider. Users of other distros on City Cloud are also now able to move to Ubuntu, the no. 1 cloud OS, and opt in to the Ubuntu Advantage support offering, which helps leading organisations around the world to manage their Ubuntu deployments.

“The disruptions of traditional business models and the speed in digital innovations, are key drivers for the great demand in open and flexible IaaS across the globe. Therefore, I am very pleased that we are now entering the Ubuntu Certified Public Cloud program, adding yet another opportunity for our customers to run their IT-infrastructure on an open, scalable and flexible platform,” said Johan Christenson, CEO and founder of City Network.

“Canonical is passionate about bringing the best Ubuntu user experience to users of every public cloud, but is especially pleased to have an OpenStack provider such as City Cloud offering Ubuntu, the world’s most widely used guest Linux,” said Udi Nachmany, Head of Public Cloud, Canonical. “City Cloud is known for its focus on compliance, and will now bring their customers additional choice for their public infrastructure, with an official, secure, and supportable Ubuntu experience.”

Ubuntu Advantage offers enterprise-grade SLAs for business-critical workloads, access to our Landscape systems management tool, the Canonical Livepatch Service for security vulnerabilities, and much more—all available from buy.ubuntu.com.

To start using Ubuntu on the City Cloud Infrastructure please visit https://www.citycloud.com

on April 24, 2017 09:04 AM

April 23, 2017

As you may know, Ubuntu Membership is a recognition of significant and sustained contribution to Ubuntu and the Ubuntu community. To this end, the Community Council recruits from our current member community for the valuable role of reviewing and evaluating the contributions of potential members to bring them on board or assist with having them achieve this goal.

We have seven members of our boards whose terms are expiring, which means we need to do some restaffing of the Membership Board.

We have the following requirements for nominees:

  • be an Ubuntu member (preferably for some time)
  • be confident that you can evaluate contributions to various parts of our community
  • be committed to attending the membership meetings
  • broad insight into the Ubuntu community at large is a plus

Additionally, those sitting on membership boards should have a proven track record of activity in the community. They have shown themselves over time to be able to work well with others and display the positive aspects of the Ubuntu Code of Conduct. They should be people who can discern character and evaluate contribution quality without emotion while engaging in an interview/discussion that communicates interest, a welcoming atmosphere, and which is marked by humanity, gentleness, and kindness. Even when they must deny applications, they should do so in such a way that applicants walk away with a sense of hopefulness and a desire to return with a more complete application rather than feeling discouraged or hurt.

To nominate yourself or somebody else (please confirm they wish to accept the nomination and state you have done so), please send a mail to the membership boards mailing list (ubuntu-membership-boards at lists.ubuntu.com). You will want to include some information about the nominee, a launchpad profile link and which time slot (20:00 or 22:00) the nominee will be able to participate in.

We will be accepting nominations through Friday May 26th at 12:00 UTC. At that time all nominations will be forwarded to the Community Council who will make the final decision and announcement.

Thanks in advance to you and to the dedication everybody has put into their roles as board members.

Originally posted to the ubuntu-news-team mailing list on Sun Apr 23 20:20:38 UTC 2017 by Michael Hall

on April 23, 2017 08:30 PM

KDE neon Translations

Jonathan Riddell

One of the best things about making software collaboratively is the translations.  Sure, I could make a UML diagramming tool or whatever all on my own, but it’s better if I let lots of other people help out, and one of the best crowd-sourcing features of open community development is that you get translated into many popular and obscure languages which it would cost a fortune to pay some company to do.

When KDE was monolithic it shipped translation files in separate kde-l10n tars so users would only have to install the tar for their language and not waste disk space on all the other languages.  This didn’t work great because it’s faffy for people to work out they need to install it and it doesn’t help with all the other software on their system.  In Ubuntu we did something similar where we extracted all the translations and put them into translation packages; doing it at the distro level makes more sense than at the collection-of-things-that-KDE-ships level, but it still has problems when you install updated software.  So KDE has been moving to just shipping the translations along with the individual application or library, which makes sense, and it’s not like the disk space from the unused languages is excessive.

So when KDE neon came along we had translations for KDE frameworks and KDE Plasma straight away because those are included in the tars.  But KDE Applications still made kde-l10n tars which are separate and we quietly ignored them in the hope something better would come along, which pleasingly it now has.  KDE Applications 17.04 now ships translations in the tars for stuff which uses Frameworks 5 (i.e. the stuff we care about in neon). So KDE neon User Editions now include translations for KDE Applications too.  Not only that but Harald has done his genius and turned the releaseme tool into a library so KDE neon’s builder can use it to extract the same translation files into the developer edition packages so translators can easily try out the Git master versions of apps to see what translations look missing or broken.  There’s even an x-test language which makes xxTextxx strings so app developers can use it to check if any strings are untranslated in their applications.

The old kde-l10n packages in the Ubuntu archive would have some file clashes with the in-tar translations which would often break installs in non-English languages (I got complaints about this but not too many which makes me wonder if KDE neon attracts the sort of person who just uses their computer in English).  So I’ve built dummy empty kde-l10n packages so you can now install these without clashing files.

Still plenty to do.  Docs aren’t in the Developer Edition builds.  And System Settings needs some code to make a UI for installing locales and languages of the base system; currently that needs done by hand if it’s not done at install time (apt install language-pack-es).  But at last another important part of KDE’s software is now handled directly by KDE rather than hoping a third party will do the right thing, and trying them out is pleasingly trivial.




on April 23, 2017 01:00 PM

April 22, 2017

Creating an Education Programme

Sridhar Dhanapalan

OLPC Australia had a strong presence at linux.conf.au 2012 in Ballarat, two weeks ago.

I gave a talk in the main keynote room about our educational programme, in which I explained our mission and how we intend to achieve it.

Even if you saw my talk at OSDC 2011, I recommend that you watch this one. It is much improved and contains new and updated material. The YouTube version is above, but a higher quality version is available for download from Linux Australia.

The references for this talk are on our development wiki.

Here’s a better version of the video I played near the beginning of my talk:

I should start by pointing out that OLPC is by no means a niche or minor project. XO laptops are in the hands of 8000 children in Australia, across 130 remote communities. Around the world, over 2.5 million children, across nearly 50 countries, have an XO.

Investment in our Children’s Future

The key point of my talk is that OLPC Australia have a comprehensive education programme that highly values teacher empowerment and community engagement.

The investment to provide a connected learning device to every one of the 300 000 children in remote Australia is less than 0.1% of the annual education and connectivity budgets.

For low socio-economic status schools, the cost is only $80 AUD per child. Sponsorships, primarily from corporates, allow us to subsidise most of the expense (you too can donate to make a difference). Also keep in mind that this is a total cost of ownership, covering the essentials like teacher training, support and spare parts, as well as the XO and charging rack.

While our principal focus is on remote, low socio-economic status schools, our programme is available to any school in Australia. Yes, that means schools in the cities as well. The investment for non-subsidised schools to join the same programme is only $380 AUD per child.

Comprehensive Education Programme

We have a responsibility to invest in our children’s education — it is not just another market. As a not-for-profit, we have the freedom and the desire to make this happen. We have no interest in vendor lock-in; building sustainability is an essential part of our mission. We have no incentive to build a dependency on us, and every incentive to ensure that schools and communities can help themselves and each other.

We only provide XOs to teachers who have been sufficiently enabled. Their training prepares them to constructively use XOs in their lessons, and is formally recognised as part of their professional development. Beyond the minimum 15-hour XO-certified course, a teacher may choose to undergo a further 5-10 hours to earn XO-expert status. This prepares them to be able to train other teachers, using OLPC Australia resources. Again, we are reducing dependency on us.

OLPC Australia certifications

Training is conducted online, after the teacher signs up to our programme and they receive their XO. This scales well to let us effectively train many teachers spread across the country. Participants in our programme are encouraged to participate in our online community to share resources and assist one another.

OLPC Australia online training process

We also want to recognise and encourage children who have shown enthusiasm and aptitude, with our XO-champion and XO-mechanic certifications. Not only does this promote sustainability in the school and give invaluable skills to the child, it reinforces our core principle of Child Ownership. Teacher aides, parents, elders and other non-teacher adults have the XO-basics (formerly known as XO-local) course designed for them. We want the child’s learning experience to extend to the home environment and beyond, and not be constrained by the walls of the classroom.

There’s a reason why I’m wearing a t-shirt that says “No, I won’t fix your computer.” We’re on a mission to develop a programme that is self-sustaining. We’ve set high goals for ourselves, and we are determined to meet them. We won’t get there overnight, but we’re well on our way. Sustainability is about respect. We are taking the time to show them the ropes, helping them to own it, and developing our technology to make it easy. We fundamentally disagree with the attitude that ordinary people are not capable enough to take control of their own futures. Vendor lock-in is completely contradictory to our mission. Our schools are not just consumers; they are producers too.

As explained by Jonathan Nalder (a highly recommended read!), there are two primary notions guiding our programme. The first is that the nominal $80 investment per child is just enough for a school to take the programme seriously and make them a stakeholder, greatly improving the chances for success. The second is that this is a schools-centric programme, driven from grassroots demand rather than being a regime imposed from above. Schools that participate genuinely want the programme to succeed.

OLPC Australia programme cycle

Technology as an Enabler

Enabling this educational programme is the clever development and use of technology. That’s where I (as Engineering Manager at OLPC Australia) come in. For technology to be truly intrinsic to education, there must be no specialist expertise required. Teachers aren’t IT professionals, and nor should they be expected to be. In short, we are using computers to teach, not teaching computers.

The key principles of the Engineering Department are:

  • Technology is an integral and seamless part of the learning experience – the pen and paper of the 21st century.
  • To eliminate dependence on technical expertise, through the development and deployment of sustainable technologies.
  • Empowering children to be content producers and collaborators, not just content consumers.
  • Open platform to allow learning from mistakes… and easy recovery.

OLPC have done a marvellous job in their design of the XO laptop, giving us a fantastic platform to build upon. I think that our engineering projects in Australia have been quite innovative in helping to cover the ‘last mile’ to the school. One thing I’m especially proud of is our insistence on openness. We turn traditional systems administration practice on its head to completely empower the end-user. Technology that is deployed in corporate or educational settings is typically locked down to make administration and support easier. This takes control completely away from the end-user. They are severely limited in what they can do, and if something doesn’t work as they expect then they are totally at the mercy of the admins to fix it.

In an educational setting this is disastrous — it severely limits what our children can learn. We learn most from our mistakes, so let’s provide an environment in which children are able to safely make mistakes and recover from them. The software is quite resistant to failure, both at the technical level (being based on Fedora Linux) and at the user interface level (Sugar). If all goes wrong, reinstalling the operating system and restoring a journal (Sugar user files) backup is a trivial endeavour. The XO hardware is also renowned for its ruggedness and repairability. Less well-known are the amazing diagnostics tools, providing quick and easy indication that a component should be repaired/replaced. We provide a completely unlocked environment, with full access to the root user and the firmware. Some may call that dangerous, but I call that empowerment. If a child starts hacking on an XO, we want to hire that kid 🙂


My talk features the case study of Doomadgee State School, in far-north Queensland. Doomadgee have very enthusiastically taken on board the OLPC Australia programme. Every one of the 350 children aged 4-14 has been issued with an XO, as part of a comprehensive professional development and support programme. Since commencing in late 2010, the percentage of Year 3 pupils at or above national minimum standards in numeracy has leapt from 31% in 2010 to 95% in 2011. Other scores have also increased. Think what you may about NAPLAN, but nevertheless that is a staggering improvement.

In federal parliament, Robert Oakeshott MP has been very supportive of our mission:

Most importantly of all, quite simply, One Laptop per Child Australia delivers results in learning from the 5,000 students already engaged, showing impressive improvements in closing the gap generally and lifting access and participation rates in particular.

We are also engaged in longitudinal research, working closely with respected researchers to have a comprehensive evaluation of our programme. We will release more information on this as the evaluation process matures.

Join our mission

Schools can register their interest in our programme on our Education site.

Our Prospectus provides a high-level overview.

For a detailed analysis, see our Policy Document.

If you would like to get involved in our technical development, visit our development site.


Many thanks to Tracy Richardson (Education Manager) for some of the information and graphics used in this article.

on April 22, 2017 12:28 PM

Adam Holt and I were interviewed last night by the Australian Council for Computers in Education Learning Network about our not-for-profit work to improve educational opportunities for children in the developing world.

We talked about One Laptop per Child, OLPC Australia and Sugar Labs. We discussed the challenges of providing education in the developing world, and how that compares with the developed world.

Australia poses some of its own challenges. As a country that is 90% urbanised, the remaining 10% are scattered across vast distances. The circumstances of these communities often share both developed and developing world characteristics. We developed the One Education programme to accommodate this.

These lessons have been developed further into Unleash Kids, an initiative that we are currently working on to support the community of volunteers worldwide and take the movement to the next level.

on April 22, 2017 12:14 PM

April 21, 2017


Rhonda D'Vine

A fair amount of things happened since I last blogged something else than music. First of all we did actually hold a Debian Diversity meeting. It was quite nice, less people around than hoped for, and I account that to some extent to the trolls and haters that defaced the titanpad page for the agenda and destroyed the doodle entry for settling on a date for the meeting. They even tried to troll my blog with comments, and while I did approve controversial responses in the past, those went over the line of being acceptable and didn't carry any relevant content.

One response that I didn't approve but kept in my mailbox is even giving me strength to carry on. There is one sentence in it that speaks to me: Think you can stop us? You can't you stupid b*tch. You have ruined the Debian community for us. The rest of the message is of no further relevance, but even though I can't take credit for being responsible for that, I'm glad to be a perceived part of ruining the Debian community for intolerant and hateful people.

A lot of other things happened since too. Mostly locally here in Vienna, several queer empowering groups have formed around me; some of them existed already, some formed with my help. We now have several great regular meetings for non-binary people, for queer polyamory people about which we gave an interview, a queer playfight (I might explain that concept another time), a polyamory discussion group, two bi-/pansexual groups, a queer-feminist choir, and there will be a European Lesbian* Conference in October where I help with the organization …

… and on June 21st I'll finally receive the keys to my flat in Que[e]rbau Seestadt. I'm sooo looking forward to it. It will be part of the Let me come Home experience that I'm currently in. Another part of that experience is that I started changing my name (and gender marker) officially. I had my first appointment in the corresponding bureau, and I hope that it won't last too long because I have to get my papers in time for booking my flight to Montreal, and somewhen along the process my current passport won't contain correct data anymore. So for the people who have it in their signing policy to see government IDs this might be your chance to finally sign my key then.

I plan to do a diversity BoF at debconf where we can speak more directly on where we want to head with the project. I hope I'll find the time to do an IRC meeting beforehand. I'm just uncertain how to coordinate that one to make it accessible for interested parties while keeping the destructive trolls out. I'm open for ideas here.


on April 21, 2017 08:01 AM

Since we missed by a whisker getting updated PIM (kontact, kmail, akregator, kgpg, etc.) into Zesty for release day, and we believe it is important that our users have access to this significant update, packages are now available for testers in the Kubuntu backports landing PPA.

While we believe these packages should be relatively issue-free, please bear in mind that they have not been tested as comprehensively as those in the main ubuntu archive.

Testers should be prepared to troubleshoot and hopefully report issues that may occur. Please provide feedback on our mailing list [1], IRC [2], or optionally via social media.

After a period of testing and verification, we hope to move this update to the main backports ppa.

You should have some command line knowledge before testing.
Reading about how to use ppa-purge is also advisable.
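For reference, ppa-purge disables a PPA in your sources and downgrades any packages it installed back to the versions in the main Ubuntu archive. Here is a dry-run sketch of the rollback for the testing PPA used below — the script only assembles and prints the commands, since the real ones need root on an affected system:

```shell
#!/bin/sh
# Dry-run sketch: build the rollback commands for the testing PPA
# without executing them (the real commands need root).
PPA="kubuntu-ppa/backports-landing"

# ppa-purge itself must be installed first
INSTALL_CMD="sudo apt-get install ppa-purge"
# ppa-purge disables the PPA and downgrades its packages to archive versions
PURGE_CMD="sudo ppa-purge ppa:${PPA}"

echo "$INSTALL_CMD"
echo "$PURGE_CMD"
```

Running those two printed commands for real should return a system to the stock Zesty PIM packages if the update causes problems.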

How to test KDE PIM 16.12.3 for Zesty:

Testing packages are currently in the Kubuntu Backports Landing PPA.

sudo add-apt-repository ppa:kubuntu-ppa/backports-landing
sudo apt-get update
sudo apt-get dist-upgrade

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net

on April 21, 2017 01:31 AM

April 20, 2017

S10E07 – Black Frail Silver - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

We spend some time discussing one rather important topic in the news and that’s the announcement of Ubuntu’s re-focus from mobile and convergence to the cloud and Internet of Things.

It’s Season Ten Episode Seven of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Emma Marshall are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on April 20, 2017 02:00 PM
Over the past 6 months I've been running static analysis on linux-next with CoverityScan on a regular basis (to find new issues and fix some of them) as well as keeping a record of the defect count.

Since the beginning of September over 2000 defects have been eliminated by a host of upstream developers and the steady downward trend of outstanding issues is good to see.  A proportion of the outstanding defects are false positives or issues where the code is being overly zealous, for example, bounds checking where some conditions can never happen. Considering there are millions of lines of code, the defect rate is about average for such a large project.

I plan to keep the static analysis running long term and I'll try and post stats every 6 months or so to see how things are progressing.
on April 20, 2017 12:47 PM

April 18, 2017

Goodbye WordPress!

For a long while my personal blog has been running WordPress. Every so often I've looked at other options but never really been motivated to change it, because everything worked, and it was not too much effort to manage.

Then I got 'hacked'. :(

I host my blog on a Bitfolk VPS. I had no idea my server had been compromised until I got a notification on Boxing Day from the lovely Bitfolk people. They informed me that there was a deluge of spam originating from my machine, so it was likely compromised. Their standard procedure is to shut down the network connection, which they did.

At this point I had access to a console to diagnose and debug what had happened. My VPS had multiple copies of WordPress installed, for various different sites. It looks like I had an old theme or plugin on one of them, which the attackers used to splat their evil doings on my VPS filesystem.

Being the Christmas holidays I didn't really want to spend the family time doing lots of forensics or system admin. I had full backups of the machine, so I requested that Bitfolk just nuke the machine from orbit and I'd start fresh.

Bitfolk have a really handy self-service provisioning tool for just these eventualities. All I needed to do was ssh to the console provided and follow the instructions on the wiki, after the network connection was re-enabled, of course.

However, during the use of the self-serve installer we uncovered a bug and a billing inconsistency. Andy at Bitfolk spent some time on Boxing Day to fix both the bug and the billing glitch, and by midnight that night I'd had a bank-transfer refund! He also debugged some DNS issues for me too. That's some above-and-beyond level of service right there!

Hello Nikola!

Once I'd got a clean Ubuntu 16.04 install done, I had a not-so-long think about what I wanted to do for hosting my blog going forward. I went for Nikola - a static website generator. I'd been looking at Nikola on and off since talking about it over a beer with Martin in Heidelberg.

Beer in Heidelberg

As I'd considered this before, I was already a little prepared. Nikola supports importing data from an existing WordPress install. I'd already exported out my WordPress posts some weeks ago, so importing that dump into Nikola was easy, even though my server was offline.

The things that sold me on Nikola were pretty straightforward.

Being static HTML files on my server, I didn't have to worry about php files being compromised, so I could take off my sysadmin hat for a bit, as I wouldn't have to do WordPress maintenance all the time.

Nikola allows me to edit offline easily too. So I can just open my text editor of choice and start bashing away in markdown (other formats are supported). Here you can see what it looks like when I'm writing a blog post in today's favourite editor, Atom. With the markdown preview on the right, I can easily see what my post is going to look like as I type. I imagine I could do this with WordPress too, sure.

Writing this post

Once posts are written I can easily preview the entire site locally before I publish. So I get two opportunities to spot errors, once in Atom while editing and previewing, and again when serving the content locally. It works well for me!

Nikola Workflow

Nikola is configured easily by editing conf.py. In there you'll find documentation in the form of many comments to supplement the online Nikola Handbook. I set a few things like the theme, Disqus comments account name, and configuration of the Bitfolk VPS remote server where I'm going to host it. With ssh keys all setup, I configured Nikola to deploy using rsync over ssh.

When I want to write a new blog post, here's what I do.

cd popey.com/site
nikola new_post -t "Switching from WordPress to Nikola" -f markdown

I then edit the post at my leisure locally in Atom, and enable preview there with CTRL+SHIFT+M.

Once I'm happy with the post I'll build the site:-

nikola build

I can then start nikola serving the pages up on my laptop with:-

nikola serve

This starts a webserver on port 8000 on my local machine, so I can check the content in various browsers, and on mobile devices should I want to.

Obviously I can loop through those few steps over and again, to get my post right. Finally once I'm ready to publish I just issue:-

nikola deploy

This sends the content to the remote host over rsync/ssh and it's live!


Nikola is great! The documentation is comprehensive, and the maintainers are active. I made a mistake in my config and immediately got a comment from the upstream author to let me know what to do to fix it!

I'm only using the bare bones features of Nikola, but it works perfectly for me. Easy to post & maintain and simple to deploy and debug.

Have you migrated away from WordPress? What did you use? Let me know in the comments below.

on April 18, 2017 12:00 PM

I thought I was being smart.  By not buying through AVADirect I wasn’t going to be using an insecure site to purchase my new computer.

For the curious, I ended up purchasing through eBay (A rating) and Newegg (A rating) a new Ryzen (very nice chip!) based machine that I assembled myself. The computer works mostly OK, but has some stability issues. A BIOS update appeared on the MSI website promising some stability fixes, so I decided to apply it.

The page that links to the download is HTTPS, but the actual download itself is not.
I flash the BIOS and now appear to have a brick.

As part of troubleshooting I find that the MSI website has bad HTTPS security, the worst page being:

Given the poor security, and now wanting a motherboard with a more reliable BIOS (currently I need to send the board back at my expense for an RMA), I looked at other Micro ATX motherboards, starting with a Gigabyte board, which has even fewer pages using any HTTPS; the ones that do are even worse:

Unfortunately a survey of motherboard vendors indicates MSI failing with Fs might put them in second place.   Most just have everything in the clear, including passwords.   ASUS clearly leads the pack, but no one protects the actual firmware/drivers you download from them.

| Vendor                     | Main Website           | Support Site | RMA Process | Forum                 | Download Site | Actual Download |
|----------------------------|------------------------|--------------|-------------|-----------------------|---------------|-----------------|
| MSI                        | F                      | F            | F           | F                     | F             | Plain text      |
| AsRock                     | Plain text             | Email        | Email       | Plain text            | Plain text    | Plain text      |
| Gigabyte (login site is F) | Plain text             | Plain text   | Plain text  | Plain text            | Plain text    | Plain text      |
| EVGA                       | Plain text default/A-  | Plain text   | Plain text  | A                     | Plain text    | Plain text      |
| ASUS                       | A-                     | A-           | B           | Plain text default/A  | A-            | Plain text      |
| BIOSTAR                    | Plain text             | Plain text   | Plain text  | n/a?                  | Plain text    | Plain text      |

A quick glance indicates that vendors that make full systems use more security (ASUS and MSI being examples of system builders).

We rely on the security of these vendors for most self-built PCs. We should demand HTTPS by default across the board. It’s 2017 and a BIOS file is 8MB; cost hasn’t been a factor for years.

on April 18, 2017 12:50 AM

April 17, 2017

March was a busy month, so this monthly report is a little late. I worked two weekends, and I was planning my Easter holiday, so there wasn’t a lot of spare time.


  •  Updated Dominate to the latest version and uploaded to experimental (due to the Debian Stretch release freeze).
  • Uploaded the latest version of abcmidi (also to experimental).
  • Pinged the bugs for reverse dependencies of pygoocanvas and goocanvas with a view to getting them removed from the archive during the Buster cycle.
  • Asked for help on the Ubuntu Studio developers and users mailing lists to test the coming Ubuntu Studio 17.04 release ISO, because I would be away on holiday for most of it.


  • Worked on ubuntustudio-controls, reverting it back to an earlier revision that Len said was working fine. Unfortunately, when I built and installed it from my ppa, it crashed. Eventually found my mistake with the bzr reversion, fixed it and prepared an upload ready for sponsorship. Submitted a Freeze Exception bug in the hope that the Release Team would accept it even though we had missed the Final Beta.
  • Put a new power supply in an old computer that was kaput, and got it working again. Set up Ubuntu Server 16.04 on it so that I could get a bit more experience with running a server. It won’t last very long, because it is a 32-bit machine, and Ubuntu will probably drop support for that architecture eventually. I used two small spare drives to set up RAID 1 & LVM (so that I can add more space to it later). I set up some Samba shares, so that my wife will be able to get at them from her Windows machine. For music streaming, I set up Emby Server. It would be great to see this packaged for Debian. I uploaded all of my photos and music for Emby to serve around the home (and remotely as well). Set up Obnam to back up the server to an external USB stick (temporarily, until I set up something remote). Set up Let's Encrypt with the wonderful Certbot program.
  • Did the Release Notes for Ubuntu Studio 17.04 Final Beta. As I was in Brussels for two days, I was not able to do any ISO testing myself.


  • Measured up the new model railway layout and documented it in xtrkcad.
  • Started learning Ansible some more by setting up ssh on all my machines so that I could access them with Ansible and manipulate them using a playbook.
  • Went to the Open Source Days conference just down the road in Copenhagen. Saw some good presentations. Of interest for my previous work in the Debian GIS Team was a presentation from the Danish Municipalities on how they run projects using Open Source. I noted their use of Proj.4 and OSGeo. I was also pleased to see a presentation from Ximin Luo on Reproducible Builds, and introduced myself briefly after his talk (during the break).
  • Started looking at creating a Django website to store and publish my One Name Study sources (indexes).  Started by creating a library to list some of my recently read Journals. I will eventually need to import all the others I have listed in a CSV spreadsheet that was originally exported from the commercial (Windows only) Custodian software.

Plan status from last month & update for next month


For the Debian Stretch release:

  • Keep an eye on the Release Critical bugs list, and see if I can help fix any. – In Progress


  • Package all the latest upstream versions of my Debian packages, and upload them to Experimental to keep them out of the way of the Stretch release. – In Progress
  • Begin working again on all the new stuff I want packaged in Debian.


  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. – Started
  • Start testing & bug triaging Ubuntu Studio packages. – In progress
  • Test Len’s work on ubuntustudio-controls – Done
  • Do the Ubuntu Studio Zesty 17.04 Final Beta release. – Done
  • Sort out the Blueprints for the coming Ubuntu Studio 17.10 release cycle.


  • Give JMRI a good try out and look at what it would take to package it. – In progress
  • Also look at OpenPLC for simulating the relay logic of real railway interlockings (i.e. a little bit of the day job at home involving free software – fun!). – In progress

on April 17, 2017 02:35 PM

While my daily driver shell is ZSH, when I script, I tend to target Bash. I’ve found it’s the best mix of availability & feature set. (Ideally, scripts would be in pure posix shell, but then I’m missing a lot of features that would make my life easier. On the other hand, ZSH is not available everywhere, and certainly many systems do not have it installed by default.)

I’ve started trying to use the Bash “extended test command” ([[) when I write tests in bash, because it has fewer ways you can misuse it with bad quoting (the shell parses the whole test command rather than parsing it as arguments to a command) and I find the operations available easier to read. One of those operations is pattern matching of strings, which allows for stupidly simple substring tests and other conveniences. Take, for example:

animals="bird cat dog"
if [[ $animals == *dog* ]] ; then
  echo "We have a dog!"
fi

This is an easy way to see if an item is contained in a string.
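The quoting robustness mentioned earlier is easy to demonstrate with a variable containing spaces; here's a quick sketch (variable names are just for illustration):

```shell
name="two words"

# The classic test command word-splits the unquoted variable, so [
# receives too many arguments and errors out (stderr suppressed here):
[ $name = "two words" ] 2>/dev/null || echo "[ ] mis-parses unquoted variables"

# [[ parses the whole expression itself, so no quoting is needed:
[[ $name == "two words" ]] && echo "[[ ]] handles them fine"
```

That's the "fewer ways you can misuse it with bad quoting" point in practice.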

Anyone who’s done programming or scripting is probably aware that the equality operator (i.e., test for equality) is a commutative operator. That is to say the following are equivalent:

if [[ $a == $b ]] ; then
  echo "a and b are equal."
fi
if [[ $b == $a ]] ; then
  echo "a and b are still equal."
fi

Seems obvious right? If a equals b, then b must equal a. So surely we can reverse our test in the first example and get the same results.

animals="bird cat dog"
if [[ *dog* == $animals ]] ; then
  echo "We have a dog!"
else
  echo "No dog found."
fi

Go ahead, give it a try, I’ll wait here.

OK, you probably didn’t even need to try it, or this would have been a particularly boring blog post. (Which isn’t to say that this one is a page turner to begin with.) Yes, it turns out that sample prints No dog found., but obviously we have a dog in our animals. If equality is commutative and the pattern matching worked in the first place, then why doesn’t this test work?

Well, it turns out that the equality test operator in bash isn’t really commutative – or more to the point, that the pattern expansion isn’t commutative. Reading the Bash Reference Manual, we discover that there’s a catch to pattern expansion:

When the ‘==’ and ‘!=’ operators are used, the string to the right of the operator is considered a pattern and matched according to the rules described below in Pattern Matching, as if the extglob shell option were enabled. The ‘=’ operator is identical to ‘==’. If the nocasematch shell option (see the description of shopt in The Shopt Builtin) is enabled, the match is performed without regard to the case of alphabetic characters. The return value is 0 if the string matches (‘==’) or does not match (‘!=’) the pattern, and 1 otherwise. Any part of the pattern may be quoted to force the quoted portion to be matched as a string.

(Emphasis mine.)

It makes sense when you think about it (I can’t begin to think how you would compare two patterns) and it is at least documented, but it wasn’t obvious to me. Until it bit me in a script – then it became painfully obvious.
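Putting it all together, here's a small self-contained demonstration of the asymmetry, plus the last sentence of the manual excerpt (quoting forces a literal match):

```shell
animals="bird cat dog"

# Pattern on the right-hand side: *dog* is a pattern, so this matches
[[ $animals == *dog* ]] && echo "right side: match"

# Pattern on the left-hand side: *dog* is compared as a literal string
[[ *dog* == $animals ]] || echo "left side: no match"

# Quoting the right-hand side disables pattern matching there too
[[ $animals == "*dog*" ]] || echo "quoted: no match"
```

So the moral is: keep the pattern on the right, and quote it only when you want a literal comparison.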

Like many of these posts, writing this is intended primarily as a future reference to myself, but also in hopes it will be useful to someone else. It took me half an hour of Googling to get the right keywords to discover this documentation (I didn’t know the double bracket syntax was called the “extended test command”, which helps a lot), so hopefully it took you less time to find this post.

on April 17, 2017 07:00 AM

Probably like most of you I have a Raspberry Pi 2 sitting around not doing a lot. A project that I wanted to use mine for is setting up reliable network access to my home network when I'm away. I'm a geek, so network access for me means SSH. The problem with a lot of solutions out there is that ISPs on home networks change IPs, routers have funky port configurations, and a host of other annoyances that make setting up access unreliable. That's where Pagekite comes in.

Pagekite is a service that is based in Iceland and allows tunneling various protocols, including SSH. It gives a DNS name at one end of that tunnel and allows connecting from anywhere. They run on Open Source software and their libraries are all Open Source. They charge a small fee, which I think is reasonable, but they also provide a free trial account that I used to set this up and test it. You'll need to signup for Pagekite to get the name and secret to fill in below.

The first thing I did was set up Ubuntu Core on my Pi and get it booting and configured. Using the built-in configure tool, it grabs my SSH keys already, so I don't need to do any additional configuration of SSH. You should always use key-based login when you can. Then I SSH'd in on the local network to install and set up a small Pagekite snap I made, like this:

# Install the snap
sudo snap install pagekite-ssh
# Configure the snap
snap set pagekite-ssh kitename=<your name>.pagekite.me kitesecret=<a bunch of hex>
# Restart the service to pickup the new config
sudo systemctl restart snap.pagekite-ssh.pagekite-ssh.service 
# Look at the logs to make sure there are no errors
journalctl --unit snap.pagekite-ssh.pagekite-ssh.service 

I then I configured my SSH to connect through Pagekite by editing my .ssh/config

Host *.pagekite.me
    User <U1 name> 
    IdentityFile ~/.ssh/id_launchpad
    CheckHostIP no
    ProxyCommand /bin/nc -X connect -x %h:443 %h %p

Now I can SSH into my Raspberry Pi from anywhere on the Internet! You could also install this on other boards Ubuntu Core supports, or anywhere snapd runs.

What is novel to me is that I now have a small low-power board that I can plug into any network; it will grab an IP address and set up a tunnel to a known address so I can access it. It will also update itself without me interacting with it at all. I'm considering putting one at my Dad's house as well, to enable helping him with his network issues when the need arises. Make sure to only put these on networks where you have permission, though!

on April 17, 2017 05:00 AM

Coming Up To Periscope Depth

Stephen Michael Kellat

I can say that work has not been pretty as of late. Some people have contacted me via Telegram. The secure call function works nicely provided I wear headphones with a built-in microphone so I can see the screen to read off the emojis to the other participant. When we get to April 28th I should have a clearer picture as to how bad things are going to get at work as a federal civil servant. My ability to travel to OggCamp 17 will probably be a bit more known by then.

I have not been to Europe since 1998 so it would be nice to leave the Americas again for at least a little while after having paid visits to the British Virgin Islands, American Samoa, and Canada in the intervening time. If I could somehow figure out how to engage in a "heritage quest" to Heachem in Norfolk, that would be nice too. I know we've seen Daniel Pocock touch on the mutability of citizenship on Planet Ubuntu before but, as I tell the diversity bureaucrats at work from time to time, some of my forebears met the English coming off the boat at Jamestown. If I did the hard work I could probably join Sons of the American Revolution.

So, what might I talk about at OggCamp 17 if I had the time? Right now the evaluation project, with the helpful backing of three fabulous sponsors, is continuing in limited fashion relative to Outernet. We have a rig and it is set up. The garage is the safest place to put it although I eventually want to get an uninterruptible power supply for it due to the flaky electrical service we have at times.

The hidden rig with a WiFi access point attached for signal boost

Although there was a talk at Chaos Communications Congress 33 by Daniel Estévez about Outernet that I have still not watched, it appears I am approaching things from a different angle. I'm looking at this from the viewpoint of evaluating the content being distributed and how it is chosen. Daniel evaluated the hardware.

The What's New screen seen from my cell phone's Firefox browser on Android
The tuner screen

There is still much to review. Currently I'm copying the receiver's take and putting it on a Raspberry Pi in the house that is running on the internal network, with lighttpd pointing at the directory so that I can view the contents. Eventually I'll figure out how to get the necessary WiFi bridging sorted out to bring the CHIP board's minuscule hotspot signal from the detached garage into the house. Currently there is a WiFi extender right next to the CHIP so I don't have to be so close to the board to pick up the hotspot signal while using my laptop.

Things remain in progress, of course.

on April 17, 2017 03:10 AM

April 15, 2017

My Ubuntu 16.04 GNOME Setup

This is a post for friends who saw my desktop screenshot and anyone else who likes Unity and is looking at alternatives. A big thanks to Stuart Langridge and Joey Sneddon whose linked posts inspired some of this.

The recent news that upcoming versions of Ubuntu will use GNOME as the default desktop rather than Unity made me take another look at the GNOME desktop.

If you're not interested in my opinion but just want to know what I did, then jump to "Migration from Unity to GNOME" below.

Why Unity?

I'm quite a Unity fan - yes, we exist! I've used it on my laptops and desktops as my daily desktop pretty much since it came out, long before I worked at Canonical. I've tried a few other desktop environments, usually for no more than a week or so before getting frustrated and running back to Unity.

Here's what my typical desktop looks like on Ubuntu 16.04

Unity as I use it

At this point I'm sure there are a few people reading this and wondering why I like Unity, incredulous that anyone would. I get this from time to time. Some people seem to bizarrely think "I don't like Unity, therefore nobody does.", which is ludicrous, but very obviously happening.

Anecdotally, I still see way more Unity screenshots than other desktops in random non-Linux videos on YouTube, on stranger's laptops on trains & on "millions of dollars" worth of laptops sold by Dell, System76 etc. I've also been told in person by people who like it, but don't like speaking up for fear of unwanted confrontation. ¯\_(ツ)_/¯

But it's a vocal minority of Linux users who tell me what desktop I (and everyone else) shouldn't use. Screw them, it's my PC, I'll run what I like. :)

However, that said, Unity is "dead", apparently, despite it having a few years of support left on the 16.04 LTS release. So I thought I'd take a fresh look at GNOME to see if I can switch to it easily and keep the parts of the Linux desktop I like, change the things I don't and tolerate the things I can't.

For me, it's not one single feature that made me come back to Unity time and time again, but a variety of reasons. Here's a non-exhaustive list of features I enjoy:-

  • Dash - Single button + search to launch apps and find files
  • HUD - Single button + search to find application features in menus
  • Launcher - Quick access via keyboard (or mouse) to top 10+ apps I use, always on screen
  • Window controls - Top left is their rightful place
  • Menus - In the titlebar or top bar (global)
  • App & Window switch behaviour via Alt+Tab & Alt+(key-above-tab)
  • App Spread - Super+S and Super+W to see all windows, or all windows of an app
  • Focus follows mouse - Initially global menu broke this but it was fixed

Much of this comes down to "is really well managed with just a keyboard" which is amusing given how many people tell me Unity (before Unity 8) is awful because it's designed for touch screens.

The things I think could be improved in Unity comprise a pretty short list, and if I thought really hard, I might expand this. If I did they'd probably only be papercut nit-picks rather than significant issues. So, I would have liked these things to have been fixed at some point, but that probably won't happen now :(

  • Memory footprint - It would be nice if the RAM usage of Unity was lower.
  • CPU/GPU overhead - Sometimes it can take a second or two to launch the dash, which should be near-instant all the time
  • Incompleteness - There were interesting designs & updates which never really got finished in Unity7
  • Cross distro support - It would have been nice to have Unity on other distros than just Ubuntu

So let's say a fond farewell to my primary desktop for over 6 years and make the switch.

Migration from Unity to GNOME

With that said, to move from Unity to GNOME on my ThinkPad T450 running Ubuntu 16.04 LTS I did the following:-

Install GNOME

I decided to go with the GNOME version shipping in the archive. People have suggested I try PPAs, but for my primary desktop I'd rather keep using the archive builds, unless there's some really compelling reason to upgrade.

So I backed up my laptop - well, I didn't - my laptop is backed up automatically every 6 hours, so I just figured if anything went belly-up I'd go back to the earlier backup. I then installed GNOME using this command:-

sudo apt install ubuntu-gnome-desktop^

Logout from Unity, choose GNOME at the login screen and we're in.

Default GNOME Desktop

Default GNOME Desktop

First impressions

These are the things that jump out at me that I don't like and how they're fixed. One thing that's pretty neat about GNOME Shell is the ability to modify it via extensions. For most of the things I didn't like, there was an extension to change the behaviour.

Some are just plain extensions installed via GNOME Extensions, but some needed extra fiddling with Tweak Tool.

Activities hot corner

I find this too easily triggered, so I used No TopLeft Hot Corner. Later, I also discovered Hide Activities Button, which helps even more by moving the window controls to the very top left, without the "Activities" button in the way. I can still use the Super key to find things with Activities hidden.

No Launcher

GNOME hides the launcher until you press Activities or the Super key. I fixed that with Dash to Dock.

In Tweak Tool, Dash to Dock settings -> Position and size -> tick "Panel mode: extend to the screen edge". I set "Intelligent Autohide" off, because I never liked that feature in Unity, although it had some vocal fans. Also I set the pixel size to 32px. In the Launchers tab I set "Move the applications button at the beginning of the dock".

Legacy indicators

Apparently developers are terrible people and haven't updated their indicators to some new spec, so they get relegated to the "Lower Left Corner of Shame". This is dumb. I used TopIcons Plus to put them where $DEITY intended, and where my eyes are already looking, the top right corner.

Volume control

In Unity I'm used to adjusting the master volume with the mouse wheel while the sound indicator is clicked. I fixed this with Better Volume Indicator

Giant titlebars

GNOME always feels to me like it's designed to waste vertical space with titlebars so I added Pixel Saver.

Missing Rubbish Bin

I like having the Trash / Rubbish Bin / Recycle Bin / Basket on screen. In Unity it's at the bottom of the launcher. I couldn't find an extension which did this so I used trash extension to move it to the top panel indicator area.

Slow animations

Some things felt a bit sluggish to me, so it was recommended that I install the Impatience extension, which seems to have helped my perception, if nothing else.

Remaining niggles

Things I haven't figured out yet. If you have any suggestions, do let me know in the comments below.

  • How to hide the clock completely
    • I frequently record screencasts of my laptop, and the time jumping around in the recording can be distracting. So I'd like to just hide the clock, but I don't see an easy way to do that yet.
  • Make accelerator keys work in alt+space window menu
    • For many years I have used the accelerators in the window controls menu accessed via Alt+space to do things like maximize the window. Alt+Space,x is welded in my muscle memory. I don't understand why they were disabled in GNOME Shell (they work in other desktops).
  • Alt-Tab behaviour is broken (by design (IMHO))
    • All windows of an application come to front when Alt+Tabbed to, even if I only want one window. I have to dance around with Alt+Tab & Alt+Grave.

Reader Suggestions

In the comments below, the following additional extensions have been suggested.

Greg suggested the Alt Tab List First Window Extension which on initial play seems to fix the Alt-Tab issue listed above! Many thanks Greg!

Alif mentioned Status Area Horizontal Spacing which is great for compressing the gaps out of the indicator area in the top right, to use the space more efficiently. Thanks Alif!


So this is now my desktop, and I'm quite pleased with it! Massive thanks to the GNOME team, the Ubuntu GNOME flavour team, and all the extension authors who made this possible.

My new Ubuntu GNOME Desktop

My new Ubuntu GNOME Desktop

Initially I was a bit frustrated by the default behaviour of GNOME Shell. I've been pleasantly surprised by the extent and functionality of extensions available. Without them, there's no way I'd use this on a daily basis, as it's just too irritating. I'm sure somebody loves the default behaviour though, and that's great :)

I realise I'm running an 'old' version of GNOME Shell (3.18) coming directly from the Ubuntu 16.04 (LTS) archive. It may be that there are additional features or fixes that will improve things further. I won't be upgrading to 16.10, 17.04 or 17.10, however, and likely won't use a GNOME PPA for my primary desktop. I'll stick with this until 18.04 (the next Ubuntu LTS) has baked for a while. I don't want to upgrade to 18.04 and find extensions break and put me backwards again.

I've had this setup for a few days now, and I'm pretty pleased with how it went. Did you try this? Any other changes you made? Let me know in a comment below! Thanks. :D

on April 15, 2017 09:50 PM

Outdoors laptop

Serge Hallyn

I like to work outside: at a park, on the beach, etc. For years I've made do with regular laptops, but all those years I've really wanted an e-ink laptop to avoid the squinting, the headaches and the search for shade. The pixel-qi displays raised my hopes, but those were quickly dashed when they closed their doors. For a brief time there were two e-ink laptops for sale. They were quite underpowered and expensive, but more importantly they're no longer around.

Maybe it’s time to build one. There are many ways one could go about it:

  • Get a toughbook with a transflective display
  • Get a rooted nook and run vncclient connected to a server on my laptop or in a vm
  • Get a dasung e-ink monitor connected to my laptop. Not cheap, and dubious linux support.
  • Actually it seems an external pixel-qi display may be available right now. Still pretty steep price.
  • Attach a keyboard to a nook and use that standalone
  • Get a used pixelqi, put it in some sort of case, and hook it up as a separate display
  • Get a small e-ink (2″) display, hook it up to a rpi or beaglebone black
  • Get a used pixelqi display and install it in something like a used lenovo s10-3
  • Get a freewrite and hack it to be an ssh terminal. Freewrite themselves don’t like that idea.
  • Get a used OLPC with pixelqi display.

So is there anyone in the community with similar goals? What are you using? How’s it working for you?

on April 15, 2017 04:03 PM

April 13, 2017

Thanks to all the hard work from our contributors, Lubuntu 17.04 has been released! With the codename Zesty Zapus, Lubuntu 17.04 is the 12th release of Lubuntu, with support until January of 2018. What is Lubuntu? Lubuntu is an official Ubuntu flavor based on the Lightweight X11 Desktop Environment (LXDE). The project’s goal is to […]
on April 13, 2017 10:52 PM
We are happy to announce the release of our latest version, Ubuntu Studio 17.04 Zesty Zapus! As a regular version, it will be supported for 9 months. Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list […]
on April 13, 2017 10:31 PM

Kubuntu 17.04 Released!

Kubuntu General News

Codenamed “Zesty Zapus”, Kubuntu 17.04 continues our proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

The team has been hard at work through this cycle, introducing new features and fixing bugs.

Under the hood, there have been updates to many core packages, including a new 4.10-based kernel, KDE Frameworks 5.31, Plasma 5.9.4 and KDE Applications 16.12.3.

The Kubuntu Desktop has seen some exciting improvements, with newer versions of Qt, updates to major packages like Krita, Kdenlive, Firefox and LibreOffice, and stability improvements to the Plasma desktop environment.

For a list of other application updates, upgrading notes and known bugs be sure to read our release notes.

Download 17.04 or read about how to upgrade from 16.10.

on April 13, 2017 04:19 PM

The Xubuntu team is happy to announce the immediate release of Xubuntu 17.04.

Xubuntu 17.04 is a regular release and will be supported for 9 months, until January 2018. If you need a stable environment with longer support time, we recommend that you use Xubuntu 16.04 LTS instead.

The final release images are available as torrents and direct downloads from

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

We’d like to thank everybody who contributed to this release of Xubuntu!


For support with the release, navigate to Help & Support for a complete list of methods to get help.

Highlights, Notes and Known Issues


Several Xfce panel plugins and applications have been ported to GTK+ 3, paving the way for improved theming and further development. Core Xfce libraries exo and libxfce4ui have also been updated with full GTK+ 3 support, the latter adding support for Glade development in Xubuntu with the installation of libxfce4ui-glade. The Greybird and Numix themes have also been refreshed with improved support for the toolkit.

Camera functionality has been restored in Mugshot, Parole introduced a new mini mode and improvements for network streams, and a number of welcome fixes have made their way into Thunar and Ristretto. Simon Tatham’s Portable Puzzle Collection (sgt-puzzles), an addicting collection of logic games, has been included along with the new SGT Puzzles Collection (sgt-launcher).


For new installs a swap file will be used instead of a swap partition. Upgrades from earlier versions are not affected.

Known Issues

  • System encryption password is set before setting the keyboard layout (bug 1047384), giving users errors about the wrong password when decrypting in some cases. The workaround for this is to start the installation with the correct keyboard layout; press F3 to set your keyboard layout before booting either installation option.
  • While recent patches for Thunar fixed problems for many, it still has some unresolved issues.
  • Parole has some issues as well and can crash in certain situations.

For more information on affecting bugs, bug fixes and a list of new package versions, please refer to the Release Notes.

on April 13, 2017 04:15 PM

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In March, about 190 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Antoine Beaupré did 19 hours (out of 14.75h allocated + 10 remaining hours, thus keeping 5.75 extra hours for April).
  • Balint Reczey did nothing (out of 14.75 hours allocated + 2.5 hours remaining) and gave back all his unused hours. He took on a new job and will stop his work as LTS paid contributor.
  • Ben Hutchings did 14.75 hours.
  • Brian May did 10 hours.
  • Chris Lamb did 14.75 hours.
  • Emilio Pozuelo Monfort did 11.75 hours (out of 14.75 hours allocated + 0.5 hours remaining, thus keeping 3.5 hours for April).
  • Guido Günther did 4 hours (out of 8 hours allocated, thus keeping 4 extra hours for April).
  • Hugo Lefeuvre did 4 hours (out of 13.5 hours allocated, thus keeping 9.5 extra hours for April).
  • Jonas Meurer did 11.25 hours (out of 14.75 hours allocated, thus keeping 3.5 extra hours for April).
  • Markus Koschany did 14.75 hours.
  • Ola Lundqvist did 23.75 hours (out of 14.75h allocated + 9 hours remaining).
  • Raphaël Hertzog did 15 hours (out of 10 hours allocated + 6.25 hours remaining, thus keeping 1.25 hours for April).
  • Roberto C. Sanchez did 21.5 hours (out of 14.75 hours allocated + 7.75 hours remaining, thus keeping 1 extra hour for April).
  • Thorsten Alteholz did 14.75 hours.

Evolution of the situation

The number of sponsored hours has been unchanged but will likely decrease slightly next month, as one sponsor will not renew their support (because they have switched to CentOS).

The security tracker currently lists 52 packages with a known CVE and the dla-needed.txt file 40. The number of open issues continued its slight increase… not worrisome yet but we need to keep an eye on this situation.

Thanks to our sponsors

New sponsors are in bold.


on April 13, 2017 04:05 PM

S10E06 – Tasty Different Cow - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

We discuss soldering the zerostem, review the new Dell XPS 13, share some command line love and go over your feedback.

This show was recorded prior to Mark Shuttleworth’s announcement about Growing Ubuntu for Cloud and IoT, rather than Phone and convergence. We’ll be discussing that news in a future episode of the Ubuntu Podcast.

It’s Season Ten Episode Six of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Stuart Langridge are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on April 13, 2017 02:00 PM

The Ubuntu GNOME developers are proud to announce our latest non-LTS release 17.04. For the first time in Ubuntu GNOME’s history, this release includes the latest stable release of GNOME, 3.24.

Although Ubuntu’s release schedule was originally centered around shipping the latest GNOME release, this had not been possible since Ubuntu GNOME’s first release four years ago.

Take a look at our release notes for a list of highlighted changes and improvements.

The future of Ubuntu GNOME

As announced last week by Ubuntu founder Mark Shuttleworth, Ubuntu 18.04 LTS will include GNOME instead of Unity. Specifically, it will be GNOME (including gnome-shell) with minimal Ubuntu customization. Next year, if you are using either Ubuntu 16.04 LTS or Ubuntu GNOME 16.04 LTS, you will be prompted to upgrade to Ubuntu 18.04 LTS. For normal release users, this upgrade should happen with the release of 17.10.

As a result of this decision there will no longer be a separate GNOME flavor of Ubuntu. The development teams from both Ubuntu GNOME and Ubuntu Desktop will be merging resources and focusing on a single combined release that provides the best of both GNOME and Ubuntu. We are currently liaising with the Canonical teams on how this will work, and more details will be announced as we work out the specifics.

The Ubuntu community is important to Mark Shuttleworth and he has publicly stated on Google+: “We will invest in Ubuntu GNOME with the intent of delivering a fantastic all-GNOME desktop. We’re helping the Ubuntu GNOME team, not creating something different or competitive with that effort. While I am passionate about the design ideas in Unity, and hope GNOME may be more open to them now, I think we should respect the GNOME design leadership by delivering GNOME the way GNOME wants it delivered.”

The Ubuntu GNOME developers were already members of the Ubuntu Desktop team since both teams shared responsibility for maintaining many parts of GNOME. We will continue to work closely with them and push the visions of Ubuntu GNOME into the main Ubuntu Community.

We started and maintained Ubuntu GNOME with the vision of bringing the most popular desktop experience [1] to the most popular Linux distribution. We are excited that we are now able to bring this powerful combination to many more people. If you want an early preview of what this will look like, try Ubuntu GNOME 17.04 today!

On behalf of the Ubuntu GNOME team,
Jeremy Bicha and Tim Lunn

[1] GNOME is shipped as the default desktop on a majority of Linux distributions.

on April 13, 2017 01:15 PM
Yesterday I was laid off by Canonical after working 6 years with them as a QA Engineer. I really loved my job and learned quite a lot, but now I must find a new job to survive. I have been involved with Ubuntu for close to 8 years, started as a contributor to the Ubuntu community, later I was offered a role at Canonical to work on Ubuntu.

I am very passionate about software. When I am not working, I write code, and when I am working I write code. I have never stopped learning technologies across different domains over the last few years, so apart from my full-time job, I taught myself Android app development and Django for writing RESTful APIs (not a full-stack developer yet). To some extent I also do DevOps work and have been managing many of my own deployments.

As a QA Engineer, I can help you set up test plans, find coverage gaps, automate your tests and enable them to run as part of CI in Jenkins. Apart from automation, I have extensive experience with manual testing as well, so I can really break your product (for good).

My Linux skills are quite competitive: having used Ubuntu exclusively for 8 years, I am very comfortable with the command line and with remote debugging over SSH. I am experienced with both git and bzr. I am also very passionate about embedded devices and have experimented with some very cool things on the Raspberry Pi.

I live in Pakistan (GMT+5) but I am pretty flexible with work hours, so if an opportunity arises I can readily work in a different timezone. I don't have any specific preference regarding the size of the company, so I am very willing to work for companies of all sizes. I am also open to freelance opportunities, so if you don't have a full-time role, I can work as a freelancer/consultant.

my linkedin: https://www.linkedin.com/in/omer-akram-44830248/
my github: https://www.github.com/om26er
my launchpad: https://www.launchpad.net/~om26er
email: om26er@gmail.com

Exciting times ahead.
on April 13, 2017 01:14 PM

April 12, 2017

Jonas Öberg has recently blogged about Using Proprietary Software for Freedom. He argues that it can be acceptable to use proprietary software to further free and open source software ambitions if that is indeed the purpose. Jonas' blog suggests that each time proprietary software is used, the relative risk and reward should be considered and there may be situations where the reward is big enough and the risk low enough that proprietary software can be used.

A question of leadership

Many of the free software users and developers I've spoken to express frustration about how difficult it is to communicate to their family and friends about the risks of proprietary software. A typical example is explaining to family members why you would never install Skype.

Imagine a doctor who gives a talk to school children about the dangers of smoking and is then spotted having a fag at the bus stop. After a month, if you ask the children what they remember about that doctor, is it more likely to be what he said or what he did?

When contemplating Jonas' words, it is important to consider this leadership factor as a significant risk every time proprietary software or services are used. Getting busted with just one piece of proprietary software undermines your own credibility and posture now and well into the future.

Research has shown that when communicating with people, what they see and how you communicate makes up ninety-three percent of the impression you make; what you actually say to them is only seven percent. When giving a talk at a conference or a demo to a client, or communicating with family members in our everyday lives, using a proprietary application or a product or service that is obviously proprietary like an iPhone or Facebook will have far more impact than the words you say.

It is not only a question of what you are seen doing in public: somebody who lives happily and comfortably without using proprietary software sounds a lot more credible than somebody who tries to explain freedom without living it.

The many faces of proprietary software

One of the first things to consider is that even for those developers who have a completely free operating system, there may well be some proprietary code lurking in their BIOS or other parts of their hardware. Their mobile phone, their car, their oven and even their alarm clock are all likely to contain some proprietary code too. The risks associated with these technologies may well be quite minimal, at least until that alarm clock becomes part of the Internet of Things and can be hacked by the bored teenager next door. Accessing most web sites these days inevitably involves some interaction with proprietary software, even if it is not running on your own computer.

There is no need to give up

Some people may consider this state of affairs and simply give up, using whatever appears to be the easiest solution for each problem at hand without thinking too much about whether it is proprietary or not.

I don't think Jonas' blog intended to sanction this level of complacency. Every time you come across a piece of software, it is worth considering whether a free alternative exists and whether the software is really needed at all.

An orderly migration to free software

In our professional context, most software developers come across proprietary software every day in the networks operated by our employers and their clients. Sometimes we have the opportunity to influence the future of these systems. There are many cases where telling the client to go cold-turkey on their proprietary software would simply lead to the client choosing to get advice from somebody else. The free software engineer who looks at the situation strategically may find that it is possible to continue using the proprietary software as part of a staged migration, gradually helping the user to reduce their exposure over a period of months or even a few years. This may be one of the scenarios where Jonas is sanctioning the use of proprietary software.

On a technical level, it may be possible to show the client that we are concerned about the dangers but that we also want to ensure the continuity of their business. We may propose a solution that involves sandboxing the proprietary software in a virtual machine or a DMZ to prevent it from compromising other systems or "calling home" to the vendor.

As well as technical concerns about a sudden migration, promoters of free software frequently encounter political issues as well. For example, the IT manager in a company may be five years from retirement and not concerned about the employer's long-term ability to extricate itself from a web of Microsoft licenses once he or she has the freedom to go fishing every day. The free software professional may need to invest significant time winning the trust of senior management before being able to work around a belligerent IT manager like this.

No deal is better than a bad deal

People in the UK have probably encountered the expression "No deal is better than a bad deal" many times already in the last few weeks. Please excuse me for borrowing it. If there is no free software alternative to a particular piece of proprietary software, maybe it is better to simply do without it. Facebook is a great example of this principle: life without social media is great and rather than trying to find or create a free alternative, why not just do something in the real world, like riding motorcycles, reading books or getting a cat or dog?

Burning bridges behind you

For those who are keen to be the visionaries and leaders in a world where free software is the dominant paradigm, would you really feel satisfied if you got there on the back of proprietary solutions? Or are you concerned that taking such shortcuts is only going to put that vision further out of reach?

Each time you solve a problem with free software, whether it is small or large, in your personal life or in your business, the process you went through strengthens you to solve bigger problems the same way. Each time you solve a problem using a proprietary solution, not only do you miss out on that process of discovery but you also risk conditioning yourself to be dependent in future.

For those who hope to build a successful startup company or be part of one, how would you feel if you reached your goal and then the rug was pulled out from under you when a proprietary software vendor or cloud service you depend on changed the rules?

Personally, in my own life, I prefer to avoid and weed out proprietary solutions wherever I can and force myself to either make free solutions work or do without them. Using proprietary software and services is living your life like a rat in a maze, where the oligarchs in Silicon Valley can move the walls around as they see fit.

on April 12, 2017 06:43 AM

April 10, 2017

Alan Turing's name and his work are well known to anybody with a theoretical grounding in computer science. Turing developed his theories well before anybody invented file sharing, overclocking or mass surveillance. In fact, Turing was largely working in the absence of any computers at all: the transistor was only invented in 1947 and the microchip, the critical innovation that has made computing both affordable and portable, only came in 1960, six years after Turing's death. To this day, the Turing Test remains a well known challenge in the field of Artificial Intelligence. The most prestigious prize in computing, the A.M. Turing Award from the ACM, equivalent to the Nobel Prize in other fields of endeavour, is named in Turing's honour. (This year's award went to another British scientist, Sir Tim Berners-Lee, inventor of the World Wide Web.)

Potentially far more people know of Alan Turing for his groundbreaking work at Bletchley Park and the impact it had on cracking the Nazis' Enigma machines during World War 2, giving the Allies an advantage against Hitler.

While in his lifetime, Turing exposed the secret communications of the Nazis, in his death, he exposed something manifestly repugnant about his own society. Turing's challenges with his sexuality (or Britain's challenge with it) are just as well documented as his greatest scientific achievements. The 2014 movie The Imitation Game tells Turing's story, bringing together the themes from his professional and personal life.

Had Turing chosen to flee British persecution by going abroad, he would be a refugee in the same sense as any person who crossed the seas to reach Europe today to avoid persecution elsewhere.

Please prove me wrong

In March, I blogged about the problem of racism that plagues Britain today. While some may have felt the tone of the blog was quite strong, I was in no way pleased to find my position affirmed by the events that occurred in the two days after the blog appeared.

Two days and two more human beings (both immigrants and both refugees) subjected to abhorrent and unnecessary acts of abuse in Great Britain. Both cases appear to be fuelled directly by the evil that has been oozing out of number 10 Downing Street since they decided to have a referendum on "Brexit".

What stands out about these latest crimes is not that they occurred (this type of thing has been going on for months now) but certain contrasts between their circumstances and to a lesser extent, the fact they occurred immediately after Theresa May formalized Britain's departure from the EU. One of the victims was almost beaten to death by a street gang, while the other was abused by men wearing uniforms. One was only a child, while the other is a mature adult who has been in the UK almost three decades, completely assimilated into British life, working and paying taxes. Both were doing nothing out of the ordinary at the time the abuse occurred: one had engaged in a conversation at a bus stop, the other was on a routine visit to a Government office. There is no evidence that either of them had done anything to provoke or invite the abhorrent treatment meted out to them by the followers of Theresa May and Nigel Farage.

The first victim, on 30 March, was Stojan Jankovic, a refugee from Yugoslavia who has been in the UK for 26 years. He had a routine meeting at an immigration department office where he was ambushed, thrown in the back of a van and sent to rot in a prison cell by Theresa May's gestapo. On Friday, 31 March, it was Reker Ahmed, a 17 year old Kurdish-Iranian beaten to the brink of death by a crowd in south London.

One of the more remarkable facts to emerge about these two cases is that while Stojan Jankovic was basically locked up for no reason at all, the street thugs whom the police apprehended for the assault on Ahmed were kept in a cell for less than 48 hours and released again on bail. While the harmless and innocent Jankovic was eventually released after a massive public outcry, he spent more time locked up than that gang of violent criminals who beat Reker Ahmed.

In other words, Theresa May and Nigel Farage's Britain has more concern for the liberty of violent criminals than somebody like Jankovic who has been working and paying taxes in the UK since before any of those street thugs were born.

A deeper insight into Turing's fate

With gay marriage having been legal in the UK for a number of years now, the rainbow flag flying at the Tate and Sir Elton John achieving a knighthood, it becomes difficult for people to relate to the world in which Turing and many other victims were collectively classified by their sexuality, systematically persecuted by the state and ultimately died far sooner than they should have. (Turing was only 41 when he died).

In fact, the cruel and brutal forces that ripped Turing apart (and countless other victims too) haven't dissipated at all, they have simply shifted their target. The slanderous comments insinuating that immigrants "steal" jobs or that Islam is about terrorism are eerily reminiscent of suggestions that gay men abduct young boys or work as Soviet spies. None of these lies has any basis in fact, but repeat them often enough in certain types of newspaper and these ideas spread like weeds.

In an ironic twist, Turing's groundbreaking work at Bletchley Park was founded on the contributions of Polish mathematicians; their own country having been the first casualty of Hitler's aggression, they were both immigrants and refugees in Britain. Today, under the Theresa May/Nigel Farage leadership, Polish citizens have been subjected to regular vilification by the media and some have even been killed in the street.

It is said that a picture is worth a thousand words. When you compare these two pieces of propaganda: a 1963 article in the Sunday Mirror advising people "How to spot a possible homo" and a UK Government billboard encouraging people to be on the lookout for people who look different, could you imagine the same type of small-minded and power-hungry tyrants crafting them, singling out a minority so as to keep the public's attention in the wrong place?

Many people have noticed that these latest UK Government posters portray foreigners, Muslims and basically anybody who is not white using a range of characteristics found in anti-Semitic propaganda from the Third Reich.

Do the people who create such propaganda appear to have any concern whatsoever for the people they hurt? How would Alan Turing have felt when he encountered propaganda like that from the Sunday Mirror? Do posters like these encourage us to judge people by their gifts in science, the arts or sporting prowess or do they encourage us to lump them all together based on their physical appearance?

It is a basic expectation of scientific methodology that when you repeat the same experiment, you should get the same result. What type of experiment are Theresa May and Nigel Farage conducting and what type of result would you expect?

Playing ping-pong with children

If anybody has any doubt that this evil comes from the top, take a moment to contemplate the 3,000 children who were baited with the promise of resettlement from the Calais "jungle" camp into the UK under the Dubs amendment.

When French authorities closed the "jungle" in 2016, the children were lured out of the camp and left with nowhere to go as Theresa May and French authorities played ping-pong with them. Given that the UK parliament had already agreed they should be accepted, was there any reason for Theresa May to dig her heels in and make these children suffer? Or was she just trying to prove her credentials as somebody who can bastardize migrants just the way Nigel Farage would do it?

How do British politicians really view migrants?

Parliamentarian Keith Vaz, former chair of the Home Affairs Select Committee (responsible for security, crime, prostitution and similar things) was exposed with young men from eastern Europe, encouraging them to take drugs before he ordered them "Take your shirt off. I'm going to attack you.". How many British MP's see foreigners this way? Next time you are groped at an airport security checkpoint, remember it was people like Keith Vaz and his committee who oversee those abuses, writing among other things that "The wider introduction of full-body scanners is a welcome development". No need to "take your shirt off" when these machines can look through it as easily as they can look through your children's underwear.

According to the World Health Organization, HIV/AIDS kills as many people as the September 11 attacks every single day. Keith Vaz apparently had no concern for the possibility he might spread this disease any further: the media reported he doesn't use any protection in his extra-marital relationships.

While Britain's new management continue to round up foreigners like Stojan Jankovic who have done nothing wrong, they chose not to prosecute Keith Vaz for his antics with drugs and prostitution.

Who is Britain's next Alan Turing?

Britain's next Alan Turing may not be a homosexual. He or she may have been a child turned away by Theresa May's spat with the French at Calais, a migrant bundled into a deportation van by the gestapo (who are just following orders) or perhaps somebody of Muslim appearance who is set upon by thugs in the street who have been energized by Nigel Farage. If you still have any uncertainty about what Brexit really means, this is it: a country that denies itself the opportunity to be great by subjecting itself to the "divide and conquer" mantra of the colonial era.

Throughout the centuries, Britain has produced some of the most brilliant scientists of their time. Newton, Darwin and Hawking are just some of those who are even more prominent than Turing, household names around the world. One can only wonder what the history books will have to say about Theresa May and Nigel Farage however.

Next time you see a British policeman accosting a Muslim, whether it is at an airport, in a shopping centre, keeping Manchester United souvenirs or simply taking a photograph, spare a thought for Alan Turing and the era when homosexuals were their target of choice.

on April 10, 2017 08:01 PM

If you’ve been following the news, you’ll probably know that Ubuntu is dropping Unity. I would say this came as a surprise to many of us, given the many years of effort invested in Unity 8 and how close it was to completion.

It was speculated that, since Unity 8 is now dropped, Mir would also be dropped. However, it looks like it will still be developed, but not necessarily for desktop usage.

But speaking of that post, I found it quite unfortunate how Mark talked about “Mir-hating”, simplifying it to seem like it was irrational hatred with very little rational grounds:

“It became a political topic as irrational as climate change or gun control, where being on one side or the other was a sign of tribal allegiance”

“[…] now I think many members of the free software community are just deeply anti-social types who love to hate on whatever is mainstream”

“The very same muppets would write about how terrible it was that IOS/Android had no competition and then how terrible it was that Canonical was investing in (free software!) compositing and convergence”

Now, in all fairness, I haven’t been involved enough in the community to know much about the so-called “Mir hate-fest”. It is very possible that I haven’t seen the irrational, tribal-like hatred he was talking about. However, the “hatred” I have seen falls mainly into two categories:

  1. People (like me) who were worried about Mir splitting up the Linux desktop, basically forcing any Linux user who cares about graphical performance to be under Canonical’s control.
  2. People worrying about architectural problems in the codebase (or other code-related issues).

Both of these, IMO, are quite valid concerns, and should be allowed to be voiced, without being disregarded as “irrational hate”.

I’ll admit, my original post on this topic was pretty strong (and admittedly not very well articulated either). However, I believe that it’s important, especially in a free software community, to be able to voice our opinions about projects and decisions. Software circles that tend to stifle open discussion (I’ve seen this especially in various proprietary software communities) have, at least in my opinion, a terrible atmosphere, and the community tends to suffer as a whole, because companies feel they have power over their users and can do anything they want in hopes of gaining more profit.

In Mark’s defense, I agree that it is very important to stay respectful and constructive, and I apologize for the tone in my first post. I haven’t seen many other rude comments towards Mir, but as I said, I could be wrong. Having a lot of rude comments towards your software is very difficult for those behind the project to handle, and usually doesn’t amount to anywhere constructive anyways.

But I think that saying something along the lines of “anyone who disagrees that Mir is a good project is an idiot” (“I agree, it’s a very fast, clean and powerful graphics composition engine, and smart people love it for that.”, alongside the quotes I mentioned above) is very counterproductive to maintaining a good free software ecosystem.

My bottom line for this post is: I believe it’s vital to allow healthy discussion of projects within the free software community, but in a respectful and constructive manner, and I believe this point is especially important for large projects used by many people (such as Ubuntu, the Linux kernel, etc.).

on April 10, 2017 04:52 AM

April 08, 2017

Repeatable spell deployments with conjure-up

In our upcoming 2.2 release conjure-up will now automatically write out a custom bundle that incorporates your deployment and any configuration changes that were made in order to ease the burden of customized repeatable deployments.

Deployment and Customization

First, as of this writing, you'll need to install conjure-up from our beta channel in order to pick up this new feature:

sudo snap install conjure-up --classic --beta  

For this example, we'll walk through a simple Kubernetes deployment and make a small application configuration change.

Next, you'll select the cloud you wish to deploy to and, if necessary, the bootstrap process will begin. Once that is complete the Application List screen will appear, and this is where we'll make the adjustment.

Navigate your keyboard arrow keys over to the kubernetes-master [Configure] button and press enter. You will then be presented with the ability to change a few configuration options. For this exercise we are going to install the Docker bits from upstream and not from the Ubuntu archive.

Once that's done, tab over to APPLY CHANGES button and proceed.

Now at this point the bundle has been written and we can finish up the installation by answering the questions on the Additional Application Tasks screen. I won't go into that section; let's jump to the end, where the summary screen has been displayed and you've exited out of conjure-up.

Creating the repeatable spell deployment

This next part requires copying both the kubernetes-core spell and the bundle that was written to your own custom spell directory. To do that we'll need to look in our cache directory located at ~/.cache/conjure-up. Running a tree on that directory shows us the following:

$ tree ~/.cache/conjure-up
├── conjure-up.log
├── kubernetes-core
│   ├── metadata.yaml
│   ├── readme.md
│   ├── steps
│   │   ├── 00_deploy-done
│   │   ├── 00_pre-deploy
│   │   ├── lxd-profile.yaml
│   │   ├── step-01_get-kubectl
│   │   ├── step-01_get-kubectl.yaml
│   │   ├── step-02_cluster-info
│   │   └── step-02_cluster-info.yaml
│   └── tests
│       ├── deploy
│       └── verify
└── kubernetes-core-deployed-2017-04-07-20:26:25.yaml

What you see here is the kubernetes-core spell that we just deployed via conjure-up, along with a file named kubernetes-core-deployed-2017-04-07-20:26:25.yaml, which is the bundle that was written out during our customization section.

Viewing this particular bundle file, we can see that our customizations were applied in addition to the rest of the bundle that conjure-up requires to deploy.

machines:
  '0':
    series: xenial
  '1':
    series: xenial
  '2':
    series: xenial
  '3':
    series: xenial
relations:
- - kubernetes-master:certificates
  - easyrsa:client
- - kubernetes-worker:certificates
  - easyrsa:client
- - etcd:certificates
  - easyrsa:client
- - kubernetes-master:etcd
  - etcd:db
- - kubernetes-master:kube-api-endpoint
  - kubernetes-worker:kube-api-endpoint
- - kubernetes-master:cluster-dns
  - kubernetes-worker:kube-dns
series: xenial
services:
  easyrsa:
    charm: cs:~containers/easyrsa-7
    num_units: 1
    to:
    - '0'
  etcd:
    charm: cs:~containers/etcd-24
    num_units: 1
    to:
    - '1'
  kubernetes-master:
    charm: cs:~containers/kubernetes-master-12
    num_units: 1
    options:
      install_from_upstream: true  # <- Look here I'm new!
    to:
    - '2'
  kubernetes-worker:
    charm: cs:~containers/kubernetes-worker-14
    num_units: 1
    to:
    - '3'
What happens here is that conjure-up first downloads the bundle from the Charm Store, then reconstructs it with our custom configuration changes and writes out a new compatible bundle that can be used within a conjure-up spell.
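The idea of overlaying user configuration onto a downloaded bundle can be pictured with a small sketch. This is illustrative Python, not conjure-up's actual implementation; the function and variable names are invented for the example:

```python
# Illustrative sketch: overlay per-application option changes onto a bundle
# downloaded from the Charm Store, returning a new bundle and leaving the
# original untouched. Not conjure-up's real code.
import copy

def apply_overrides(bundle, overrides):
    """Return a copy of `bundle` with per-application option overrides merged in."""
    merged = copy.deepcopy(bundle)
    for app, options in overrides.items():
        merged["services"][app].setdefault("options", {}).update(options)
    return merged

# A trimmed-down bundle, mirroring the kubernetes-master entry above.
bundle = {
    "series": "xenial",
    "services": {
        "kubernetes-master": {
            "charm": "cs:~containers/kubernetes-master-12",
            "num_units": 1,
        },
    },
}

custom = apply_overrides(
    bundle, {"kubernetes-master": {"install_from_upstream": True}}
)
```

Because the original bundle is left untouched, the same spell could be re-rendered with different overrides each time.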

In order to get your new spell in working order there are just a couple of steps to perform:

Copy the cached spell to somewhere of your choosing; we'll stick with $HOME/kubernetes-core:

cp -a ~/.cache/conjure-up/kubernetes-core ~/kubernetes-core  

Next, copy the newly created bundle into the spell directory you just copied, making sure to rename it to bundle.yaml:

cp ~/.cache/conjure-up/kubernetes-core-deployed-2017-04-07-20\:26\:25.yaml ~/kubernetes-core/bundle.yaml  

And that's it!

You can test that your deployment reflects the changes you made by simply re-running conjure-up against that spell directory:

$ conjure-up ~/kubernetes-core

This should prove very useful for teams who wish to document and version control spells for use in automated deployments or perhaps a teaching tool that is customized for your lab environment. Couple this feature with the ability to further customize headless deployments and you've just become the coolest kid at the party.

Further information

Website, GitHub, AskUbuntu, Chat

on April 08, 2017 12:51 AM

April 07, 2017

A huge THANK YOU to the entire HackerNews community, from the Ubuntu community!  Holy smokes...wow...you are an amazing bunch!  Your feedback in the thread, "Ask HN: What do you want to see in Ubuntu 17.10?" is almost unbelievable!

We're truly humbled by your response.

I penned this thread, somewhat on a whim, from the Terminal 2 lounge at London Heathrow last Friday morning before flying home to Austin, Texas.  I clicked "submit", closed my laptop, and boarded an 11-hour flight, wondering if I'd be apologizing to my boss and colleagues later in the day, for such a cowboy approach to Product Management...

When I finally signed on to the in-flight WiFi some 2 hours later, I saw this post at the coveted top position of HackerNews page 1, with a whopping 181 comments (1.5 comments per minute) in the first two hours.  Impressively, it was only 6am on the US west coast by that point, so SFO/PDX/SEA weren't even awake yet.  I was blown away!

This thread is now among the most discussed threads ever in the history of HackerNews, with some 1115 comments and counting at the time of this blog post.

 2530 comments   3125 points     2016-06-24      UK votes to leave EU    dmmalam
2215 comments 1817 points 2016-11-09 Donald Trump is the president-elect of the U.S. introvertmac
1448 comments 1330 points 2016-05-31 Moving Forward on Basic Income dwaxe
1322 comments 1280 points 2016-10-18 Shame on Y Combinator MattBearman
1215 comments 1905 points 2015-06-26 Same-Sex Marriage Is a Right, Supreme Court Rules imd23
1214 comments 1630 points 2016-12-05 Tell HN: Political Detox Week – No politics on HN for one week dang
1121 comments 1876 points 2016-01-27 Request For Research: Basic Income mattkrisiloff
*1115 comments 1333 points 2017-03-31 Ask HN: What do you want to see in Ubuntu 17.10? dustinkirkland
1090 comments 1493 points 2016-10-20 All Tesla Cars Being Produced Now Have Full Self-Driving Hardware impish19
1088 comments 2699 points 2017-03-07 CIA malware and hacking tools randomname2
1058 comments 1188 points 2014-03-16 Julie Ann Horvath Describes Sexism and Intimidation Behind Her GitHub Exit dkasper
1055 comments 2589 points 2017-02-28 Ask HN: Is S3 down? iamdeedubs
1046 comments 2123 points 2016-09-27 Making Humans a Multiplanetary Species [video] tilt
1030 comments 1558 points 2017-01-31 Welcome, ACLU katm
1013 comments 4107 points 2017-02-19 Reflecting on one very, very strange year at Uber grey-area
1008 comments 1990 points 2014-04-10 Drop Dropbox PhilipA

Rest assured that I have read every single one, and many of my colleagues have followed closely along as well.

In fact, to read and process this thread, I first attempted to print it out -- but cancelled the job before it fully buffered, when I realized that it's 105 pages long!  Here's the PDF (1.6MB), if you're curious, or want to page through it on your e-reader.

So instead, I wrote the following Python script, using the HackerNews REST API, to download the thread from Google Firebase into a JSON document, and import it into MongoDB, for item-by-item processing.  Actually, this script will work against any HackerNews thread, and it recursively grabs nested comments.  Next time you're asked to write a recursive function on a white board for a Google interview, hopefully you'll remember this code!  :-)

$ cat ~/bin/hackernews.py

import json
import requests
import sys


def get_json_from_url(item):
    url = "https://hacker-news.firebaseio.com/v0/item/%s.json" % item
    data = json.loads(requests.get(url=url).text)
    #print(json.dumps(data, indent=4, sort_keys=True))
    if "kids" in data and len(data["kids"]) > 0:
        for k in data["kids"]:
            data[k] = json.loads(get_json_from_url(k))
    return json.dumps(data)


data = json.loads(get_json_from_url(sys.argv[1]))
print(json.dumps(data, indent=4, sort_keys=False))

It takes 5+ minutes to run, so you can just download a snapshot of the JSON blob from here (768KB), or if you prefer to run it yourself...

$ hackernews.py 14002821 | tee 14002821.json

First some raw statistics...

  • 1109 total comments
  • 713 unique users contributed a comment
  • 211 users contributed more than 1 comment
    • 42 comments/replies contributed by dustinkirkland (that's me)
    • 12 by vetinari
    • 11 by JdeBP
    • 9 by simosx and jnw2
  • 438 top level comments
    • 671 nested/replies
  • 415 properly formatted uses of "Headline:"
    • Thank you!  That was super useful in my processing of these!
  • 519 mentions of Desktop
  • 174 mentions of Server
  • 69 + 64 mentions of Snaps and Core
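For the curious, counts like these can be derived directly from the JSON blob the script produces. The following is a sketch of that processing, assuming the nested layout hackernews.py emits (each child item stored under its kid id inside its parent); the helper names and traversal logic are my own:

```python
import collections


def walk(item):
    """Yield an item and, recursively, every nested comment stored
    under its kid ids, as hackernews.py lays them out."""
    yield item
    for k in item.get("kids", []):
        # After a JSON round-trip the keys become strings, so try both.
        child = item.get(k) or item.get(str(k))
        if child:
            yield from walk(child)


def stats(thread):
    """Compute a few of the raw statistics from a downloaded thread."""
    comments = [i for i in walk(thread) if i.get("type") == "comment"]
    authors = collections.Counter(i.get("by") for i in comments)
    return {
        "total_comments": len(comments),
        "unique_users": len(authors),
        "top_level": len(thread.get("kids", [])),
    }
```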
I'll try to summarize a few of my key interpretations of the trends, having now processed the entire discussion.  Sincere apologies in advance if I've (a) misinterpreted a theme, (b) skipped your favorite theme, or (c) conflated concepts.  If any of these are the case, well, please post your feedback in the HackerNews thread associated with this post :-)

First, grouped below are some of the Desktop themes, with some fuzzy, approximate "weighting" by the number of pertinent discussions/mentions/vehemence.
  • Drop MIR/Unity for Wayland/Gnome (351 weight)
    • Release/GA Unity 8 (15 weight)
    • Easily, the most heavily requested, major change in this thread was for Ubuntu to drop MIR/Unity in favor of Wayland/Gnome.  And that's exactly what Mark Shuttleworth announced in an Ubuntu Insights post here today.  There were a healthy handful of Unity 8 fans, calling for its GA, and more than a few HackerNews comments lamenting the end of Unity in this thread.
  • Improve HiDPI, 4K, display scaling, multi-monitor (217 weight)
    • For the first time in a long time, I feel like a laggard in the technology space!  I own a dozen or so digital displays but not a single 4K or HiDPI monitor.  So while I can't yet directly relate, the HackerNews community is keen to see better support for multiple, high resolution monitors and world class display scaling.  And I suspect you're just a short year or so ahead of much of the rest of the world.
  • Make track pad, touch gestures great (129 weight)
    • There's certainly an opportunity to make the track pad and touch gestures in the Ubuntu Desktop "more Apple-like".
  • Improve Bluetooth, WiFi, Wireless, Network Manager (97 weight)
    • This item captures some broad, general requests to make Bluetooth and Wireless more reliable in Ubuntu.  It's a little tough to capture an exact work item, but the relevant teams at Canonical have received the feedback.
  • Better mouse settings, more options, scroll acceleration (89 weight)
    • Similar to the touch/track pad request, there was a collection of similar feedback suggesting better mouse settings out-of-the-box, and more fine grained options. 
  • Better NVIDIA, GPU support (87 weight)
    • NVIDIA GPUs are extensively used in both Ubuntu Desktops and Servers, and the feedback here was largely around better driver availability, more reliable upgrades, CUDA package access.  For my part, I'm personally engaged with the high end GPU team at NVIDIA and we're actively working on a couple of initiatives to improve GPU support in Ubuntu (both Desktop and Server).
  • Clean up Network Manager, easier VPN (71 weight)
    • There were several requests around both Network Manager, and a couple of excellent suggestions with respect to easier VPN configuration and connection.  Given the recent legislation in the USA, I for one am fully supportive of helping Ubuntu users do more than ever before to protect their security and privacy, and that may entail better VPN support.
  • Easily customize, relocate the Unity launcher (53 weight)
    • This thread made it abundantly clear that it's important to people to be able to move, hide, resize, and customize their launcher (Unity or Gnome).  I can certainly relate, as I personally prefer my launcher at the bottom of the screen.
  • Add night mode, redshift, f.lux (42 weight)
    • This request is one of the real gems of this whole exercise!  This seems like a nice, little, bite-sized feature, that we may be able to include with minimal additional effort.  Great find.
  • Make WINE and Windows apps work better (10 weight)
    • If Microsoft can make Ubuntu on Windows work so well, why can't Canonical make Windows on Ubuntu work?  :-)  If it were only so easy...  For starters, the Windows Subsystem for Linux "simply" needs to implement a bunch of Linux syscalls, whose source is entirely available.  So there's that :-)  Anyway, this one is really going to be a tough one for us to move the needle on...
  • Better accessibility for disabled users, children (9 weight)
    • As a parent, and as a friend of many Ubuntu users with special needs, this is definitely a worthy cause.  We'll continue to try and push the envelope on accessibility in the Linux desktop.
  • LDAP/ActiveDirectory integration out of the box (7 weight)
    • This is actually a regular request of Canonical's corporate Ubuntu Desktop customers.  We're generally able to meet the needs of our enterprise customers around LDAP and ActiveDirectory authentication.  We'll look at what else we can do natively in the distro to improve this.
  • Add support for voice commands (5 weight)
    • Excellent suggestion.  We've grown so accustomed to "Okay Google...", "Alexa...", "Siri..."  How long until we can, "Hey you, Ubuntu..."  :-)
Grouped below are some themes, requests, and suggestions that generally apply to Ubuntu as an OS, or specifically as a cloud or server OS.
  • Better, easier, safer, faster, rolling upgrades (153 weight)
    • The ability to upgrade from one release of Ubuntu to the next has long been one of our most important features.  A variety of requests have identified a few ways that we should endeavor to improve: snapshots and rollbacks, A/B image based updates, delta diffs, easier with fewer questions, super safe rolling updates to new releases.  Several readers suggested killing off the non-LTS releases of Ubuntu and only releasing once a year, or every 2 years (which is the LTS status quo).  We're working on a number of these, with much of that effort focused on Ubuntu Core.  You'll see some major advances around this by Ubuntu 18.04 LTS.
  • Official hardware that just-works, Nexus-of-Ubuntu (130 weight)
    • This is perhaps my personal favorite suggestion of this entire thread -- for us to declare a "Nexus-of-each-Ubuntu-release", much like Google does for each major Android release.  Hypothetically, this would be an easily accessible, available, affordable hardware platform, perhaps designed in conjunction with an OEM, to work perfectly with Ubuntu out of the box.  That's a new concept.  We do have the Ubuntu Hardware Certification Programme, where we clearly list all hardware that's tested and known to work well with Ubuntu.  And we do work with major manufacturers on some fantastic desktops and laptops -- the Dell XPS and System76 both immediately come to mind.  But this suggestion is a step beyond that.  I'm set to speak to a few trusted partners about this idea in the coming weeks.
  • Lighter, smaller, more minimal (113 weight)
    • Add x-y-z-favorite-package to default install (105 weight)
    • For every Ubuntu user that wants to remove stuff from Ubuntu, to make it smaller/faster/lighter/secure, I'll show you another user who wants to add something else to the default install :-)  This is a tricky one, and one that I'm always keen to keep an eye on.  We try very hard to strike a delicate balance between minimal-but-usable.  When we have to err, we tend (usually, but not always) to err on the side of usability.  That's just the Ubuntu way.  That said, we're always evaluating our Ubuntu Server, Cloud, Container, and Docker images to ensure that we minimize (or at least justify) any bloat.  We'll certainly take another hard look at the default package sets at both 17.10 and 18.04.  Thanks for bringing this up and we'll absolutely keep it in mind!
  • More QA, testing, stability, general polish (99 weight)
    • The word "polish" is used a total of 24 times, with readers generally asking for more QA, more testing, more stability, and more "polish" to the Ubuntu experience.  This is a tough one to quantify.  That said, we have a strong commitment to quality, and CI/CD (continuous integration, continuous delivery) testing at Canonical.  As your Product Manager, I'll do my part to ensure that we invest more resources into Ubuntu quality.
  • Fix /boot space, clean up old kernels (92 weight)
    • Ouch.  This is such an ugly, nasty problem.  It personally pissed me off so much, in 2010, that I created a script, "purge-old-kernels".  And it personally pissed me off again so much in 2014, that I jammed it into the Byobu package (which I also authored and maintain), for the sole reason to get it into Ubuntu.  That being said, that's the wrong approach.  I've spoken with Leann Ogasawara, the amazing manager and team lead for the Ubuntu kernel team, and she's committed to getting this problem solved once and for all in Ubuntu 17.10 -- and ideally getting those fixes backported to older releases of Ubuntu.
  • ZFS supported as a root filesystem (84 weight)
    • This was one of the more surprising requests I found here, and another real gem.  I know that we have quite a few ZFS fans in the Ubuntu community (of which, I'm certainly one) -- but I had no idea so many people want to see ZFS as a root filesystem option.  It makes sense to me -- integrity checking, compression, copy-on-write snapshots, clones.  In fact, we have some skunkworks engineering investigating the possibility.  Stay tuned...
  • Improve power management, battery usage (73 weight)
    • Longer batteries for laptops, lower energy bills for servers -- an important request.  We'll need to work closer with our hardware OEM/ODM partners to ensure that we're leveraging their latest and greatest energy conservation features, and work with upstream to ensure those changes are integrated into the Linux kernel and Gnome.
  • Security hardening, grsecurity (72 weight)
    • More security!  There were several requests for "extra security hardening" as an option, and the grsecurity kernel patch set.  So the grsecurity Linux kernel is a heavily modified, patched Linux kernel that adds a ton of additional security checks and features at the lowest level of the OS.  But the patch set is huge -- and it's not upstream in the Linux kernel.  It also only applies against the last LTS release of Ubuntu.  It would be difficult, though not necessarily impossible, to offer a supported grsecurity kernel in the Ubuntu archive.  As for "extra security hardening", Canonical is working with IBM on a number of security certification initiatives, around FIPS, CIS Benchmarks, and DISA STIG documentation.  You'll see these becoming available throughout 2017.
  • Dump Systemd (69 weight)
    • Fun.  All the people fighting for Wayland/Gnome, and here's a vocal minority pitching a variety of other init systems besides Systemd :-)  So frankly, there's not much we can do about this one at this point.  We created, and developed, and maintained Upstart over the course of a decade -- but for various reasons, Red Hat, SUSE, Debian, and most of the rest of the Linux community chose Systemd.  We fought the good fight, but ultimately, we lost graciously, and migrated Ubuntu to Systemd.
  • Root disk encryption, ext4 encryption, more crypto (47 weight)
    • The very first feature of Ubuntu, that I created when I started working for Canonical in 2008, was the Home Directory Encryption feature introduced in late 2008, so yes -- this feature has been near and dear to my heart!  But as one of the co-authors and co-maintainers of eCryptfs, we're putting our support behind EXT4 Encryption for the future of per-file encryption in Ubuntu.  Our good friends at Google (hi Mike, Ted, and co!) have created something super modern, efficient, and secure with EXT4 Encryption, and we hope to get there in Ubuntu over the next two releases.  Root disk encryption is still important, even more now than ever before, and I do hope we can do a bit better to make root disk encryption easier to enable in the Desktop installer.
  • Fix suspend/resume (24 weight)
    • These were a somewhat general set of bugs or issues around suspend/resume not working as well as it should.  If these are a closely grouped set of corner cases (e.g. multiple displays, particular hardware), then we should be able to shake these out with better QA, bug triage, and upstream fixes.  That said, I remember when suspend/resume never worked at all in Linux, so pardon me while I'm a little nostalgic about how far we've come :-)  Okay...now, yes, you're right.  We should do better.
  • New server installer (19 weight)
    • Well aren't you in for a surprise :-)  There's a new server installer coming soon!  Stay tuned.
  • Improve swap space management (12 weight)
    • Another pet peeve of mine -- I feel you!  So I filed this blueprint in 2009, and I'm delighted to say that as of this month (8 years later), Ubuntu 17.04 (Zesty Zapus) will use swap files, rather than swap partitions, by default.  Now, there's a bit more to do -- we should make these a bit more dynamic, tune the swappiness sysctl, etc.  But this is a huge step in the right direction!
  • Reproducible builds (7 weight)
    • Ensuring that builds are reproducible is essential for the security and the integrity of our distribution.  We've been working with Debian upstream on this over the last few years, and will continue to do so.
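Returning to the /boot cleanup item above: it comes down to a simple policy of keeping the running kernel plus the newest few and purging the rest. Here is a toy Python sketch of that selection logic; it illustrates the policy only, and is not how purge-old-kernels is actually implemented (real kernel version strings need far more careful parsing):

```python
def kernels_to_purge(installed, running, keep=2):
    """Given installed kernel version strings, return the ones safe to
    remove: everything except the running kernel and the `keep` newest.
    Versions are compared as tuples of integers (a simplification)."""
    def key(v):
        # "4.10.0-19" -> (4, 10, 0, 19)
        return tuple(int(p) for p in v.replace("-", ".").split("."))
    ordered = sorted(installed, key=key, reverse=True)
    keepers = set(ordered[:keep]) | {running}
    return [v for v in ordered if v not in keepers]
```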
Ladies and gentlemen, again, a most sincere "thank you", from the Ubuntu community to the HackerNews community.  We value openness -- open source code, open design, open feedback -- and this last week has been a real celebration of that openness for us.  We appreciate the work and effort you put into your comments, and we hope to continue our dialog throughout our future together, and most importantly, that Ubuntu continues to serve your needs and occasionally even exceed your expectations ;-)

on April 07, 2017 01:44 PM

April 06, 2017

While archiving a bunch of documents I found this pic of all of us from Ubuntu Down Under, thought I would share it!

on April 06, 2017 02:12 PM

April 05, 2017

Canonical Refocus

Jono Bacon

I wrote this on G+, but it seemed appropriate to share it here too:

So, today Canonical decided to refocus their business and move away from convergence and devices. This means that the Ubuntu desktop will move back to GNOME.

I have seen various responses to this news. Some sad that it is the end of an era, and a non-zero amount of “we told you so” smugness.

While Unity didn’t pan out, and there were many good steps and missteps along the way, I am proud that Canonical tried to innovate. Innovation is tough and fraught with risk. The Linux desktop has always been a tough nut to crack, and one filled with an army of voices, but I am proud Canonical gave it a shot even if it didn’t achieve its ultimate goals. That spirit of experimentation is at the epicenter of open source, and I hope everyone involved here takes a good look at how they contributed to this innovation. I know I have looked inwards at this.

Much as some critics may deny, everyone I know who worked on Unity and Mir, across engineering, product, community, design, translations, QA, and beyond did so with big hearts and open minds. I just hope we see that talent and passion continue to thrive and we continue to see Ubuntu as a powerful driver for the Linux desktop. I am excited to see how this work manifests in GNOME, which has been doing some awesome work in recent years.

And, Mark, Jane, I know this will have been a tough decision to come to, and this will be a tough day for the different teams affected. Hang in there: Ubuntu has had such a profound impact on open source and while the future path may be a little different, I am certain it will be fruitful.

The post Canonical Refocus appeared first on Jono Bacon.

on April 05, 2017 10:12 PM
Mark Shuttleworth published the news of the year:

"... we will end our investment in Unity8, the phone and convergence shell. We will shift our default Ubuntu desktop back to GNOME for Ubuntu 18.04 LTS".

WOW. Throwing Unity out the window is no easy task. It is the cornerstone governing the desktop, phone and tablet, its standard-bearer, its differentiation, and the goal of titanic development efforts over 7 years... So there's that.

Mark's brief post is hard to digest and leaves important questions in the air that Canonical will have to clarify in the short term.

Even so, I see the news as positive... "positive?". Yes, positive... Let's say I like to be an optimist... :/

First, because apart from Unity, the announcement stresses that Ubuntu has found two very important market niches: IoT and the cloud.
If it manages to dominate them, they will bring a great deal of stability and profit, which will allow it to face many future challenges. For that alone, this is without doubt the path to follow and where to focus efforts.

Unity gave Ubuntu two very important things:
  • Visual differentiation.
  • Control over the development of the interface.
Before Unity appeared, I was astonished that Ubuntu shipped a 10.04 LTS whose shutdown button on the GNOME 2 panel disappeared as if by magic. How on earth could I recommend Ubuntu to anyone if, in every session, the only way to shut the system down was to drop to a console as root?
Unity gave full control over interface development, making it possible to fix that kind of bug. So personally, it was very welcome!
But many users abandoned the distribution because of its lack of customization.

Unity aimed very high, much higher than we believed at its first release. Perhaps too high.
It fit the touch interface like a glove. It has been the only desktop environment that was homogeneous and coherent across desktop, phone and tablet.

With its abandonment, Ubuntu loses that differentiation and control, but 'gains' several things:
  • The phone was mortally wounded. The market has ruled that it prefers the Android monopoly and is not ready to let other options mature. Wasting resources on something in decline makes no sense.
  • Unity 8 never quite stabilized on the desktop, being overly dependent on drivers and on making Mir fit with too many things.
  • Ubuntu will stay focused, with energy, on the desktop.
  • Efforts will be reinvested in GNOME.
On a personal level, in these last 2 years I have never seen in free software a community more committed and vibrant than Ubuntu Phone's. I invested a huge number of hours in applications such as uWriter and uNav, but I don't regret it in the slightest, since they let me meet and share unforgettable moments with a great many people, which otherwise would not have been possible.
uNav cannibalized hours from other projects, but only because it generated incomparable hype and brought enormous joy, like being preinstalled by default :))

The overall taste is bittersweet, especially given the uncertainty about what will happen in the short term.
on April 05, 2017 09:44 PM

Bat Signal... No Ubuntu Call for Contributors!

I had predicted long ago that Unity and Ubuntu Mobile would fail, and fail it has, mostly because Shuttleworth ignored signals that were there from the community and industry for years and continued to dump a fortune into a fight that could not be won.

While Shuttleworth has made it clear that mobile is not in Canonical’s future, he also has not been clear about the future of the desktop other than to signal the move back to Gnome. So I’m going to drop another prediction out there and say that I believe Canonical will entirely abandon the Ubuntu Desktop in the coming years because, like mobile, it has almost no chance of sustainable profits. As a desktop OS, it has not been able to compete with proprietary operating systems like Windows and OSX. The simple fact is that game and app developers are not flocking to Ubuntu to port over major apps and games from OSX, which continues to leave a major gap in the Ubuntu Desktop that forces even the most die-hard Linux enthusiasts to dual-boot or keep separate hardware with a competing OS on it.

I definitely wish this were not the reality and wish, rather than doing this expensive moonshot on mobile, that Canonical had invested that money into bringing mainstream apps and games to the desktop. That said, the future for the cloud and server space continues to be very bright for Canonical and is the bread and butter of the company. That is great news because where the desktop and mobile has failed, there continues to be great success and progress and that is key to the survival of Canonical and the Ubuntu Project.

Ultimately, I think the endgame for Canonical will be to focus on what it does best, the enterprise server and cloud area, and to leave the desktop to community contributors, slowly reducing the headcount of staff working on the desktop.
I still hope that Canonical will prove me wrong on the desktop and finds some way to attract mainstream developers, because we really do need a consumer-viable Linux desktop and nobody has been able to achieve that yet. For now, Ubuntu Mobile and Desktop have zoomed out of reach of consumer and commercial viability.

on April 05, 2017 08:33 PM

I like the way the Ubuntu Unity desktop works. However, a while ago I switched over to Gnome Shell to see what it was like, and it seemed good so I stuck around. But I’ve added a few extensions to it so it feels a bit more like the parts of the Unity experience that I liked. In light of the news from Canonical that they’ll be shipping the Gnome desktop in the next LTS in 2018, and in light of much hand-wringing from people who like Unity as much as I do about how they don’t want to lose the desktop they prefer, I thought I’d write down what I did, so others can try it too.

Gnome shell, customised to look like the Unity desktop

As you can see, that looks like Unity. It feels like Unity too.

The main bit of customisation here is extensions. From the Gnome shell extensions web app, install Better Volume Indicator, Dash to Dock, No Topleft Hot Corner, and TopIcons Plus. That gives you the Launcher on the left, indicators in the top right, and a volume indicator that you can roll the mouse wheel on. Choose a theme; I use the stock Gnome theme, “Adwaita”, but I turned it to “dark mode” (in “Tweak Tool”, turn on “Global Dark Theme”), and set icons to “Ubuntu-mono-dark” in the same place.

Most of the stuff you’re familiar with in Unity actually carries through to Gnome Shell pretty much unchanged — they’re actually very similar, especially with the excellent Dash to Dock extension! One neat trick is that the window spread and the search have been combined; if you hit the Super key (the “Windows” key), it opens up the window spread and lets you search, so the normal way of launching an app by name (hit Super, type the name, hit Enter) is completely unchanged. Similarly, you can launch apps from the Launcher on the left with Super+1 or Super+2 and so on, just like Unity.

There are a whole bunch of other extensions to customise bits of how Gnome Shell works; if there are some that make it more like a Unity feature you like and I haven’t listed, I’d be happy to hear about them. Meanwhile, I’ve still got the feel I like, on the desktop I’ll be using. Hooray for that.

on April 05, 2017 06:10 PM

Snaps first launched with the ability to ship desktop apps on Ubuntu 16.04, which is an X11 based platform. It was noted that while secure and containerized, the fact that many Snaps were using X11 made them less secure than they could be. It was a reality of shipping Snaps for 16.04, but something we definitely want to fix for 18.04 using Unity8 and the Mir graphics stack. We can't just ignore all the apps that folks have built for 16.04 though, so we need a solution to run X11 applications on Unity8 securely.

To accomplish this we give each X11 application its own instance of the XMir server. This means that even if evil X applications use insecure features of (or find vulnerabilities in) the Xorg server, they only compromise their individual instance of the Xserver and are unable to affect other applications. Sounds simple, right? Unfortunately there is a lot more to making an application experience seamless than just handling the graphics buffers and making sure it can display on screen.

The Mir server is designed to handle graphics buffers and their positions on the screen; it doesn't handle all the complexities of things like cut-and-paste and window menus. To support X11 apps that use these features, we're using some pieces of the libertine project, which runs X11 apps in LXD containers. It includes a set of helpers, like pasted, that handle these additional protocols. pasted watches the selected window and the X11 clip buffers to connect into Unity8's cut-and-paste mechanisms, which behave very differently. For instance, Unity8 doesn't allow snooping on clip buffers to steal passwords.

It is also important at this point to note that in Ubuntu Personal we aren't just snapping up applications, we are snapping everything. We expect to have snaps of Unity8, snaps of Network Manager and a snap of XMir. This means that XMir isn't even running in the same security context as Unity8. A vulnerability in XMir only compromises XMir and the files that it has access to. This means that a bug in an X11 application would have to get into XMir and then work on the Mir protocol itself before getting to other applications or user session resources.

The final user experience? We hope that no one notices that their applications are X11 applications or Mir applications, users shouldn't have to care about display servers. What we've tried to create is a way for them to still have their favorite X11 applications, as hopefully they transition away from X11, while still being able to get the security benefits of a Mir based desktop.

on April 05, 2017 05:00 AM

April 04, 2017

On Tanglu

Matthias Klumpp

It’s time for a long-overdue blogpost about the status of Tanglu. Tanglu is a Debian derivative, started in early 2013 when the systemd debate at Debian was still hot. It was formed by a few people wanting to create a Debian derivative for workstations with a time-based release schedule using and showcasing new technologies (which include systemd, but also bundling systems and other things), built in the open with a community using infrastructure similar to Debian's. Tanglu is designed explicitly to complement Debian and not to compete with it on all devices.

Tanglu has achieved a lot of great things. We were the first Debian derivative to adopt systemd and with the help of our contributors we could kill a few nasty issues affecting it and Debian before it ended up becoming default in Debian Jessie. We also started to use the Calamares installer relatively early, bringing a modern installation experience additionally to the traditional debian-installer. We performed the usrmerge early, uncovering a few more issues which were fed back into Debian to be resolved (while workarounds were added to Tanglu). We also briefly explored switching from initramfs-tools to Dracut, but this release goal was dropped due to issues (but might be revived later). A lot of other less-impactful changes happened as well, borrowing a lot of useful ideas and code from Ubuntu (kudos to them!).

On the infrastructure side, we set up the Debian Archive Kit (dak), managing to find a couple of issues (mostly hardcoded assumptions about Debian) and reporting them back to make using dak for distributions which aren’t Debian easier. We explored using fedmsg for our infrastructure, went through a long and painful iteration of build systems (buildbot -> Jenkins -> Debile) before finally ending up with Debile, and added a set of own custom tools to collect archive QA information and present it to our developers in an easy to digest way. Except for wanna-build, Tanglu is hosting an almost-complete clone of basic Debian archive management tools.

During the past year, however, the project’s progress slowed down significantly. For this, I am mostly to blame. One of the biggest challenges for a young project is to attract new developers and members and keep them engaged. A lot of the people coming to Tanglu and being interested in contributing were unfortunately not packagers and sometimes not developers, and we didn’t have the manpower to individually mentor these people and teach them the necessary skills. People asking for tasks were usually asked where their interests were and what they would like to do, to give them a useful task. This sounds great in principle, but in practice it is actually not very helpful. A curated list of “junior jobs” is a much better starting point. We also invested almost zero time in making our project known and creating the necessary “buzz” and excitement that’s actually needed to sustain a project like this. Doing more in the advertisement domain and “help newcomers” area is a high-priority issue in the Tanglu bugtracker, which to this day is still open. Doing good alone isn’t enough; talking about it is of crucial importance, and that is something I knew about, but didn’t realize the impact of for quite a while. As strange as it sounds, investing in the tech alone isn’t enough; community building is of equal importance.

Regardless of that, Tanglu has members working on the project, but way too few to manage a project of this magnitude (getting package transitions migrated alone is a large task requiring quite some time while at the same time being incredibly boring :P). A lot of our current developers can only invest small amounts of time into the project because they have a lot of other projects as well.

The other reason why Tanglu has problems is that too much is centralized on me. That is a problem I have wanted to rectify for a long time, but whenever a task wasn’t getting done in Tanglu because no people were available to do it, I completed it myself. This essentially increased the project’s dependency on me as a single person, giving it a really low bus factor. It not only centralizes power in one person (which actually isn’t a problem as long as that person is available enough to perform tasks when asked), it also centralizes knowledge of how to run services and how to do things. And if you want to give up power, people first need the knowledge of how to perform the specific task (which they will never gain if there’s always that one guy doing it). I still haven’t found a great way to solve this; it’s a problem that essentially solves itself once the project is big enough, but until then the only way to counter it slightly is to write lots of documentation.

Last year I had way less time to work on Tanglu than the project deserves. I also started to work for Purism on their PureOS Debian derivative (which is heavily influenced by some of the choices we made for Tanglu, but with a different focus; that’s probably something for another blogpost). A lot of the stuff I do for Purism duplicates the work I do on Tanglu, and also takes away time I have for the project. Additionally, I need to invest a lot more time into other projects such as AppStream, and a lot of random other stuff that just needs continuous maintenance and discussion (AppStream especially eats up a lot of time since it became really popular in a lot of places). There is also my MSc thesis in neuroscience that requires attention (and is actually in focus most of the time). All in all, I can’t split myself, and KDE’s cloning machine remains broken, so I can’t even use that ;-). There is also a personal hard limit on how many projects I can handle, and exceeding it long-term is not very healthy: in those cases I try to satisfy all projects and in the end don’t focus enough on any of them, which leaves me with a lot of half-baked stuff (which helps nobody, and most importantly makes me lose the fun, energy and interest to work on it).

Good news everyone! (sort of)

So, this sounded overly negative; where does this leave Tanglu? Fact is, I cannot commit the crazy amounts of time to it that I did in 2013. But I love the project, and I actually do have some time I can put into it. My work for Purism overlaps with Tanglu, so Tanglu can actually benefit from the software I develop for them, maybe creating a synergy effect between PureOS and Tanglu. Tanglu is also important to me as a testing environment for future ideas (be it in infrastructure or in the “make bundling nice!” department).

So, what actually is the way forward? First, maybe I have the chance to find a few people willing to work on tasks in Tanglu. It’s a fun project, and I learned a lot while working on it. Tanglu also possesses some unique properties few other Debian derivatives have, like being built completely from source (allowing us things like swapping core components, compiling with more hardening flags, switching to newer KDE Plasma and GNOME faster, etc.). Second, if we don’t have enough manpower, I think converting Tanglu into a rolling-release distribution might be the only viable way to keep the project running. A rolling-release scheme creates much less effort for us than making releases (especially time-based ones!). That way, users will have a constantly updated and secure Tanglu system, with machines doing most of the background work.

If it turns out that absolutely nothing works and we can’t attract new people to help with Tanglu, it would mean that there generally isn’t much interest from the developer or user side in a project like this, so shutting it down or scaling it down dramatically would be the only option. But I do not think that this is the case, and I believe that having Tanglu around is important. I also have some interesting plans for it which will be fun to implement for testing 🙂

The only thing that had to stop was leaving our users in the dark about what is happening.

Sorry for the long post, but there are some subjects which are worth writing more than 140 characters about 🙂

If you are interested in contributing to Tanglu, get in touch with us! We have an IRC channel, #tanglu-devel on Freenode (go there for quicker responses!), forums and mailing lists.

It looks like I will be at Debconf this year as well, so you can also catch me there! I might even talk about PureOS/Tanglu infrastructure at the conference.

on April 04, 2017 08:12 AM

Last month the web team ran its first design sprint as outlined in The Sprint Book, by Google Ventures’ Jake Knapp. Some of us had read the book recently and really wanted to give the method a try, following the book to the letter.

In this post I will outline what we’ve learned from our pilot design sprint, what went well, what could have gone better, and what happened during the five sprint days. I won’t go into too much detail about explaining what each step of the design sprint consists of — for that you have the book. If you don’t have that kind of time, but would still like to know what I’m talking about, here’s an 8-minute video that explains the concept:


Before the sprint

One of the first things you need to do when running a design sprint is to agree on a challenge you’d like to tackle. Luckily, we had a big challenge that we wanted to solve: ubuntu.com‘s navigation system.


ubuntu.com’s different levels of navigation: global nav, main nav, second and third level nav


Assigning roles

If you’ve decided to run a design sprint, you’ve also probably decided who will be the Facilitator. If you haven’t, you should, as this person will have work to do before the sprint starts. In our case, I was the Facilitator.

My first Facilitator task was to make sure we knew who was going to be the Decider at our sprint.

We also agreed on who was going to participate, and booked one of our meeting rooms for the whole week plus an extra one for testing on Friday.

My suggestion for anyone running a sprint for the first time is to also name an Assistant. There is so much work to do before and during the sprint, that it will make the Facilitator’s life a lot easier. Even though we didn’t officially name anyone, Greg was effectively helping to plan the sprint too.

Evangelising the sprint

In the week that preceded the sprint, I had a few conversations with other team members who told me the sprint sounded really great and they were going to ‘pop in’ whenever they could throughout the week. I had to explain that, sadly, this wasn’t going to be possible.

If you need to do the same, explain why it’s important that the participants commit to the entire week, focusing on the importance of continuity and of accumulated knowledge that the sprint’s team will gather throughout the week. Similarly, be pleasant but firm when participants tell you they will have to ‘pop out’ throughout the week to attend to other matters — only the Decider should be allowed to do this, and even so, there should be a deputy Decider in the room at all times.


Before the sprint, you also need to make sure that you have all the supplies you need. I tried as much as possible to follow the suggestions for materials outlined in the book, and I even got a Time Timer. In retrospect, it would have been fine for the Facilitator to just keep time on a phone, or a less expensive gadget if you really want to be strict with the no-phones-in-the-room policy.

Even though the book says you should start recruiting participants for the Friday testing during the sprint, we started a week before that. Greg took over that side of the preparation, sending prompts on social media and mailing lists for people to sign up. When participants didn’t materialise in this manner, Greg sent a call for participants to the mailing list of the office building we work at, which worked wonders for us.

Know your stuff

Assuming you have read the book before your sprint, if it’s your first one I recommend re-reading the chapter for the following day the evening before, and taking notes.

I printed out the checklists provided in the book’s website and wrote down my notes for the following day, so everything would be in one place.


Facilitator checklists with handwritten notes


I also watched the official video for the day (which you can get emailed to you by the Sprint Bot the evening before), and read all the comments in the Q&A discussions linked to from the emails. These questions and comments from other people who have run sprints were incredibly useful throughout the week.


Sprint Bot email for the first day of the sprint


Does this sound like a lot of work? It was. I think if/when we do another sprint the time spent preparing will probably be reduced by at least 50%. The uncertainty of doing something as involved as this for the first time made it more stressful than preparing for a normal workshop, but it’s important to spend the time doing it so that things run smoothly during the sprint week.

Day 1

The morning of the sprint I got in with plenty of time to spare to set up the room for the kick-off at 10am.

I bought lots of healthy snacks (which were promptly frowned upon by the team, who were hoping for sweeter treats); brought a jug of water, cups, and all the supplies to the room; cleared the whiteboards; and set up the chairs.

What follows are some of the outcomes, questions and other observations from our five days.


In the morning of day 1 you define a long term goal for your project, list the ways in which the project could fail in question format, and draw a flowchart, or map, of how customers interact with your product.

  • Starting the map was a little bit tricky, as it wasn’t clear how the map should look when there is more than one type of customer, each of whom might have different outcomes
  • In the book there are no examples with more than one type of customer, which meant we had to read and re-read that part of the book until we decided how to proceed as we have several customer types to cater for
  • Moments like these can take the team’s confidence in the process away; that’s why it’s important for the Facilitator to read everything carefully more than once, and ideally for him or her not to be the only person to do so
  • We did the morning exercises much faster than prescribed, but the same didn’t happen in the afternoon!


Discussing the target for the sprint in front of the journey map



In the afternoon experts from the sprint and guests come into the room and you ask them lots of questions about your product and how things work. Throughout the interviews the team is taking notes in the “How Might We” format (for example, “How might we reduce the amount of copy?”). By the end of the interviews, you group the notes into themes, vote on the ones you find most useful or interesting, move the most voted notes onto their right place within your customer map and pick a target in the map as the focus for the rest of the sprint.

  • If you have time, explain how “How Might We” notes work before the lunch break, so you save that time for interviews in the afternoon
  • Each expert interview should last for about 15-30 minutes, which didn’t feel like long enough to get all the valuable knowledge from our experts — we had to interrupt them somewhat abruptly to make sure the interviews didn’t run over. Next time it might be easier to have a list of questions we want to cover before the interviews start
  • Choreographing the expert interviews was a bit tricky as we weren’t sure how long each would take. If possible, tell people you’ll call them a couple of minutes before you need them rather than set a fixed time — we had to send people back a few times because we weren’t yet finished asking all the questions to the previous person!
  • It took us a little longer than expected to organise the notes, but in the end, the most voted notes did cluster around the key section of the map, as predicted in the book!


Some of the How Might We notes on the wall after the expert interviews


Other thoughts on day 1

  • Sprint participants might cancel at the last minute. If this happens, ask yourself if they could still appear as experts on Monday afternoon? If not, it’s probably better to write them off the sprint completely
  • There was a lot of checking the book as the day went by, to confirm we were doing the right thing
  • We wondered if this comes up in design sprints frequently: what if the problem you set out to solve pre-sprint doesn’t match the target area of the map at the end of day 1? In our case, we had planned to focus on navigation but the target area was focused on how users learn more about the products/services we offer

A full day of thinking about the problem and mapping it doesn’t come naturally, but it was certainly useful. We conduct frequent user research and usability testing, and are used to watching interviews and analysing findings; nevertheless, the expert interviews and listening to different perspectives from within the company were very interesting and gave us a different type of insight that we could build upon during the sprint.

Day 2

By the start of day 2, it felt like we had been in the sprint for a lot longer than just one day — we had accomplished a lot on Monday!


The morning of day 2 is spent doing “Lightning Demos” after a quick 20 minutes of research. These can be anything that might be interesting, from competitor products to previous internal attempts at solving the sprint challenge. Before lunch, the team decides who will sketch what in the afternoon: whether everyone will sketch the same thing or different parts of the map.

  • We thought the “Lightning Demos” format was a great way to do demos — it was fast and captured the most important things quickly
  • Deciding who would sketch what wasn’t as straightforward as we might have thought. We decided that everyone should do a journey through our cloud offerings so we’d get different ideas on Wednesday, knowing there was the risk of not everything being covered in the sketches
  • Before we started sketching, we made a list of sections/pages that should be covered in the storyboards
  • As on day 1, the morning exercises were done faster than prescribed; we were finished by 12:30, with a 30-minute break from 11 to 11:30


Our sketches from the lightning demos



In the afternoon, you take a few minutes to walk around the sprint room and take down notes of anything that might be useful for the sketching. You then sketch, starting with quick ideas and moving onto a more detailed sketch. You don’t look at the final sketches until Wednesday morning.

  • We spent the first few minutes of the afternoon looking at the current list of participants for the Friday testing to decide which products to focus on in our sketches, as our options were many
  • We had a little bit of trouble with the “Crazy 8s” exercise, where you’re supposed to sketch 8 variations of one idea in 8 minutes. It wasn’t clear what we had to do so we re-read that part a few times. This is probably the point of the exercise: to remove you from your comfort zone, make you think of alternative solutions and get your creative muscles warmed up
  • We had to look at the examples of detailed sketches in the book to have a better idea of what was expected from our sketches
  • It took us a while to get started sketching but after a few minutes everyone seemed to be confidently and quietly sketching away
  • With complicated product offerings there’s the instinct to want to have access to devices to check product names, features, etc – I assumed this was not allowed but some people were sneakily checking their laptops!
  • Naming your sketch wasn’t as easy as it sounded
  • Contrary to what we expected, the afternoon sketching exercises took longer than the morning’s: at 5pm some people were still sketching


Everyone sketching in silence on Tuesday afternoon


Tuesday was lots of fun. Starting the day with the demos, without much discussion on the validity of the ideas, creates a positive mood in the team. Sketching in a very structured manner removes some of the fear of the blank page, as you build up from loose ideas to a very well-defined sketch. The silent sketching was also great as it meant we had some quiet time to pause and think a solution through, giving the people who tend to be more quiet an opportunity to have their ideas heard on par with everyone else.

Day 3

No-one had seen the sketches done on Tuesday, so the build-up to the unveiling on day 3 was more exciting than for the usual design review!


On the Wednesday morning, you decide which sketch (or sketches) you will prototype. You stick the sketches on the wall and review them in silence, discuss each sketch briefly and each person votes on their favourite. After this, the Decider casts three votes, which can follow or not the votes of the rest of the team. Whatever the Decider votes on will be prototyped. Before lunch, you decide whether you will need to create one or more prototypes, depending on whether the Decider’s (or Deciders’) votes fit together or not.

  • We had 6 sketches to review
  • Although the book wasn’t clear as to when the guest Decider should participate, we invited ours from 10am to 11:30am, as it seemed that he should participate in the entire morning review process — this worked out well
  • During the speed critique people started debating the validity or feasibility of solutions, which was expected but meant some work for the Facilitator to steer the conversation back on track
  • The morning exercises put everyone in a positive mood, it was an interesting way to review and select ideas
  • Narrating the sketches was harder than it might seem at first, and narrating your own sketch isn’t much easier either!
  • It was interesting to see that many of the sketches included similar solutions — there were definite patterns that emerged
  • Even though I emphasised that the book recommends more than one prototype, the team wasn’t keen on it and the focus of the pre-lunch discussion was mostly on how to merge all the voted solutions into one prototype
  • As on all other days, and because we decided on an all-in-one prototype, we finished the morning exercises by noon


The team reviewing the sketches in silence on Wednesday morning



In the afternoon of day 3, you sketch a storyboard of the prototype together, starting one or two steps before the customer encounters your prototype. You should move the existing sketches into the frames of the storyboard when possible, and add only enough detail that will make it easy to build the prototype the following day.

  • Using masking tape was easier than drawing lines for the storyboard frames
  • It was too easy to come up with new ideas while we were drawing the storyboard and it was tricky to tell people that we couldn’t change the plan at this point
  • It was hard to decide the level of detail we needed to discuss and add to the storyboard. We finished the first iteration of the storyboard a few minutes before 3pm. Our first instinct was to start making more detailed wireframes with the remaining time, but we decided to take a break for coffee and come back to see where we needed more detail in the storyboard instead
  • It was useful to keep asking the team what else we needed to define as we drew the storyboard before we started building the prototype the following day
  • Because we read out the different roles in preparation for Thursday, we ended up assigning roles straight away


Discussing what to add to our storyboard


Other thoughts on day 3

  • One sprint participant couldn’t attend on Tuesday but was back on Wednesday, which wasn’t ideal but didn’t have a negative impact
  • While setting up for the third day, I wasn’t sure if the ideas from the “Lightning Demos” could be erased from the whiteboard, so I took a photo of them and erased them as, even with the luxury of massive whiteboards, we wouldn’t have had space for the storyboard later on!

By the end of Wednesday we were past the halfway mark of the sprint, and the excitement in anticipation for the Friday tests was palpable. We had some time left before the clock hit 5 and wondered if we should start building the prototype straight away, but decided against it — we needed a good night’s sleep to be ready for day 4.

Day 4

Thursday is all about prototyping. You need to choose which tools you will use, prioritising speed over perfection, and you also need to assign different roles for the team so everyone knows what they need to do throughout the day. The interviewer should write the interview script for Friday’s tests.

  • For the prototype building day, we assigned: two writers, one interviewer, one stitcher, two makers and one asset collector
  • We decided to build the pages we needed with HTML and CSS (instead of using a tool like Keynote or InVision) as we could build upon our existing CSS framework
  • Early in the afternoon we were on track, but we were soon delayed by a wifi outage that lasted almost 1.5 hours
  • It’s important to keep communication flowing throughout the day to make sure all the assets and content that are needed are created or collected in time for the stitcher to start stitching
  • We were finished by 7pm — if you don’t count the wifi outage, we probably would have been finished by 6pm. The extra hour could have been avoided if there had been just a little more detail in the storyboard page wireframes and in the content delivered to the stitcher, and fewer last-minute tiny changes, but all in all we did pretty well!
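As a toy illustration of what the stitcher’s job amounted to, here is a hypothetical sketch of assembling consistent prototype pages from shared fragments. None of these page names, fragments or helpers come from our actual prototype or CSS framework — it just shows why the stitcher needs content and assets to arrive in a predictable shape:

```python
# Hypothetical "stitcher" sketch: wrap each page's content in the
# shared chrome so every prototype page looks consistent.
# All names and fragments here are invented for illustration.

HEADER = '<header class="global-nav">Ubuntu</header>'  # shared chrome
FOOTER = "<footer>Prototype only</footer>"


def stitch(title: str, body: str) -> str:
    """Return a full HTML page with the shared header and footer."""
    return "\n".join([
        f"<!doctype html><html><head><title>{title}</title></head><body>",
        HEADER,
        f"<main>{body}</main>",
        FOOTER,
        "</body></html>",
    ])


# Assemble the handful of pages the storyboard called for.
pages = {
    "index.html": stitch("Cloud", "<h1>Our cloud offerings</h1>"),
    "pricing.html": stitch("Pricing", "<h1>Plans and pricing</h1>"),
}

for name, html in pages.items():
    print(f"{name}: {len(html)} bytes")
```

In practice we built the pages directly on top of our existing CSS framework rather than generating them, but the idea is the same: once the writers and asset collector deliver titles and bodies, stitching the final pages together is mechanical.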


Joana and Greg working on the prototype


Other thoughts on day 4

  • We had our sprint in our office, so it would have been possible for us to ask for help from people outside of the sprint, but we didn’t know whether this was “allowed”
  • We could have assigned more work to the asset collector: the makers and the stitcher were looking for assets themselves as they created the different components and pages rather than delegating the search to the asset collector, which is how we normally work
  • The makers were finished with their tasks more quickly than expected — not having to go through multiple rounds of reviews that sometimes can take weeks makes things much faster!

By the end of Thursday there was no denying we were tired, but happy about what we had accomplished in such a small amount of time: we had a fully working prototype and five participants lined up for Friday testing. We couldn’t wait for the next day!

Day 5

We were all really excited about the Friday testing. We managed to confirm all five participants for the day, and had an excellent interviewer and solid prototype. As the Facilitator, I was also happy to have a day where I didn’t have a lot to do, for a change!

Thoughts and notes on day 5

On Friday, you test your prototype with five users, taking notes throughout. At the end of the day, you identify patterns within the notes and based on these you decide which should be the next steps for your project.

  • We’re lucky to work in a building with lots of companies who employ our target audience, but we wonder how difficult it would have been to find and book the right participants within just 4 days if we needed different types of users or were based somewhere else
  • We filled up an entire whiteboard with notes from the first interview and had to go get extra boards during the break
  • Throughout the day, we removed duplicate notes from the boards to make them easier to scan
  • Some participants don’t talk a lot naturally and need a lot of constant reminding to think out loud
  • We had the benefit of having an excellent researcher in our team who already knows and does everything the book recommends doing. It might have been harder for someone with less research experience to make sure the interviews were unbiased and ran smoothly
  • At the end of the interviews, after listing the patterns we found, we weren’t sure whether we could/should do more thorough analysis of the testing later or if we should chuck the post-it notes in the bin and move on
  • Our end-of-sprint decision was to have a workshop the following week where we’d plan a roadmap based on the findings — could this be considered “cheating” as we’re only delaying making a decision?


The team observing the interviews on Friday


A wall of interview notes


The Sprint Book notes that you can have one of two results at the end of your sprint: an efficient failure, or a flawed success. If your prototype doesn’t go down well with the participants, your team has only spent 5 days working on it, rather than weeks or potentially months — you’ve failed efficiently. And if the prototype receives positive feedback from participants, most likely there will still be areas that can be improved and retested — you’ve succeeded imperfectly.

At the end of Friday we all agreed that our prototype was a flawed success: there were things we tested that we’d never have thought to try before and that received great feedback, but some aspects certainly needed a lot more work to get right. An excellent conclusion to 5 intense days of work!

Final words

Despite the hard work involved in planning and getting the logistics right, running the web team’s trial design sprint was fun.

The web team is small and stretched over many websites and products. We really wanted to test this approach so we could propose it to the other teams we work with as an efficient way to collaborate at key points in our release schedules.

We certainly achieved this goal. The people who participated directly in the sprint learned a great deal during the five days. Those in the web team who didn’t participate were impressed with what was achieved in one week and welcoming of the changes it initiated. And the teams we work with seem eager to try the process out in their teams, now that they’ve seen what kind of results can be produced in such a short time.

How about you? Have you run a design sprint? Do you have any advice for us before we do it again? Leave your thoughts in the comments section.

on April 04, 2017 07:24 AM