May 26, 2017

After a little over 6 years, I am embarking on a new adventure. Today is my last day at Canonical. It’s bittersweet saying goodbye precisely because it has been such a joy and an honor to work here with so many amazing, talented and friendly people. But I am leaving by choice, and for an opportunity that makes me as excited as leaving makes me sad.

Goodbye Canonical

I’ve worked at Canonical longer than I’ve worked at any company, and I can honestly say I’ve grown more here both personally and professionally than I have anywhere else. It launched my career as a Community Manager, learning from the very best in the industry how to grow, nurture, and excite a world full of people who share the same ideals. I owe so many thanks (and beers) to Jono Bacon, David Planella, Daniel Holbach, Jorge Castro, Nicholas Skaggs, Alan Pope, Kyle Nitzsche and now also Martin Wimpress. I also couldn’t have done any of this without the passion and contributions of everybody in the Ubuntu community who came together around what we were doing.

As everybody knows by now, Canonical has been undergoing significant changes in order to set it down the road to where it needs to be as a company. And while these changes aren’t the reason for my leaving, they did force me to think about where I wanted to go with my future, and what changes were needed to get me there. Canonical is still doing important work, I’m confident it’s going to continue making a huge impact on the technology and open source worlds, and I wish it nothing but success. But ultimately I decided that where I wanted to be was along a different path.

Of course I have to talk about the Ubuntu community here. As big of an impact as Canonical had on my life, it’s only a portion of the impact that the community has had. From the first time I attended a Florida LoCo Team event, I was hooked. I had participated in open source projects before, but that was when I truly understood what the open source community was about. Everybody I met, online or in person, went out of their way to make me feel welcome, valuable, and appreciated. In fact, it was the community that led me to work for Canonical in the first place, and it was the community work I did that played a big role in me being qualified for the job. I want to give a special shout out to Daniel Holbach and Jorge Castro, who built me up from a random contributor to a project owner, and to Elizabeth Joseph and Laura Faulty, who encouraged me to take on leadership roles in the community. I’ve made so many close and lasting friendships by being a part of this amazing group of people, and that’s something I will value forever. I was a community member for years before I joined Canonical, and I’m not going anywhere now. Expect to see me around on IRC, mailing lists and other community projects for a long time to come.

Hello Endless

Next week I will be joining the team at Endless as their Community Manager. Endless is an order of magnitude smaller than Canonical, and they have a young community that is still getting off the ground. So even though I’ll have the same role I had before, there will be new and exciting challenges involved. But the passion is there, both in the company and the community, to really explode into something big and impactful. In the coming months I will be working to set up the tools, processes and communication channels that will be needed to help that community grow and flourish. After meeting with many of the current Endless employees, I know that my job will be made easier by their existing commitment to both their own community and their upstream communities.

What really drew me to Endless was the company’s mission. It’s not just about making a great open source project that is shared with the world; they have a specific focus on social good and improving the lives of people whom current technology isn’t serving. As one employee succinctly put it to me: the whole world, empowered. Those who know me well will understand why this resonates with me. For years I’ve been involved in open source projects aimed at early childhood education and supporting those in poverty or places without the infrastructure that most modern technology requires. And while Ubuntu covered much of this, it wasn’t the primary focus. Being able to work full time on a project that aligns so closely with my personal mission was an opportunity I couldn’t pass up.

Broader horizons

Over the past several months I’ve been expanding the number of communities I’m involved in. This is going to increase significantly in my new role at Endless, where I will be working more frequently with upstream and side-stream projects on areas of mutual benefit and interest. I’ve already started to work more with KDE, and I look forward to becoming active in GNOME and other open source desktops soon.

I will also continue to grow my independent project, Phoenicia, which has a similar mission to Endless but a different technology and audience. Now that it is no longer competing in the XPRIZE competition, we are released from some of the restrictions we had to operate under, and we are free to investigate new areas of innovation and collaboration. If you’re interested in game development, or making an impact on the lives of children around the world, come and see what we’re doing.

If anybody wants to reach out to me to chat, you can still reach me at and soon at, tweet me at @mhall119, connect on LinkedIn, chat on Telegram or circle me on Google+. And if we’re ever at a conference together give me a shout, I’d love to grab a drink and catch up.

on May 26, 2017 05:03 PM

S10E12 – Needy Expensive Spiders - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

We discuss going to Linux Fest North West and interview Brian Douglass about uAppExplorer and OpenStore. We’ve got some command line love and we go over your feedback.

It’s Season Ten Episode Twelve of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:


  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on May 26, 2017 02:00 PM
Version 17.05.00 of the Firmware Test Suite was released this week as part of the regular end-of-month release cadence. So what is new in this release?
  • Alex Hung has been busy bringing the SMBIOS tests in-sync with the SMBIOS 3.1.1 standard
  • IBM provided some OPAL (OpenPower Abstraction Layer) Firmware tests:
    • Reserved memory DT validation tests
    • Power management DT Validation tests
  • The first fwts snap was created
  • Over 40 bugs were fixed
As ever, we are grateful for all the community contributions to FWTS. The full release details are available from the fwts-devel mailing list.

I expect that the next upcoming ACPICA release will be integrated into the 17.06.00 FWTS release next month.
on May 26, 2017 10:05 AM

May 25, 2017

MAAS 2.2.0 Released!

Andres Rodriguez

I’m happy to announce that MAAS 2.2.0 (final) has now been released, and it introduces quite a few exciting features:

  • MAAS Pods – Ability to dynamically create a machine on demand. This is reflected in MAAS’ support for Intel Rack Scale Design.
  • Hardware Testing
  • DHCP Relay Support
  • Unmanaged Subnets
  • Switch discovery and deployment on Facebook’s Wedge 40 & 100.
  • Various improvements and minor features.
  • MAAS Client Library
  • Intel Rack Scale Design support.

For more information, please read the release notes, which are available here.

MAAS 2.2.0 is currently available in the following MAAS team PPA.
Please note that MAAS 2.2 will replace the MAAS 2.1 series, which will go out of support. We are holding MAAS 2.2 in the above PPA for a week, to provide enough notice to users that it will replace the 2.1 series. In the following weeks, MAAS 2.2 will be backported into Ubuntu Xenial.
on May 25, 2017 01:47 PM

May 24, 2017

Secure Boot is here

Ubuntu has now supported UEFI booting and Secure Boot for long enough that it is available, and reasonably up to date, on all supported releases. Here is how Secure Boot works.

An overview

I'm including a diagram here; I know it's a little complicated, so I will also explain how things happen (it can be clicked to get to the full size image).

In all cases, booting a system in UEFI mode loads UEFI firmware, which typically contains pre-loaded keys (at least, on x86). These keys are usually those from Microsoft, so that Windows can load its own bootloader and verify it, as well as those from the computer manufacturer. The firmware doesn't, by itself, know anything special about how to boot the system -- this is informed by NVRAM (or some similar memory that survives a reboot) by way of a few variables: BootOrder, which specifies what order to boot things in, and BootEntry#### (hex numbers), each of which contains the path to an EFI image to load, a disk, or some other method of starting the computer (such as booting into the Setup tool for that firmware). If no BootEntry variable listed in BootOrder gets the system booting, then nothing happens. Systems however will usually at least include a path to a disk as a permanent or default BootEntry. Shim relies on that, or on a distro, to load in the first place.
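These NVRAM variables can be inspected and changed from a running Linux system with the efibootmgr tool. A minimal sketch follows; the entry numbers, label, disk and path are made-up examples for illustration, not values from a real machine, and these commands only make sense as root on a system booted in UEFI mode.

```shell
# List the firmware boot variables, verbosely:
efibootmgr -v

# Make entry 0001 the first one tried in BootOrder:
efibootmgr -o 0001,0000

# Create a new BootEntry loading shim from the ESP (partition 1 of /dev/sda):
efibootmgr -c -d /dev/sda -p 1 -L ubuntu -l '\EFI\ubuntu\shimx64.efi'
```

This is also roughly what the fallback binary mentioned below does on your behalf when BootEntry variables have been lost.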

Once we actually find shim and boot it, shim will try to validate the signature of the next piece in the puzzle: grub2, MokManager, or fallback, depending on the state of shim's own variables in NVRAM; more on this later.

In the usual scenario, shim will validate the grub2 image successfully, then grub2 itself will try to load the kernel or chainload another EFI binary, after attempting to validate the signatures on these images by way of asking shim to check the signature.


Shim is just a very simple layer that holds on to keys outside of those installed by default on the system (since those normally can't be changed outside of Setup Mode, and doing so requires a few steps). It knows how to load grub2 in the normal case; how to load MokManager if policy changes need to be applied (such as disabling signature validation or adding new keys); and how to load the fallback binary, which can re-create BootEntry variables in case the firmware isn't able to handle them. I will expand on MokManager and fallback in a future blog post.

Your diagram says shim is signed by Microsoft, what's up with that?

Indeed, shim is an EFI binary that is signed by Microsoft as we ship it in Ubuntu. Other distributions do the same. This is required because the firmware on most systems already contains Microsoft certificates (pre-loaded in the factory), and it would be impractical to have different shims for each manufacturer of hardware. All EFI binaries can easily be re-signed anyway; we just do things like this to make it as easy as possible for the largest number of people.

One thing this means is that uploads of shim require a lot of effort and testing. Fortunately, since it is used by other distributions too, it is a well-tested piece of code. There is even now a community process to handle review of submissions for signature by Microsoft, in an effort to catch anything outlandish as quickly and as early as possible.

Why reboot once a policy change is made or boot entries are rebuilt?

All of this happens through changes in firmware variables. Rebooting makes sure we can properly take into account changes in the firmware variables, and possibly carry on with other "backlogged" actions that need to happen (for instance, rebuilding BootEntry variables first, and then loading MokManager to add a new signing key before we can load a new grub2 image you signed yourself).


grub2 is not a new piece of the boot process in any way; it's been around for a long while. The difference between booting in BIOS mode and in UEFI mode is that we install a UEFI binary version of grub2. The software is the same, just packaged slightly differently (I may outline the UEFI binary format at some point in the future). It also goes through some code paths that are specific to UEFI, such as checking whether we've booted through shim, and if so, asking it to validate signatures. If not, we can still validate signatures, but we would have to do so using the UEFI protocol itself, which is limited to allowing signatures by keys that are included in the firmware, as explained earlier. Mostly just the Microsoft signatures.

grub2 in UEFI otherwise works just like it would elsewhere: it tries to find its grub.cfg configuration file, and follows its instructions to boot the kernel and load the initramfs.

When Secure Boot is enabled, loading the kernel normally requires that the kernel itself is signed. The kernels we install in Ubuntu are signed by Canonical, just like grub2 is, and shim knows about the signing key and can validate these signatures.

At the time of this writing, if the kernel isn't signed or is signed by a key that isn't known, grub2 will fall back to loading the kernel as a normal binary (as in, not signed), outside of BootServices (a special mode we're in while booting the system; normally it's exited early on as the kernel loads). Exiting BootServices means some special features of the firmware are no longer available to anything that runs afterwards, so that while things may have been loaded in UEFI mode, they will not have access to everything in firmware. If the kernel is signed correctly, then grub2 leaves the ExitBootServices call to be done by the kernel.

Very soon, we will stop allowing unsigned kernels (or kernels signed by unknown keys) to be loaded in Ubuntu. This is work in progress. The change will not affect most users, only those who build their own kernels. They will still be able to load their kernels by making sure they are signed by some key (such as their own; I will cover signing things in my next blog entry), and importing that key into shim (a step you only need to do once).

The kernel

In UEFI, the kernel enforces that loaded modules are properly signed. This means that those who need to build their own custom modules, or use DKMS modules (virtualbox, r8168, bbswitch, etc.), need to take a few more steps to let the modules load properly.

In order to make this as easy as possible for people, for now we've opted to let users disable Secure Boot validation in shim via a semi-automatic process. Shim itself is still verified by the system firmware, but any piece following it that asks shim to validate something will get an affirmative response (i.e. things are treated as valid, even if unsigned or signed by an unknown key). grub2 will happily load your kernel, and your kernel will be happy to load custom modules. This is obviously not a perfectly secure solution, more of a temporary measure to allow things to carry on as they did before. In the future, we'll replace this with a wizard-type tool to let users sign their own modules easily. For now, signing binaries and modules is a manual process (as above, I will expand on it in a future blog entry).
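Until that wizard exists, the manual module-signing flow looks roughly like the sketch below. This is a sketch under assumptions, not the official procedure: it assumes the mokutil package and kmodsign (shipped in Ubuntu's sbsigntool package) are installed, and MOK.priv, MOK.der and mymodule.ko are names I've picked for illustration.

```shell
# Generate a key pair: a private key for signing, and the certificate
# in DER form, which is what shim/MokManager expect to enroll:
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
    -subj "/CN=My module signing key/" \
    -keyout MOK.priv -outform DER -out MOK.der

# Sign a kernel module in place with that key:
kmodsign sha512 MOK.priv MOK.der mymodule.ko

# Queue the certificate for enrollment in shim; you pick a password here
# and confirm it in MokManager on the next reboot:
sudo mokutil --import MOK.der
```

After the reboot through MokManager, modules signed with that key will validate like Canonical-signed ones.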

Shim validation

To re-enable shim validation (say, you were using DKMS packages but feel you'd really prefer to have shim validate everything), run the command below. Be aware that if your system requires those drivers, they will not load, and your system may be unusable, or at least whatever needs that driver will not work:
sudo update-secureboot-policy --enable
If nothing happens, it's because you already have shim validation enabled: nothing has required that it be disabled. If things aren't as they should be (for instance, Secure Boot is not enabled on the system), the command will tell you.
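If you just want to see what the firmware itself reports before toggling anything, mokutil (assuming the mokutil package is installed) can query the Secure Boot state directly:

```shell
# Prints whether the firmware booted with Secure Boot enabled or disabled;
# only meaningful on a system booted in UEFI mode:
mokutil --sb-state
```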

And although we certainly don't recommend it, you can disable shim validation yourself with much the same command (see --help). There is an example of use of update-secureboot-policy here.
on May 24, 2017 08:07 PM

By Randall Ross, Ubuntu Evangelist

We’ve all been to “those other” developer events: Sitting in a room watching a succession of never-ending slide presentations. Engagement with the audience, if any, is minimal. We leave with some tips and tools that we might be able to put into practice, but frankly, we attended because we were supposed to. The highlight was actually the opportunity to connect with industry contacts.

Key members of the OpenPOWER Foundation envisioned something completely different in their quest to create the perfect developer event, something that has never been done before: What if developers at a developer event actually spent their time developing?

The OpenPOWER Foundation is an open technical membership organization that enables its member companies to provide customized, innovative solutions based on POWER CPU processors and system platforms that encourage data centers to rethink their approach to technology. The Foundation found that ISVs needed support and encouragement to develop OpenPOWER-based solutions and take advantage of other OpenPOWER Ready components. The demand for OpenPOWER solutions has been growing, and ISVs needed a place to get started.

To solve this challenge, The OpenPOWER Foundation created the first ever Developer Congress, a hands-on event that will take place May 22-25 in San Francisco. The Congress will focus on all aspects of full stack solutions — software, hardware, infrastructure, and tooling — and developers will have the opportunity to learn and develop solutions amongst peers in a collaborative and supportive environment.

The Developer Congress will provide ISVs with development, porting, and optimization tools and techniques necessary to utilize multiple technologies, for example: PowerAI, TensorFlow, Chainer, Anaconda, GPU, FPGA, CAPI, POWER, and OpenBMC. Experts in the latest hot technologies such as deep learning, machine learning, artificial intelligence, databases and analytics, and cloud will be on hand to provide one-on-one advice as needed.

As Event Co-Chair, I had an idea for a different type of event. One where developers are treated as “heroes” (because they are — they are the creators of solutions). My Event Co-Chair Greg Phillips, OpenPOWER Content Marketing Manager at IBM, envisioned an event where developers will bring their laptops and get their hands dirty, working under the tutelage of technical experts to create accelerated solutions.

The OpenPOWER Developer Congress is designed to provide a forum that encourages learning from peers and networking with industry thought leaders. Its format emphasises collaboration with partners to find complementary technologies, and provides on-site mentoring through liaisons assigned to help developers get the most out of their experience.

Support from the OpenPOWER Foundation doesn’t end with the Developer Congress. The OpenPOWER Foundation is dedicated to providing its members with ongoing support in the form of information, access to developer tools and software labs across the globe, and assets for developing on OpenPOWER.

The OpenPOWER Foundation is committed to making an investment in the Developer Congress to provide an expert-rich environment that allows attendees to walk away three days later with new skills, new tools, and new relationships. As Thomas Edison said, “Opportunity is missed by most people because it is dressed in overalls and looks like work.” So developers, come get your hands dirty.

Learn more about the OpenPOWER Developer Congress.

About the author

Randall Ross leads the OpenPOWER Ambassadors, and is a prominent Ubuntu community member. He hails from Vancouver BC, where he leads one of the largest groups of Ubuntu enthusiasts and contributors in the world.

on May 24, 2017 12:59 PM
In my last blog post I mentioned ss, another tool that comes with the iproute2 package and allows you to query statistics about sockets. The same thing that can be done with netstat, with the added benefit that it is typically a little bit faster, and shorter to type.

Running ss with no arguments will display much the same thing as netstat, and it can similarly be passed options to limit the output to just what you want. For instance:

$ ss -t
State       Recv-Q Send-Q       Local Address:Port                        Peer Address:Port              
ESTAB       0      0                               
ESTAB       0      0                              
ESTAB       0      0             

ss -t shows just TCP connections. ss -u can be used to show UDP connections, -l will show only listening ports, and things can be further filtered to just the information you want.

I have not tested all the possible options, but you can even forcibly close sockets with -K.

One place where ss really shines though is in its filtering capabilities. Let's list all connections with a source port of 22 (ssh):

$ ss state all sport = :ssh
Netid State      Recv-Q Send-Q     Local Address:Port                      Peer Address:Port              
tcp   LISTEN     0      128                    *:ssh                                  *:*                  
tcp   ESTAB      0      0                          
tcp   LISTEN     0      128                   :::ssh                                 :::* 
And if I want to show only connected sockets (everything but listening or closed):

$ ss state connected sport = :ssh
Netid State      Recv-Q Send-Q     Local Address:Port                      Peer Address:Port              
tcp   ESTAB      0      0             

Similarly, you can have it list all connections to a specific host or range; in this case, using the subnet, which apparently belongs to Google:

$ ss state all dst
Netid State      Recv-Q Send-Q     Local Address:Port                      Peer Address:Port              
tcp   ESTAB      0      0                       
tcp   ESTAB      0      0                        
tcp   ESTAB      0      0         
This is very much the same syntax as for iptables, so if you're familiar with that already, it will be quite easy to pick up. You can also install the iproute2-doc package, and look in /usr/share/doc/iproute2-doc/ss.html for the full documentation.
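As a sketch of how the pieces combine into one query (the port and the 192.0.2.0/24 subnet are illustrative; that range is reserved for documentation, so on a normal machine this should match nothing):

```shell
# Established TCP connections to HTTPS hosts in a given subnet,
# with numeric output (-n) and no header line (-H):
ss -H -tn state established '( dport = :443 )' dst 192.0.2.0/24
```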

Try it for yourself! You'll see how well it works. If anything, I'm glad for the fewer characters this makes me type.
on May 24, 2017 12:54 AM

May 23, 2017

By Sumit Gupta, Vice President, IBM Cognitive Systems

10 years ago, every CEO leaned over to his or her CIO and CTO and said, “we got to figure out big data.” Five years ago, they leaned over and said, “we got to figure out cloud.” This year, every CEO is asking their team to figure out “AI” or artificial intelligence.

IBM laid out an accelerated computing future several years ago as part of our OpenPOWER initiative. This accelerated computing architecture has now become the foundation of modern AI and machine learning workloads such as deep learning. Deep learning is so compute intensive that despite using several GPUs in a single server, one computation run of deep learning software can take days, if not weeks.

The OpenPOWER architecture thrives on this kind of compute intensity. The POWER processor has much higher compute density than x86 CPUs (there are up to 192 virtual cores per CPU socket in Power8). This density per core, combined with high-speed accelerator interfaces like NVLINK and CAPI that optimize GPU pairing, provides an exponential performance benefit. And the broad OpenPOWER Linux ecosystem, with 300+ members, means that you can run these high-performance POWER-based systems in your existing data center either on-prem or from your favorite POWER cloud provider at costs that are comparable to legacy x86 architectures.

Take a Hack at the Machine Learning Work Group

The recently formed OpenPOWER Machine Learning Work Group gathers experts in the field to focus on the challenges that machine learning developers are continuously facing. Participants identify use cases, define requirements, and collaborate on solution architecture optimisations. By gathering in a workgroup with a laser focus, people from diverse organisations can better understand and engineer solutions that address similar needs and pain points.

The OpenPOWER Foundation pursues technical solutions using POWER architecture from a variety of member-run work groups. The Machine Learning Work Group is a great example of how hardware and software can be leveraged and optimized across solutions that span the OpenPOWER ecosystem.

Accelerate Your Machine Learning Solution at the Developer Congress

This spring, the OpenPOWER Foundation will host the OpenPOWER Developer Congress, a “get your hands dirty” event on May 22-25 in San Francisco. This unique event provides developers the opportunity to create and advance OpenPOWER-based solutions by taking advantage of on-site mentoring, learning from peers, and networking with developers, technical experts, and industry thought leaders. If you are a developer working on Machine Learning solutions that employ the POWER architecture, this event is for you.

The Congress is focused on full stack solutions — software, firmware, hardware infrastructure, and tooling. It’s a hands-on opportunity to ideate, learn, and develop solutions in a collaborative and supportive environment. At the end of the Congress, you will have a significant head start on developing new solutions that utilize OpenPOWER technologies and incorporate OpenPOWER Ready products.

There has never been another event like this one. It’s a developer conference devoted to developing, not sitting through slideware presentations or sales pitches. Industry experts from the top companies that are innovating in deep learning, machine learning, and artificial intelligence will be on hand for networking, mentoring, and providing advice.

A Developer Congress Agenda Specific to Machine Learning

The OpenPOWER Developer Congress agenda addresses a variety of Machine Learning topics. For example, you can participate in hands-on VisionBrain training, learning a new model and generating the API for image classification, using your own family pictures to train the model. The current agenda includes:

• VisionBrain: Deep Learning Development Platform for Computer Vision
• GPU Programming Training, including OpenACC and CUDA
• Inference System for Deep Learning
• Intro to Machine Learning / Deep Learning
• Develop / Port / Optimize on Power Systems and GPUs
• Advanced Optimization
• Spark on Power for Data Science
• Openstack and Database as a Service
• OpenBMC

Bring Your Laptop and Your Best Ideas

The OpenPOWER Developer Congress will take place May 22-25 in San Francisco. The event will provide ISVs with development, porting, and optimization tools and techniques necessary to utilize multiple technologies, for example: PowerAI, TensorFlow, Chainer, Anaconda, GPU, FPGA, CAPI, POWER, and OpenBMC. So bring your laptop and preferred development tools and prepare to get your hands dirty!

About the author

Sumit Gupta is Vice President, IBM Cognitive Systems, where he leads the product and business strategy for HPC, AI, and Analytics. Sumit joined IBM two years ago from NVIDIA, where he led the GPU accelerator business.

on May 23, 2017 09:31 AM

Hi guys! Again… long time, no see :-).

As you surely know, in the past weeks Ubuntu took the hard decision to stop the development of the Unity desktop environment, focusing again on shipping GNOME as the default DE and joining the upstream efforts.

While, on a personal note, after more than 6 years of involvement in Unity development this is a little heartbreaking, I also think that given the situation this is the right decision, and I’m quite excited to be able to work even closer to the whole open source community!

Most aspects of the future Ubuntu desktop are yet to be defined, and I guess you know that there’s a survey going on; I encourage you to participate in order to make your voice count…

One important aspect of this is the visual appearance, and the Ubuntu Desktop team has decided that the default themes for Ubuntu 17.10 will continue to be the ones you always loved! Right now some work is being done to make sure Ambiance and Radiance look and work well in GNOME Shell.

In the past days I’ve released a new version of ‘light-themes’ to fix several theming problems in GNOME Shell.

This is already quite an improvement, but we can’t fix bugs we don’t know about… So this is where you can help make Ubuntu better!

Get Started

If you haven’t already, here’s how I recommend you get started:

  • Install the latest Ubuntu 17.10 daily image (if not going wild and trying this in 17.04).
  • After installing it, install gnome-shell.
  • Install gnome-tweak-tool if you want an easy way to change themes.
  • On the login screen, switch your session to GNOME and log in.

Report Bugs

Run this command to report bugs with Ambiance or Radiance:

ubuntu-bug light-themes

Attach a screenshot to the Launchpad issue.

Other info

Ubuntu’s default icon theme is ubuntu-mono-dark (or -light if you switch to Radiance) but most of Ubuntu’s customized icons are provided by humanity-icon-theme.

Helping with Themes development

If you want to help with the theming itself, you’re very welcome. Gtk themes nowadays use CSS, so I’m pretty sure that any web designer out there can help with them (these are the supported properties).

All you have to do is use the Gtk Inspector, which can be launched from any Gtk3 app, write some CSS rules, and see how they get applied on the fly. Once you’re happy with your solution, you can just create an MP (merge proposal) for the Ubuntu Themes.
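For instance, a rule like the following pasted into the inspector’s CSS tab restyles every header bar live, with no restart needed (the color value here is just an arbitrary one I picked to experiment with, not anything from Ambiance):

```css
/* Hypothetical experiment: flatten header bars and darken them */
headerbar {
    background-image: none;
    background-color: #3c3b37;
}
```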

Let’s keep Ubuntu looking like Ubuntu!

PS: thanks to Jeremy Bicha for the help in this post.

on May 23, 2017 05:20 AM

I’ve been running Artful aka 17.10 on my laptop for maybe a month now with no real issues, and that’s thanks to the awesome Kubuntu developers we have and our testers!


My current layout has Latte Dock running with a normal panel (yes, a bit Mac-y, I know). I had to build Latte Dock for myself in Launchpad[1], so if you want to use it on 17.10 (with the normal PPA/testing warning) you can; I also have it built for 17.04 if anyone wants that as well[2]. I’m also running the new Qt music player Babe-Qt.

I also have the Uptime widget and a launcher widget in the Latte Dock, as it can hold widgets just like a normal panel. Oh, and the wallpaper is from the awesome game Firewatch (it’s on Linux!).


on May 23, 2017 03:40 AM

May 22, 2017

conjure-up dev summary for week 20


This week we concentrated on polishing up our region and credentials selection capabilities.

Initially, we wanted people deploying big software as fast as possible without it turning into a "choose your own adventure" book. This has been hugely successful, as it minimized the time and effort needed to get to the end result of a usable Kubernetes or OpenStack.

Now fast forward a few months, and feature requests were coming in asking to expand on this basic flow of installing big software. The first request was the ability to specify the region where the software will live. That feature is complete and is known as region selection:

Region Selections

Region selection is supported both in Juju as a Service (JAAS) and with a self-hosted controller.


In this example we are using JAAS to deploy Canonical Distribution of Kubernetes to the us-east-2 region in AWS:


conjure-up will already know what regions are available and will prompt you with that list:


Finally, input your credentials and off it goes to having a fully usable Kubernetes cluster in AWS.


The credentials view leads us into our second completed feature. Multiple user credentials are supported for all the various clouds, and previously conjure-up tried to do the right thing if an existing credential for a cloud was found: in the interest of getting you up and running as fast as possible, it would re-use any credentials it found for the cloud in use. This worked well initially; however, as usage increased, so did the need to be able to choose from a list of existing credentials or create a new one. That feature work has been completed and will be available in conjure-up 2.2.

Credential Selection

This screen shows you a list of existing credentials that were created for AWS or gives you the option of creating additional ones:


Trying out the latest

As always we keep our edge releases current with the latest from our code. If you want to help test out these new features simply run:

sudo snap install conjure-up --classic --edge  

That's all for this week, check back next week for more conjure-up news!

on May 22, 2017 04:19 PM

Welcome to the first ever Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

OpenStack Distribution

Stable Releases

Ceph 10.2.7 for Xenial, Yakkety, Zesty and the Trusty-Mitaka UCA.

Open vSwitch updates (2.5.2 and 2.6.1) for Xenial and Yakkety, plus associated UCA pockets.

Point releases for Horizon (9.1.2) and Keystone (9.3.0) for Xenial and the Trusty-Mitaka UCA.

And the current set of OpenStack Newton point releases has just entered testing.

Development Release

OpenStack Pike b1 is available in Xenial-Pike UCA (working through proposed testing in Artful).

Open vSwitch 2.7.0 is available in Artful and Xenial-Pike UCA.

Expect some focus on development previews for Ceph Luminous (the next stable release) for Artful and the Xenial-Pike UCA in the next month.

OpenStack Snaps

Progress on producing snap packages for OpenStack components continues; snaps for glance, keystone, nova, neutron and nova-hypervisor are available in the snap store in the edge channel – for example:

sudo snap install --edge --classic keystone

Snaps are currently Ocata-aligned; once the team has a set of snaps that we’re all comfortable are a good base, we’ll work towards publishing snaps across tracks for OpenStack Ocata and OpenStack Pike, as well as expanding the scope of projects covered with snap packages.

The edge channel for each track will contain the tip of the associated branch for each OpenStack project, with the beta, candidate and release channels being reserved for released versions. These three channels will be used to drive the CI process for validation of snap updates. This should result in an experience something like:

sudo snap install --classic --channel=ocata/stable keystone


sudo snap install --classic --channel=pike/edge keystone

As the snaps mature, the team will be focusing on enabling deployment of OpenStack using snaps in the OpenStack Charms (which will support CI/CD testing) and migration from deb based installs to snap based installs.

Nova LXD

Support for different Cinder block device backends for Nova-LXD has landed in the driver (and the supporting os-brick library), allowing Ceph Cinder storage backends to be used with LXD containers; this is available in the Pike development release only.

Work on support for new LXD features to allow multiple storage backends to be used is currently in-flight, allowing the driver to use dedicated storage for its LXD instances alongside any use of LXD via other tools on the same servers.

OpenStack Charms

6 monthly release cycle

The OpenStack Charms project is moving to a 6 monthly release cadence (rather than the 3 monthly cadence we’ve followed for the last few years). This reflects the reduced rate of new features across OpenStack and the charms, and the improved process for backporting fixes to the stable charm set between releases. The next charm release will be in August, aligned with the release of OpenStack Pike and the Xenial-Pike UCA.

If you have bugs that you’d like to see backported to the current stable charm set, please tag them with the ‘stable-backport’ tag (and they will pop-up in the right place in Launchpad) – you can see the current stable bug pipeline here.

Ubuntu Artful and OpenStack Pike Support

Required changes into the OpenStack Charms to support deployment of Ubuntu Artful (the current development release) and OpenStack Pike are landing into the development branches for all charms, alongside the release of Pike b1 into Artful and the Xenial-Pike UCA.

You can consume these charms (as always) via the ~openstack-charmers-next team, for example:

juju deploy cs:~openstack-charmers-next/keystone

IRC (and meetings)

You can participate in OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks).


on May 22, 2017 11:00 AM

May 21, 2017

Second episode of the podcast Ubuntu y otras hierbas.

This time we chat about the following topics:
  • The migration to GNOME 3
  • UBPorts will maintain Unity and Ubuntu Phone
The podcast is available to listen to on:

Podcast on YouTube

Links related to the podcast:
Podcast soundtrack, licensed under CC BY-NC 4.0.
Marius's video from UbuCon about UBPorts.

All episodes of the 'Ubuntu y otras hierbas' podcast are available here.

on May 21, 2017 11:50 AM

Pi Zero as a Serial Gadget

David Tomaschik

I just got a new Raspberry Pi Zero W (the wireless version) and didn’t feel like hooking it up to a monitor and keyboard to get started. I really just wanted a serial console for starters. Rather than solder in a header, I wanted to be really lazy, so I decided to use the USB OTG support of the Pi Zero to provide a console over USB. It’s pretty straightforward, actually.

Install Raspbian on MicroSD

First off is a straightforward “install” of Raspbian on your MicroSD card. In my case, I used dd to image the img file from Raspbian to a MicroSD card in a card reader.

dd if=/home/david/Downloads/2017-04-10-raspbian-jessie-lite.img of=/dev/sde bs=1M conv=fdatasync

Mount the /boot partition

You’ll want to mount the boot partition to make a couple of changes. Before doing so, run partprobe to re-read the partition tables (or unplug and replug the SD card). Then mount the partition somewhere convenient.

mount /dev/sde1 /mnt/boot

Edit /boot/config.txt

To use the USB port as an OTG port, you’ll need to enable the dwc2 device tree overlay. This is accomplished by adding a line to /boot/config.txt with dtoverlay=dwc2.

vim /mnt/boot/config.txt
(append dtoverlay=dwc2)
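If you would rather not open an editor, the same change can be scripted. This is only a sketch: the BOOT variable defaults to a scratch directory so you can dry-run it safely; set BOOT=/mnt/boot to apply it to the real card.

```shell
# A non-interactive equivalent of the edit above. The demo writes to a
# scratch directory; point BOOT at /mnt/boot to modify the real card.
BOOT="${BOOT:-/tmp/pi-boot-demo}"
mkdir -p "$BOOT"
touch "$BOOT/config.txt"

# Enable the dwc2 overlay, appending only if the line is not already there.
grep -q '^dtoverlay=dwc2$' "$BOOT/config.txt" || echo 'dtoverlay=dwc2' >> "$BOOT/config.txt"
tail -n 1 "$BOOT/config.txt"
```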

Edit /boot/cmdline.txt

Now we’ll need to tell the kernel to load the right module for the serial OTG support. Open /boot/cmdline.txt, and after rootwait, add modules-load=dwc2,g_serial.

vim /mnt/boot/cmdline.txt
(insert modules-load=dwc2,g_serial after rootwait)

When you save the file, make sure it is all one line; if your editor has line-wrapping options enabled, they may have inserted newlines into the file.
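This edit can also be scripted with sed, which avoids the line-wrapping risk entirely. The sketch below works on a scratch copy seeded with an illustrative (not verbatim) Raspbian command line; point CMDLINE at /mnt/boot/cmdline.txt to apply it to the real card.

```shell
# Works on a scratch copy by default; set CMDLINE=/mnt/boot/cmdline.txt for real.
CMDLINE="${CMDLINE:-/tmp/pi-cmdline-demo.txt}"

# Seed the demo file with an illustrative kernel command line (single line).
[ -f "$CMDLINE" ] || echo 'console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 fsck.repair=yes rootwait quiet' > "$CMDLINE"

# Insert the module list right after rootwait, keeping everything on one line.
grep -q 'modules-load=dwc2,g_serial' "$CMDLINE" || \
    sed -i 's/rootwait/rootwait modules-load=dwc2,g_serial/' "$CMDLINE"
cat "$CMDLINE"
```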

Mount the root (/) partition

Let’s switch the partition we’re dealing with.

umount /mnt/boot
mount /dev/sde2 /mnt/root

Enable a Console on /dev/ttyGS0

/dev/ttyGS0 is the serial port on the USB gadget interface. While we’ll get a serial port, we won’t have a console on it unless we tell systemd to start a getty (the process that handles login and starts shells) on the USB serial port. This is as simple as creating a symlink:

ln -s /lib/systemd/system/getty@.service /mnt/root/etc/systemd/system/getty.target.wants/getty@ttyGS0.service

This asks systemd to start a getty on ttyGS0 on boot.

Unmount and boot your Pi Zero

Unmount your SD card, insert the micro SD card into a Pi Zero, and boot with a Micro USB cable between your computer and the OTG port.

Connect via a terminal emulator

You can connect via the terminal emulator of your choice at 115200bps. The Pi Zero shows up as a “Netchip Technology, Inc. Linux-USB Serial Gadget (CDC ACM mode)”, which means that (on Linux) your device will typically be /dev/ttyACM0.

screen /dev/ttyACM0 115200


This is a quick way to get a console on a Raspberry Pi Zero, but it has downsides:

  • Provides only a console, no networking.
  • File transfers are “difficult”.
on May 21, 2017 07:00 AM

May 19, 2017

After some internal bikeshedding, we decided to rework the tooling that the Server Team has been working on for git-based source package management. The old tool was usd (Ubuntu Server Dev), a name that stemmed from a Canonical Server sprint in Barcelona last year. That name is confusing (acronyms that aren’t obvious are never good), and the tooling had really evolved into a git wrapper.

So, we renamed everything to be git-ubuntu. Since git is awesome, that means git ubuntu also works as long as git-ubuntu is in your $PATH. The snap (previously usd-nacc) has been deprecated in favor of git-ubuntu (it still exists, but if you try to run, e.g., usd-nacc.usd you are told to install the git-ubuntu snap). To get it, use:

sudo snap install --classic git-ubuntu

We are working on some relatively big changes to the code-base to release next week:

  1. Empty directory support (LP: #1687057). My colleague Robie Basak implemented a workaround for upstream git not being able to represent empty directories.
  2. Standardizing (internal to the code) how the remote(s) work and what refspecs are used to fetch from them.

Along with those architectural changes, one big functional shift is to using git-config to store some metadata about the user (specifically, the Launchpad user name to use, in ~/.gitconfig) and the command used to create the repository (specifically, the source package name, in <dir>/.git/config). I think this actually ends up being quite clean from an end-user perspective, and it means our APIs and commands are easier to use, as we can just lookup this information from git-config when using an existing repository.
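As a rough sketch of what this looks like from the command line (the demo.* key names below are invented for illustration; git-ubuntu's actual configuration keys may differ):

```shell
# git-config stores arbitrary key/value metadata per repository (or globally
# in ~/.gitconfig with --global), which a tool can read back later.
git init -q /tmp/git-ubuntu-demo
cd /tmp/git-ubuntu-demo
git config --local demo.sourcepackage hello      # e.g. the source package name
git config --local demo.lpuser my-launchpad-id   # e.g. the Launchpad user name
git config --get demo.sourcepackage              # a tool can now look this up
```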

As always, the latest code is at:

on May 19, 2017 10:45 PM

I personally stopped blogging a while ago, but I kinda feel this is a necessary blog post. A while ago, a professor used the term Community Manager interchangeably with Social Media Manager. I made a quick comment about them not being the same, and later some friends started questioning my reasoning.

Last week during OSCON, I was chatting with a friend about this same topic. A startup asked me to help them as a community manager about a year ago. I gladly accepted, but then they just started assigning me social media work.

And these are not the only cases. I have seen several people use the two terms interchangeably, even though they are not the same. I guess it’s time for me to burst the bubble that has been wrapped around both terms.

In order to do that, let’s explain, in very simple words, the job of each of them. Let’s start with social media managers. Their job is, as the title says, to manage social media channels: post on Facebook, respond to Twitter replies, make sure people feel welcome when they visit any of the social media channels, and generally represent the brand/product through social media.

Community managers, on the other hand, focus on building communities around a product or service. To put it more simply, they make sure that there are people who care about the product, who contribute to the ecosystem, and who work with the provider to make it better. Social media management is a part of their job. It is a core function, because it is one of the channels that allows them to hear people’s concerns and project the provider’s voice about the product. However, it is not their only job. They also have to go out and meet people in real life, talk with higher-ups to voice those concerns, and hear how the product is impacting people’s lives in order to make a change, or continue on the same good path.

With this, I’m not trying to devalue the work of social media managers. On the contrary, they have a very valuable job. Imagine all those companies with social media profiles, but without the funny comments, and with no replies if you had a question. Horrible, right? Managing these channels is important so that brands/products stay ‘alive’ on the interwebs. But being a community manager is not only managing a channel; therefore, the two are not comparable jobs.

Each position is different, even though they complement each other pretty well. But I hope that with this post you can understand a little bit more about the inside job of both community managers and social media managers. In a fast-paced world like ours today, these two can make a huge difference between having a presence online or not. So, next time, if you’re looking for a community manager, don’t expect them to do only social media work. And vice versa: if you’re looking for a social media manager, don’t expect them to build a community out of social media.

on May 19, 2017 05:02 PM

Hot on the heels of landing Cockpit in Debian unstable and Ubuntu 17.04, the Ubuntu backport request got approved (thanks Iain!), which means that installing cockpit on Ubuntu 16.04 LTS or 16.10 is now also a simple apt install cockpit away. I updated the installation instructions accordingly.

Enjoy, and let us know about problems!

on May 19, 2017 02:49 PM


Vulnerabilities were identified in the Belden GarrettCom 6K and 10KT (Magnum) series network switches. These were discovered during a black-box assessment, and therefore the vulnerability list should not be considered exhaustive; observations suggest it is likely that further vulnerabilities exist. It is strongly recommended that GarrettCom undertake a full white-box security assessment of these switches.

The version under test was 4.6.0. Belden GarrettCom released an advisory on 8 May 2017 indicating that these issues were fixed in 4.7.7.

This is a local copy of an advisory posted to the Full Disclosure mailing list.

GarrettCom-01 - Authentication Bypass: Hardcoded Web Interface Session Token

Severity: High

The string “GoodKey” can be used in place of a session token for the web interface, allowing a complete bypass to all web interface authentication. The following request/response demonstrates adding a user ‘gibson’ with the password ‘god’ on any GarrettCom 6K or 10K switch.

GET /gc/service.php?a=addUser&uid=gibson&pass=god&type=manager&key=GoodKey HTTP/1.1
Connection: close
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.28 Safari/537.36
Accept: */*
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8

HTTP/1.0 200 OK
Server: GoAhead-Webs
Content-Type: text/html

<?xml version='1.0' encoding='UTF-8'?><data val="users"><changed val="yes" />
<helpfile val="user_accounts.html" />
<user uid="operator" access="Operator" />
<user uid="manager" access="Manager" />
<user uid="gibson" access="Manager" />

GarrettCom-02 - Secrets Accessible to All Users

Severity: High

Unprivileged but authenticated users (“operator” level access) can view the plaintext passwords of all users configured on the system, allowing them to escalate privileges to “manager” level. While the default “show config” masks the passwords, executing “show config saved” includes the plaintext passwords. The value of the “secrets” setting does not affect this.

6K>show config group=user saved
#User Management#
add user=gibson level=2 pass=god

GarrettCom-03 - Stack Based Buffer Overflow in HTTP Server

Severity: High

When rendering the /gc/flash.php page, the server performs URI encoding of the Host header into a fixed-length buffer on the stack. This encoding appears unbounded and can lead to memory corruption, possibly including remote code execution. Sending garbage data appears to hang the webserver thread after it responds to the current request. Requests with Host headers longer than 220 characters trigger the observed behavior.
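For reference, a request with an oversized Host header can be built like this. The sketch only prints the raw request bytes; you could then pipe them to a device (only one you are authorized to test) with a tool such as nc:

```shell
# Build a 230-character Host header value, past the ~220-character threshold.
HOST_VALUE=$(printf 'A%.0s' $(seq 1 230))
printf 'GET /gc/flash.php HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n' "$HOST_VALUE"
```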

GET /gc/flash.php HTTP/1.1
Connection: close
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.28 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8

GarrettCom-04 - SSL Keys Shared Across Devices

Severity: Moderate

The SSL certificate on all devices running firmware version 4.6.0 is the same. This issue was previously reported and an advisory released by ICS-CERT. While GarrettCom reported the issue was fixed in 4.5.6, the web server certificate remains static in 4.6.0:

openssl s_client -connect -showcerts
depth=0 C = US, ST = California, L = Fremont, O = Belden, OU = Technical Support, CN =, emailAddress =
verify error:num=18:self signed certificate
verify return:1
depth=0 C = US, ST = California, L = Fremont, O = Belden, OU = Technical Support, CN =, emailAddress =
verify return:1
Certificate chain
0 s:/C=US/ST=California/L=Fremont/O=Belden/OU=Technical Support/CN=
i:/C=US/ST=California/L=Fremont/O=Belden/OU=Technical Support/CN=

Note that Belden GarrettCom has addressed this issue by reinforcing that users of the switches should install their own SSL certificates if they do not want to use the default certificate and key.

GarrettCom-05 - Weak SSL Ciphers Enabled

Severity: Moderate

Many of the SSL cipher suites available on the switch are outdated or rely on insecure ciphers or hashes. Additionally, no key exchanges with perfect forward secrecy are offered, rendering all previous communications potentially compromised, given the issue reported above. Particularly of note is the use of 56-bit DES, RC4, and MD5-based MACs.

ssl3: AES256-SHA
ssl3: DES-CBC3-SHA
ssl3: AES128-SHA
ssl3: SEED-SHA
ssl3: RC4-SHA
ssl3: RC4-MD5
tls1: AES256-SHA
tls1: DES-CBC3-SHA
tls1: AES128-SHA
tls1: SEED-SHA
tls1: RC4-SHA
tls1: RC4-MD5
tls1_1: AES256-SHA
tls1_1: CAMELLIA256-SHA
tls1_1: DES-CBC3-SHA
tls1_1: AES128-SHA
tls1_1: SEED-SHA
tls1_1: CAMELLIA128-SHA
tls1_1: RC4-SHA
tls1_1: RC4-MD5
tls1_1: DES-CBC-SHA
tls1_2: AES256-GCM-SHA384
tls1_2: AES256-SHA256
tls1_2: AES256-SHA
tls1_2: CAMELLIA256-SHA
tls1_2: DES-CBC3-SHA
tls1_2: AES128-GCM-SHA256
tls1_2: AES128-SHA256
tls1_2: AES128-SHA
tls1_2: SEED-SHA
tls1_2: CAMELLIA128-SHA
tls1_2: RC4-SHA
tls1_2: RC4-MD5
tls1_2: DES-CBC-SHA

GarrettCom-06 - Weak HTTP session key generation

Severity: Moderate

The HTTP session key generation is predictable due to the lack of randomness in the generation process. A key is generated by hashing the previous hash result with the current time unit, with a precision of around 50 units per second. For the first hash generation, the previous hash is replaced with a fixed salt.

This allows an attacker with some knowledge of when a key was generated to predict the first key generated by the switch. Alternatively, it enables privilege-escalation attacks that predict all future keys by observing two consecutive key generations at a lower privilege level.
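This class of weakness can be sketched as follows. Everything here is a stand-in (the salt value, the choice of SHA-256, and the time units are illustrative, not the vendor's actual scheme): the point is that when every input to the hash chain is fixed or guessable, anyone who can reconstruct the start time can reproduce every subsequent key.

```shell
# key_n = hash(key_{n-1} || time_unit); key_1's "previous hash" is a fixed salt.
salt='fixed-salt'       # stand-in for the firmware's hardcoded first-hash salt
t=74759040000           # attacker's guess of generation time in 1/50 s units

key1=$(printf '%s%s' "$salt" "$t" | sha256sum | cut -d' ' -f1)
key2=$(printf '%s%s' "$key1" "$((t + 1))" | sha256sum | cut -d' ' -f1)

# Anyone repeating this computation with the same guesses gets identical keys.
echo "key1=$key1"
echo "key2=$key2"
```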


  • 2017/01/?? - Issues Discovered
  • 2017/01/27 - Reported to
  • 2017/04/27 - 90 day timeline expires, Belden reports patched release forthcoming.
  • 2017/05/08 - Belden releases update & advisory.
  • 2017/05/15 - Disclosure published


These issues were discovered by Andrew Griffiths, David Tomaschik, and Xiaoran Wang of the Google Security Assessments Team.

on May 19, 2017 07:00 AM

Recently I got a Google Home (thanks to Google for the kind gift). Last week I recorded a quick video of me interrogating it:

Well, tonight I nipped upstairs quickly to grab something, and as I came back downstairs I heard Jack, our vivacious 4-year-old, asking it questions, seemingly not expecting daddy to be listening.

This is when I discovered the wild curiosity in a 4-year-old’s mind. It included such size and weight questions as…

OK Google, how big is the biggest teddy bear?


OK Google, how much does my foot weigh?

…to curiosities about physics…

OK Google, can chickens fly faster than space rockets?

…to queries about his family…

OK Google, is daddy a mommy?

…and while asking this question he interrupted Google’s denial of an answer with…

OK Google, has daddy eaten a giant football?

Jack then switched gears a little bit, out of likely frustration that Google seemed to “not have an answer for that yet” to all of his questions, and figured the more confusing the question, the more likely that talking thing in the kitchen would work:

OK Google, does a guitar make a hggghghghgghgghggghghg sound?

Google was predictably stumped with the answer. So, in classic Jack fashion, the retort was:

OK Google, banana peel.

While this may seem random, it isn’t:

I would love to see the world through his eyes, it must be glorious.

The post Google Home: An Insight Into a 4-Year-Old’s Mind appeared first on Jono Bacon.

on May 19, 2017 03:27 AM

Last week, we presented a new paper that describes how children are thinking through some of the implications of new forms of data collection and analysis. The presentation was given at the ACM CHI conference in Denver last week and the paper is open access and online.

Over the last couple of years, we’ve worked on a large project to support children in doing — and not just learning about — data science. We built a system, Scratch Community Blocks, that allows the 18 million users of the Scratch online community to write their own computer programs — in Scratch, of course — to analyze data about their own learning and social interactions. An example of one of these programs, which finds how many of one’s followers in Scratch are not from the United States, is shown below.

Last year, we deployed Scratch Community Blocks to 2,500 active Scratch users who, over a period of several months, used the system to create more than 1,600 projects.

As children used the system, Samantha Hautea, a student in UW’s Communication Leadership program, led a group of us in an online ethnography. We visited the projects children were creating and sharing. We followed the forums where users discussed the blocks. We read comment threads left on projects. We combined Samantha’s detailed field notes with the text of comments and forum posts, with ethnographic interviews of several users, and with notes from two in-person workshops. We used a technique called grounded theory to analyze these data.

What we found surprised us. We expected children to reflect on being challenged by — and hopefully overcoming — the technical parts of doing data science. Although we certainly saw this happen, what emerged much more strongly from our analysis was detailed discussion among children about the social implications of data collection and analysis.

In our analysis, we grouped children’s comments into five major themes that represented what we called “critical data literacies.” These literacies reflect things that children felt were important implications of social media data collection and analysis.

First, children reflected on the way that programmatic access to data — even data that was technically public — introduced privacy concerns. One user described the ability to analyze data as “creepy”, but at the same time, “very cool.” Children expressed concern that programmatic access to data could lead to “stalking” and suggested that the system should ask for permission.

Second, children recognized that data analysis requires skepticism and interpretation. For example, Scratch Community Blocks introduced a bug where the block that returned data about followers included users with disabled accounts. One user, in an interview described to us how he managed to figure out the inconsistency:

At one point the follower blocks, it said I have slightly more followers than I do. And, that was kind of confusing when I was trying to make the project. […] I pulled up a second [browser] tab and compared the [data from Scratch Community Blocks and the data in my profile].

Third, children discussed the hidden assumptions and decisions that drive the construction of metrics. For example, the number of views received for each project in Scratch is counted using an algorithm that tries to minimize the impact of gaming the system (similar to, for example, Youtube). As children started to build programs with data, they started to uncover and speculate about the decisions behind metrics. For example, they guessed that the view count might only include “unique” views and that view counts may include users who do not have accounts on the website.

Fourth, children building projects with Scratch Community Blocks realized that an algorithm driven by social data may cause certain users to be excluded. For example, a 13-year-old expressed concern that the system could be used to exclude users with few social connections saying:

I love these new Scratch Blocks! However I did notice that they could be used to exclude new Scratchers or Scratchers with not a lot of followers by using a code: like this:
when flag clicked
if then user’s followers < 300
stop all.
I do not think this a big problem as it would be easy to remove this code but I did just want to bring this to your attention in case this not what you would want the blocks to be used for.

Fifth, children were concerned about the possibility that measurement might distort the Scratch community’s values. While giving feedback on the new system, a user expressed concern that by making it easier to measure and compare followers, the system could elevate popularity over creativity, collaboration, and respect as a marker of success in Scratch.

I think this was a great idea! I am just a bit worried that people will make these projects and take it the wrong way, saying that followers are the most important thing in on Scratch.

Kids’ conversations around Scratch Community Blocks are good news for educators who are starting to think about how to engage young learners in thinking critically about the implications of data. Although no kid using Scratch Community Blocks discussed each of the five literacies described above, the themes reflect starting points for educators designing ways to engage kids in thinking critically about data.

Our work shows that if children are given opportunities to actively engage and build with social and behavioral data, they might not only learn how to do data analysis, but also reflect on its implications.

This blog-post and the work that it describes is a collaborative project by Samantha Hautea, Sayamindu Dasgupta, and Benjamin Mako Hill. We have also received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Hal Abelson from MIT CSAIL. Financial support came from the US National Science Foundation.
on May 19, 2017 12:51 AM

May 18, 2017

With the Ubuntu Desktop transitioning from Unity to GNOME, what does that look like in 17.10 and 18.04?  You can learn some more about that by checking out my interview on OMG! Ubuntu!, but more importantly our friends at OMG! Ubuntu! are helping us out by collecting some user feedback on GNOME extensions and defaults.  Please take a few minutes and complete the survey.  Thanks for your input!
on May 18, 2017 04:51 PM

S10E11 – Flaky Noxious Rainstorm - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

We talk about getting older and refurbishing ZX Spectrums. Ubuntu, Fedora and openSUSE are “apps” in the Windows Store, Android has a new modular base, Fuchsia has a new UI layer, HP laptops come pre-installed with a keylogger, many people WannaCry, and Ubuntu starts to reveal its plans for the Ubuntu 17.10 desktop.

It’s Season Ten Episode Eleven of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on May 18, 2017 02:00 PM


Within the next three years, more than seven billion people and businesses will be connected to the Internet. During this time of dramatic increases in access to the Internet, networks have seen an interesting proliferation of systems for digital identity management (e.g. SPID in Italy). But what is really meant by “digital identity”? All these systems are implemented in order to have the utmost certainty that the data entered by the subscriber (address, name, birth, telephone, email, etc.) coincides directly with that of the physical person. In other words, the data is certified to be “identical” to that of the user; there is a perfect overlap between the digital page and the authentic user certificate: an “idem“, that is, an identity.

This identity is our personal records reflected on the net, nothing more than that. Obviously, this data needs to be appropriately protected from malicious attacks by means of strict privacy rules, as it contains so-called “sensitive” information, but this data itself is not sufficiently interesting for the commercial market, except for statistical purposes on homogeneous population groups. What may be a real goldmine for the “web company” is another type of information: user’s ipseity. It is important to immediately remove the strong semantic ambiguity that weighs on the notion of identity. There are two distinct meanings…

<Read More…[by Fabio Marzocca]>

on May 18, 2017 08:50 AM

May 17, 2017

A group has recently been formed on Meetup seeking to build a food computer in Zurich. The initial meeting is planned for 6:30pm on 20 June 2017 at ETH, (Zurich Centre/Zentrum, Rämistrasse 101).

The question of food security underlies many of the world's problems today. In wealthier nations, we are being called upon to trust a highly opaque supply chain and our choices are limited to those things that major supermarket chains are willing to stock. A huge transport and storage apparatus adds to the cost and CO2 emissions and detracts from the nutritional value of the produce that reaches our plates. In recent times, these problems have been highlighted by the horsemeat scandal, the Guacapocalypse and the British Hummus crisis.

One interesting initiative to create transparency and encourage diversity in our diets is the Open Agriculture (OpenAg) Initiative from MIT, summarised in this TED video from Caleb Harper. The food produced is healthier and fresher than anything you might find in a supermarket and has no exposure to pesticides.

An open source approach to food

An interesting aspect of this project is the promise of an open source approach. The project provides hardware plans, a video of the build process, source code, and the promise of sharing climate recipes (scripts) to replicate the climates of different regions, helping ensure it is always the season for your favourite fruit or vegetable.

Do we need it?

Some people have commented on the cost of equipment and electricity. Carsten Agger recently blogged about permaculture as a cleaner alternative. While there are many places where people can take that approach, there are also many overpopulated regions and cities where it is not feasible. Some countries, like Japan, have an enormous population and previously productive farmland contaminated by industry, such as the Fukushima region. Growing our own food also has the potential to reduce food waste, as individual families and communities can grow what they need.

Whether it is essential or not, the food computer project also provides a powerful platform to educate people about food and climate issues and an exciting opportunity to take the free and open source philosophy into many more places in our local communities. The Zurich Meetup group has already received expressions of interest from a diverse group including professionals, researchers, students, hackers, sustainability activists and free software developers.

Next steps

People who want to form a group in their own region can look in the forum topic "Where are you building your Food Computer?" to find out if anybody has already expressed interest.

Which patterns from the free software world can help more people build more food computers? I've already suggested using Debian's live-wrapper to distribute a runnable ISO image that can boot from a USB stick, can you suggest other solutions like this?

Can you think of any free software events where you would like to see a talk or exhibit about this project? Please suggest them on the OpenAg forum.

There are many interesting resources about the food crisis; a good starting point is watching the documentary Food, Inc.

If you are in Switzerland, please consider attending the meeting at 6:30pm on 20 June 2017 at ETH (Centre/Zentrum), Zurich.

One final thing to contemplate: if you are not hacking your own food supply, who is?

on May 17, 2017 06:41 PM

The X-Updates PPA has the latest Mesa release for xenial and zesty, go grab it while it’s hot!

On a related note, Mesa 17.0.6 got accepted to zesty-proposed, so give that a spin too if you’re running zesty and report “yay!”/”eww..” on the SRU bug.

on May 17, 2017 04:20 PM

It's now super easy to build Linux software packages on your Windows laptop. No VM required, no need for remote Linux hosts.

I spend a lot of my day talking to developers about their Linux software packaging woes. Many of them are using Linux desktops as their primary development platform. Some aren't, and that's their (or their employer's) choice. For those developers who run Windows and want to target Linux for their applications, things just got a bit easier.

Snapcraft now runs on Windows, via Bash on Windows (a.k.a Windows Subsystem for Linux).

Snapcraft on Windows

Microsoft updated their Windows Subsystem for Linux to include Ubuntu 16.04.2 LTS as the default image. When WSL first launched a year ago, it shipped 14.04 to early adopters and developers.

Snapcraft is available in the Ubuntu 16.04 repositories, so it is installable inside WSL. Developers on Windows can therefore easily run Snapcraft to package their software as snaps and push it to the store. No need for virtual machines or separate remote hosts running Linux.

I made a quick video about it here. Please share it with your Windows-using developer friends :)

If you already have WSL setup with Ubuntu 14.04 the easiest way to move to 16.04.2 is to delete the install and start again. This will remove the installed packages and data in your WSL setup, so backup first. Open a command prompt window in Windows and run:-

lxrun /uninstall

To re-install, which will pull down Ubuntu 16.04.2 LTS:-

lxrun /install

Re-installing Ubuntu

Once you've got Ubuntu 16.04.2 LTS in WSL, launch it from the start menu then install snapcraft from the Ubuntu repositories with:-

sudo apt install snapcraft

Once that's done, you can either launch snapcraft within Bash on Windows, or directly from a Windows command prompt with bash -c snapcraft. Here's a video showing me building the Linux version of storjshare using this configuration on my Windows 10 desktop from a standard command prompt window. I sped the video up because nobody wants to watch 8 minutes of shell scrolling by in realtime. Also, my desktop is quite slow. :)
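
For context, a snap is defined by a snapcraft.yaml file in your project directory. As a sketch, here is what one looks like (this mirrors the classic GNU Hello example from the snapcraft documentation, not the storjshare project above):

```yaml
name: hello
version: '2.10'
summary: GNU Hello, the "hello world" snap
description: A simple example of packaging an autotools project as a snap.
grade: stable
confinement: strict

parts:
  hello:
    plugin: autotools
    source: http://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz

apps:
  hello:
    command: bin/hello
```

Running snapcraft in the directory containing this file fetches the source, builds it and produces the .snap package.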

You can find out more about snapcraft from our documentation, tutorials, and the videos. We also have a forum where we'd love to hear about your experience with snapcraft.

on May 17, 2017 02:00 PM

May 16, 2017

This cycle I’ve decided to look at what art and design I can improve in Kubuntu. The first was having something we’ve never had, a wallpaper contest that’s available to ALL users. We’re encouraging all our artist users to submit art ranging from digital, photographic, and beyond! You all have till June 8 to submit your wonderful and artful work.

I’ve also taken a look at our slideshow in its current state:

After a weekend, a few extra days and some great feedback from both our Kubuntu community and the KDE VDG, I'm really pleased with the improvement. It's based on the work the Ubuntu Budgie team have done on their slideshow, so a HUGGGGEEEE shout out to them for their great work. Here is the current WIP state:

Currently the work is in my own branch of the ubiquity-slideshow-ubuntu package on Launchpad. Once I've mostly finished and we've gotten some feedback, I'll send a merge request to land it in the main package so it can be brought to you in Kubuntu 17.10.

on May 16, 2017 10:40 PM

In my previous blog on the topic of software defined radio (SDR), I provided a quickstart guide to using gqrx, GNU Radio and the RTL-SDR dongle to receive FM radio and the amateur 2 meter (VHF) band.

Using the same software configuration and the same RTL-SDR dongle, it is possible to add some extra components and receive ham radio and shortwave transmissions from around the world.

Here is the antenna setup from the successful SDR workshop at OSCAL'17 on 13 May:

After the workshop on Saturday, members of the OSCAL team successfully reconstructed the SDR and antenna at the Debian info booth on Sunday and a wide range of shortwave and ham signals were detected:

Here is a close-up look at the laptop, RTL-SDR dongle (above laptop), Ham-It-Up converter (above water bottle) and MFJ-971 ATU (on right):

Buying the parts

Component (purpose and notes, with approximate price or source):

  • RTL-SDR dongle: converts radio signals (RF) into digital signals for reception through the USB port. It is essential to buy a dongle for SDR with a TCXO; the generic RTL dongles for TV reception are not stable enough for anything other than TV. ~ €25
  • Enamelled copper wire, 25 meters or more: the loop antenna. Thicker wire provides better reception and is more suitable for transmitting (if you have a license), but it is heavier. The antenna I've demonstrated at recent events uses 1 mm thick wire. ~ €10
  • 4 (or more) ceramic egg insulators: attach the antenna to string or rope. Smaller insulators are better as they are lighter and less expensive. ~ €10
  • 4:1 balun: the actual ratio of the balun depends on the shape of the loop (square, rectangle or triangle) and the point where you attach the balun (middle, corner, etc). You may want to buy more than one balun, for example, a 4:1 balun and also a 1:1 balun to try alternative configurations. Make sure it is waterproof, has hooks for attaching a string or rope and an SO-239 socket. From €20
  • 5 meter RG-58 coaxial cable with male PL-259 plugs on both ends: if using more than 5 meters, or if you want to use higher frequencies above 30 MHz, use thicker, heavier and more expensive cables like RG-213. The cable must be 50 ohm. ~ €10
  • Antenna Tuning Unit (ATU): I've been using the MFJ-971 for portable use and demos because of the weight. There are even lighter and cheaper alternatives if you only need to receive. ~ €20 for receive-only or second hand
  • PL-259 to SMA male pigtail, up to 50 cm, RG58: joins the ATU to the up-converter. The cable must be RG58 or another 50 ohm cable. ~ €5
  • Ham It Up v1.3 up-converter: mixes the HF signal with a signal from a local oscillator to create a new signal in the spectrum covered by the RTL-SDR dongle. ~ €40
  • SMA (male) to SMA (male) pigtail: joins the up-converter to the RTL-SDR dongle. ~ €2
  • USB charger and USB type B cable: used to power the up-converter. A spare USB mobile phone charger plug may be suitable. ~ €5
  • String or rope: for mounting the antenna. A lighter and cheaper string is better for portable use, while a stronger and weather-resistant rope is better for a fixed installation. ~ €5

Building the antenna

There are numerous online calculators for working out the length of enamelled copper wire to cut.

For example, for a centre frequency of 14.2 MHz on the 20 meter amateur band, the antenna length is 21.336 meters.

Add an extra 24 cm (extra 12 cm on each end) for folding the wire through the hooks on the balun.
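
As a sketch, the arithmetic above can be wrapped up as follows (the ~303/f metres full-wave approximation is inferred from the 14.2 MHz example; online calculators differ slightly):

```python
def wire_to_cut(freq_mhz: float) -> float:
    """Total enamelled wire to cut for a full-wave loop, in metres.

    Uses the ~303/f approximation implied by the 14.2 MHz example
    (an assumption; calculators vary), plus 24 cm for folding the
    ends through the hooks on the balun (12 cm at each end).
    """
    loop_length = 303.0 / freq_mhz   # full-wave loop circumference
    return loop_length + 0.24        # folding allowance

# Total metres to cut for the 20 m band example:
print(round(wire_to_cut(14.2), 2))
```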

After cutting the wire, feed it through the egg insulators before attaching the wire to the balun.

Measure the extra 12 cm at each end of the wire and wrap some tape around there to make it easy to identify in future. Fold it, insert it into the hook on the balun and twist it around itself. Use between four to six twists.

Strip off approximately 0.5cm of the enamel on each end of the wire with a knife, sandpaper or some other tool.

Insert the exposed ends of the wire into the screw terminals and screw it firmly into place. Avoid turning the screw too tightly or it may break or snap the wire.

Insert string through the egg insulators and/or the middle hook on the balun and use the string to attach it to suitable support structures such as a building, posts or trees. Try to keep it at least two meters from any structure. Maximizing the surface area of the loop improves the performance: a circle is an ideal shape, but a square or 4:3 rectangle will work well too.
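
To see why the shape matters, here is a quick back-of-the-envelope comparison of the area enclosed by different shapes with the same perimeter (the helper is my own illustration):

```python
import math

def areas_for_perimeter(p: float) -> dict:
    """Enclosed area for a fixed perimeter p: circle, square, 4:3 rectangle."""
    circle = p ** 2 / (4 * math.pi)   # r = p / (2*pi), area = pi * r^2
    square = (p / 4) ** 2
    x = p / 14                        # 4:3 rectangle: sides 4x and 3x
    rectangle = 12 * x ** 2
    return {"circle": circle, "square": square, "4:3 rectangle": rectangle}

# For a ~21.34 m loop (20 m band), the circle encloses the most area:
for shape, area in areas_for_perimeter(21.34).items():
    print(f"{shape}: {area:.1f} m^2")
```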

For optimal performance, if you imagine the loop is on a two-dimensional plane, the first couple of meters of feedline leaving the antenna should be on the plane too and at a right angle to the edge of the antenna.

Join all the other components together using the coaxial cables.

Configuring gqrx for the up-converter and shortwave signals

Inspect the up-converter carefully. Look for the crystal and find the frequency written on the side of it. The frequency written on the specification sheet or web site may be wrong, so looking at the crystal itself is the best way to be certain. On my Ham It Up, I found a crystal with 125.000 written on it; this is 125 MHz.

Launch gqrx, go to the File menu and select I/O devices. Change the LNB LO value to match the crystal frequency on the up-converter, with a minus sign. For my Ham It Up, I use the LNB LO value -125.000000 MHz.

Click OK to close the I/O devices window.

On the Input Controls tab, make sure Hardware AGC is enabled.

On the Receiver options tab, change the Mode value. Commercial shortwave broadcasts use AM and amateur transmission use single sideband: by convention, LSB is used for signals below 10MHz and USB is used for signals above 10MHz. To start exploring the 20 meter amateur band around 14.2 MHz, for example, use USB.
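
The convention above is simple enough to encode; a tiny sketch (the function name is my own):

```python
def amateur_sideband(freq_mhz: float) -> str:
    """Conventional sideband for amateur voice transmissions:
    LSB below 10 MHz, USB above (commercial shortwave uses AM instead)."""
    return "LSB" if freq_mhz < 10.0 else "USB"

print(amateur_sideband(14.2))   # 20 m band
print(amateur_sideband(7.1))    # 40 m band
```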

In the top of the window, enter the frequency, for example, 14.200 000 MHz.

Now choose the FFT Settings tab and adjust the Freq zoom slider. Zoom until the width of the display is about 100 kHz, for example, from 14.15 on the left to 14.25 on the right.

Click the Play icon at the top left to start receiving. You may hear white noise. If you hear nothing, check the computer's volume controls, move the Gain slider (bottom right) to the maximum position and then lower the Squelch value on the Receiver options tab until you hear the white noise or a transmission.

Adjust the Antenna Tuner knobs

Now that gqrx is running, it is time to adjust the knobs on the antenna tuner (ATU). Reception improves dramatically when it is tuned correctly. Exact instructions depend on the type of ATU you have purchased; here I present instructions for the MFJ-971 that I have been using.

Turn the TRANSMITTER and ANTENNA knobs to the 12 o'clock position and leave them like that. Turn the INDUCTANCE knob while looking at the signals in the gqrx window. When you find the best position, the signal strength displayed on the screen will appear to increase (the animated white line should appear to move upwards and maybe some peaks will appear in the line).

When you feel you have found the best position for the INDUCTANCE knob, leave it in that position and begin turning the ANTENNA knob clockwise looking for any increase in signal strength on the chart. When you feel that is correct, begin turning the TRANSMITTER knob.

Listening to a transmission

At this point, if you are lucky, some transmissions may be visible on the gqrx screen. They will appear as darker colours in the waterfall chart. Try clicking on one of them; the vertical red line will jump to that position. For a USB transmission, try to place the vertical red line at the left hand side of the signal. Try dragging the vertical red line or changing the frequency value at the top of the screen by 100 Hz at a time until the station is tuned as well as possible.

Try and listen to the transmission and identify the station. Commercial shortwave broadcasts will usually identify themselves from time to time. Amateur transmissions will usually include a callsign spoken in the phonetic alphabet. For example, if you hear "CQ, this is Victor Kilo 3 Tango Quebec Romeo" then the station is VK3TQR. You may want to note down the callsign, time, frequency and mode in your log book. You may also find information about the callsign in a search engine.

The video demonstrates reception of a transmission from another country; can you identify the station's callsign and find its location?

If you have questions about this topic, please come and ask on the Debian Hams mailing list. The gqrx package is also available in Fedora and Ubuntu but it is known to crash on startup in Ubuntu 17.04. Users of other distributions may also want to try the Debian Ham Blend bootable ISO live image as a quick and easy way to get started.

on May 16, 2017 06:34 PM

Like every month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, about 190 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Antoine Beaupré did 19.5 hours (out of 16h allocated + 5.5 remaining hours, thus keeping 2 extra hours for May).
  • Ben Hutchings did 12 hours (out of 15h allocated, thus keeping 3 extra hours for May).
  • Brian May did 10 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did 17.5 hours (out of 16 hours allocated + 3.5 hours remaining, thus keeping 2 hours for May).
  • Guido Günther did 12 hours (out of 8 hours allocated + 4 hours remaining).
  • Hugo Lefeuvre did 15.5 hours (out of 6 hours allocated + 9.5 hours remaining).
  • Jonas Meurer did nothing (out of 4 hours allocated + 3.5 hours remaining, thus keeping 7.5 hours for May).
  • Markus Koschany did 23.75 hours.
  • Ola Lundqvist did 14 hours (out of 20h allocated, thus keeping 6 extra hours for May).
  • Raphaël Hertzog did 11.25 hours (out of 10 hours allocated + 1.25 hours remaining).
  • Roberto C. Sanchez did 16.5 hours (out of 20 hours allocated + 1 hour remaining, thus keeping 4.5 extra hours for May).
  • Thorsten Alteholz did 23.75 hours.

Evolution of the situation

The number of sponsored hours decreased slightly and we’re now again a little behind our objective.

The security tracker currently lists 54 packages with a known CVE, and the dla-needed.txt file lists 37. The number of open issues is comparable to last month.

Thanks to our sponsors

New sponsors are in bold.


on May 16, 2017 03:52 PM

Welcome to the Ubuntu Weekly Newsletter. This is issue #507 for the weeks May 1-15, 2017, and the full version is available here.

In this issue we cover:

The issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • Nathan Handler
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License (CC BY-SA).

on May 16, 2017 04:20 AM

May 15, 2017

The Firmware Test Suite (FWTS) has an easy to use text based front-end that is primarily used by the FWTS Live-CD image but it can also be used in the Ubuntu terminal.

To install and run the front-end use:

 sudo apt-get install fwts-frontend  
sudo fwts-frontend-text

..and one should see a menu of options:

In this demonstration, the "All Batch Tests" option has been selected:

Tests will be run one by one and a progress bar shows the progress of each test. Some tests run very quickly, others can take several minutes depending on the hardware configuration (such as number of processors).

Once the tests are all complete, the following dialogue box is displayed:

The test has saved several files into the directory /fwts/15052017/1748/ and selecting Yes one can view the results log in a scroll-box:

Exiting this, the FWTS frontend dialog is displayed:

Press enter to exit (note that the Poweroff option is just for the fwts Live-CD image version of fwts-frontend).

The tool dumps various logs, for example, the above run generated:

 ls -alt /fwts/15052017/1748/  
total 1388
drwxr-xr-x 5 root root 4096 May 15 18:09 ..
drwxr-xr-x 2 root root 4096 May 15 17:49 .
-rw-r--r-- 1 root root 358666 May 15 17:49 acpidump.log
-rw-r--r-- 1 root root 3808 May 15 17:49 cpuinfo.log
-rw-r--r-- 1 root root 22238 May 15 17:49 lspci.log
-rw-r--r-- 1 root root 19136 May 15 17:49 dmidecode.log
-rw-r--r-- 1 root root 79323 May 15 17:49 dmesg.log
-rw-r--r-- 1 root root 311 May 15 17:49 README.txt
-rw-r--r-- 1 root root 631370 May 15 17:49 results.html
-rw-r--r-- 1 root root 281371 May 15 17:49 results.log

acpidump.log is a dump of the ACPI tables in a format compatible with the ACPICA acpidump tool. The results.log file is a copy of the results generated by FWTS, and results.html is an HTML formatted version of the log.
on May 15, 2017 05:27 PM

I thought I was being smart.  By not buying through AVADirect I wasn’t going to be using an insecure site to purchase my new computer.

For the curious, I ended up purchasing through eBay (A rating) and Newegg (A rating) a new Ryzen (very nice chip!) based machine that I assembled myself. The computer is working mostly OK, but has some stability issues. A BIOS update comes out on the MSI website promising some stability fixes, so I decide to apply it.

The page that links to the download is HTTPS, but the actual download itself is not.
I flash the BIOS and now appear to have a brick.

As part of troubleshooting I find that the MSI website has bad HTTPS security, the worst page being:

Given the poor security, and now wanting a motherboard with a more reliable BIOS (currently I need to send the board back at my expense for an RMA), I looked at other Micro ATX motherboards, starting with a Gigabyte board which has even fewer pages using any HTTPS, and the ones that do are even worse:

Unfortunately a survey of motherboard vendors indicates MSI failing with Fs might put them in second place.   Most just have everything in the clear, including passwords.   ASUS clearly leads the pack, but no one protects the actual firmware/drivers you download from them.

Vendor   | Main Website          | Support Site | RMA Process | Forum                | Download Site | Actual Download
MSI      | F                     | F            | F           | F                    | F             | Plain text
AsRock   | Plain text            | Email        | Email       | Plain text           | Plain text    | Plain text
Gigabyte (login site is F) | Plain text | Plain text | Plain text | Plain text       | Plain text    | Plain text
EVGA     | Plain text default/A- | Plain text   | Plain text  | A                    | Plain text    | Plain text
ASUS     | A-                    | A-           | B           | Plain text default/A | A-            | Plain text
BIOSTAR  | Plain text            | Plain text   | Plain text  | n/a?                 | Plain text    | Plain text

A quick glance indicates that vendors that make full systems use more security (ASUS and MSI being examples of system builders).

We rely on the security of these vendors for most self-built PCs. We should demand HTTPS by default across the board. It's 2017 and a BIOS file is 8 MB; cost hasn't been a factor for years.

on May 15, 2017 03:46 PM

Ryzen so far…

Bryan Quigley

So my first iteration ended in a failed BIOS update…  Now I have a fresh MB.

Iteration 2 – disable everything

The Ryzen machine is running pretty stable now with a few tweaks. I was getting some memory paging bugs, but one of these things worked around it:

  • Moved from 4.10 (stock zesty) to 4.11 mainline kernel
  • Remove 1 of my 2 16 GB sticks of memory
  • Underclock memory from 2400 -> 2133
  • Re-enable VM support (SVM)
  • Disable the C6
  • Disable boost

It was totally stable for several days after that..

Iteration 3 – BIOS update

Trying to have less things disabled (or more specifically to get my full 32 GB of ram) I did the latest (7A37v14) BIOS update (with all cables not important for the update removed).

Memtest had also intermittently shown bad RAM, but I can no longer reproduce it. Both sticks tested independently show nothing is wrong. Then I put both back in and it says it's fine.

Part of that was resetting the settings above, and although it was more stable, I was still getting random crashes.

Iteration 4 – Mostly just underclock the RAM

  • Underclocked 32 GB of memory from 2400 -> 2133
  • On the 4.11 mainline kernel with Nouveau drivers (previously on the Nvidia proprietary driver, which didn't support 4.11 at the time)

So far it’s been stable and that’s what I’m running.

Outstanding things

  • CPU temperature reporting on Linux is missing. (AMD has to release the data to do so – see some discussion here. That is a community project; posting there will not help AMD do anything)
  • Being coreboot friendly with these new chips
  • Update BIOS from Linux?
  • Why is VM support disabled by default? (It’s called SVM on these boards)
  • MSI, please document/implement BIOS recovery for these motherboards


The Ryzen 1700 is a pretty powerful chip. I love having 16 threads available to me (VMs/compiling faster is what I wanted from Ryzen, and it delivers). Like many new products there are some stumbling blocks for early adopters, but I feel like on my hardware combination* I'm finally seeing the stability I need.

*Stability testing was just leaving BOINC running (with SETI and NFS projects) with Firefox open.  And doing normal work with VMs, etc.
Ryzen 1700
2 x Patriot 16GB DDR4-2400  PSD416G24002H

on May 15, 2017 03:45 PM

May 14, 2017

Plasma Desktop 5.9.5 for Zesty 17.04, 5.8.6 for Xenial 16.04,  KDE Frameworks 5.33 and some selected application updates are now available via the Kubuntu backports PPA.

The updates include:

Kubuntu 17.04 – Zesty Zapus.

  • Plasma 5.9.5 bugfix release
  • KDE Frameworks 5.33
  • Digikam 5.5.5
  • Krita 3.1.3
  • Konversation 1.7.2
  • Krusader 2.6
  • Yakuake 3.0.4
  • Labplot 2.4

Kubuntu 16.04 – Xenial Xerus

  • Plasma 5.8.6 bugfix release
  • KDE Frameworks 5.33
  • Krita 3.1.3

We hope that the additional application updates for 17.04 above will soon be available for 16.04.

To update, use the Software Repository Guide to add the following repository to your software sources list:

ppa:kubuntu-ppa/backports


or if already added, the updates should become available via your preferred updating method.

Instructions on how to manage PPA and more info about the Kubuntu PPAs can be found in the Repositories Documentation

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt-get update
sudo apt-get dist-upgrade

on May 14, 2017 09:36 AM

May 13, 2017

<a href="">"bookmarks" by Audrey B., CC BY 2.0</a>

I am happy to announce the availability of Bookmarks for Nextcloud 0.10.0! Bookmarks is a simple way to manage the remarkable websites and pages you come across on the Internet. Bookmarks 0.10.0 provides API methods to create, read, update and delete your bookmarks, as well as compatibility with the upcoming Nextcloud 12, alongside smaller improvements and fixes.

Bookmarks 0.10 is available at the Nextcloud App Store and thus of course from your Nextcloud app management. As of this release, Bookmarks no longer officially supports ownCloud.

Personally I had hoped to release sooner, and if it were not for Marcel Klehr it would have been much later. He added the API methods, did a lot of code consolidation, migrated away from the deprecated database interface, and while doing so fixed the annoying tag rename bug. Kudos for his great efforts! Apart from him, I would also like to thank adsworth, brantje, Henni and LukasReschke for their contributions to this release!

Developers interested in the API can check out the documentation in the README file of our project, which was also written by Marcel.

Finally, there will be the next annual Nextcloud Conference in Berlin. It will take place from Aug 22nd to Aug 29th, and the Call for Papers is still open. Any sort of (potential) contributor is warmly welcome!

on May 13, 2017 09:49 PM

May 12, 2017

conjure-up dev summary for week 19


This week contains a significant change for how we do Localhost deployments and a lot more polishing around the cloud selection experience.

Localhost Deployments

Over the last couple of months we've been monitoring our issues, statistics, and message boards, and we came to the conclusion that it may be time to take further control over the technologies conjure-up requires to work.

Currently, we bundle Juju into our snap, and that gives us the ability to validate, test, and lock in to a version we are comfortable shipping to the masses. However, with LXD we took the approach of using what was already installed on the user's system, or making sure to upgrade to the latest LXD known to work with conjure-up at the time of release. The problem with doing it this way is that the LXD version can get upgraded before we push out a new release, which leaves the big question: will conjure-up continue to work as expected? In most cases the answer is yes; however, there have been times when some form of API change introduced breakage. Fortunately for us, we haven't seen this problem come up in a long time, but the potential for future problems is still there.

So how do we solve this issue? We sent out a proposal outlining why we wanted to go with a particular solution and made sure to solicit input from the community to either get approval or see if there were any other solutions. Read about that proposal and responses for more details into that process and the pros and cons. The conclusion was to go with our proposal and bundle LXD into conjure-up snap in the same way we do Juju.

This work has been completed and should make its way into conjure-up 2.2. Prior to that, though, we need to make sure to socialize this change, as it will mean users' existing Localhost deployments are not easily reachable, and also to document how users can reach their newly deployed containers.


There has been significant work put in to improve the user experience and to gently warn users when their inputs are incorrect. We modeled the validation framework on what you see in a normal web form submission, and extended that idea to make a best effort to automatically fix inputs on the fly.

For example, with MAAS deployments conjure-up requires a server address and api key. If the user inputs just the IP address (ie we will automatically format that to the correct API endpoint (ie

conjure-up dev summary for week 19

After Validation:
conjure-up dev summary for week 19

That's all for now, remember to check back next week for more conjure-up goodness!

on May 12, 2017 01:32 PM

May 11, 2017

For those students who are disappointed with a rejection email, here are some common mistakes and strengths we noticed. Keep these in mind to strengthen your proposal next year.

Common Mistakes:

  • Did not follow directions
  • Did not subscribe to and use the mail lists, IRC channels, attend team meetings, etc.
  • Did not submit a final, complete proposal
  • Misunderstood the project's scope, or failed to include writing documentation and tests throughout the coding period
  • Poor timeline: unrealistic, or lack of implementation or time detail
  • Did not take mentors' proposal feedback into consideration, or submitted too late to get input
  • Did not link to commits to the KDE codebase
  • Had no engagement with the community
  • Demonstrated no knowledge of the KDE community's needs

On the other hand, some students have been active for many months, or even a year.

Accepted Students:

  • Showed extra effort, thought, and time spent on making a great proposal
  • Submitted a complete draft soon after applications opened. Some even asked for feedback before that
  • Improved each draft iteration with mentor feedback
  • Demonstrated areas of growth and collaboration, through linked commits
  • Engaged on mail lists and chat
  • Engaged with the community past the submission deadline
  • Detailed timeline included time for code review, unit testing, and writing documentation throughout the coding period
  • Included all features planned to improve and/or implement the project
  • Marked clear deliverables
  • Included all other commitments, and adjusted timeline based on absences

There is no need to wait around for GSoC deadlines to get started or continue in any open source organization, including KDE.

This year, KDE had great student engagement and a good level of commitment from all students, so even if you followed all of these points, you may still have gotten a rejection email. We realize that this can be discouraging. However, we did our best to pick the students whom we think can fulfill the project's needs, and continue along in the future as KDE developers.

We really appreciate all the effort and thank you for applying to KDE. Our community covers the world, and we're here to help you get started in open source development at any time. In fact, if you are interested in being mentored and do not need funding, we'll be rolling out Season of KDE in a couple of months.
on May 11, 2017 01:58 AM

May 10, 2017

Public wall murals

Stuart Langridge

Across from the window of my flat there is this big wall.

It is not very pretty. It would be nice if it were pretty. Say, by having some kind of excellent massive mural painted on it. I haven’t looked into this whatsoever, and presumably the people who own the building that this wall is part of are entitled to some sort of opinion on this, but… what’s involved in getting a big mural there? Are there, like, rules about this sort of thing? Or can you just paint what you like on outside walls? Also, who out there can pay for this? I’d like it to be me, but unless my six lottery numbers come up that isn’t happening, so… are there grants or something? I feel like there ought to be grants for public art and that sort of thing. And how much money would be needed? I have no idea what sort of community arts projects there are around or how they get paid or whether this sort of thing costs ten quid or ten thousand. Your thoughts invited.

on May 10, 2017 09:03 AM

May 09, 2017

Attracting newcomers is among the most widely studied problems in online community research. However, with all the attention paid to challenge of getting new users, much less research has studied the flip side of that coin: large influxes of newcomers can pose major problems as well!

The most widely known example of problems caused by an influx of newcomers into an online community occurred in Usenet. Every September, new university students connecting to the Internet for the first time would wreak havoc in the Usenet discussion forums. When AOL connected its users to the Usenet in 1994, it disrupted the community for so long that it became widely known as “The September that never ended”.

Our study considered a similar influx in NoSleep—an online community within Reddit where writers share original horror stories and readers comment and vote on them. With strict rules requiring that all members of the community suspend disbelief, NoSleep thrives off the fact that readers experience an immersive storytelling environment. Breaking the rules is as easy as questioning the truth of someone’s story. Socializing newcomers represents a major challenge for NoSleep.

Number of subscribers and moderators on /r/NoSleep over time.

On May 7th, 2014, NoSleep became a “default subreddit”—i.e., every new user to Reddit automatically joined NoSleep. After gradually accumulating roughly 240,000 members from 2010 to 2014, the NoSleep community grew to over 2 million subscribers within a year. Despite this explosive growth, NoSleep appeared to largely hold things together. This reflects the major question that motivated our study: How did NoSleep withstand such a massive influx of newcomers without enduring its own Eternal September?

To answer this question, we interviewed a number of NoSleep participants, writers, moderators, and admins. After transcribing, coding, and analyzing the results, we proposed that NoSleep survived because of three inter-connected systems that helped protect the community’s norms and overall immersive environment.

First, there was a strong and organized team of moderators who enforced the rules no matter what. They recruited new moderators knowing the community’s population was going to surge. They utilized a private subreddit for NoSleep’s staff. They were able to socialize and educate new moderators effectively. Although issuing sanctions against community members was often difficult, our interviewees explained that NoSleep’s moderators were deeply committed and largely uncompromising.

That commitment resonates within the second system that protected NoSleep: regulation by ordinary community members. From our interviews, we found that participants felt a shared sense of community that motivated them both to socialize newcomers themselves and to report inappropriate comments and downvote those who violated the community’s norms.

Finally, we found that technological systems protected the community as well. For instance, post-throttling was instituted to limit how frequently a writer could post stories. Additionally, Reddit’s “AutoModerator”, a rule-driven bot that moderators can program, was used to sanction obvious norm violators automatically in the background. Participants also pointed to the tools available to them—the report feature and voting system in particular—to explain how easy it was for them to report and regulate the community’s disruptors.

This blog post was written with Charlie Kiene. The paper and work this post describes is collaborative work with Charlie Kiene and Andrés Monroy-Hernández. The paper was published in the Proceedings of CHI 2016 and is released as open access so anyone can read the entire paper here. A version of this post was published on the Community Data Science Collective blog.

on May 09, 2017 04:33 PM

Cockpit is now available in Debian unstable and in Ubuntu 17.04 and the development series, which means it’s now a simple

$ sudo apt install cockpit

away for you to try and use. This metapackage pulls in the most common plugins, which are currently NetworkManager and udisks/storaged. If you want/need, you can also install cockpit-docker (if you grab from jessie-backports or use Ubuntu) or cockpit-machines to administer VMs through libvirt. Cockpit upstream also has a rather comprehensive Kubernetes/Openstack plugin, but this isn’t currently packaged for Debian/Ubuntu as kubernetes itself is not yet in Debian testing or Ubuntu.

After that, point your browser to https://localhost:9090 (or the host name/IP where you installed it) and off you go.
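
If you want to check from the command line that the web service is actually up before opening a browser, a quick smoke test might look like this (a sketch, assuming the packaged systemd socket unit and the default port; the certificate is self-signed, hence `--insecure`):

```shell
# Make sure the web service socket is enabled and running
# (the Debian/Ubuntu package ships a cockpit.socket systemd unit):
sudo systemctl enable --now cockpit.socket

# The login page should answer over TLS with a self-signed certificate,
# so tell curl not to verify it; an HTTP 200 means Cockpit is up:
curl --insecure --silent --output /dev/null \
     --write-out '%{http_code}\n' https://localhost:9090/
```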

What is Cockpit?

Think of it as an equivalent of a desktop (like GNOME or KDE) for configuring, maintaining, and interacting with servers. It is a web service that lets you log into your local or a remote (through ssh) machine using normal credentials (PAM user/password or SSH keys) and then starts a normal login session just as gdm, ssh, or the classic VT logins would.

Login screen System page

The left side bar is the equivalent of a “task switcher”, and the “applications” (i. e. modules for administering various aspects of your server) are run in parallel.

The main idea of Cockpit is that it should not behave “special” in any way - it does not have any specific configuration files or state keeping and uses the same Operating System APIs and privileges as you would on the command line (such as lvmconfig, the org.freedesktop.UDisks2 D-Bus interface, reading/writing the native config files, and using sudo when necessary). You can simultaneously change stuff in Cockpit and in a shell, and Cockpit will instantly react to changes in the OS, e. g. if you create a new LVM PV or a network device gets added. This makes it fundamentally different from projects like webmin or ebox, which basically own your computer once you use them for the first time.

It is an interface for your operating system, which even reflects in the branding: as you see above, this is Debian (or Ubuntu, or Fedora, or wherever you run it on), not “Cockpit”.

Remote machines

In your home or small office you often have more than one machine to maintain. You can install cockpit-bridge and cockpit-system on those for the most basic functionality, configure SSH on them, and then add them on the Dashboard (I add a Fedora 26 machine here). From then on you can switch between them at the top left, and everything works and feels exactly the same, including the terminal widget:

Add remote Remote terminal

The Fedora 26 machine has some more Cockpit modules installed, including a lot of “playground” ones, thus you see a lot more menu entries there.
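
For reference, the minimal setup on such an additional machine might look like this (a sketch; “admin” and “myserver” are placeholder names, and the package names are the Debian/Ubuntu ones mentioned above):

```shell
# On the machine to be managed: the bridge plus the basic system pages
# are enough - the full web server (the cockpit metapackage) is not needed.
sudo apt install cockpit-bridge cockpit-system

# From the machine running the Cockpit web service: make sure key-based
# SSH login works, since the Dashboard connects over ssh.
ssh-copy-id admin@myserver
```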

Under the hood

Beneath the fancy Patternfly/React/JavaScript user interface is the Cockpit API and protocol, which particularly fascinates me as a developer because that is what makes Cockpit so generic, reactive, and extensible. This API connects the world of the web, which speaks IPs and host names, ports, and JSON, to the “local host only” world of operating systems, which speaks D-Bus, command line programs, configuration files, and even uses fancy techniques like passing file descriptors through Unix sockets. In an ideal world, all Operating System APIs would be remotable by themselves, but they aren’t.

This is where the “cockpit bridge” comes into play. It is a JSON (i. e. ASCII text) stream protocol that can control arbitrarily many “channels” to the target machine for reading, writing, and getting notifications. There are channel types for running programs, making D-Bus calls, reading/writing files, getting notified about file changes, and so on. Of course every channel can also act on a remote machine.

One can play with this protocol directly. E. g. this opens a (local) D-Bus channel named “d1” and gets a property from systemd’s hostnamed:

$ cockpit-bridge --interact=---

{ "command": "open", "channel": "d1", "payload": "dbus-json3", "name": "org.freedesktop.hostname1" }
{ "call": [ "/org/freedesktop/hostname1", "org.freedesktop.DBus.Properties", "Get",
          [ "org.freedesktop.hostname1", "StaticHostname" ] ],
  "id": "hostname-prop" }

and it will reply with something like

{ "reply": [ [ { "t": "s", "v": "donald" } ] ], "id": "hostname-prop" }

(“donald” is my laptop’s name). By adding additional parameters like host and passing credentials, these channels can also be opened on remote machines, by logging in via ssh and running cockpit-bridge on the remote host.
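
Since the frames in the example above are just JSON, one can also compose and sanity-check them in a script before piping them to the bridge. A minimal sketch (python3 is used here purely as a convenient JSON validator):

```shell
# The same two frames as in the hostnamed example: open a D-Bus channel,
# then call a method on it.
open_frame='{"command": "open", "channel": "d1", "payload": "dbus-json3", "name": "org.freedesktop.hostname1"}'
call_frame='{"call": ["/org/freedesktop/hostname1", "org.freedesktop.DBus.Properties", "Get", ["org.freedesktop.hostname1", "StaticHostname"]], "id": "hostname-prop"}'

# python3 -m json.tool exits non-zero on malformed input, which makes it a
# cheap well-formedness check before talking to cockpit-bridge:
for frame in "$open_frame" "$call_frame"; do
    printf '%s\n' "$frame" | python3 -m json.tool > /dev/null && echo "frame OK"
done
```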

Stef Walter explains this in detail in a blog post about Web Access to System APIs. Of course Cockpit plugins (both internal and third-party) don’t directly speak this, but use a nice JavaScript API.

As a simple example of how to create your own Cockpit plugin that uses this API, you can look at my schroot plugin proof of concept, which I hacked together in about an hour during the Cockpit workshop. Note that I had never written any JavaScript before and I didn’t put any effort into design whatsoever, but it does work ☺.

Next steps

Cockpit aims at servers and at third-party plugins for talking to your favourite part of the system, which means we really want it to be available in Debian testing and stable, and in Ubuntu LTS. Our CI runs integration tests on all of these, so each and every change that goes in is certified to work on Debian 8 (jessie) and Ubuntu 16.04 LTS, for example. But I’d like to replace the external PPA/repository in the install instructions with just “it’s readily available in -backports”!

Unfortunately there are some procedural blockers: the Ubuntu backport request suffers from understaffing, and the Debian stable backport is blocked on getting the package into testing first, which in turn is blocked by the freeze. I will soon ask for a freeze exception into testing; after all, it’s close to zero risk - it’s a new leaf package in testing.

Have fun playing around with it, and please report bugs!

Feel free to discuss and ask questions on the Google+ post.

on May 09, 2017 08:51 AM