September 25, 2016

Abstract

I introduce TrieHash, an algorithm for constructing perfect hash functions from tries. The generated hash functions are pure C code, minimal, order-preserving, and outperform existing alternatives. Together with the generated header files, they can also be used as a generic string-to-enumeration mapper (enums are created by the tool).

Introduction

APT (and dpkg) spend a lot of time parsing various files, especially Packages files. APT currently uses a function called AlphaHash, which hashes the last 8 bytes of a word in a case-insensitive manner, to hash fields in those files (dpkg just compares strings in an array of structs).

There is one obvious drawback to using a normal hash function: when we want to access the data in the hash table, we have to hash the key again, causing us to hash every accessed key at least twice. It turned out that this costs something like 5 to 10% of the cache generation performance.

Enter perfect hash functions: a perfect hash function maps a set of words to constant values without collisions. You can thus use the value to index into your hash table directly, and do not have to hash again (if you generate the function at compile time and store key constants) or handle collision resolution.

As the #debian-apt people know, I happened to play around a bit with tries this week before guillem suggested perfect hashing. Let me tell you one thing: my trie implementation was very naive and did not really improve things a lot…

Enter TrieHash

Now, how is this related to hashing? The answer is simple: I wrote a perfect hash function generator that is based on tries. You give it a list of words, it puts them in a trie, and generates C code out of it, using recursive switch statements (see code generation below). The generated function achieves performance competitive with other hash functions, and it usually even outperforms them.

Given a dictionary, it generates an enumeration (a C enum or C++ enum class) of all words in the dictionary, with the values corresponding to the order in the dictionary (the order-preserving property), and a function mapping strings to members of that enumeration.

By default, the first word is considered to be 0 and each subsequent word increases a counter by one (that is, it generates a minimal hash function). You can tweak that, however:

= 0
WordLabel ~ Word
OtherWord = 9

will return 0 for an unknown value, map “Word” to the enum member WordLabel and map “OtherWord” to 9. That is, the input list functions like the body of a C enumeration. If no label is specified for a word, it will be generated from the word. For more details, see the documentation.

C code generation

switch(string[0] | 32) {
case 't':
    switch(string[1] | 32) {
    case 'a':
        switch(string[2] | 32) {
        case 'g':
            return Tag;
        }
    }
}
return Unknown;

Yes, really recursive switches – they directly represent the trie. However, it is not a completely straightforward translation; there are some optimisations that make the whole thing faster and easier to look at:

First of all, the 32 you see is used to make the check case-insensitive in case all cases of the switch body are alphabetical characters. If there are non-alphabetical characters, it will instead generate two cases per character, one upper case and one lower case (with one break between them). I did not know before that lowercase and uppercase characters differ by only one bit; thanks to the clang compiler for pointing that out in its generated assembler code!
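To illustrate the trick (a quick Python sketch, not part of the generator): in ASCII, the upper- and lower-case variants of a letter differ only in bit 5, so ORing with 32 folds both to lower case.

# ASCII upper- and lower-case letters differ only in bit 5 (value 32).
for upper, lower in [("A", "a"), ("T", "t"), ("Z", "z")]:
    assert ord(upper) | 32 == ord(lower)
    assert ord(lower) | 32 == ord(lower)  # lower-case input stays unchanged

# This is why `string[i] | 32` folds case before the switch comparison,
# but it is only safe when every case label is an alphabetic character.
print("case folding via | 32 works for ASCII letters")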

Secondly, we insert breaks only between cases. Initially, each case ended with a return Unknown, but guillem (the dpkg developer) suggested it might be faster to let them fall through where possible. It turns out it was not faster on a good compiler, but it is still more readable.

Finally, we build one trie per word length, and switch by the word length first. Like the 32 trick, this gives a huge improvement in performance.

Digging into the assembler code

The whole code translates to roughly 4 instructions per byte:

  1. A memory load,
  2. an OR with 32,
  3. a comparison, and
  4. a conditional jump.

(On x86, the case sensitive version actually only has a cmp-with-memory and a conditional jump).

Due to https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77729 this may be one instruction more: On some architectures an unneeded zero-extend-byte instruction is inserted – this causes a 20% performance loss.

Performance evaluation

I ran the hash functions against all 82 words understood by APT in Packages and Sources files, 1,000,000 times for each word, and summed up the average run time:

host      arch      Trie  TrieCase  GPerfCase  GPerf   DJB
plummer   ppc64el    540       601       1914   2000  1345
eller     mipsel    4728      5255      12018   7837  4087
asachi    arm64     1000      1603       4333   2401  1625
asachi    armhf     1230      1350       5593   5002  1784
barriere  amd64      689       950       3218   1982  1776
x230      amd64      465       504       1200    837   693

Suffice it to say, GPerf does not really come close.

All hosts except the x230 are Debian porterboxes. The x230 is my laptop with a Core i5-3320M; barriere has an Opteron 23xx. I included the DJB hash function as another reference point.
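For reference, the DJB hash is usually cited as the djb2 variant below (a Python sketch; the implementation actually benchmarked may differ in details such as the final mask or using XOR instead of addition):

def djb2(key):
    """The classic djb2 string hash: hash = hash * 33 + byte, starting at 5381."""
    h = 5381
    for b in key.encode() if isinstance(key, str) else key:
        h = (h * 33 + b) & 0xFFFFFFFF  # keep the value within 32 bits
    return h

print(djb2("Package"))  # hash one of the field names APT parses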

Source code

The generator is written in Perl, licensed under the MIT license and available from https://github.com/julian-klode/triehash – I initially prototyped it in Python, but guillem complained that this would add new build dependencies to dpkg, so I rewrote it in Perl.

The benchmark is available from https://github.com/julian-klode/hashbench.

Usage

See the script for POD documentation.


Filed under: General
on September 25, 2016 06:44 PM

September 23, 2016

The next Ubuntu Online Summit is going to happen:

15-16 November 2016

At the event we are going to celebrate the 16.10 release and all the great new things, and talk about what’s coming up in Ubuntu 17.04.

The event will be added to summit.ubuntu.com shortly and you will all receive a reminder or two to add your sessions. 🙂

We’re looking forward to seeing you there.

Originally posted to the ubuntu-news-team mailing list on Thu Sep 22 09:46:35 UTC 2016 by Daniel Holbach

on September 23, 2016 12:58 AM

September 22, 2016

Ubuntu Core Store

Last week Nextcloud, Western Digital and Canonical launched the Nextcloud Box, a simple box powered by a Raspberry Pi that lets you easily build a private cloud storage service at home. With the Nextcloud Box, you are already securely sharing files with your colleagues, friends, and family.
The team at Rocket.chat has been asking “Why not add a group chat so you can chat in real-time or leave messages for one another?”

We got in touch with the team to hear about their story!

Here’s Sing Li from Rocket.chat telling us about their journey to Ubuntu Core!

Introducing Rocket.chat

Rocket.Chat is a Slack-like server that you can run on your own servers or at home… and installing it with Ubuntu Core couldn’t be easier. Your private chat server will be up and running in 2 minutes.

If you’ve not heard of them, Rocket.Chat is one of the largest MIT-licensed open source group chat projects on GitHub, with over 200 global contributors, 9,000 stars, and 40,000 community servers deployed world-wide.

Rocket.Chat is an optimized Node.js application, bundled as a compressed tarball. To install it, a system administrator would need to untar the optimized app and install its dependencies on the server. They would then need to configure the server via environment variables and set it up to survive restarts using a service manager. Combined with the typical routine of setting up and configuring a reverse proxy and getting DNS plus SSL configured correctly, this meant that system administrators had to spend on average 3 hours deploying the Rocket.Chat server before they could even see the first login page.

Being a mature production server, Rocket.Chat also has a very large configuration surface. Currently we have over a hundred configurable settings for all sorts of different use cases. Getting the configuration just right for a use case adds to the already long time required to deploy the server.

Making installation a breeze

We started to look for alternatives that could ubiquitously deliver our server product to the largest possible body of end users (deployers of chat servers) and provide a simple, pleasant initial out-of-box experience. If it could help with updating the software and expediting installation of new versions, that would be a bonus. If it could further reduce the complexity of our build pipeline, it would be a sure winner.

When we first saw snaps, we knew they had everything we were looking for. The ubiquity of Ubuntu 16.04 LTS and Ubuntu 14.04 LTS, plus Ubuntu Core for IoT, means we can deliver Rocket.Chat to an incredibly large audience of server deployers globally – all via one single package – while also catering for the increasing number of users asking us to run a Rocket.Chat server on a Raspberry Pi.

With snaps, we only have one bundle for all the supported Linux platforms, and it is right there in the snap store. What’s really cool is that even the formerly manually built ARM-based server, for Raspberry Pi and other IoT devices, can be part of the same snap. It enjoys the same simplicity of installation and transactional updates as the Intel 64-bit Linux platforms.

Our next step will be to distribute our desktop client with snaps. We have a vision that once we tag a release, within seconds a build process is kicked off through CI and the result is published to the snap store.

The asymmetric nature of the snap delivery pipeline has definite benefits. By paying the cost of heavy processing work up front during the build phase, snap deployment is lightning fast. On most modern Intel servers or VPSes, `snap install rocketchat.server` takes only about 30 seconds. The server is ready to handle hundreds of users immediately, available at URL `http://<server address>:3000`.

Consistent with the snap design philosophy, we have done substantial engineering to come up with a default set of ready-to-go configuration settings that supports the most common use cases of a simple chat server.

What this enables us to do is deliver a 30-second, “instantly gratifying” experience to system administrators who would like to give the Rocket.Chat server a try – with no obligation whatsoever. Try it; if you like it, keep it and learn.

All of the complexity of configuring a full-fledged production server is folded away. The system administrator can learn more about configuration and customization at her/his own pace later.

Distributing updates in a flash

We work on new features on a develop branch (Github), and many of our community members test on this branch with us. Once the features are deemed stable, they are merged down to master (Github), where our official releases (Github) reside. Both branches are wired to continuous integration via Travis. Our Travis script optimizes, compresses and bundles the distributions and then pushes them out to the various distribution channels that we support; many of them require further special sub-builds and repackaging that can take substantial time.

Some distributions even call for a manual build every release. For example, we build manually for the ARM architecture (Github) to support a growing community of Raspberry Pi makers, hobbyists, and IoT enthusiasts.

In addition to the server builds, we also have our desktop client (Github). Every new release requires a manual build on Windows, OS X, and Ubuntu. This build process requires a member of our team to physically log in to Windows (x86 and x86_64), OS X and Ubuntu (x86 and x86_64) machines to create our releases. Once a release was built, our users then had to go to our release page, manually download the Ubuntu release and install it.

That’s where snaps also bring a much simpler distribution mechanism and a better user experience. When we add features and push a new version, every one of our users will be enjoying it within a few hours – automatically. The version update is transactional, so they can always roll back to the previous version if they’d like.

In fact, we make use of the stable and edge channels, corresponding to our master and develop branches. Community members helping us test the latest software are on the edge channel, and often get multiple updates throughout the day as we fix bugs and add new features.

We look forward to the point where our desktop client is also available as a snap and our users no longer have to wrestle with updating their desktop clients. Like the server, we will be able to deliver those updates quickly and seamlessly.

on September 22, 2016 02:11 PM

S09E30 – Pie Till You Die - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Thirty of Season-Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope and Martin Wimpress are here again.

Most of us are here, but one of us is busy!

In this week’s show:

  • We discuss the Raspberry Pi hitting 10 million sales and the impact it has had.

  • We share a Command Line Lurve:

    • set -o vi – Which makes bash use vi keybindings.
  • We also discuss solving an “Internet Mystery” #blamewindows

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on September 22, 2016 02:00 PM

Big Software is a new class of application. It’s composed of so many moving pieces that humans, by themselves, cannot design, deploy or operate them. OpenStack, Hadoop and container-based architectures are all examples of Big Software.

Gathering service metrics for complex big software stacks can be a chore. Especially when you need to warehouse, visualize, and share the metrics. It’s not just about measuring machine performance, but application performance as well.

You usually need to warehouse months of history of these metrics so you can spot trends. This enables you to make educated infrastructure decisions. That’s a powerful tool that’s usually offered on the provider level. But what if you run a hybrid cloud deployment? Not every cloud service is created equally.

The Elastic folks provide everything we need to make this possible. Additionally we can connect it to all sorts of other bundles in the charm store. We can now collect data on any cluster, store it, and visualize it. Let’s look at the pieces that are modeled in this bundle:

  • Elasticsearch – a distributed RESTful search engine
  • Beats – lightweight processes that gather metrics on nodes and ship them to Elasticsearch.
    • Filebeat ships logs
    • Topbeat ships “top-like” data
    • Packetbeat provides network protocol monitoring
    • Dockerbeat is a community beat that provides app container monitoring
  • Logstash – performs data transformations and routing to storage. As an example: Elasticsearch for instant visualization, or HDFS for long-term storage and analytics
  • Kibana – a web front end to visualize and analyze the gathered metrics

Getting Started

First, install and configure Juju. This will allow us to model our clusters easily and repeatably. We used LXD as a backend in order to maximize our ability to explore the cluster on our desktops/laptops, though you can easily deploy these onto any major public cloud.

juju deploy ~containers/bundle/beats-core

This will give you a complete stack; it looks like this:

 

Note: if you wish to deploy the latest version of this bundle, the ~containers team is publishing a development channel release as new beats are added to the core bundle.

juju deploy ~containers/bundle/beats-core --channel=development

Once everything is deployed we need to deploy the dashboards:

juju action do kibana/0 deploy-dashboard dashboard=beats

Now do a `juju status kibana` to get the IP address allocated to the unit. Now we are… monitoring nothing. We need something to connect it to, and then introduce it to beats, so something like:

juju deploy myapplication
   juju add-relation filebeat:beats-host myapplication
   juju add-relation topbeat:beats-host myapplication

Let’s connect it to something interesting, like an Apache Spark deployment.

Integrating with other bundles

The standalone bundle is useful, but let’s use a more practical example. The Juju Ecosystem team has added Elastic stack monitoring to a bunch of existing bundles. You don’t even have to manually connect the beats-core deployment to anything; you can just use an all-in-one bundle:

 

To deploy this bundle in the command line:

juju deploy apache-processing-spark

We also recommend running `juju status` periodically to check the progress of the deployment. You can also just open up a new terminal and keep `watch juju status` in a window so you can have the status continuously display while you continue on.

In this bundle, Filebeat and Topbeat act as subordinate charms, which means they are co-located on the Spark units. This allows us to use these beats to track each Spark node. And since we’re adding this relationship at the service level, any subsequent Spark nodes you add will automatically include the beats monitors. The horizontal scaling of our cluster is now observable.

Let’s get the kibana dashboard ready:

juju set-config kibana dashboards="beats"

Notice that this time we used charm config instead of an action to deploy the dashboard. This allows us to blanket-configure and deploy the Kibana dashboards from a bundle, reducing the number of steps a user must take to get started.

After deployment you will need to do a `juju status kibana` to get the IP address of the unit, then browse to it in your web browser. For those of you deploying on public clouds: you will also need to do `juju expose kibana` to open a port in the firewall and allow access. Remember, to make things accessible to others in your clouds, Juju expects you to explicitly tell it to do so; out of the box we keep things closed.
When you get to the Kibana GUI, you need to add `topbeat-*` or `filebeat-*` in the initial setup screen to set up Kibana’s index. Make sure you click the “Create” button for each one:

Create

Now we need to load the dashboards we’ve included for you. Click on the “Dashboard” section, click the load icon, then select the “topbeat-dashboard”:

topbeat dashboard

Now you should see your shiny new dashboard:

Shiny dashboard

You now have an observable Spark cluster! Now that your graphs are up, let’s run something to make sure all the pieces are working; let’s do a quick PageRank benchmark:

juju run-action spark/0 pagerank

This will output a UUID for your job for you to query for results:

juju show-action-output <action-uuid>

You can find more about available actions in the bundle’s documentation. Feel free to launch the action multiple times if you want to exercise the hardware, or run your own Spark jobs as you see fit.

By default the `apache-processing-spark` bundle gives us three nodes. I left those running for a while and then decided to grow the cluster. Let’s add 10 nodes:

juju add-unit -n10 spark

Your `juju status` should be lighting up now with the new units being fired up, and in Kibana itself we can see the rest of the cluster coming online in near-realtime:

near-realtime

Here you can see the CPU and memory consumption of the cluster. You can see the initial three nodes hanging around, and then as the other nodes come up, beats gets installed and they report in, automatically.

Why automatically? ‘apache-processing-spark’ is technically just some YAML. The magic is that we are not just deploying code, we’re modelling the relationships between these applications:

relations:
  - [spark, zookeeper]
  - ["kibana:rest", "elasticsearch:client"]
  - ["filebeat:elasticsearch", "elasticsearch:client"]
  - ["filebeat:beats-host", "spark:juju-info"]
  - ["topbeat:elasticsearch", "elasticsearch:client"]
  - ["topbeat:beats-host", "spark:juju-info"]

So when spark is added, you’re not just adding a new machine, you’re mutating the scale of the application within the model. But what does that mean?

A good way to think about it is simple elements and compounds. For example, carbon monoxide (CO) and carbon dioxide (CO2) are built from the exact same elements, but the combination of those elements allows for two different compounds with different characteristics. If you think of your infrastructure similarly, you’re not just designing the components that compose it, but also the interactions that those components have with each other.

So, automatically deploying filebeat and topbeat when spark is scaled just becomes an automatic part of the lifecycle. In this case, one new spark unit results in one new unit of filebeat, and one new unit of topbeat. Similarly, we can change this model as our requirements change.

This post-deployment mutability of infrastructure is one of Juju’s key unique features. You’re not just defining how applications talk and relate to each other. You’re also defining the ratios of units to their supporting applications like metrics collection.

We’ve given you two basic elements of beats today: filebeat and topbeat. And like chemistry, more elements make for more interesting things. So now let’s show you how to expand your metrics gathering to another level.

Charming up your own custom beat

Elastic has engineered Beats to be expandable. They have invested effort in making it easy for you to write your own “beat”. As you can imagine, this can lead to an explosion of community-generated beats for measuring all sorts of things. We wanted to enable any enthusiast of the beats community to be able to hook into a Juju deployed workload.

As part of this work we’ve published a beats base layer. This will allow you to generate a charm for your custom beat, or any of the community-written beats for that matter, and then deploy it right into your model, just like we do with topbeat and filebeat. Let’s look at an example:

The Beats-base layer

Beats Base provides some helper Python code to handle the common patterns every beats unit will undergo, such as declaring to the model how it will talk to Logstash and/or Elasticsearch. This is always handled the same way among all the beats, so we’re keeping developers from needing to repeat themselves.

Additionally the elasticbeats library handles:

  • Unit index creation
  • Template rendering in any context
  • Enabling the beat as a system service

So starting from beats-base, we have three concerns to address before we have delivered our beat:

  • How to install your beat (delivery)
  • How to configure your beat (template config)
  • Declare your beats index (payload delivery from installation step)

Let’s start with Packetbeat as an example. Packetbeat is an open source project that is designed to provide real‑time analytics for web, database, and other network protocols.

charm create packetbeat

Every charm starts with a layer.yaml

includes:
  - beats-base
  - apt
repository: http://github.com/juju-solutions/layer-packetbeat

Let’s add a little bit of metadata in metadata.yaml:

name: packetbeat
summary: Deploys packetbeat
maintainer: Charles Butler 
description: |
  data shipper that integrates with Elasticsearch to provide
  real-time analytics for web, database, and other 
  network protocols
series:
  - trusty
tags:
  - monitoring
  - analytics
  - networking

With those meta files in place we’re ready to write our reactive code.

reactive/packetbeat.py

For delivery of packetbeat, Elastic provides a deb repository for the official beats. This makes delivery a bit simpler using the apt layer. The consuming code is very simple:

from charms.reactive import when_not
from charmhelpers.core.hookenv import status_set

import charms.apt


@when_not('apt.installed.packetbeat')
def install_packetbeat():
    # Queue the packetbeat deb for installation via the apt layer.
    status_set('maintenance', 'Installing packetbeat')
    charms.apt.queue_install(['packetbeat'])

This completes the delivery of the application. The apt layer will handle all the usual software delivery work for us, like installing and configuring an apt repository. Since this layer is reused in charms all across the community, we merely reuse it here.

The next step is modeling how we react to our data-sources being connected. This typically requires rendering a yaml file to configure the beat, starting the beat daemon, and reacting to the beats-base beat.render state.

In order to do this we’ll be adding:

  • Configuration options to our charm
  • A Jinja template to render the yaml configuration
  • Reactive code to handle the state change and events

The configuration for packetbeat comes in the form of declaring protocol and port. This makes attaching packetbeat to anything transmitting data on the wire simple to model with configuration. We’ll provide some sane defaults, and allow the admin to configure the device to listen on.

config.yaml

options:
  device:
    type: string
    default: any
    description: Device to listen on, e.g. eth0
  protocols:
    type: string
    description: |
      The ports on which Packetbeat can find each protocol, in
      space-separated protocol:port format.
    default: "http:80 http:8080 dns:53 mysql:3306 pgsql:5432 redis:6379 thrift:9090 mongodb:27017 memcached:11211"

templates/packetbeat.yml

# This file is controlled by Juju. Hand edits will not persist!
interfaces:
  device: {{ device }}
protocols:
  {% for protocol in protocols -%}
    {{ protocol }}:
      ports: {{ protocols[protocol] }}
  {% endfor %}
{% if elasticsearch -%}
output:
  elasticsearch:
    hosts: {{ elasticsearch }}
{% endif -%}
{% if principal_unit %}
shipper:
  name: {{ principal_unit }}
{% endif %}


reactive/packetbeat.py

from charms.reactive import when, when_any, remove_state
from charmhelpers.core.host import service_restart
from charmhelpers.core.hookenv import status_set
from elasticbeats import render_without_context


@when('beat.render')
@when_any('elasticsearch.available', 'logstash.available')
def render_packetbeat_template():
    # Render the config, clear the render request and restart the daemon.
    render_without_context('packetbeat.yml', '/etc/packetbeat/packetbeat.yml')
    remove_state('beat.render')
    service_restart('packetbeat')
    status_set('active', 'Packetbeat ready')

With all these pieces of the charm plugged in, run a `charm build` in your layer directory and you’re ready to deploy the packetbeat charm.

juju deploy cs:bundles/beats-core
    juju deploy cs:trusty/consul
    juju deploy ./builds/packetbeat

    juju add-relation packetbeat elasticsearch
    juju add-relation packetbeat consul

Consul is a great test: we can attach a single beat and monitor DNS and web traffic thanks to its UI.

juju set-config packetbeat protocols="dns:53 http:8500"

single beat and monitor DNS

Load up the kibana dashboard, and look under the “Discover” tab. There will be a packetbeat index, and data aggregating underneath it. Units requesting cluster DNS will start to pile on as well.

To test both of these metrics, browse around the Consul UI on port 8500. Additionally, you can ssh into a unit and run dig against the Consul DNS server to see the DNS metrics populate.

Populating the Packetbeat dashboard from here is a game of painting with data by the numbers.

Conclusion

Observability is a great feature to have in your deployments, whether it’s a brand new 12-factor application or the simplest of MVC apps. Being able to see inside the box is always a good feature for modern infrastructure to have.

This is why we’re excited about the Elastic stack! We can plug this into just about anything and immediately start gathering data. We’re looking forward to seeing how people bring in new beats to connect other metrics to existing bundles.

We’ve included this bundle in our Swarm, Kubernetes and big data bundles out of the box. I encourage everyone who is publishing bundles in the charm store to consider plugging in this bundle for production-grade observability.

on September 22, 2016 06:18 AM

Kubuntu beta; please test!

Valorie Zimmerman

Kubuntu 16.10 beta has been published. It is possible that it will be re-spun, but we have our beta images ready for testing now.

Please go to http://iso.qa.ubuntu.com/qatracker/milestones/367/builds, log in, click on the CD icon and download the image. I prefer zsync, which I download via the command line:

~$ cd /media/valorie/ISOs (or wherever you store your images)
~$ zsync http://cdimage.ubuntu.com/kubuntu/daily-live/20160921/yakkety-desktop-i386.iso.zsync

The other methods of downloading work as well, including wget or just downloading in your browser.

I tested usb-creator-kde, which has not always worked for me in the past, but this time it worked like a champ once the image was downloaded. Simply choose the proper ISO and the device to write to, and create the live image.

It took a while to figure out how to get my little Dell travel laptop to let me boot from USB (hit the Delete key as it is booting, quickly hit F12, choose legacy boot, and then finally I could actually choose to boot from USB). Secure Boot and UEFI make this more difficult these days.

I found no problems in the live session, including logging into wireless, so I went ahead and started firefox, logged into http://iso.qa.ubuntu.com/qatracker, chose my test, and reported my results. We need more folks to install on various equipment, including VMs.

When you run into bugs, try to report them via apport, which means running ubuntu-bug packagename on the command line. Once apport has collected the relevant information and logged into Launchpad, you can add some details, like a short description of the bug, and get the bug number. Please report the bug numbers on the QA site in your test report.

Thanks so much for helping us make Kubuntu friendly and high-quality.
on September 22, 2016 03:32 AM

September 21, 2016


I reinstalled my primary laptop (Lenovo x250) about 3 months ago (June 30, 2016), when I got a shiny new SSD, with a fresh Ubuntu 16.04 LTS image.

Just yesterday, I needed to test something in KVM.  Something that could only be tested in KVM.

kirkland@x250:~⟫ kvm
The program 'kvm' is currently not installed. You can install it by typing:
sudo apt install qemu-kvm
127 kirkland@x250:~⟫

I don't have KVM installed?  How is that even possible?  I used to be the maintainer of the virtualization stack in Ubuntu (kvm, qemu, libvirt, virt-manager, et al.)!  I lived and breathed virtualization on Ubuntu for years...

Alas, it seems that I use LXD for everything these days!  It's built into every Ubuntu 16.04 LTS server, and one 'apt install lxd' away from having it on your desktop.  With ZFS, instances start in under 3 seconds.  Snapshots, live migration, an image store, a REST API, all built in.  Try it out, if you haven't, it's great!

kirkland@x250:~⟫ time lxc launch ubuntu:x
Creating supreme-parakeet
Starting supreme-parakeet
real 0m1.851s
user 0m0.008s
sys 0m0.000s
kirkland@x250:~⟫ lxc exec supreme-parakeet bash
root@supreme-parakeet:~#

But that's enough of a LXD advertisement...back to the title of the blog post.

Here, I want to download an Ubuntu cloud image, and boot into it.  There's one extra step nowadays.  You need to create your "user data" and feed it into cloud-init.

First, create a simple text file, called "seed":

kirkland@x250:~⟫ cat seed
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
ssh_import_id: kirkland

Now, generate a "seed.img" disk, like this:

kirkland@x250:~⟫ cloud-localds seed.img seed
kirkland@x250:~⟫ ls -halF seed.img
-rw-rw-r-- 1 kirkland kirkland 366K Sep 20 17:12 seed.img

Next, download your image from cloud-images.ubuntu.com:

kirkland@x250:~⟫ wget http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img                                                                                                                                                          
--2016-09-20 17:13:57-- http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
Resolving cloud-images.ubuntu.com (cloud-images.ubuntu.com)... 91.189.88.141, 2001:67c:1360:8001:ffff:ffff:ffff:fffe
Connecting to cloud-images.ubuntu.com (cloud-images.ubuntu.com)|91.189.88.141|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 312606720 (298M) [application/octet-stream]
Saving to: ‘xenial-server-cloudimg-amd64-disk1.img’
xenial-server-cloudimg-amd64-disk1.img
100%[=================================] 298.12M 3.35MB/s in 88s
2016-09-20 17:15:25 (3.39 MB/s) - ‘xenial-server-cloudimg-amd64-disk1.img’ saved [312606720/312606720]

In the nominal case, you can now just launch KVM, and add your user data as a cdrom disk.  When it boots, you can login with "ubuntu" and "passw0rd", which we set in the seed:

kirkland@x250:~⟫ kvm -cdrom seed.img -hda xenial-server-cloudimg-amd64-disk1.img

Finally, let's enable more bells and whistles, and speed this VM up.  Let's give it all 4 CPUs, a healthy 8GB of memory, a virtio disk, and let's port forward ssh to 5555:

kirkland@x250:~⟫ kvm -m 8192 \
-smp 4 \
-cdrom seed.img \
-device e1000,netdev=user.0 \
-netdev user,id=user.0,hostfwd=tcp::5555-:22 \
-drive file=xenial-server-cloudimg-amd64-disk1.img,if=virtio,cache=writeback,index=0

And with that, we can now ssh into the VM, using the public SSH key imported via our seed:

kirkland@x250:~⟫ ssh -p 5555 ubuntu@localhost
The authenticity of host '[localhost]:5555 ([127.0.0.1]:5555)' can't be established.
RSA key fingerprint is SHA256:w2FyU6TcZVj1WuaBA799pCE5MLShHzwio8tn8XwKSdg.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes

Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-36-generic x86_64)

* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage

Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

ubuntu@ubuntu:~⟫

Cheers,
:-Dustin
on September 21, 2016 03:03 PM
The Vivaldi browser has taken the world of internet browsing by storm, and only months after its initial release it has found its way onto the computers of millions of power users. In this interview, Mr. Jon Stephenson von Tetzchner talks about how he got the idea to create this project and what to expect in the future.
on September 21, 2016 02:29 PM

I read that Neon Dev Edition Unstable Branches is moving to Plasma Wayland by default instead of X, so I thought it a good time to check out this week’s Plasma Wayland ISO. Joy of joys, it has gained the ability to work in VirtualBox and virt-manager since I last tried.  It’s full of flickers and Spectacle doesn’t take screenshots, but it’s otherwise perfectly functional.  Very exciting 🙂

 

on September 21, 2016 11:33 AM

September 20, 2016

Interview conducted in writing July-August 2016.

[Eric] Good morning, Kira. It is a pleasure to interview you today and to help you introduce your recently launched Alexa skill, “CloudStatus”. Can you provide a brief overview about what the skill does?

[Kira] Good morning, Papa! Thank you for inviting me.

CloudStatus allows users to check the service availability of any AWS region. On opening the skill, Alexa says which (if any) regions are experiencing service issues or were recently having problems. Then the user can inquire about the services in specific regions.

This skill was made at my dad’s request. He wanted to quickly see how AWS services were operating, without needing to open his laptop. As well as summarizing service issues for him, my dad thought CloudStatus would be a good opportunity for me to learn about retrieving and parsing web pages in Python.

All the data can be found in more detail at status.aws.amazon.com. But with CloudStatus, developers can hear AWS statuses with their Amazon Echo. Instead of scrolling through dozens of green checkmarks to find errors, users of CloudStatus listen to which services are having problems, as well as how many services are operating satisfactorily.

CloudStatus is intended for anyone who uses Amazon Web Services and wants to know about current (and recent) AWS problems. Eventually it might be expanded to talk about other clouds as well.

[Eric] Assuming I have an Amazon Echo, how do I install and use the CloudStatus Alexa skill?

[Kira] Just say “Alexa, enable CloudStatus skill”! Ask Alexa to “open CloudStatus” and she will give you a summary of regions with problems. An example of what she might say on the worst of days is:

“3 out of 11 AWS regions are experiencing service issues: Mumbai (ap-south-1), Tokyo (ap-northeast-1), Ireland (eu-west-1). 1 out of 11 AWS regions was having problems, but the issues have been resolved: Northern Virginia (us-east-1). The remaining 7 regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”

Or on most days:

“All 62 regional services in the 12 AWS regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”

Request any AWS region you are interested in, and Alexa will present you with current and recent service issues in that region.

Here’s the full recording of an example session: http://pub.alestic.com/alexa/cloudstatus/CloudStatus-Alexa-Skill-sample-20160908.mp3

[Eric] What technologies did you use to create the CloudStatus Alexa skill?

[Kira] I wrote CloudStatus using AWS Lambda, a service that manages servers and scaling for you. Developers need only pay for their servers when the code is called. AWS Lambda also displays metrics from Amazon CloudWatch.

Amazon CloudWatch gives statistics from the last couple weeks, such as the number of invocations, how long they took, and whether there were any errors. CloudWatch Logs is also a very useful service. It allows me to see all the errors and print() output from my code. Without it, I wouldn’t be able to debug my skill!

I used Amazon EC2 to build the Python modules necessary for my program. The modules (Requests and LXML) download and parse the AWS status page, so I can get the data I need. The Python packages and my code files are zipped and uploaded to AWS Lambda.
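A minimal sketch of that fetch-and-parse step with Requests and lxml (illustrative only, not the actual CloudStatus code; the XPath below is an assumption about the status page layout):

import requests
from lxml import html

def fetch_status_lines():
    """Download the AWS status page and return the text of its table rows.

    Only a sketch: the real CloudStatus code and the actual page structure
    (table layout, element classes) will differ.
    """
    response = requests.get("http://status.aws.amazon.com/", timeout=10)
    response.raise_for_status()
    tree = html.fromstring(response.content)
    rows = tree.xpath("//table//tr")  # assumption: statuses live in HTML tables
    return [" ".join(t.strip() for t in row.itertext() if t.strip())
            for row in rows]

if __name__ == "__main__":
    for line in fetch_status_lines()[:10]:
        print(line)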

Fun fact: My Lambda function is based in us-east-1. If AWS Lambda stops working in that region, you can’t use CloudStatus to check if Northern Virginia AWS Lambda is working! For that matter, CloudStatus will be completely dysfunctional.

[Eric] Why do you enjoy programming?

[Kira] Programming is so much fun and so rewarding! I enjoy making tools so I can be lazy.

Let’s rephrase that: Sometimes I’m repeatedly doing a non-programming activity—say, making a long list of equations for math practice. I think of two “random” numbers between one and a hundred (a human can’t actually come up with a random set of numbers) and pick an operation: addition, subtraction, multiplication, or division. After doing this several times, the activity begins to tire me. My brain starts to shut off and wants to do something more interesting. Then I realize that I’m doing the same thing over and over again. Hey! Why not make a program?

Computers can do so much in so little time. Unlike humans, they are capable of picking completely random items from a list. And they aren’t going to make mistakes. You can tell a computer to do the same thing hundreds of times, and it won’t be bored.

Finish the program, type in a command, and voila! Look at that page full of math problems. Plus, I can get a new one whenever I want, in just a couple seconds. Laziness in this case drives a person to put time and effort into ever-changing problem-solving, all so they don’t have to put time and effort into a dull, repetitive task. See http://threevirtues.com/.

But programming isn’t just for tools! I also enjoy making simple games and am learning about websites.

One downside to having computers do things for you: You can’t blame a computer for not doing what you told it to. It did do what you told it to; you just didn’t tell it to do what you thought you did.

Coding can be challenging (even frustrating) and it can be tempting to give up on a debug issue. But, oh, the thrill that comes after solving a difficult coding problem!

The problem-solving can be exciting even when a program is nowhere near finished. My second Alexa program wasn’t coming along that well when—finally!—I got her to say “One plus one is eleven.” and later “Three plus four is twelve.” Though it doesn’t seem that impressive, it showed me that I was getting somewhere and the next problem seemed reasonable.

[Eric] How did you get started programming with the Alexa Skills Kit (ASK)?

[Kira] My very first Alexa skill was based on an AWS Lambda blueprint called Color Expert (alexa-skills-kit-color-expert-python). A blueprint is a sample program that AWS programmers can copy and modify. In the sample skill, the user tells Alexa their favorite color and Alexa stores the color name. Then the user can ask Alexa what their favorite color is. I didn’t make many changes: maybe Alexa’s responses here and there, and I added the color “rainbow sparkles.”

I also made a skill called Calculator in which the user gets answers to simple equations.

Last year, I took a music history class. To help me study for the test, I created a trivia game from Reindeer Games, an Alexa Skills Kit template (see https://developer.amazon.com/public/community/post/TxDJWS16KUPVKO/New-Alexa-Skills-Kit-Template-Build-a-Trivia-Skill-in-under-an-Hour). That was a lot of fun and helped me to grow in my knowledge of how Alexa works behind the scenes.

[Eric] How does Alexa development differ from other programming you have done?

[Kira] At first Alexa was pretty overwhelming. It was so different from anything I’d ever done before, and there were lines and lines of unfamiliar code written by professional Amazon people.

I found the ASK blueprints and templates extremely helpful. Instead of just being a functional program, the code is commented so developers know why it’s there and are encouraged to play around with it.

Still, the pages of code can be scary. One thing new Alexa developers can try: Before modifying your blueprint, set up the skill and ask Alexa to run it. Everything she says from that point on is somewhere in your program! Find her response in the program and tweak it. The variable name is something like “speech_output” or “speechOutput.”

It’s a really cool experience making voice apps. You can make Alexa say ridiculous things in a serious voice! Because CloudStatus started with the Color Expert blueprint, my first successful edit ended with our Echo saying, “I now know your favorite color is Northern Virginia. You can ask me your favorite color by saying, ‘What’s my favorite color?’.”

Voice applications involve factors you never need to deal with in a text app. When the user is interacting through text, they can take as long as they want to read and respond. Speech must be concise so the listener understands the first time. Another challenge is that Alexa doesn’t necessarily know how to pronounce technical terms and foreign names, but the software is always improving.

One plus side to voice apps is not having to build your own language model. With text-based programs, I spend a considerable amount of time listing all the ways a person can answer “yes,” or request help. Luckily, with Alexa I don’t have to worry too much about how the user will phrase their sentences. Amazon already has an algorithm, and it’s constantly getting smarter! Hint: If you’re making your own skill, use some built-in Amazon intents, like AMAZON.YesIntent or AMAZON.HelpIntent.

[Eric] What challenges did you encounter as you built the CloudStatus Alexa skill?

[Kira] At first, I edited the code directly in the Lambda console. Pretty soon though, I needed to import modules that weren’t built in to Python. Now I keep my code and modules in the same directory on a personal computer. That directory gets zipped and uploaded to Lambda, so the modules are right there sitting next to the code.

One challenge of mine has been wanting to fix and improve everything at once. Naturally, there is an error practically every time I upload my code for testing. Isn’t that what testing is for? But when I modify everything instead of improving bit by bit, the bugs are more difficult to sort out. I’m slowly learning from my dad to make small changes and update often. “Ship it!” he cries regularly.

During development, I grew tired of constantly opening my code, modifying it, zipping it and the modules, uploading it to Lambda, and waiting for the Lambda function to save. Eventually I wrote a separate Bash program that lets me type “edit-cloudstatus” into my shell. The program runs unit tests and opens my code files in the Atom editor. After that, it calls the command “fileschanged” to automatically test and zip all the code every time I edit something or add a Python module. That was exciting!

I’ve found that the Alexa speech-to-text conversions aren’t always what I think they will be. For example, if I tell CloudStatus I want to know about “Northern Virginia,” it sends my code “northern Virginia” (lowercase then capitalized), whereas saying “Northern California” turns into “northern california” (all lowercase). To at least fix the capitalization inconsistencies, my dad suggested lowercasing the input and mapping it to the standardized AWS region code as soon as possible.
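A minimal sketch of that normalization idea (illustrative only; the mapping here is truncated and the real skill’s table is larger):

# Normalize spoken region names early: lowercase them and map to region codes,
# so "northern Virginia" and "northern virginia" behave the same way.
SPOKEN_TO_REGION = {
    "northern virginia": "us-east-1",
    "northern california": "us-west-1",
    "ireland": "eu-west-1",
    # ... the real skill would cover every region here
}

def normalize_region(spoken_name):
    """Return the AWS region code for a spoken region name, or None if unknown."""
    return SPOKEN_TO_REGION.get(spoken_name.strip().lower())

assert normalize_region("Northern Virginia") == "us-east-1"
assert normalize_region("northern california") == "us-west-1"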

[Eric] What Alexa skills do you plan on creating in the future?

[Kira] I will probably continue to work on CloudStatus for a while. There’s always something to improve, a feature to add, or something to learn about—right now it’s Speech Synthesis Markup Language (SSML). I don’t think it’s possible to finish a program for good!

My brother and I also want to learn about controlling our lights and thermostat with Alexa. Every time my family leaves the house, we say basically the same thing: “Alexa, turn off all the lights. Alexa, turn the kitchen light to twenty percent. Alexa, tell the thermostat we’re leaving.” I know it’s only three sentences, but wouldn’t it be easier to just say: “Alexa, start Leaving Home” or something like that? If I learned to control the lights, I could also make them flash and turn different colors, which would be super fun. :)

In August a new ASK template was released for decision tree skills. I want to make some sort of dichotomous key with that. https://developer.amazon.com/public/community/post/TxHGKH09BL2VA1/New-Alexa-Skills-Kit-Template-Step-by-Step-Guide-to-Build-a-Decision-Tree-Skill

[Eric] Do you have any advice for others who want to publish an Alexa skill?

[Kira]

  • Before submitting your skill for certification, make sure you read through the submission checklist. https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-submission-checklist#submission-checklist

  • Remember to check your skill’s home cards often. They are displayed in the Alexa App. Sometimes the text that Alexa pronounces should be different from the reader-friendly card content. For example, in CloudStatus, “N. Virginia (us-east-1)” might be easy to read, but Alexa is likely to pronounce it “En Virginia, Us [as in ‘we’] East 1.” I have to tell Alexa to say “northern virginia, u.s. east 1,” while leaving the card readable for humans.

  • Since readers can process text at their own pace, the home card may display more details than Alexa speaks, if necessary.

  • If you don’t want a card to accompany a specific response, remove the ‘card’ item from your response dict. Look for the function build_speechlet_response() or buildSpeechletResponse() (a sketch follows at the end of this list).

  • Never point your live/public skill at the $LATEST version of your code. The $LATEST version is for you to edit and test your code, and it’s where you catch errors.

  • If the skill raises errors frequently, don’t be intimidated! It’s part of the process of coding. To find out exactly what the problem is, read the “log streams” for your Lambda function. To print debug information to the logs, print() the information you want (Python) or use a console.log() statement (JavaScript/Node.js).

  • It helps me to keep a list of phrases to try, including words that the skill won’t understand. Make sure Alexa doesn’t raise an error and exit the skill, no matter what nonsense the user says.

  • Many great tips for designing voice interactions are on the ASK blog. https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-voice-design-best-practices

  • Have fun!
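For illustration, here is a minimal Python sketch of that response-building pattern, loosely modelled on the blueprint helper mentioned above (the exact field layout in the real blueprints may differ):

def build_speechlet_response(title, output, reprompt_text, should_end_session,
                             include_card=True):
    """Build an Alexa response dict; omit the card when include_card is False."""
    response = {
        "outputSpeech": {"type": "PlainText", "text": output},
        "reprompt": {
            "outputSpeech": {"type": "PlainText", "text": reprompt_text}
        },
        "shouldEndSession": should_end_session,
    }
    if include_card:
        # The card is what shows up in the Alexa App; drop it when not wanted.
        response["card"] = {"type": "Simple", "title": title, "content": output}
    return response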

In The News

Amazon had early access to this interview and to Kira and wrote an article about her in the Alexa Blog:

14-Year-Old Girl Creates CloudStatus Alexa Skill That Benefits AWS Developers

which was then picked up by VentureBeat:

A 14-year-old built an Alexa skill for checking the status of AWS

which was then copied, referenced, tweeted, and retweeted.

Original article and comments: https://alestic.com/2016/09/alexa-skill-aws-cloudstatus/

on September 20, 2016 04:15 AM

If you are a member of Launchpad’s beta testers team, you’ll now have a slightly different interface for selecting source packages in the Launchpad web interface, and we’d like to know if it goes wrong for you.

One of our longer-standing bugs has been #42298 (“package picker lists unpublished (invalid) packages”).  When selecting a package – for example, when filing a bug against Ubuntu, or if you select “Also affects distribution/package” on a bug – and using the “Choose…” link to pop up a picker widget, the resulting package picker has historically offered all possible source package names (or sometimes all possible source and binary package names) that Launchpad knows about, without much regard for whether they make sense in context.  For example, packages that were removed in Ubuntu 5.10, or packages that only exists in Debian, would be offered in search results, and to make matters worse search results were often ordered alphabetically by name rather than by relevance.  There was some work on this problem back in 2011 or so, but it suffered from performance problems and was never widely enabled.

We’ve now resurrected that work from 2011, fixed the performance problems, and converted all relevant views to use it.  You should now see something like this:

New package picker, showing search results for "pass"

Exact matches on either source or binary package names always come first, and we try to order other matches in a reasonable way as well.  The disclosure triangles alongside each package allow you to check for more details before you make a selection.

Please report any bugs you find with this new feature.  If all goes well, we’ll enable this for all users soon.

Update: as of 2016-09-22, this feature is enabled for all Launchpad users.

on September 20, 2016 12:37 AM

September 19, 2016

Our packaging team has been working very hard; however, we have a lack of active Kubuntu Developers involved right now. So we're asking for developers with a bit of extra time and some experience with KDE packages to look at our Frameworks, Plasma and Applications packaging in our staging PPAs, sign off, and upload them to the Ubuntu Archive.

If you have the time and permissions, please stop by #kubuntu-devel in IRC or Telegram and give us a shove across the beta timeline!
on September 19, 2016 10:53 PM

clusterhq_logo

Recently I signed ClusterHQ as a client. If you are unfamiliar with them, they provide a neat technology for managing data as part of the overall lifecycle of an application. You can learn more about them here.

I will be consulting with ClusterHQ to help them (a) build their community strategy, (b) find a great candidate as Senior Developer Evangelist, and (c) help to mentor that person in their role to be successful.

If you are looking for a new career, this could be a good opportunity. ClusterHQ are doing some interesting work, and if this role is a good fit for you, I will also be there to help you work within a crisply defined strategy and be successful in the execution. Think of it as having a friend on the inside. 🙂

You can learn more in the job description, but you should have these skills:

  • You are a deep full-stack cloud technologist. You have a track record of building distributed applications end-to-end.
  • You either have a Bachelor’s in Computer Science or are self-motivated and self-taught such that you don’t need one.
  • You are passionate about containers, data management, and building stateful applications in modern clusters.
  • You have a history of leadership and service in developer and DevOps communities, and you have a passion for making applications work.
  • You have expertise in lifecycle management of data.
  • You understand how developers and IT organizations consume cloud technologies, and are able to influence enterprise technology adoption outcomes based on that understanding.
  • You have great technical writing skills demonstrated via documentation, blog posts and other written work.
  • You are a social butterfly. You like meeting new people on and offline.
  • You are a great public speaker and are sought after for your expertise and presentation style.
  • You don’t mind charging your laptop and phone in airport lounges so are willing and eager to travel anywhere our developer communities live, and stay productive and professional on the road.
  • You like your weekend and evening time to focus on your outside-of-work passions, but don’t mind working irregular hours and weekends occasionally (as the job demands) to support hackathons, conferences, user groups, and other developer events.

ClusterHQ are primarily looking for help with:

  • Creating high-quality technical content for publication on our blog and other channels to show developers how to implement specific stateful container management technologies.
  • Spreading the word about container data services by speaking and sharing your expertise at relevant user groups and conferences.
  • Evangelizing stateful container management and ClusterHQ technologies to the Docker Swarm, Kubernetes, and Mesosphere communities, as well as to DevOps/IT organizations chartered with operational management of stateful containers.
  • Promoting the needs of developers and users to the ClusterHQ product & engineering team, so that we build the right products in the right way.
  • Supporting developers building containerized applications wherever they are, on forums, social media, and everywhere in between.

Pretty neat opportunity.

Interested?

If you are interested in this role, there are few options for next steps:

  1. You can apply directly by clicking here.
  2. Alternatively, if I know you, I would invite you to confidentially share your interest in this role by filling in my form here. This way I can get a good sense of who is interested and also recommend people I personally know and can vouch for. I will then reach out to those of you who this seems to be a good potential fit for and play a supporting role in brokering the conversation.

By the way, there are going to be a number of these kinds of opportunities shared here on my blog. So, be sure to subscribe to my posts if you want to keep up to date with the latest opportunities.

The post Looking For Talent For ClusterHQ appeared first on Jono Bacon.

on September 19, 2016 07:25 PM

For a few weeks we have been running the Snappy Playpen as a pet/research project already. Many great things have happened since then:

  • With the Playpen we now have a repository of great best-practice examples.
  • We brought together a lot of people who are excited about snaps, who worked together, collaborated, wrote plugins together and improved snapcraft and friends.
  • A number of cloud parts were put together by the team as well.
  • We landed quite a few high-quality snaps in the store.
  • We had lots of fun.

Opening the Sandpit

With our next Snappy Playpen event tomorrow, 20th September 2016, we want to extend the scheme. We are opening the Sandpit part of the Playpen!

One thing we realised in the last weeks is that we treated the Playpen more and more like a place where well-working, tested and well-understood snaps go to inspire people who are new to snapping software. We also saw that lots of fellow snappers kept their half-done snaps on their hard disks instead of sharing them and giving others the chance to finish them or get involved in fixing them. Time to change that, time for the Sandpit!

In the Sandpit things can get messy, but you get to explore and play around. It’s fun. Naturally things need to be light-weight, which is why we organise the Sandpit on just a simple wiki page. The way it works is that if you have a half-finished snap, you simply push it to a repo, add your name and the link to the wiki, so others get a chance to take a look and work together with you on it.

Tomorrow, 20th September 2016, we are going to get together again to help each other snap software, clean up old bits, fix things, explain, hang out and have a good time. If you want to join, you’re welcome. We’re on Gitter and on IRC.

  • WHEN: 2016-09-20
  • WHAT: Snappy Playpen event – opening the Sandpit
  • WHERE: Gitter and on IRC

Added bonus

As an added bonus, we are going to invite Michael Vogt, one of the core developers of snapd to the Ubuntu Community Q&A tomorrow. Join us at 15:00 UTC tomorrow on http://ubuntuonair.com and ask all the questions you always had!

See you tomorrow!

on September 19, 2016 01:38 PM

September 18, 2016

DNSync

Paul Tagliamonte

While setting up my new network at my house, I figured I’d do things right and set up an IPSec VPN (and a few other fancy bits). One thing that became annoying when I wasn’t on my LAN was that I’d have to fiddle with the DNS resolver to resolve names of machines on the LAN.

Since I hate fiddling with options when I need things to just work, the easiest way out was to make the DNS names actually resolve on the public internet.

A day or two later, with some Golang glue and AWS Route 53, I had code that would sit on my dnsmasq.leases file, watch inotify for IN_MODIFY signals, and sync the records to AWS Route 53.
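
The heart of it is a small loop. Here is a minimal sketch of the idea, not the actual DNSync code: it assumes the fsnotify package for the inotify watch and aws-sdk-go for Route 53, and the hosted zone ID and the domain suffix are placeholders.

// Minimal sketch of the DNSync idea, not the real implementation.
// Assumes github.com/fsnotify/fsnotify and github.com/aws/aws-sdk-go;
// "ZONE_ID" and the ".lan.example.org." suffix are placeholders.
package main

import (
	"bufio"
	"log"
	"os"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
	"github.com/fsnotify/fsnotify"
)

const leases = "/var/lib/misc/dnsmasq.leases"

// syncLeases reads dnsmasq's lease file ("expiry mac ip hostname clientid")
// and UPSERTs an A record for every host that has a name.
func syncLeases(svc *route53.Route53) {
	f, err := os.Open(leases)
	if err != nil {
		log.Println(err)
		return
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) < 4 || fields[3] == "*" {
			continue // unnamed lease, nothing to publish
		}
		ip, host := fields[2], fields[3]
		_, err := svc.ChangeResourceRecordSets(&route53.ChangeResourceRecordSetsInput{
			HostedZoneId: aws.String("ZONE_ID"),
			ChangeBatch: &route53.ChangeBatch{
				Changes: []*route53.Change{{
					Action: aws.String("UPSERT"),
					ResourceRecordSet: &route53.ResourceRecordSet{
						Name: aws.String(host + ".lan.example.org."),
						Type: aws.String("A"),
						TTL:  aws.Int64(60),
						ResourceRecords: []*route53.ResourceRecord{
							{Value: aws.String(ip)},
						},
					},
				}},
			},
		})
		if err != nil {
			log.Println(err)
		}
	}
}

func main() {
	svc := route53.New(session.Must(session.NewSession()))

	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()
	if err := watcher.Add(leases); err != nil {
		log.Fatal(err)
	}

	syncLeases(svc) // initial sync on startup
	for ev := range watcher.Events {
		if ev.Op&fsnotify.Write != 0 { // IN_MODIFY on the lease file
			syncLeases(svc)
		}
	}
}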

I pushed it up to my GitHub as DNSync.

PRs welcome!

on September 18, 2016 09:00 PM

September 16, 2016

One of the tools I use a lot to work with git repositories is Tig. This handy ncurses tool lets you browse your history, cherry-pick commits, do partial commits and a few other things. But one thing I wanted was to be able to start an interactive rebase from within Tig. This week I decided to dig into the documentation a bit to see if it was possible to do so.

Reading the manual I found out Tig is extensible: one can bind shortcut keys to trigger commands. The bound commands can make use of several state variables such as the current commit or the current branch. This makes it possible to use Tig as a commit selector for custom commands. Armed with this knowledge, I added these lines to $HOME/.tigrc:

bind main R !git rebase -i %(commit)^
bind diff R !git rebase -i %(commit)^

That worked! If you add these two lines to your .tigrc file, you can start Tig, scroll to the commit you want and press Shift+R to start the rebase from it. No more copying the commit id and going back to the command line!

Note: Shift+R is already bound to the refresh action in Tig, but this action can also be triggered with F5, so it's not really a problem.

on September 16, 2016 10:17 PM

Hello everyone! This is a guest post by Menno Smits, who works on the Juju team. He originally announced this great news in the Juju mailing list (which you should all subscribe to!), and I thought it was definitely worth announcing on the Planet. Stay tuned to the mailing list for more great announcements, which I feel are going to come now that we are moving to the RCs of Juju 2.0.

Juju 2.0 is just around the corner and there’s so much great stuff in the release. It really is streets ahead of the 1.x series.

One improvement that’s recently landed is that the Juju client will now work on any Linux distribution. Up until now, the client hasn’t been usable on variants of Linux for which Juju didn’t have explicit support (Ubuntu and CentOS). This has now been fixed – the client will now work on any Linux distribution. Testing has been done with Fedora, Debian and Arch, but any Linux distribution should now be OK to use.

It’s worth noting that when using local Juju binaries (i.e. a `juju` and `jujud` which you’ve built yourself or that someone has sent you), checks in the `juju bootstrap` command have been relaxed. This way, a Juju client running on any Linux flavour can bootstrap a controller running any of the supported Ubuntu series. Previously, it wasn’t possible for a client running a non-Ubuntu distribution to bootstrap a Juju controller using local Juju binaries.

All this is great news for people who want to get started with Juju but are not running Ubuntu on their workstations. These changes will be available in the Juju 2.0 rc1 release.


on September 16, 2016 04:57 PM

September 15, 2016

It’s Episode Twenty-Nine of Season-Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope and Martin Wimpress (just about) are here again.

Most of us are here, but one of us is busy and another was cut off part way through!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on September 15, 2016 08:15 PM
Here is a link to an excellent tutorial (licensed CC-BY-NC-SA 4) by Miguel Menéndez, which explains in Spanish, step by step, how to create an application for Ubuntu Phone in a pleasant and entertaining way, including examples, exercises, support on IRC and a Telegram group...

It is important to note that the course is still under development, with a new instalment each week, so it will be completed over the coming weeks.

Curso UT
To access it, simply download the application on your Ubuntu Phone:
https://uappexplorer.com/app/curso.innerzaurus
or visit this URL:
https://mimecar.gitbooks.io/curso-de-programacion-de-ubuntu-phone-touch/content/chapter1.html
on September 15, 2016 07:27 PM

September 14, 2016

Our upcoming release, Plasma 5.8, will be the first long-term supported (LTS) release of the Plasma 5 series. One great thing about this release is that it aligns support time-frames across the whole stack, from the desktop through Qt and the underlying operating systems. This makes Plasma 5.8 very attractive for users who need to rely on the stability of their computers.

Qt, Frameworks & Plasma

In the middle layer of the software stack, i.e. Qt, KDE Frameworks and Plasma, the support time-frames and conditions roughly look like this:

Qt 5.6

Qt 5.6 has been released in March as the first LTS release in the Qt 5 series. It comes with a 3-year long-term support guarantee, meaning it will receive patch releases providing bug fixes and security updates.

Frameworks 5.26

In tune with Plasma, during the recent Akademy we decided to give KDE Frameworks, the libraries that underlie Plasma and many KDE applications, 18 months of security support and fixes for major bugs, for example crashes. These updates will be shipped as needed for single frameworks and also appear as tags in the git repositories.

Plasma 5.8

The core of our long-term support promise is that Plasma 5.8 will receive at least 18 months of bugfix and security support from upstream KDE. Patch releases with bugfix, security and translation updates will be shipped in a Fibonacci rhythm.
To make this LTS extra reliable, we’ve concentrated the (still ongoing) development cycle for Plasma 5.8 on stability, bugfixes, performance improvements and overall polish. We want this to shine.
There’s one caveat, however: Wayland support is excluded from the long-term support promise, as it is still too experimental. X11 as display server is fully supported, of course.

Neon and Distros

You can enjoy these LTS releases from the source through a Neon flavor that ships an updated LTS stack based on Ubuntu’s 16.04 LTS release. openSUSE Leap, which focuses on stability and continuity, also ships Plasma 5.8, making it a perfect match.
The Plasma team encourages other distros to do the same.

Post LTS

After the 5.8 release, and during its support cycle, KDE will continue to release feature updates for Plasma which are supported through the next development cycle as usual.
Lars Knoll’s Qt roadmap talk (skip to 29:25 if you’re impatient and want to miss an otherwise exciting talk) proposes another Qt LTS release around 2018, which may serve as a base for future planning in the same direction.

It definitely makes a lot of sense to align support time-frames for releases vertically across the stack. This makes support for distributions considerably easier, creates a clearer base for planning for users (both private and institutional) and effectively leads to less headaches in daily life.

on September 14, 2016 01:15 PM

September 13, 2016

The Ubuntu OpenStack team is pleased to announce the general availability of OpenStack Newton B3 milestone in Ubuntu 16.10 and for Ubuntu 16.04 LTS via the Ubuntu Cloud Archive.

Ubuntu 16.04 LTS

You can enable the Ubuntu Cloud Archive pocket for OpenStack Newton on Ubuntu 16.04 installations by running the following commands:

sudo add-apt-repository cloud-archive:newton
sudo apt update

The Ubuntu Cloud Archive for Newton includes updates for Aodh, Barbican, Ceilometer, Cinder, Designate, Glance, Heat, Horizon, Ironic (6.1.0), Keystone, Manila, Networking-OVN, Neutron, Neutron-FWaaS, Neutron-LBaaS, Neutron-VPNaaS, Nova, and Trove.

You can see the full list of packages and versions here.

Ubuntu 16.10

No extra steps required; just start installing OpenStack!

Branch Package Builds

If you want to try out the latest master branch updates, or updates to stable branches, we are delivering continuously integrated packages on each upstream commit in the following PPAs:

sudo add-apt-repository ppa:openstack-ubuntu-testing/liberty
sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka
sudo add-apt-repository ppa:openstack-ubuntu-testing/newton

Bear in mind that these are built on each commit (we check for new commits every 30 minutes at the moment), so your mileage may vary from time to time.

Reporting bugs

Any issues please report bugs using the ‘ubuntu-bug’ tool:

sudo ubuntu-bug nova-conductor

This will ensure that bugs get logged in the right place in Launchpad.

Thanks and have fun!

Cheers,

Corey

(on behalf of the Ubuntu OpenStack team)


on September 13, 2016 10:27 AM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, 140 work hours have been dispatched among 10 paid contributors. Their reports are available:

  • Balint Reczey did 9.5 hours (out of 14.75 hours allocated + 2 remaining, thus keeping 7.25 extra hours for September).
  • Ben Hutchings did 14 hours (out of 14.75 hours allocated + 0.7 remaining, keeping 1.45 extra hours for September) but he did not publish his report yet.
  • Brian May did 14.75 hours.
  • Chris Lamb did 15 hours (out of 14.75 hours, thus keeping 0.45 hours for next month).
  • Emilio Pozuelo Monfort did 13.5 hours (out of 14.75 hours allocated + 0.5 remaining, thus keeping 2.95 hours extra hours for September).
  • Guido Günther did 9 hours.
  • Markus Koschany did 14.75 hours.
  • Ola Lundqvist did 15.2 hours (out of 14.5 hours assigned + 0.7 remaining).
  • Roberto C. Sanchez did 11 hours (out of 14.75h allocated, thus keeping 3.75 extra hours for September).
  • Thorsten Alteholz did 14.75 hours.

Evolution of the situation

The number of sponsored hours rose to 167 hours per month thanks to UR Communications BV joining as a gold sponsor (funding 1 day of work per month)!

In practice, we never distributed this amount of work per month because some sponsors did not renew in time and some of them might not even be able to renew at all.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file lists 29. It’s a small bump compared to last month, but almost all issues are assigned to someone.

Thanks to our sponsors

New sponsors are in bold.


on September 13, 2016 08:50 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #481 for the week September 5 – 11, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Chris Sirrs
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on September 13, 2016 12:02 AM

September 12, 2016

Are you interested in snapping software and need help?


There are a lot of good reasons for snapping software:

  • You get software out to millions of users: Ubuntu (snapd installed by default since Ubuntu 16.04 LTS), snapd available too on Arch, Debian, Gentoo, Fedora, openSUSE, openembedded, yocto and OpenWRT.
  • You get to define the experience: ship the stack the way you tested it. Just one simple test-scenario for you.
  • Building a snap is simple (one piece of YAML controls the build), publishing is instantaneous (one command to run, automatic review).
  • Multiple release channels in the store.

If you’re intrigued but need help to get started, tomorrow is a great time for this, as we’re going to have another Snappy Playpen event.

Tomorrow (13th Sept 2016) we are going to hang out on Gitter and IRC and will be there to answer your questions, work on snaps together and have fun!

In the Snappy Playpen project we collect best practices and work on getting snaps out there together. We’re a friendly bunch and look forward to meeting you!

on September 12, 2016 04:08 PM

Ubuntu took part in the Google Code-in contest as a mentoring organization last December and January. Google Code-In is a contest for 13–17 year old pre-university students who want to learn more about and get involved with open source. With the help of mentors from all participating organizations, the students completed tasks that had been set up by the organization. In the end, both sides win; the students are introduced to the open source communities (and if they win they get a prize trip and many other benefits) and the mentoring organizations get work done.

The contest itself ended in January 2016, and in June it was time for the grand prize winners’ trip to San Francisco, where I represented Ubuntu on the mentor side. I’m sure you are waiting to hear more, so let’s go!

The Trip

Meet & Greet

On Sunday evening, the trip started with a more or less informal Meet & Greet event, where mentors had the chance to meet the winners for their organization and the other way around. To recap, the winners for the Ubuntu organization were Matthew Allen from Australia and Daniyaal Rasheed from the United States. Congratulations! They, along with the other winners and many more contestants, did great work during the contest and I’m eagerly waiting to see more contributions from them in the future!

The event was indeed a great way to start the trip and let people socialize and get to know each other. The winners were also presented with the first surprise; a Nexus phone for everybody (along with other swag given to the students and mentors)!

The Campus

Heading into the new week and our first whole day together in San Francisco, we took buses from the hotel to the Google campus. After breakfast, we had some talks by mentors and the award ceremony with Chris DiBona, the director of open source at Google – or, quoting the man himself, the “special”. Without further ado, here are a few photos taken at the ceremony by the lovely Jeremy Allison (thanks!).

Photos: Matthew Allen; Daniyaal Rasheed; both winners and myself, the representing mentor for Ubuntu.

After the ceremonies, it was lunchtime. During lunch, students not from the US were hooked up with Googlers from their own home country – cool! Following lunch, we heard a bunch of talks from Googlers, some organization presentations by mentors and visited the Google Store (where all the winners got a gift card to spend on more Google swag) as well as the visitor centre.

Finally after dinner the buses headed back to the hotel for everybody to get some rest and prepare for the next day…

San Francisco activities

Tuesday was the fun day in San Francisco! In the morning, winners had two options: either a Segway Tour around the city or a visit to the Exploratorium science museum. I believe everybody had a nice time and nobody riding the Segways got seriously hurt either.

After getting everybody back together, it was time for some lunch again. We had lunch near the Ghirardelli Square and it was the perfect opportunity to get some nice dessert in ice cream and/or chocolate form for those who wanted it.

When we had finished all the eating, it was time to head for the Golden Gate bridge and surrounding areas (or hotel if you wanted some rest). Some of us walked the bridge (some even all the way to the other side and back), some around the parks nearby, guided by the local Googlers. It was definitely a sunny and windy day at the bridge at least!

After all these activities had been done and being rejoined by the people resting at the hotel, the whole group headed for an evening on a yacht! On the yacht we had a delicious dinner and drinks as well as lot of good discussions. Jeremy was on fire again with his cameras and he got some nice shots, including a mentors-only photo!

The Office

On Wednesday, our last day together, we walked to the Google office in San Francisco. After a wonderful breakfast we were up to more talks by Googlers, the last presentations by mentors, of course some more Google swag, lunch at the terrace, mini-tours inside the office, cake, chocolate, more candies and sweets etc.

Most importantly, the winners, mentors and parents alike had the last chance to get some discussions going before most people headed back home or other places to continue their journey.

Afterthoughts

I think it’s a great idea to involve young people in open source communities and get some work done. This contest is not only a contest. It isn’t only about contestants potentially being admitted to a great university or getting an internship at Google later. It also isn’t only about the organizations getting work done by other people.

It’s a great way to get like-minded people communicate with each other, starting from a young age.

It can help young people who might not be the most socially extroverted to find something they like doing.

It can potentially make more open source careers possible through the passion that these young people have.

Whether Ubuntu will apply to be a mentor organization next time depends much on the volunteers. If there are enough mentors who are willing to do the work – figuring out what tasks are suitable for the contestants, registering them and helping the contestants work their way through them – then I don’t see why Ubuntu would not be signing up again.

Personally, I can highly recommend applying again. It’s been a great ride. Thank you everybody, you know who you are!

Other blogs

François Revol (Haiku mentor) – a series of blog articles about the trip with even more details

on September 12, 2016 11:11 AM

September 10, 2016

As you may have read on our social media pages, we already have the winners of this cycle’s Wallpaper Contest.

The team would like to thank everyone for all the gorgeous photographs and digital art that have been submitted to the contest (close to one hundred in four weeks is amazing; even after announcing the winners we’ve been getting more submissions :P)

From left to right, the wallpapers are:


We’d like to announce, also, that new rules will be added for next year’s Wallpaper Contest:

  1. Please don’t submit entries to any other Ubuntu flavour wallpaper contest. If one wallpaper is selected in more than one distribution, we will have 2 copies of the same image in Ubuntu.
  2. Winning photos should not be submitted to any future Ubuntu wallpaper contest. That, of course, doesn’t prevent you from trying again the next cycle if you haven’t won the current one.
on September 10, 2016 07:03 PM


I have decided to move to using GitHub Pages and Pelican to create my personal ‘hub’ on the Internet. I am still undecided about moving content from WordPress to GitHub Pages.

This site will be removed on October 8th, 2016.


on September 10, 2016 06:35 PM

September 09, 2016

Click Hooks

Ted Gould

After being asked what I like about Click hooks, I thought it would be nice to write up a little bit of the why behind them in a blog post. The precursor to this story is that I told Colin Watson that he was wrong to build hooks like this; he kindly corrected me and helped me fix my code to match, but I still wasn't convinced. Now today I see some of the wisdom in the Click hook design and I'm happy to share it.

The standard way to think about hooks is as a way to react to changes to the system. If a new application is installed, then the hook gets information about the application and responds to the new data. This is how most libraries work, providing signals about the data that they maintain, and we apply that same logic to thinking about filesystem hooks. But filesystem hooks are different because the coherent state is harder to query. In your library you might respond to the signal for a few things, but in many code paths the chances are you'll just go through the list of original objects to do operations. With filesystem hooks that complete state is almost never used, only the caches that are created by the hooks themselves.

Click hooks work by creating a directory of symbolic links that matches the current state of the system, and then asking you to ensure your cache matches that state of the system. This seems inefficient because you have to determine which parts of your cache need to change, which get removed and which get added. But it results in better software, because your software, including your hooks, has errors in it. I'm sorry to be the first one to tell you, but there are bugs. If your software is 99% correct, there is still something it is doing wrong. When you have delta updates that update the cache, that error compounds and never gets completely corrected with each update, because the complete state is never examined. So slowly the quality of your cache gets worse; not awful, but worse. By transferring the current system state to the cache each time, you get the error rate of your software in the cache, but you don't get the compounded error rate of each delta. This adds up.
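
As a rough sketch of that reconciliation style, with invented paths and an invented cache format rather than Click's actual interface, a hook that rebuilds its cache from the complete symlink directory on every run could look like this:

// Illustrative sketch only: rebuild a cache from the complete state
// directory of symlinks instead of applying deltas. Paths and the cache
// format are made up for the example, not Click's real layout.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// rebuildCache regenerates the cache from stateDir on every run, so
// mistakes made by a previous (buggy) run cannot compound over time.
func rebuildCache(stateDir, cacheFile string) error {
	entries, err := os.ReadDir(stateDir)
	if err != nil {
		return err
	}

	var b strings.Builder
	for _, e := range entries {
		link := filepath.Join(stateDir, e.Name())
		target, err := os.Readlink(link)
		if err != nil {
			continue // not a symlink, skip it
		}
		// Whatever per-application data the cache needs would be derived
		// from the link target here; we just record the mapping.
		fmt.Fprintf(&b, "%s %s\n", e.Name(), target)
	}

	// Write the whole cache and swap it in atomically: the new complete
	// state replaces the old one rather than patching it in place.
	tmp := cacheFile + ".new"
	if err := os.WriteFile(tmp, []byte(b.String()), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, cacheFile)
}

func main() {
	if err := rebuildCache("/var/lib/example/hooks/state", "/var/cache/example/index"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

The cache is derived from the full state and swapped in whole, so whatever mistakes one run makes do not survive into the next.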

The design of the hooks system in Click might feel wrong as you start to implement one, but I think that after you create a few hooks you'll find there is wisdom in it. And as you use other hook systems in other platforms think about checking the system state to ensure you're always creating the best cache possible, even if the hook system there didn't force you to do it.

on September 09, 2016 03:52 PM

Introducing snapd-glib

Robert Ancell

World, meet snapd-glib. It's a new library that makes it easy for GLib based projects (e.g. software centres) to access the daemon that allows you to query, install and remove Snaps. If C is not for you, you can use all the functionality in Python (using GObject Introspection) and Vala. In the future it will support Qt/QML through a wrapper library.

snapd uses a REST API and snapd-glib very closely matches that. The behaviour is best understood by reading the documentation of that API. To give you a taste of how it works, here's an example that shows how to find and install the VLC snap.

Step 1: Connecting to snapd

The connection to snapd is controlled through the SnapdClient object. This object has all the methods required to communicate with snapd. Create and connect with:

    g_autoptr(SnapdClient) c = snapd_client_new ();
    if (!snapd_client_connect_sync (c, NULL, &error))
        // Something went wrong


Step 2: Find the VLC snap 

Asking snapd to perform a find causes it to contact the remote Snap store. This can take some time so consider using an asynchronous call for this. This is the synchronous version:

    g_autoptr(GPtrArray) snaps =
        snapd_client_find_sync (c,
                                SNAPD_FIND_FLAGS_NONE, "vlc",
                                NULL, NULL, &error);
    if (snaps == NULL)
        // Something went wrong
    for (int i = 0; i < snaps->len; i++) {
        SnapdSnap *snap = snaps->pdata[i];
        // Do something with this snap information
    }


Step 3: Authenticate 

Some methods require authorisation in the form of a Macaroon (the link is quite complex but in practice it's just a couple of strings). To get a Macaroon you need to provide credentials to snapd. In Ubuntu this is your Ubuntu account, but different snapd installations may use another authentication provider.

Convert credentials to authorization with:

    g_autoptr(SnapdAuthData) auth_data =
        snapd_login_sync (email, password, code,
                          NULL, &error);
    if (auth_data == NULL)
        return EXIT_FAILURE;

    snapd_client_set_auth_data (c, auth_data);

Once you have a Macaroon you can store it somewhere and re-use it next time you need it. Then the authorization can be created with:

    g_autoptr(SnapdAuthData) auth_data =
        snapd_auth_data_new (macaroon, discharges);
    snapd_client_set_auth_data (c, auth_data);

Step 4: Install VLC 

In step 2 we could determine that the VLC snap has the name "vlc". Since this involves downloading ~100 MB and is going to take some time, the asynchronous method is used. There is a callback that gives updates on the progress of the install and one that is called when the operation completes:
 
    snapd_client_install_async (c,
                                "vlc", NULL,
                                progress_cb, NULL,
                                NULL,
                                install_cb, NULL);


static void
progress_cb (SnapdClient *client,
             SnapdTask *main_task, GPtrArray *tasks,
             gpointer user_data)
{
    // Tell the user what's happening
}

static void
install_cb (GObject *object, GAsyncResult *result,
            gpointer user_data)
{
    g_autoptr(GError) error = NULL;

    if (snapd_client_install_finish (SNAPD_CLIENT (object),
                                     result, &error))
        // It installed!
    else
        // Something went wrong...
}


Conclusion 

With snapd-glib and the above code as a starting point you should be able to start integrating Snap support into your project. Have fun!
on September 09, 2016 12:02 AM

September 08, 2016

We just rolled out a new feature for Launchpad’s Git repository hosting: Git-based merge proposals can now be linked to Launchpad bugs.  This can be done manually from the web UI for the merge proposal, but normally you should just mention the Launchpad bug in the commit message of one of the commits you want to merge.  The required commit message text to link to bugs #XXX and #YYY looks like this:

LP: #XXX, #YYY

This is the same form used for Launchpad bug references in debian/changelog files in source packages, and the general approach of mentioning bugs in commit messages is similar to that of various other hosting sites.

Bugs are not automatically closed when merge proposals land, because the policy for when that should happen varies from project to project: for example, projects often only close bugs when they make releases, or when their code is deployed to production sites.

Users familiar with Bazaar on Launchpad should note that the model for Git bug linking is slightly different: bugs are linked to merge proposals rather than to individual branches.  This difference is mainly because individual branches within a Git repository are often much more ephemeral than Bazaar branches.

Documentation is here, along with other details of Launchpad’s Git hosting.

on September 08, 2016 02:21 PM
I am going to Nextcloud Conference

Woah, time has gone by in a rush since my last update, and a lot of things have happened since then. You could read a lot about it on the various tech news sites, and the articles include that I have joined forces with my former colleagues and long-time friends to help Nextcloud thrive. Since then, we have released Nextcloud 9 and 10, published Android and iOS clients and also have a themed desktop client. Around us is a vivid community of all sorts of contributors, we are working on projects together with Collabora, Canonical and others, and commercially we have also closed the first deals: cheers to our customers.

Next up is our Nextcloud Conference in Berlin, in the main building of the TU Berlin, from September 16th to 22nd. We are really glad and excited to have found high-profile keynote speakers for our event. Karen Sandler is executive director of the NGO Software Freedom Conservancy, and Jane Silber comes as CEO of the well-known Canonical, sponsor of Ubuntu. And there will be further thrilling talks! On Friday we are focused on the usage of Nextcloud within an enterprise environment, while on Saturday and Sunday we will have mostly lightning talks and workshops after the keynotes. The days following are dedicated to hackathons and ad-hoc meetings.

If you would like to meet me or any of the other contributors, you know where to find us! If you would like to join, please register, so our dear Jos can organize enough food and drinks so that nobody needs to starve or become parched.

on September 08, 2016 12:00 PM

September 07, 2016

New software: sicherboot

Julian Andres Klode


Today, I wrote sicherboot, a tool to integrate systemd-boot into a Linux distribution in an entirely new way: with secure boot support. To be precise: the use case here is to only run trusted code, which then unlocks an otherwise fully encrypted disk, as in my setup:


If you want, sicherboot automatically creates db, KEK, and PK keys, and puts the public keys on your EFI System Partition (ESP) together with the KeyTool tool, so you can enroll the keys in UEFI. You can of course also use other keys, you just need to drop a db.crt and a db.key file into /etc/sicherboot/keys. It would be nice if sicherboot could enroll the keys directly in Linux, but there seems to be a bug in efitools preventing that at the moment. For some background: The Platform Key (PK) signs the Key Exchange Key (KEK) which signs the database key (db). The db key is the one signing binaries.

sicherboot also handles installing new kernels to your ESP. For this, it combines the kernel with its initramfs into one executable UEFI image, and then signs that. Combined with a fully encrypted disk setup, this ensures that only you can run UEFI binaries on the system, and attackers cannot boot any other operating system or modify parts of your operating system (except for, well, any block of your encrypted data, as XTS does not authenticate the data; but then you do have to know which blocks are which, which is somewhat hard).

sicherboot integrates with various parts of Debian: it can work together with dracut via an evil hack (diverting dracut’s kernel/postinst.d config file, so we can run sicherboot after running dracut), it should support initramfs-tools (untested), and it also integrates with systemd upgrades via triggers on the /usr/lib/systemd/boot/efi directory.

Currently sicherboot only supports Debian-style setups with /boot/vmlinuz-<version> and /boot/initrd.img-<version> files, it cannot automatically create combined boot images from or install boot loader entries for other naming schemes yet. Fixing that should be trivial though, with a configuration setting and some eval magic (or string substitution).

Future planned features include: (1) support for multiple ESP partitions, so you can have a fallback partition on a different drive (think RAID-type situation: keep one ESP on each drive, so you can remove a failing one); and (2) a tool to create a self-contained rescue disk image from a directory (which will act as the initramfs) and a kernel (falling back to a vmlinuz file).

It might also be interesting to add support for other bootloaders and setups, so you could automatically sign a grub cryptodisk image for example. Not sure how much sense that makes.

I published the source at https://github.com/julian-klode/sicherboot (MIT licensed) and uploaded the package to Debian, it should enter the NEW queue soon (or be in NEW by the time you read this). Give it a try, and let me know what you think.


Filed under: Debian, sicherboot
on September 07, 2016 04:09 PM

It’s been a while since the last article about writing GStreamer plugins in Rust, so here is a short (or not so short?) update on the changes since then. You might also want to attend my talk at the GStreamer Conference on 10-11 October 2016 in Berlin about this topic.

At this point it’s still only with the same elements as before (HTTP source, file sink and source), but the next step is going to be something more useful (an element processing actual data, parser or demuxer is not decided yet) now that I’m happy with the general infrastructure. You can still find the code in the same place as before on GitHub here, and that’s where also all updates are going to be.

The main sections here will be Error Handling, Threading and Asynchronous IO.

Error Handling & Panics

First of all let’s get started with a rather big change that shows some benefits of Rust over C. There are two types of errors we care about here: expected errors and unexpected errors.

Expected Errors

In GLib based libraries we usually report errors with some kind of boolean return value plus an optional GError that allows propagating further information about what exactly went wrong to the caller, but also to the user. Bindings sometimes convert these directly into exceptions of the target language or whatever construct exists there.

Unfortunately, in GStreamer we use GErrors not very often. Consider for example GstBaseSrc (in pseudo-C++/Java/… for simplicity):

class BaseSrc {
    ...
    virtual gboolean start();
    virtual gboolean stop();
    virtual GstFlowReturn create(GstBuffer ** buffer);
    ...
}

For start()/stop() there is just a boolean, and for create() there is at least an enum with a few variants. This is far from ideal, so what is additionally required of implementors of those virtual methods is that they post error messages with further details if something goes wrong. Those are propagated out of the normal control flow via the GstBus to the surrounding bins and in the end the application. It would be much nicer if instead we had GErrors there and made it mandatory for implementors to return one if something goes wrong. These could still be converted to error messages, but at a central place then. Something to think about for the next major version of GStreamer.

This is of course only for expected errors, that is, for things where we know that something can go wrong and want to report that.

Rust

In Rust this problem is solved in a similar way, see the huge chapter about error handling in the documentation. You basically return either the successful result, or something very similar to a GError:

trait Src {
    ...
    fn start(&mut self) -> Result<(), ErrorMessage>;
    fn stop(&mut self) -> Result<(), ErrorMessage>;
    fn create(&mut self, buffer: &mut [u8]) -> Result<usize, FlowError>;
    ...
}

Result<T, E> is the type behind that, and it comes with convenient macros for propagating errors upwards (try!()), chaining multiple failing calls and/or converting errors (map(), and_then(), map_err(), or_else(), etc.), and libraries that make defining errors, with all the glue code required for combining different error types from different parts of the code, easier.

Similar to Result, there is also Option, which can be Some(x) or None, to signal the possible absence of a value. It works similarly and has a similar API, but is generally not meant for error handling. It’s now used instead of GST_CLOCK_TIME_NONE (aka u64::MAX) to signal the absence of e.g. a stop position of a seek, or the absence of a known size of the stream. It’s more explicit than giving one specific integer value a completely different meaning.

How is this different?

The most important difference from my point of view here is that you must handle errors in one way or another. Otherwise the compiler won’t accept your code. If something can fail, you must explicitly handle this and can’t just silently ignore the possibility of failure, while in C people tend to just ignore error return values and assume that things went fine.

What’s ErrorMessage and FlowError, what else?

As you probably expect, ErrorMessage maps to the GStreamer concept of error messages and contains exactly the same kind of information. In Rust this is implemented slightly different but in the end results in the same thing. The main difference here is that whenever e.g. start fails, you must provide an error message and can’t just fail silently. That error message can then be used by the caller, and e.g. be posted on the bus (and that’s exactly what happens).

FlowError is basically the negative part (the errors or otherwise non-successful results) of GstFlowReturn:

pub enum FlowError {
    NotLinked,
    Flushing,
    Eos,
    NotNegotiated(ErrorMessage),
    Error(ErrorMessage),
}

Similarly, for the actual errors (NotNegotiated and Error), an actual error message must be provided and that then gets used by the caller (and is posted on the bus).

And in the same way, if setting an URI fails we now return a Result<(), UriError>, which then reports the error properly to GStreamer.

In summary, if something goes wrong, we know about that, have to handle/report that and have an error message to post on the bus.

Macros are awesome

As a side note, creating error messages for GStreamer is not too convenient, as they want information like the current source file, line number, function, etc. Like in C, I’ve created a macro to make such an error message. Unlike in C, macros in Rust are actually awesome though and not just arbitrary text substitution. Instead they work via pattern matching and allow you to distinguish all kinds of different cases, can be recursive and are somewhat typed (expression vs. statement vs. block of code vs. type name, …).

Unexpected Errors

So this was about expected errors so far, which have to be handled explicitly in Rust but not in C, and for which we have some kind of data structure to pass around. What about the other cases, the errors that will never happen (but usually do sooner or later) because your program would just be completely broken then and all your assumptions wrong, and you wouldn’t know what to do in those cases anyway.

In C with GLib we usually have 3 ways of handling these. 1) Not at all (and crashing, doing something wrong, deadlocking, deleting all your files, …), 2) explicitly asserting what the assumptions in the code are and crashing cleanly otherwise (SIGABRT), or 3) returning some default value from the function but just returning immediately and printing a warning instead of going on.

None of these 3 cases can be handled by the caller in any way, which seems fair because they should never happen, and if they do we wouldn’t know what to do anyway. 1) is obviously the least desirable but the most common, 3) is only slightly better (you get a warning, but usually sooner or later something will crash anyway because you’re in an inconsistent state) and 2) is the cleanest. However 2) is nothing you really want either; your application should somehow be able to return to a clean state if it can (by e.g. storing the current user data, stopping everything and loading up a new UI with the stored user data and some dialog).

Rust

Of course no Rust code should ever run into case 1) above and randomly crash, cause memory corruptions or similar. But of course this will also happen due to bugs in Rust itself, using unsafe code, or code wrapping e.g. a C library. There’s not really anything that can be done about this.

For the other two cases there is however: catching panics. Whenever something goes wrong in unexpected ways, the corresponding Rust code can call the panic!() macro in one way or another. Like via assertions, or when “asserting” that a Result is never the error case by calling unwrap() on it (you don’t have to handle errors, but you have to explicitly opt in to ignoring them by calling unwrap()).

What happens from there on is similar to exception handling in other languages (unless you compiled your code so that panics automatically kill the application). The stack gets unwound, everything gets cleaned up on the way, and at some point either everything stops or someone catches that. The boundary for the unwinding is either your main() in Rust, or if the code is called from C, then at that exact point (i.e. for the GStreamer plugins at the point where functions are called from GStreamer).

So what?

At the point where GStreamer calls into the Rust code, we now catch all unwinds that might happen and remember that one happened. This is then converted into a GStreamer error message (so that the application can handle that in a meaningful way), and by remembering it we prevent any further calls into the Rust code, immediately turning them into error messages too and returning.

This allows keeping the inconsistent state inside the element and allows the application to e.g. remove the element and replace it with something else, restart the pipeline, or do whatever else it wants to do. Assertions are always local to the element and are not going to take down the whole application!

Threading

The other major change that happened is that Sink and Source are now single-threaded. There is no reason why the code would have to worry about threading as everything happens in exactly one thread (the streaming thread), except for the setting/getting of the URI (and possibly other “one-time” settings in the future).

To solve that, at the translation layer between C and Rust there is now a (Rust!) wrapper object that handles all the threading (in Rust with Mutexes, which work like the ones in C++, or atomic booleans/integers), stores the URI separately from the Source/Sink and just passes the URI to the start() function. This made the code much cleaner and made it even simpler to write new sources or sinks. No more multi-threading headaches.

I think that we should in general move to such a simpler model in GStreamer and not require a full-fledged, multi-threaded GstElement subclass to be implemented, but instead something more use-case oriented (Source, sink, encoder, decoder, …) that has a single threaded API and hides all the gory details of GstElement. You don’t have to know these in most cases, so you shouldn’t have to know them as is required right now.

Simpler Source/Sink Traits

Overall the two traits look like this now, and that’s all you have to implement for a new source or sink:

pub type UriValidator = Fn(&Url) -> Result<(), UriError>;

pub trait Source {
    fn uri_validator(&self) -> Box<UriValidator>;

    fn is_seekable(&self) -> bool;
    fn get_size(&self) -> Option<u64>;

    fn start(&mut self, uri: Url) -> Result<(), ErrorMessage>;
    fn stop(&mut self) -> Result<(), ErrorMessage>;
    fn fill(&mut self, offset: u64, data: &mut [u8]) -> Result<usize, FlowError>;
    fn seek(&mut self, start: u64, stop: Option<u64>) -> Result<(), ErrorMessage>;
}

pub trait Sink {
    fn uri_validator(&self) -> Box<UriValidator>;

    fn start(&mut self, uri: Url) -> Result<(), ErrorMessage>;
    fn stop(&mut self) -> Result<(), ErrorMessage>;

    fn render(&mut self, data: &[u8]) -> Result<(), FlowError>;
}

Asynchronous IO

Last time I mentioned that a huge missing feature was asynchronous IO, in a composable way. There is some news on that front now: there’s an abstract implementation of futures and a set of higher-level APIs around mio for doing actual IO, called tokio. Independent of that there’s also futures-cpupool, which allows running arbitrary calculations as futures on threads of a thread pool.

Recently the HTTP library Hyper, as used by the HTTP source (and Servo), also got a branch that moves it to tokio to allow asynchronous IO. Once that has landed, it can relatively easily be used inside the HTTP source to make it possible to interrupt HTTP requests at any time.

It seems like this area moves into a very promising direction now, solving my biggest technical concern in a very pleasant way.

on September 07, 2016 12:37 PM

Releasing the 4.1.0 Ubuntu SDK IDE

Ubuntu App Developer Blog

We are happy to announce the release of the Ubuntu SDK IDE 4.1.0 for the Trusty, Xenial and Yakkety Ubuntu series.

The testing phase took longer than we had expected, but finally we are ready. To compensate for this delay we have even upgraded the IDE to the most recent QtCreator 4.1.0.

Based on QtCreator 4.1.0

We have based the new IDE on the most recent QtCreator upstream release, which brings a lot of new features and fixes. To see what’s new, just check out: http://blog.qt.io/blog/2016/08/25/qt-creator-4-1-0-released/.

LXD based backend

The click chroot based builders are now deprecated. LXD allows us to download and use pre-built SDK images instead of having to bootstrap them every time a new build target is created. These LXD containers are used to run the applications from the IDE, which means that the host machine of the SDK IDE does not need any runtime dependencies.

Get started

It is good to know that all existing schroot based builders will not be used by the IDE anymore. The click chroots will remain on the host but will be decoupled from the Ubuntu SDK IDE. If they are not required otherwise just remove them using the Ubuntu dialog in Tools->Options.

If the beta IDE was already in use, make sure to recreate all containers; there were some bugs in the images that we do not fix automatically.

To get the new IDE use:

sudo add-apt-repository ppa:ubuntu-sdk-team/ppa

sudo apt update && sudo apt install ubuntu-sdk-ide

Check our first blog post about the LXD based IDE for more detailed instructions:

https://developer.ubuntu.com/en/blog/2016/06/14/calling-testers-new-ubuntu-sdk-ide-post/

on September 07, 2016 11:04 AM

The QtCon / Akademy organizers have published the videos of last weekend’s conference presentations. If you’re interested in the topic, you can watch the video of my presentation about the KDE Software Store here:

I’ve also uploaded my slides, and you can find the rest of the QtCon presentation videos here.

on September 07, 2016 09:37 AM

September 06, 2016


For some weeks now on the Ubuntu Community Team we’ve been setting aside Tuesdays as Taco “Snappy Playpen” Tuesdays.

Join us on Tuesday 6th September 2016 in #snappy on freenode IRC or on Gitter.

The general goal of the Snappy Playpen is simple:-

  • Create snaps for a broad range of interesting, popular, or fun applications, games and utilities for desktop, IoT and Server
  • Exercise & improve Snapcraft and snapd on multiple Linux distros
  • Answer questions from Snappy users and developers

This week we’re focusing on the following, but welcome all contributions, as always:-

We use GitHub and CodeReviewHub to manage the development of our snaps.

See you there! 😀

on September 06, 2016 06:59 AM

September 05, 2016

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

This month's report is rather light since I was away on vacation for two weeks.

Kali related work

The new pkg-security team is working full steam and I reviewed/sponsored many packages during the month: polenum, accheck, braa, t50, ncrack, websploit.

I filed bug #834515 against sbuild since sbuild-createchroot was no longer usable for kali-rolling due to the embedded dash. That misfeature has been reverted and implemented through an explicit option.

I brought #832163 to the attention of the ftpmasters since we had unexpected packages in the standard section (they were discovered in the Kali live ISO while we did not want them).

I uploaded two fontconfig NMUs to finally push to Debian a somewhat cleaner fix for the problem of various captions being displayed as squares after a font upgrade (see #828037 and #835142).

I tested (twice) a live-build patch from Adrian Gibanel Lopez implementing EFI boot with grub and merged it into the official git repository (see #731709).

I filed bug #835983 on python-pypdf2 since it has an invalid dependency forbidding co-installation with python-pypdf.

I orphaned splint since its maintainer was missing in action (MIA) and immediately made a QA upload to fix the RC bug which kicked it out of testing (this package is a build dependency of a Kali package).

django-jsonfield

I wrote a patch to make python-django-jsonfield compatible with Django 1.10 (#828668) and I committed that patch in the upstream repository.

Distro Tracker

I made some changes to make the codebase compatible with Django 1.10 (and added Django 1.10 to the tox test matrix). I added a “Debian Maintainer Dashboard” link next to people’s name on request of Lucas Nussbaum (#830548).

I made a preliminary review of Paul Wise’s patch to add multiarch hints (#833623) and improved the handling of the mailbot when it gets MIME headers referencing an unknown charset (like “cp-850”; Python only knows of “cp850”).

I also helped Peter Palfrader to enable a .onion address for tracker.debian.org; see onion.debian.org for the full list of services available over Tor.

Misc stuff

I updated my letsencrypt.sh salt formula to work with the latest version of letsencrypt.sh (0.2.0)

I merged updated translations for the Debian Administrator’s Handbook from weblate.org and uploaded a new version to Debian.

Thanks

See you next month for a new summary of my activities.


on September 05, 2016 10:03 AM

go-haversine

Paul Tagliamonte

In the spirit of blogging about some of the code I’ve written in the past year or two, I wrote a small utility library called go-haversine, which uses the Haversine Formula to compute the distance between two points.

This is super helpful when working with GPS data - but remember, this assumes everything’s squarely on the face of the planet.
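
For reference, the formula itself only takes a few lines; here is a standalone sketch of it (my own illustration, not the go-haversine API):

// Standalone haversine sketch; just the formula, not the go-haversine API.
package main

import (
	"fmt"
	"math"
)

const earthRadiusKm = 6371.0 // mean Earth radius; this is the spherical assumption

// haversine returns the great-circle distance in kilometres between two
// points given as latitude/longitude in degrees.
func haversine(lat1, lon1, lat2, lon2 float64) float64 {
	toRad := func(deg float64) float64 { return deg * math.Pi / 180 }

	dLat := toRad(lat2 - lat1)
	dLon := toRad(lon2 - lon1)

	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*
			math.Sin(dLon/2)*math.Sin(dLon/2)
	c := 2 * math.Atan2(math.Sqrt(a), math.Sqrt(1-a))
	return earthRadiusKm * c
}

func main() {
	// Distance between Washington, DC and New York City, roughly 330 km.
	fmt.Printf("%.1f km\n", haversine(38.9072, -77.0369, 40.7128, -74.0060))
}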

on September 05, 2016 02:52 AM

September 04, 2016

Talking Dollars

Stephen Michael Kellat

One of the hardest topics to talk about in a blog post is business. When it comes to the world of free software, it is generally assumed that there is an algorithm to take care of that. There are also folks out there who are dismissive of "business concerns" and claim that nobody should be concerned about your business model, while they themselves receive compensation for work in someone else's business model.

I haven't had much time, due to the pressures of my job, to spend on the fundraising campaign to raise money to get out of my job. Yes, it is an unfortunate catch. There was a research project proposed that may need some tweaks now but fundamentally remains concerned with reviewing access methods to the Internet via satellite. Considering the recent SpaceX disaster that destroyed Facebook's hardware that was to be deployed to space for Internet.org-based access, it is still a live topic without much open or published research. The basis for even doing such a thing started with a satellite called PACSAT launched in the early 1990s, but to hear talk of these satellites today you'd think it was a totally new innovation.

By the numbers:

  • Total Sought: $80,000.00
  • Year 1 Annual Compensation: $20,000.00
  • Year 2 Annual Compensation: $20,000.00
  • Office Rent (Both Years): $12,000.00
  • Communications (Both Years): $5,000.00
  • Printing (Year Two): $3,000.00
  • Travel (Year Two): $2,000.00
  • Taxes & Other Compliance Costs (Year Zero): $18,000.00

Now, these numbers at immediate face value look bizarre. What could cost so much? Isn't research free to conduct? Shouldn't anyone be glad they have a great employer?

First and foremost, bear in mind that I am a civil servant. If you are having a difficult time watching news about the US presidential election, imagine what it looks like to me. The outcome of the election leads to a new chief executive, and that may lead to many changes at work. None of the impending scenarios that may take effect as of inauguration at 12:01 PM Eastern Time on January 20, 2017, look that great.

Research isn't free to conduct. It takes time and that has a cost. Printing the results of research and presenting it somewhere also costs money, too. Working out of my home is not as doable as it might have been previously due to the number of family members being housed under one roof making it hard to have work space set aside. On the other hand there is a nice luthier's workspace here.

Some specific structures were decided upon in putting this together. First and foremost was the need for leasing office space. As noted above, I don't have space to work out of home easily. Since the original proposal happened to have some actual functional research to it involving the mounting of antenna masts and other equipment, finding space elsewhere would be important. I put a new roof on the house not that long ago and do not have that much space to place antennae.

This would have been structured as sole proprietor. That means that the judgment call was made that the project was small enough to not require the founding of a company to sustain a time-limited endeavor. After canvassing multiple potential fiscal sponsors, there was not a suitable match available in northeast Ohio that could be found either. Apparently smaller questions like this don't fit places like an already existing aerospace institute (not big enough) and too many local academic institutions are too focused on social justice/LGBT issues nowadays to do much cross-functional information science research. Paying the various fees to the Ohio Secretary of State for incorporation plus the minimum of $850 in user fees to the Internal Revenue Service itself to ask it for non-profit status was thought to be overkill for just one research project alone.

In the sole proprietor status, all money received would book as income directly to me and I would be taxed on all of it. I would take a direct tax hit in one year even though the project is designed to run for two years. Ideally the running time was planned to be January 2017 through December 2018. Considering who my current employer as a civil servant happens to be, I'm actually well-trained for handling the paperwork involved in such issues and handling compliance in reporting the income. That's also why such a hefty chunk is broken out in the amounts above for paying taxes as well as covering other compliance costs. Income isn't tax-free when you're starting from scratch. This is the least complicated option financially and structurally.

As to the other figures, they're not that hard. The annual compensation is a bit of a cut in pay from what I currently earn per year. I would be able to pay the mortgage but won't be buying diamond rings for anybody any time soon. Telecommunications costs are projected based upon the not very competitive market locally. Printing would involve having copies of the project final report produced for distribution. Travel would involve getting to a conference location to speak.

These aren't the simplest decisions to make on how to proceed in setting things up. They are the difference between trying to proceed and giving up. As we continue to reinvent the wheel in providing Internet via satellite yet don't have a strong literature base about such forms of data transmission, there is a hole to be filled. Waiting for the perfect fiscal structure to support such research may easily result in the research becoming moot as time slips away.

Some dollars have actually been received. Admittedly the goal is still pretty far away. I've not come close to "packing it in" and giving up just yet. Funds can still be donated here: https://www.generosity.com/education-fundraising/fixing-potholes-of-the-information-superhighway

Creative Commons License
Talking Dollars by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://identi.ca/alpacaherder/note/GpHahBCeT7uy5xDPPziBhA.

on September 04, 2016 10:46 PM

Harald and I gave a talk at QtCon Akademy about KDE neon and how it is modernising and smoothing the KDE development process.

Read the slides and watch the video of us in action (sound improves after 10 minutes, stay with it).

 

on September 04, 2016 03:47 PM