August 19, 2017

conjure-up dev summary: aws native integration, vsphere <3, and ADDONS

We have 3 killer features in this recent development cycle, aimed at bringing you closer to your cloud provider of choice, giving you more advanced configuration options, and letting you Just Do More with our introduction of addons.

AWS Native Integration

conjure-up has gained support for native integration with AWS. This support is enabled by default in our Kubernetes spells. This allows the Kubernetes Controller Manager to automatically provision the AWS resources that it needs to integrate with the Elastic Load Balancer or Elastic Block Storage for persistent volume storage.

Our documentation has been updated for this feature as well; check out how to set up ELB for Kubernetes and how to set up EBS for Kubernetes.
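
To give a feel for what this integration enables, here is a minimal sketch, assuming a cluster deployed with the AWS integration active: a Service of type LoadBalancer is backed by an ELB, and a StorageClass can provision EBS-backed persistent volumes. The resource names (ebs-gp2, web-elb, the app: web selector) are purely illustrative, not something the spell creates for you.

# Illustrative only: an EBS-backed StorageClass and an ELB-backed Service
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
apiVersion: v1
kind: Service
metadata:
  name: web-elb
spec:
  type: LoadBalancer   # the Controller Manager provisions an ELB for this
  selector:
    app: web
  ports:
  - port: 80
EOF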

vSphere

conjure-up is able to talk to the vSphere API endpoint and obtain the necessary information to give the user more control over which datacenter to use, which networks to use for the primary/external networks, and which datastore is preferred.

For more information and an extended walkthrough of our vSphere support, click here.

Addons

conjure-up has gained the ability to support "addons". This enables anyone who has a product that sits on top of Kubernetes or OpenStack to offer a fully working, reproducible, and guided process for getting that application stack deployed.

The beauty of addons is that you can take our base spell, for example the Canonical Distribution of Kubernetes (CDK), and bring in any additional charms required for things like persistent storage (Ceph or Swift). After that, you provide the installation walkthrough on how to install, configure, and set up any prerequisites, so that when the installation is complete your users can get started using your product.

We've gone ahead and created an addon for people to try out and experience the conjure-up way of getting from an installed stack to a fully usable one. Start using your big software instead of learning how to deploy it.

CDK, AWS, Deis == InstaPaaS

The addon we chose for this adventure is Deis, or more specifically Deis Workflow, a very cool PaaS from the same people who brought you Helm.

So the goal for this addon can be summed up with the following steps:

  1. Choose our Canonical Distribution of Kubernetes Spell
  2. Select the Deis addon
  3. Pick the cloud provider
  4. Configure Kubernetes and Deis Workflow
  5. Install Kubernetes
  6. Install Deis, enable native cloud integration, enable Deis for immediate use

So let's get started.

1. Choose our Canonical Distribution of Kubernetes Spell

sudo snap install conjure-up --classic --edge  
conjure-up  

2. Select the Deis Workflow Addon

3. Pick the cloud provider

Yes, localhost and of course Azure provider support are coming soon. Hey Microsoft, have your people call my people. :)

4. Configure Kubernetes and Deis Workflow

5. Install Kubernetes

6. Install Deis, enable native cloud integration, enable Deis for immediate use

That can't be all… can it?

Let's just verify that Deis is up and running. We're going to check that an admin user exists and that SSH keys are imported and available.

ubuntu@conjure-up-runner:~$ deis keys  
=== admin Keys
ubuntu@conjure-up-runner ssh-rsa AAAAB3Nz...-up-runner  
ubuntu@conjure-up-runner:~$ deis whoami  
You are admin at http://deis.52.72.186.16.nip.io  
ubuntu@conjure-up-runner:~$ deis users  
=== Users (*=admin)
*admin

That all looks good; now all that's left is to immediately start deploying applications. We'll test one of their Go applications.

ubuntu@conjure-up-runner:~$ git clone https://github.com/deis/example-go.git  
ubuntu@conjure-up-runner:~$ cd example-go/  
ubuntu@conjure-up-runner:~/example-go$ deis create  
Creating Application... done, created zanier-instinct  
Git remote deis successfully created for app zanier-instinct.  
ubuntu@conjure-up-runner:~/example-go$ git push deis master  
Counting objects: 106, done.  
Compressing objects: 100% (61/61), done.  
Writing objects: 100% (106/106), 24.10 KiB | 0 bytes/s, done.  
Total 106 (delta 40), reused 106 (delta 40)  
remote: Resolving deltas: 100% (40/40), done.  
Starting build... but first, coffee!  
...
-----> Restoring cache...
       No cache file found. If this is the first deploy, it will be created now.
-----> Go app detected
-----> Fetching jq... done
-----> Checking Godeps/Godeps.json file.
-----> Installing go1.7.6
-----> Fetching go1.7.6.linux-amd64.tar.gz... done
-----> Running: go install -v -tags heroku .
       github.com/deis/example-go
-----> Discovering process types
       Procfile declares types -> web
-----> Checking for changes inside the cache directory...
       Files inside cache folder changed, uploading new cache...
       Done: Uploaded cache (82M)
-----> Compiled slug size is 1.9M
Build complete.  
Launching App...  
...
Done, zanier-instinct:v2 deployed to Workflow

Use 'deis open' to view this application in your browser

To learn more, use 'deis help' or visit https://deis.com/

To ssh://git@deis-builder.52.72.186.16.nip.io:2222/zanier-instinct.git  
 * [new branch]      master -> master

Let's check out our new app:

ubuntu@conjure-up-runner:~/example-go$ deis open  

b0ss.

Want to get your product in our addons list for the world to consume?

Come talk to us over in https://rocket.ubuntu.com/channel/conjure-up

How to use these features

Currently, conjure-up in our snap --edge channel contains all the latest features outlined in this summary:

sudo snap install conjure-up --classic --edge  

Or, to upgrade from stable:

sudo snap refresh conjure-up --edge  
on August 19, 2017 02:12 AM

August 18, 2017

Hello MAASters! This is the development summary for the past couple of weeks:

MAAS 2.3 (current development release)

The team is preparing and testing the next official release, MAAS 2.3 alpha2. It is currently undergoing a heavy round of testing and will be announced separately at the beginning of the upcoming week. In the past three weeks, the team has worked on the following:

  • Support for CentOS Network configuration
    We have completed the work to support CentOS Advanced Networking, which provides the ability for users to configure VLAN, bond, and bridge interfaces, bringing it to feature parity with Ubuntu. This will be available in MAAS 2.3 alpha 2.
  • Support for Windows Network configuration
    MAAS can now configure NIC teaming (bonding) and VLAN interfaces for Windows deployments. This uses the native NetLBFO in Windows 2008+. Contact us for more information [1].
  • Hardware Testing Phase 2

    • Testing scripts now define a type field that informs MAAS which component will be tested and where the resulting metrics will apply. This may be node, cpu, memory, or storage, and defaults to node.
    • Completed work to support the definition and parsing of a YAML-based description for custom test scripts. This allows the user to define the test’s title, description, and the metrics the test will output, which allows MAAS to parse them and eventually display them over the UI/API.
  • Network beaconing & better network discovery

    • Beaconing is now fully functional for controller registration and interface updates!
    • When registering or updating a new controller (either the first standalone controller, or a secondary/HA controller), new interfaces that have been determined to be on an existing VLAN will not cause a new fabric to be created in MAAS.
  • Switch modeling
    • The basic database model for the new switching model has been implemented.
    • On-going progress of presenting switches in the node listing is under way.
    • Work is in progress to allow MAAS to deploy a rack controller, which will be utilized when deploying a new switch with MAAS.
  • Minor UI improvements
    • Renamed “Device Discovery” to “Network Discovery”.
    • Discovered devices where MAAS cannot determine the hostname now just show the hostname as “unknown” and grayed out instead of using the MAC address manufacturer as the hostname.
  • Bug fixes:
    • LP: #1704444 – MAAS API returns 500 internal server error instead of raising actual error.
    • LP: #1705501 – django warning on install
    • LP: #1707971 – MAAS becomes unstable after rack controller restarts
    • LP: #1708052 – Quick erase doesn’t remove md superblock
    • LP: #1710681 – Cannot delete an Ubuntu image, “Update Selection” is disabled

MAAS 2.2.2 Released in the Ubuntu Archive!

MAAS 2.2.2 has now also been released in the Ubuntu Archive. For more details on MAAS 2.2.2, please see [2].

 

[1]: https://maas.io/contact-us

[2]: https://lists.ubuntu.com/archives/maas-devel/2017-August/002663.html

on August 18, 2017 07:58 PM

Like every month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, about 181 work hours have been dispatched among 11 paid contributors. Their reports are available:

  • Antoine Beaupré did 20h (out of 16h allocated + 4 extra hours).
  • Ben Hutchings did 14 hours (out of 15h allocated, thus keeping 1 extra hour for August).
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did 18.5 hours (out of 23.5 hours allocated + 8 hours remaining, thus keeping 13 hours for August).
  • Guido Günther did 10 hours.
  • Hugo Lefeuvre did nothing due to personal problems (out of 2h allocated + 10 extra hours, thus keeping 12 extra hours for August).
  • Markus Koschany did 23.5 hours.
  • Ola Lundqvist did not publish his report yet (out of 14h allocated + 2 extra hours).
  • Raphaël Hertzog did 7 hours (out of 12 hours allocated but he gave back his remaining hours).
  • Roberto C. Sanchez did 19.5 hours (out of 23.5 hours allocated + 12 hours remaining, thus keeping 16 extra hours for August).
  • Thorsten Alteholz did 23.5 hours.

Evolution of the situation

The number of sponsored hours increased slightly with two new sponsors: Leibniz Rechenzentrum (silver sponsor) and Catalyst IT Ltd (bronze sponsor).

The security tracker currently lists 74 packages with a known CVE and the dla-needed.txt file 64. The number of packages with open issues increased by almost 50% compared to last month. Hopefully this backlog will get cleared up when the unused hours are actually worked. In any case, this evolution is worth watching.

Thanks to our sponsors

New sponsors are in bold.

on August 18, 2017 02:16 PM

S10E24 – Fierce Hurried Start - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

This week we’ve been buying a house, discuss podcasting creative commons music, serve up some command line lurve and go over your feedback.

It’s Season Ten Episode Twenty-Four of the Ubuntu Podcast! Alan Pope, Mark Johnson and Dave Lee are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been upto recently:
    • Mark has been buying a house.
  • We discuss podcasting music, like what Dave does on The Bugcast.

  • We share ntfy as our Command Line Lurve:

    • ntfy done <command> – Send a notification to your desktop, phone or other backend when a command finishes.
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on August 18, 2017 02:00 PM

Big update today and probably a very awaited one: here is an important step on our journey on transforming the default session in Ubuntu Artful. Let’s get the new Ubuntu Dock installed by default! For more background on this, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 5: A dock, light fork and upstream extension discussions.

Today is THE day. After upgrading (and all things migrating), our new Ubuntu Dock should be installed by default and activated in your Ubuntu artful session. Note that of course, this is part of the GNOME Shell ubuntu mode and not enabled in the GNOME vanilla session.

It’s looking like this by default, thanks to the changes in those 6 packages:

Current look of artful ubuntu session with Ubuntu Dock enabled by default

Ubuntu Dock is a light fork of Dash to Dock, with very few tweaks that I’ll highlight in this post.

I guess the first question is: “why ship a dock by default?”

We want to ease our user base’s migration from the Unity experience to a GNOME Shell one as much as possible. This has two angles: we will deliver a really different default environment, but we still need to balance that with the level of familiarity and comfort users are used to, in order to smooth this transition.

Our user survey results clearly demonstrated that Ubuntu users value having a dock as part of their desktop shell. The extension is bringing back that feature to our desktop.

Dash to Dock seems to be one of the most popular dock extensions for the Shell, and it is very well maintained. It seemed logical to build on that great work, and the Ubuntu Desktop team is committed to doing the required maintenance work on our spin-off. We thus contacted the main upstream developer, Michele, some weeks ago, and have been having a very good conversation with him.

Ok, so, why you just didn’t ship Dash to Dock, but rather a light fork of it?

It’s because we like to make our life difficult and complicated of course… :)

Ok, more seriously, there are a couple of reasons for that:

  • This extension is installed by default on Ubuntu. It means it needs to come from a package, in main, from the ubuntu archive. We thus need to package it first.
  • If our extension kept the same GNOME Shell extension ID as Dash to Dock, any user could upgrade it as soon as a new release of Dash to Dock reached the GNOME Shell extensions website. This would bypass all our QA & security procedures and checks (remember that Ubuntu Desktop is supporting it and you get regular bug fixes and security updates) by enabling another source of updates. Once people update from the website, the extension lives in their local user directory and no system update can override it, potentially leaving them on a stale version they installed. That’s the reason why we are not going to publish Ubuntu Dock on the GNOME Shell extensions website: people who want to install from there can just install Dash to Dock, gaining the same features (and more!).
  • Once released in an Ubuntu version, only bug fixes are allowed; no behavior changes or new defaults are permitted, for UI and behavior stability. An online extension without multiple-version tracking doesn’t allow us to control that.
  • We wanted to give a very lightly modified experience for our own dock, both in terms of defaults and configurability.

Note that both Michele and other contributors of Dash to Dock reached out, understood all those 4 points, and encouraged us to go down that path as well.

How did you operate this light fork?

We thus decided to create a different branch of the same extension, which is even available in the upstream repository! This branch, containing our modifications, will be regularly rebased on top of the upstream Dash to Dock one and kept in the same repository, alongside the upstream code. I guess that’s a strong testimonial to the great collaboration that is starting between our two teams. Also, that will ensure we keep fresh code, close to upstream.

We have even already proposed some modifications that were needed for the Ubuntu Dock package but are compatible with the upstream code; they are now in upstream Dash to Dock itself. More on that in the following weeks!

It’s really great to see that some people even started to contribute to our version, but we’ll encourage them, when the changes are compatible with Dash to Dock itself, to directly push them there. That’s why we are really keen on using a single repo with a different branch. Michele is currently working out giving some of us direct access to that “ubuntu-dock” branch so that we don’t have to keep our current one separately.

What did you change in Ubuntu Dock?

We didn’t want to bring back the full Unity experience. However, we reviewed some of our user testing research results that we conducted in the past to understand what made Unity Unity. We have also shared those user testing results with the GNOME design team and at GUADEC Matthew explained what, for example, led to the indicators philosophy. What constitutes the ubuntu dock today are the key points we identified from those years of experience.

Basically, Ubuntu Dock is a version of Dash to Dock with different defaults:

  • The less visible but more important change is probably the extension ID, description, and such, to separate our dock from Dash to Dock as explained previously.
  • The dock is always visible by default, taking the full height, with reduced spacing between icons, and a little less opaque than the default. It has a fixed width (not depending on the number of applications pinned or running), and uses Ubuntu-orange pips to denote running apps. If the dock is set to intellihide mode, it takes every window on the workspace into account, not only the currently focused one.
  • We disabled the settings panel and expose some of those settings in GNOME Control Center. More on settings in the next section.
  • We did change the scrolling behavior on the application picker button so that it switches workspaces, thanks to some suggestions on the French forum; this is the default upstream dash behavior for GNOME, but not for Dash to Dock. That way, we have a user experience similar to vanilla GNOME.

Those behavior changes are implemented as a gsettings schema override. We wanted to keep the settings compatible with upstream Dash to Dock to ease maintenance as well as user configurability. As a user-installed dock extension has its own compiled schema, this was easily doable.

Also, as the extension is enabled in our GNOME Shell mode, it can’t be easily disabled by the user. We thus introduced a final change to keep our dock compatible with Dash to Dock at runtime. Each dock has its own set of defaults. However, if you install and enable Dash to Dock, even with Ubuntu Dock running, Ubuntu Dock vanishes to let Dash to Dock take over. If you change any settings there (via right-clicking on the application picker button, or dconf editor/gsettings), the modifications will impact both Ubuntu Dock and Dash to Dock. Finally, if you disable Dash to Dock later on, Ubuntu Dock comes back. So the two docks share any user modifications, but not their defaults! Basically, the idea is that this allows the power user to go into tweak mode by just installing the Dash to Dock extension.
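
As a rough illustration of that shared-settings behavior, the same keys can be poked from a terminal with gsettings; the key names below come from the upstream Dash to Dock schema and are assumptions on my part, so adjust them to whatever you actually want to change:

# Illustrative tweaks via the shared Dash to Dock-compatible schema
gsettings set org.gnome.shell.extensions.dash-to-dock dock-fixed true
gsettings set org.gnome.shell.extensions.dash-to-dock extend-height true
gsettings set org.gnome.shell.extensions.dash-to-dock dash-max-icon-size 48

Because the two docks share user modifications, a change made this way shows up in whichever dock is currently active.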

We also disabled the hot corner in our ubuntu session, using the patch proposed by Florian (GNOME Shell upstream developer) discussed on this bug report. The GNOME Shell overview is consequently only available by clicking on “Activities” (alongside the regular keyboard shortcuts). Indeed, the hot corner isn’t very compatible with a dock where you can aim for the first application item, miss it, and trigger the hot corner by mistake. As usual, we turned it off in the ubuntu session only, with the same default desktop override mechanism.

And related to this, and even more important, is what we didn’t change. As much as possible we want to stay compatible with the current GNOME Shell design:

  • For instance, the application picker button is kept at the bottom and not moved to the top of the dock. Indeed, the application picker button isn’t the BFB from the Unity world launching our “dash”, with a bunch of collected search items and suggestions (applications, weather, wikipedia content…), but only a convenient shortcut to see “more applications” than there are in the dock. There was no need to deviate from the GNOME Shell design on this, and so, we didn’t. In addition, it would be awkward to have “Activities”, and then the “Application menu” just below it, as it would make it more difficult for people to understand the difference between the two entries. Activities and the application picker button aren’t the same: the application picker button is a “show me more apps” for when you didn’t pin or run the desired applications, thus the two should be separated geographically.
  • As mentioned previously, we changed the Dash to Dock default setting on the application picker button to behave like the GNOME Shell dash application picker button: it switches between the different workspaces. One net benefit in this behavior is that the “Show desktop” feature to access desktop icons, despite not being available via a click anymore, is now easily accessible by scrolling to the last of the dynamic workspaces.

Why disable all those settings?

Both GNOME and Unity shared a common vision from the start: take sane defaults and minimize the number of user-visible settings. This is why we implemented some settings in CompizConfig Settings Manager but never shipped it by default, nor are we going to ship Tweaks by default in artful.

No, we never shipped that by default despite having written some code for it

Every time you add a setting, it’s a fork in your code base. You have to test it alongside every other potential option. The more options you add, the more combinations you have to handle, and the more testing is needed. Every time you add an option, you double your test cases, growing the combinations you have to test from 2^n to 2^(n+1). This is an explosion of test cases, and not all options are compatible with each other (enabling option 1 is going to break option 2, but how can you spot that if you don’t have hours and hours of integration tests?). And let’s be honest, developers (humans…) are bad at anticipating all those incompatible settings.

Having a definitive set of settings that you think are important for the majority of your user base is the way to better overall quality. You are in a controlled environment and know what you are doing.

The settings we exposed in GNOME Control Center are those corresponding to the set of preferences we identified in our past user testing that most people wanted to tweak and that were available in previous versions of Ubuntu. We ported them back and made them compatible with Ubuntu Dock (thanks to Dash to Dock’s large set of settings!). Arguably, the “display panel” of GNOME Control Center isn’t the best place for those settings (is it the best place for the “night mode” available upstream either?), but the appearance panel where they used to live isn’t available anymore, and putting them in the “background panel” didn’t sound good either. We’ll look at improving this in a future release.

GNOME Control Center Ubuntu Dock settings

Here you can change the icon size in the launcher, the hide mode (intellihide is available), and whether to show the dock on all monitors or only your preferred one. This also impacts Dash to Dock if you installed it. Of course, this change is only visible in the ubuntu session, not the GNOME vanilla one.

Note that people who run Unity in artful will have their own settings in the usual place in Unity Control Center.

Transitioning from Unity

As we didn’t want to let our Unity users losing some of their important settings on upgrade, we decided to migrate the ones we are exposing above (so that people can easily revert if they want) from the Unity world to this new Ubuntu session based on GNOME Shell with Ubuntu Dock enabled. This transition is done at the first user login after the upgrade occurred. I’ll write a blog post later on to detail more what we did to handle all the transitions between sessions once users upgrade to Ubuntu 17.10.

Some known issues

The biggest part of introducing this dock is now done. Of course, artful is still a development release and we know there are some corner cases to fix, like this nautilus one, due to introducing the dock. Also, changing back some keyboard shortcuts in Dash to Dock (there are some bugs we’ll fix upstream) will be a nice improvement. The road is long, but it’s free. :)

To conclude

I really like Kazhnuz’s comment that I read on this OMGUbuntu blog post; it really reflects how we see Ubuntu Dock compared to Dash to Dock:

The first thing to understand with Ubuntu-Dock is what it is.

It’s not a “better dock” to Dash-to-Dock, they didn’t fork it because d-t-d wasn’t “good enough” for Ubuntu. It doesn’t aim to make you, medium or power-user want to change the dock you use. Dash-to-Dock will still be the “superior dock”, with more features for you to use, and there will be tons of tutorial to explain how to use it instead of the default dock (and Ubuntu-dock auto-disable if Dash-to-Dock is enabled). If there is any new feature they want for their dock, it’ll be certainly upstreamed in D-t-D.

They just want a simple Dock, in order to have an always visible windows list, to make their own user-base more comfortable with the new desktop.

Thanks for summing it up so well!

As every day, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

Enjoy playing with it over the weekend and let’s see what Monday holds on for us!

on August 18, 2017 11:40 AM

August 17, 2017

At DebConf17 there was a talk about The Update Framework, TUF for short. TUF claims to be a plug-in solution to software updates, but while it has the same practical level of security as apt, it also has the same shortcomings, including no way to effectively revoke keys.

TUF divides signing responsibilities into roles: a root role, a targets role (signing stuff to download), a snapshots role (signing metadata), and a timestamp role (signing a timestamp file). There is also a mirror role for signing a list of mirrors, but we can ignore that for now. It strongly recommends that all keys except for timestamp and mirrors are kept offline, which is not applicable for APT repositories – Ubuntu updates the repository every 30 minutes; imagine doing that with offline keys. An insane proposal.

In APT repositories, we effectively only have a snapshots role – the only thing we sign are Release files, and trust is then chained down by hashes (Release files contain hashes of Packages index files, which in turn contain hashes of individual packages). The keys used to sign repositories are online keys; after all, the metadata files change every 30 minutes (Ubuntu) or 6 hours (Debian) – it’s impossible to sign them by hand. The timestamp role is replaced by a field in the Release file specifying until when the Release file is considered valid.
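
As a quick illustration of that chain, you can fetch and inspect a Release file yourself. The mirror and suite below are only examples, the gpg verification assumes the archive signing key is already in your keyring, and not every repository sets Valid-Until:

# Fetch the signed metadata for a suite (example mirror/suite)
wget -q http://archive.ubuntu.com/ubuntu/dists/xenial/Release
wget -q http://archive.ubuntu.com/ubuntu/dists/xenial/Release.gpg
# Outer signature over all the metadata (assumes the archive key is in your keyring)
gpg --verify Release.gpg Release
# Freshness and maximum lifetime fields, if the repository sets them
grep -E '^(Date|Valid-Until):' Release
# Hashes that chain trust down to the Packages index files
grep -A3 '^SHA256:' Release | head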

Let’s check the attacks TUF protects again:

  • Arbitrary installation attacks. – We protect against that with the outer signature and hashes
  • Endless data attacks. – Yes, we impose a limit on Release files (the sizes of other files are specified in there and this file is signed)
  • Extraneous dependencies attacks – That’s verified by the signed hashes of Packages files
  • Fast-forward attacks – same
  • Indefinite freeze attacks – APT has a Valid-Until field that can be used to specify a maximum life time of a release file
  • Malicious mirrors preventing updates. – Well, the user configures the mirror, so usually not applicable. If the user has multiple mirrors, APT deals with that fine
  • Mix-and-match attacks – Again, signed Release file and hashes of other files
  • Rollback attacks – We do not allow Date fields in Release files to go backwards
  • Slow retrieval attacks – TUF cannot protect against that either. APT has very high timeouts, and there is no reasonable answer to that.
  • Vulnerability to key compromises – For our purposes where we need all repository signing keys to be online, as we need to sign new releases and metadata fairly often, it does not make it less vulnerable to require a threshold of keys (APT allows repositories to specify concrete key ids they may be signed with though, that has the same effect)
  • Wrong software installation. – Does not happen; the .deb files are hashed in the Packages files, which are signed via the Release file

As we can see, APT addresses all attacks TUF addresses.

But neither handles key revocation. So, if a key and mirror get compromised (or just the key, and the mirror is MITMed), we cannot inform the user that the key has been compromised and block updates from the compromised repository.

I just wrote up a proposal to allow APT to query for revoked keys from a different host with a key revocation list (KRL) file that is signed by different keys than the repository. This would solve the problem of key revocation easily – even if the repository host is MITMed or compromised, we can still revoke the keys signing the repository from a different location.


Filed under: Debian, Ubuntu
on August 17, 2017 07:49 PM

The Security Team weekly reports are intended to be very short summaries of the Security Team’s weekly activities.

If you would like to reach the Security Team, you can find us at the #ubuntu-hardened channel on FreeNode. Alternatively, you can mail the Ubuntu Hardened mailing list at: ubuntu-hardened@lists.ubuntu.com

During the last week, the Ubuntu Security team:

  • Triaged 537 public security vulnerability reports, retaining the 134 that applied to Ubuntu.
  • Published 16 Ubuntu Security Notices which fixed 36 security issues (CVEs) across 17 supported packages.

Ubuntu Security Notices

Bug Triage

Mainline Inclusion Requests

Updates to Community Supported Packages

  • Simon Quigley (tsimonq2) provided debdiffs for trusty-zesty for vlc (LP: #1709420)

Development

What the Security Team is Reading This Week

Weekly Meeting

More Info

on August 17, 2017 03:17 PM

NASA/EOSDIS Earthdata

Duncan McGreggor

Update

It's been a few years since I posted on this blog -- most of the technical content I've been contributing to in the past couple years has been in the following:
But since the publication of the Mastering matplotlib book, I've gotten more and more into satellite data. The book, it goes without saying, focused on Python for the analysis and interpretation of satellite data (as one of the many topics covered). After that I spent some time working with satellite and GIS data in general using Erlang and LFE. Ultimately, though, I found that more and more projects were using the JVM for this sort of work, and in particular, I noted that Clojure had begun to show up in a surprising number of GitHub projects.

EOSDIS

Enter NASA's Earth Observing System Data and Information System (see also earthdata.nasa.gov and EOSDIS on Wikipedia), a key part of the agency's Earth Science Data Systems Program. It's essentially a concerted effort to bring together the mind-blowing amounts of earth-related data being collected throughout, around, and above the world so that scientists may easily access and correlate earth science data for their research.

Related NASA projects include the following:
The acronym menagerie can be bewildering, but digging into the various NASA projects is ultimately quite rewarding (greater insights, previously unknown resources, amazing research, etc.).

Clojure

Back to the Clojure reference I made above: I've been contributing to the nasa/Common-Metadata-Repository open source project (hosted on GitHub) for a few months now, and it's been amazing to see how all this data from so many different sources gets added, indexed, updated, and generally made so much more available to anyone who wants to work with it. The private sector always seems to be so far ahead of large projects in terms of tech and continuously improving updates to existing software, so it's been pretty cool to see a large open source project in the NASA GitHub org make so many changes that keep helping their users do better research. More so, users are regularly delivered new features in a large, complex collection of libraries and services, thanks in part to the benefits that come from using a functional programming language.

It may seem like nothing to you, but the fact that there are now directory pages for various data providers (e.g., GES_DISC, i.e., Goddard Earth Sciences Data and Information Services Center) makes a big difference for users of this data. The data provider pages now also offer easy access to collection links such as UARS Solar Ultraviolet Spectral Irradiance Monitor. Admittedly, the directory pages still take a while to load, but there are improvements on the way for page load times and other related tasks. If you're reading this a month after this post was written, there's a good chance it's already been fixed by now.

Summary

In summary, it's been a fun personal journey from looking at Landsat data for writing a book to working with open source projects that really help scientists to do their jobs better :-) And while I have enjoyed using the other programming languages to explore this problem space, Clojure in particular has been a delightfully powerful tool for delivering new features to the science community.
on August 17, 2017 02:05 PM

Let’s continue on our journey on transforming the current default session in Ubuntu Artful with a small change today. For more background on this, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 4: Status icons and multiple sessions

Let’s focus today on a very small detail, but that still show up we are looking at those nitpicks before delivering ubuntu 17.10.

As I teased yesterday, can you spot what’s wrong in that picture?

Today's ubuntu session, before "ZE" change

Sure, there is some French involved, but I wasn’t talking about that. :) What’s happening is that the battery icon is squished on a non-HiDPI display. This comes from the fact that the icon isn’t square, and GNOME Shell expects square icons, contrary to our Unity indicator-power implementation. Consequently, our battery icons from our ubuntu-mono icon theme aren’t compatible in their current form with the Shell.

As we can’t separate easily the icon theme, and this one is containing a bunch of other monochrome icons we want to use by default in our ubuntu session for various applications, the “simple” fix was to prepend those special icons with unity- and having only indicator-power reading them, still being backward compatible as Ubuntu MATE is using this indicator with their own theme. (Actually, I had to quickly write a small script to rename everything in batch, as there is a bunch of symlinks in between of all those files).

The result is that now both the ubuntu and GNOME vanilla session shows up the upstream battery icon, alongside other default status icons:

Vanilla upstream GNOME session with correct battery icon

And our unity session (available in universe) will display our desired set of icons: Unity session in artful

Even if this seems a trivial issue, attention to detail is always important and a crucial value of ours, especially when we can easily prevent breakage in other parts of the distribution and archive.

As every day, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

And we’ll attack tomorrow the big piece: introducing the new Ubuntu Dock!

on August 17, 2017 10:35 AM

On August 2, Luke Marsden (Weaveworks) and Marco Ceppi (Canonical) presented a webinar on how to Speed up your software development lifecycle with Kubernetes. In the session they described how you can use conjure-up and Weave Cloud to set up, manage and monitor an app in Kubernetes. In this tutorial we’re going to show you how to set up Kubernetes on any cloud, the conjure-up way. Once the cluster is spun up, you’ll use Weave Cloud to deploy an application, explore the microservices and monitor the app as it runs in the cluster.

Why Canonical & Weaveworks?

Canonical’s conjure-up makes it easy to deploy and operate Kubernetes in production, using a neat, easy-to-use CLI installer. Weave Cloud fills in the gaps missing with a Kubernetes install and provides the tools necessary for a full development lifecycle:

  • Deploy – plug output of CI system into cluster so that you can ship features faster
  • Explore – visualize and understand what’s happening so that you can fix problems faster
  • Monitor – understand behavior of running system so that you can fix problems faster using Prometheus

Weave Cloud Development Lifecycle

Installing Kubernetes with conjure-up

  1. Use conjure-up to install Kubernetes on your cloud infrastructure (LXD provider is not currently supported by Weave Cloud)
  2. Run the following script to enable privileged containers & set up RBAC properly:
juju config kubernetes-master allow-privileged=true
juju config kubernetes-worker allow-privileged=true
juju ssh kubernetes-master/0 -- 'sudo snap set kube-apiserver authorization-mode=RBAC'
sleep 120
juju ssh kubernetes-master/0 -- '/snap/bin/kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=admin && /snap/bin/kubectl create clusterrolebinding kubelet-node-binding --clusterrole=system:node --user=kubelet'
  3. Run
    export KUBECONFIG=<path-to-kubeconfig>

    find the path from e.g.

    cat ~/bin/kubectl.conjure<tab>

    You may wish to make this permanent by adding the export command to your ~/.bash_profile or equivalent shell startup script. Once you have the environment variable in place, you can run kubectl commands against the cluster. Try it out with

    kubectl get nodes
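
If you want to sanity-check the cluster role bindings created by the script in step 2 (binding names as used there), something like this should work:

kubectl get clusterrolebinding root-cluster-admin-binding kubelet-node-binding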

Connecting your conjured up cluster to Weave Cloud

  1. Next you will visualize the Kubernetes cluster in Weave Cloud. Sign up for Weave Cloud. Select Setup → Kubernetes → Generic Kubernetes and then cut and paste the Kubernetes command from the Weave Cloud UI into your terminal:

Weave Cloud Token and command location

For example, you would run:

kubectl apply -n kube-system -f "https://cloud.weave.works/k8s.yaml?t=[CLOUD-TOKEN]&k8s-version=$(kubectl version | base64 | tr -d '\n')"

Where,

  • [CLOUD-TOKEN] is the Weave Cloud token.

The cluster should now appear in Weave Cloud. Check Explore → Hosts to see all five hosts:

  2. Deploy the Sock Shop by first creating the namespace, checking it out of Git and then changing to the kubernetes deploy directory:
kubectl create namespace sock-shop
git clone https://github.com/microservices-demo/microservices-demo
cd microservices-demo
kubectl apply -n sock-shop -f deploy/kubernetes/manifests

Now you should be able to see the Sock Shop in Weave Cloud Explore (click Controllers and select the sock-shop namespace filter from the bottom left):

And you should be able to access the shop in your browser, using the IP address of one of your Kubernetes nodes at port :30001.

Once the app is loaded, try out the Monitoring tool in Weave Cloud to observe the latencies between services in the cluster. Click Monitor and then run the following query:

rate(request_duration_seconds_sum[1m])/rate(request_duration_seconds_count[1m])

You should see all the different request latencies for all the services in the sock shop. This is possible because the sock shop is instrumented with the Prometheus client libraries.

Conclusion

In this post, we showed you how to get from nothing to a Kubernetes cluster using Canonical’s conjure-up. We then showed you how to install the Weave Cloud agents and just scratched the surface of what you can do with Weave Cloud: monitoring the request latencies on a Prometheus-instrumented app, the sock shop.

Next steps

on August 17, 2017 10:21 AM

August 16, 2017

Okay, so I have been slack with my blogging again. I have been travelling around Europe with work quite a bit, had a short holiday over Easter in Denmark, and also had 3 weeks of Summer Holiday in Germany.

Debian

  • Tidied up the packaging and tried building the latest version of libdrumstick, but tests had been added to the package by upstream which were failing. I still need to get back and investigate that.
  • Updated node-seq (targeted at experimental due to the Debian Stretch release freeze) and asked for sponsorship (as I did not have DM rights for it yet).
  • Uploaded the latest version of abcmidi (also to experimental), and again.
  • Updated node-tmp to the latest version and uploaded to experimental.
  • Worked some more on bluebird RFP, but getting errors when running tests. I still haven’t gone back to investigate that.
  • Updated node-coffeeify to the latest version and uploaded to experimental.
  • Uploaded the latest version of node-os-tmpdir (also to experimental).
  • Uploaded the latest version of node-concat-stream (also to experimental).
  • After encouragement from several Debian Developers, I applied to become a full Debian Developer. Over the summer months I worked with Santiago as my Application Manager and answered questions about working in the Debian Project.
  • A web vulnerability was identified in node-concat-stream, so I prepared a fix for the version in unstable, uploaded it to unstable, and submitted an unblock request bug so that it would be fixed in the coming Debian Stretch release.
  • Debian 9 (Stretch) was released! Yay!
  • Moved abcmidi from experimental to unstable, adding an autopkgtest at the same time.
  • Moved node-concat-stream from experimental to unstable. During the process I had to take care of the intermediate upload to stretch (on a separate branch) because of the freeze.
  • Moved node-tmp to unstable from experimental.
  • Moved node-os-tmpdir from experimental to unstable.
  • Filed a removal bug for creepy, which seems to be unmaintained upstream these days. Sent my unfinished Qt4 to Qt5 porting patches upstream just in case!
  • Uploaded node-object-inspect to experimental to check the reverse dependencies, then moved it to unstable. Then a new upstream version came out which is now in experimental waiting for a retest of reverse dependencies.
  • Uploaded the latest version of gramps (4.2.6).
  • Uploaded a new version of node-cross-spawn to experimental.
  • Discovered that I had successfully completed the DD application process and I was now a Debian Developer. I celebrated by uploading the Debian Multimedia Blends package to the NEW queue, which I was not able to do before!
  • Tweaked and uploaded the node-seq package (with an RC fix) which had been sitting there because I did not have DM rights to the package. It is not an important package anyhow, as it is just one of the many dependencies that need to be packaged for Browserify.
  • Packaged and uploaded the latest node-isarray directly to unstable, as the changes seemed harmless.
  • Prepared and uploaded the latest node-js-yaml to experimental.
  • Did an update to the Node packaging Manual now that we are allowed to use “node” as the executable in Debian instead of “nodejs” which caused us to do a lot of patching in the past to get node packages working in Debian.

Ubuntu

  • Did a freeze exception bug for ubuntustudio-controls, but we did not manage to get it sponsored before the Ubuntu Studio Zesty 17.04 release.
  • Investigated why Ardour was not migrating from zesty-proposed, but I couldn’t be sure of what was holding it up. After getting some help from the Developer’s mailing list, I prepared a “no change rebuild” of pd-aubio, which was sponsored by Steve Langasek after a little tweak. This did the trick.
  • Wrote to the Ubuntu Studio list asking for support for testing the Ubuntu Studio Zesty release, as I would be on holiday in the lead up to the release. When I got back, I found the release had gone smoothly. Thanks team!
  • Worked on some blueprints for the next Ubuntu Studio Artful release.
  • As Set no longer has enough spare time to work on Ubuntu Studio, we had a meeting on IRC to decide what to do. We decided that we should set up a Council like Xubuntu have. I drafted an announcement, but we still have not gone live with it yet. Maybe someone will have read this far and give us a push (or help). 🙂
  • Did a quick test of Len’s ubuntustudio-controls re-write (at least the GUI bits). We better get a move on if we want this to be part of Artful!
  • Tested ISO for Ubuntu Studio Xenial 16.04.3 point release, and updated the release notes.
  • Started working on a merge of Qjackctl using git-ubuntu for the first time. Had some issues getting going, so I asked the authors for some advice.

on August 16, 2017 05:16 PM

Repugnant

Valorie Zimmerman

I grew up in a right-wing, Republican family. As I grew to adulthood and read about the proud history of the Republican party, beginning with Lincoln, I embraced that party, even as racism began to be embraced as a political strategy during Nixon's campaign for president. I overlooked that part, because I didn't want to see it. Besides, the Democrats were the party of racists.

However, as I heard about the crimes that President Nixon seemed to be excusing, and that people around me also seemed to excuse, I began to think long and hard about party versus principle. Within a few years, I left that party, especially as I saw the Democrats, so long the party steeped in racism, begin to attempt to repair that damage done to the country. It took me many years to admit that I had changed parties, because my beliefs have not changed that much. I just see things more clearly now, after reading a lot more history.

Today I've seen a Republican president embrace racism, support of the Confederacy, and support racists, neo-Nazis, white nationalists, and the Ku Klux Klan party -- a party his father supported in Queens, New York. Fred Trump was arrested for marching publicly in full regalia, masked, hooded and robed. I've seen no report that he was convicted, although there are pictures of the march and the arrest report in the local newspaper.

Make no mistake about it; today's statement was deliberate. Trump's entry into the political fray was as a leader of the so-called birthers, questioning Barack Obama's citizenship. His announcement of candidacy was a full-throated anti-immigrant stance, which he never moderated and has not changed.

Yes, previous American presidents have been racist, some of them proudly so. But since the Civil War we have not seen -- until today -- a president of the United States throw his political lot in with white nationalists and neo-Nazis. Good people voted for this man, hoping that he would shake things up in Washington. Good people cannot stand by statements such as Trump made today.

It is time for the Congress to censure this President. The statements made today are morally bankrupt, and are intolerable. Good people do not march with neo-Nazis, and good people cannot let statements such as those made today, stand.
on August 16, 2017 03:03 AM

August 15, 2017

Welcome to the Ubuntu Weekly Newsletter. This is issue #516 for the week of August 8 – 14, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License (CC BY-SA).

on August 15, 2017 10:59 PM

August 14, 2017

Vanilla Framework has a new website

Canonical Design Team

We’re happy to announce the long overdue Vanilla Framework website, where you can find all the relevant links and resources needed to start using Vanilla.

 

The homepage of vanillaframework.io

 

When you visit the new site, you will also see the new Vanilla logo, which follows the same visual principles as other logos within the Ubuntu family, like Juju, MAAS and Landscape.

We wanted to make sure that the Vanilla site showcased what you can do with the framework, so we kept it nice and clean, using only Vanilla patterns.

We plan on extending it to include more information about Vanilla and how it works, how to prototype using Vanilla, and the design principles behind it, so stay tuned.

And remember you can follow Vanilla on Twitter, ask questions on Slack and file new pattern proposals and issues on GitHub.

on August 14, 2017 02:35 PM
New release 0.0.85 of folder color! Now you can select a file and set an emblem! Enjoy it! :)

File emblem

How to install?
on August 14, 2017 09:18 AM

Something a little different to share today, but important if you are (a) not especially gifted/interested in cooking, (b) love great food, and (c) are a bit of a nerd. Sous vide is the technique, and the Joule is the solution.

Sous vide is a method of cooking that involves putting food in a bag and submerging it in a water pan that is kept at a regulated temperature. You then essentially slow-cook the food, but because the water that the food is in is so consistent in temperature, it evenly cooks the food.

The result of this is phenomenal food. While I am still fairly new to sous vide, everything we have tried has been a significant improvement compared to regular methods (e.g. grilling).

As an example, chicken is notoriously difficult to cook well. When I sous vide the chicken and then sear it on the grill (to get some delicious char), you get incredible tender and juicy chicken with the ideal grilled texture and flavor.

Steak is phenomenal too. I use the same technique: sous vide it to a medium-rare doneness and then sear it at high heat on the grill. Perfectly cooked steak.

A particular surprise here is eggs. When you sous vide an egg, the yolk texture is undeniably better. It takes on an almost custard-like texture and brings the flavor to life.

So, sous vide is an unquestionably fantastic method of cooking. The big question is, particularly for the non-cooks among you, is it worth it?

Sous vide is great for busy (or lazy) people

Part of why I am loving sous vide is that it matches the formula I want to experience in cooking:

Easy + Low Effort + Low Risk + Minimal Cleanup = Great Food

Here’s the breakdown:

  • Easy – you can’t really screw it up. Put the food in a bag, set the right temperate, come back after a given period of time and your food is perfectly cooked.
  • + Low Effort – it takes a few minutes to start the cooking process and you can do other things while it cooks. You never need to babysit it.
  • + Low Risk – with sous vide you know it is cooked evenly. As an example, with chicken it is common to get a cooked outer layer (from grilling) while it is uncooked in the middle, so people overcook it to avoid the risk. With sous vide you just have to ensure you cook it to a safe level and it is consistently cooked.
  • + Minimal Cleanup – you put the food in a bag, cook it, and then throw the bag away. The only pan you use is a bowl with water in it (about as easy to clean as possible). Perfect!

Thus, the result is great food and minimal fuss.

One other benefit is reheating for later eating.

As an example, right now I am ‘sous vide’ing’ (?) a pan full of eggs. These will be for breakfast every day this week. When the eggs are done, we will pop them in the fridge to keep. To reheat, we simply submerge the eggs in boiling water and it raises the internal temperature back up. The result is the incredible sous vide texture and consistency, but it takes merely (a) boiling the kettle, (b) submerging the eggs, and (c) waiting a little bit to get the benefits of sous vide later.

The gadget

This is where the nerdy bit comes in, but it isn’t all that nerdy.

For Christmas, Erica and I got a Joule. Put simply, it is a white stick that plugs into the wall and connects to your phone via Bluetooth.

You fill a pan with water, pop the Joule in, and search for the food you want to cook. The app will then recommend the right temperature and cooking time. When you set the time, the Joule turns on and starts circulating the water in the pan until it reaches the target temperature.

Next, you put the food in the bag and the app starts the timer. When the timer is done your phone gets notified, you pull the food out, and bingo!

The library of food in the app is enormous and even helps with how to prepare the food (e.g. any recommended seasoning). If, though, you want to ignore the guidance and just set a temperature and cooking time, you can do that too.

When you are done cooking, throw away the bag you cooked the food in, empty the water out of the pan, and put the Joule back in the cupboard. Job done.

Now, to be clear, there are many other sous vide gadgets, none of which I have tried. I have tried one, the Joule, and it has been brilliant.

So, that’s it: I just wanted to share this recent discovery. Give it a try, I think you will dig it as much as I do.

The post Sous Vide For Nerds (With Limited Cooking Experience) appeared first on Jono Bacon.

on August 14, 2017 05:52 AM

August 12, 2017

IPv6 Unique Local Address

Sebastian Schauenburg

Since IPv6 is happening, we should be prepared. During the deployment of a new access point I was in need of a Unique Local Address. IPv6 Unique Local Addresses are basically comparable to the IPv4 private address ranges.

Some sites refer to Unique-Local-IPv6.com, but that is offline nowadays. Others refer to the kame.net generator, which is open source and still available, yay.

But wait, there must be an offline method for this, right? There is! subnetcalc (nowadays available here, apparently) to the rescue:

subnetcalc fd00:: 64 -uniquelocal | grep ^Network

Profit :-)
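
If subnetcalc isn’t at hand, a rough offline sketch of the same idea (fd00::/8 with the local bit set, plus 40 random bits for the Global ID per RFC 4193) could be the following; it only prints a candidate /48 prefix and skips the RFC’s suggested SHA-1/EUI-64 derivation:

# Print a random fdXX:XXXX:XXXX::/48 Unique Local Address prefix
printf 'fd%s:%s%s:%s%s::/48\n' $(od -An -N5 -tx1 /dev/urandom | tr -d ' \n' | fold -w2)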

on August 12, 2017 09:00 AM

August 11, 2017

Some of the talks, initiatives, conversations, and workshops that inspired me at Akademy. Thanks so much for the e.V. for sponsoring me.

A. Wikidata  - We have some work to do to get our data automatically uploaded into Wikidata. However, doing so will help us keep our Wikipedia pages up-to-date.

B. Looking for Love, Paul Brown's talk and workshop about Increasing your audience's appreciation for your project. Many of the top Google results for our pages don't address what people are looking for:
  1. What can your project do for me? 
  2. What does your application or library do?
Paul highlighted one good example: https://krita.org/. That crucial information is above the fold, with no scrolling. Attractive, and exactly the approach we should be taking in all our public-facing pages.

My offer to all projects: I will help with the text on any of your pages. This is a serious offer! Just ask in IRC or send an email to valorie at kde dot org for editing.

C. The Enterprise list for people with large KDE deployments, an under-used resource for those supporting our users in huge numbers, in schools, governments and companies. If you know of anyone doing this job who is not on the list, hand along the link to them.

D. Goalposts for KDE - I was not at this "Luminaries" Kabal Proposals BoF, but I read the notes. I'll be happy to see this idea develop on the Community list.

E. UserBase revival -- This effort is timely! and brings the list of things I'm excited about full circle. For many teams, UserBase pages are their website. We need to clean up and polish UserBase! Join us in #kde-wiki in IRC or the Telegram channel and https://userbase.kde.org/Wiki_Team_Page where we'll actually be tracking and doing the work. I'm so thankful that Claus is taking the leadership on this.

If you are a project leader and want help buffing your UserBase pages, we can help!


In addition to all of the above ideas, there is still another idea floating around that needs more development. Each of our application sites, at least, should have a quality metric box, listing things like code testing, translation/internationalization percentage, number of contributors, and maybe more. These should be picked up automatically, not generated by hand. No other major projects seem to have this, so we should lead. When people are looking for what applications they want to run on their computers, they should choose by more than color or other incidentals. We work so much on quality -- we should lead with it. There were many informal discussions about this but no concrete proposals yet.
on August 11, 2017 08:08 AM

Release Engineer (Berlin, Germany), Sony Interactive Entertainment

Do you want to be part of an engineering team that is building a world-class cloud platform that scales to millions of users? Are you excited to dive into new projects, have an enthusiasm for automation, and enjoy working in a strong collaborative culture? If so, join us!

Responsibilities

Design and development of Release Engineering projects and tools to aid in the release pipeline. Work in cross-functional development teams to build and deploy new software systems. Work with the team and project managers to deliver quality software within schedule constraints.

Requirements

  • Demonstrable knowledge of distributed architectures, OOP, and Python
  • BS or a minimum of 5 years of relevant work experience

Skills & Knowledge

  • Expert level knowledge of Unix/Linux
  • Advanced skills in Python
  • Kubernetes experience is a huge plus
  • Programming best practices including unit testing, integration testing, static analysis, and code documentation
  • Familiarity with build systems
  • Familiarity with continuous integration and delivery

Additional Attributes

  • Contributor to open source projects
  • Version control systems (preferably Git)
  • Gamer is a plus
  • Enjoys working in a fast-paced environment
  • Strong communication skills

Interested? Use this link

on August 11, 2017 07:16 AM

HackerBoxes is a monthly subscription service for hardware hackers and makers. I hadn’t heard of it until I was researching DEF CON 25 badges, for which they had a box, at which point I was amazed I had missed it. They were handing out coupons at DEF CON and BSidesLV for 10% off your first box, so I decided to give it a try.

Hacker Tracker

First thing I noticed upon opening the box was that there’s no fanfare in the packaging or design of the shipping. You get a plain white box shipped USPS with all of the contents just inside. I can’t decide if I’m happy they’re not wasting material on extra packaging, or disappointed they didn’t do more to make it feel exciting. If you look at their website, they show all the past boxes with a black “Hacker Boxes” branded box, so I don’t know if this is a change, or the pictures on the website are misleading, or the influx of new members from hacker summer camp has resulted in a box shortage.

I unpacked the box quickly to find the following:

  • Arduino Nano Clone
  • Jumper Wires
  • Small breadboard
  • MicroSD Card (16 GB)
  • USB MicroSD Reader
  • MicroSD Breakout Board
  • u-blox NEO 6M GPS module
  • Magnetometer breakout
  • PCB Ruler
  • MicroUSB Cable
  • Hackerboxes Sticker
  • Pinout card with reminder of instructions (aka h4x0r sk00l)

If you’ve been trying to do the math in your head, I’ll save you the trouble. In quantity 1, these parts can be had from AliExpress for about $30. If you’re feeling impatient, you can do it on Amazon for about $50. Of course, the value of the parts alone isn’t the whole story: this is a curated set of components that builds a project, and the directions they provide on getting started are part of the product. (I just know everyone wanted to know the cash value.)

Compared to some of their historical boxes, I’m a little underwhelmed. Many of their boxes look like something where I could do many things with the kit or teach hardware concepts: for example, “0018: Circuit Circus” is clearly an effort to teach analog circuits. “0015 - Connect Everything” lets you connect everything to WiFi via the ESP32. Even when not multi-purpose, previous kits have included reusable tools like a USB borescope or a Utili-Key. Many seem to have an exclusive “fun” item, like a patch or keychain, in addition to the obligatory HackerBoxes sticker.

In contrast, the “Hacker Tracker” box feels like a unitasker: receive GPS/magnetometer readings and log them to a MicroSD card. Furthermore, there’s not much hardware education involved: all of the components connect directly via jumper wires to the provided Arduino Nano clone, so other than “connect the right wire”, there’s no electronics skillset to speak of. On the software side, while there are steps along the way showing how each component is used, a fully-functional Arduino sketch is provided, so you don’t have to know any programming to get a functional GPS logger.

Overall, I feel like this kit is essentially “paint-by-numbers”, which can either be great or disappointing. If you’re introducing a teenager to electronics and programming, a “paint-by-numbers” approach is probably a great start. Likewise, if this is your first foray into electronics or Arduino, you should have no trouble following along. On the other hand, if you’re more experienced and just looking for inspiration of endless possibilities, I feel like this kit has fallen short.

There’s one other gripe I have with this kit: there are headers on the Arduino Nano clone and the MicroSD breakout, but the headers are not soldered on the accelerometer or GPS module. At least if you’re going to make a simple kit, make it so I don’t have to clean off the soldering station, okay?

So, am I keeping my subscription? For the moment, yes, at least for another month. Like I said, I’ve been impressed by past kits, so this might just be an off month for what I’m looking for. I don’t think this kit is bad, and I’m not disappointed, just not as excited as I’d hoped to be. I might have to give Adabox a try though.

As for the subscription service itself: it looks like their web interface makes it easy to skip a month (maybe you’re travelling and won’t have time?) or cancel entirely. I’m not advocating cancelling, but I absolutely hate when subscription services make you contact customer service to cancel (just so they can try to talk you into staying longer, like AOL back in the 90s). The site has a nice clean feel and works well.

If anyone from HackerBoxes is reading this, I’ll consolidate my suggestions to you in a few points:

  • Hook us up with patches & more stickers! Especially a sticker that won’t take 1/4 of a laptop. (I love the sticker from #0015 and the patch from #0018.)
  • Don’t have the only soldering be two tiny header strips. Getting out the soldering iron just to do a couple of SPI connections is a bit of a drag. Either do a PCB like #0019, #0020, etc., or provide modules with headers in place. (If it wasn’t for the soldering, you could take this kit on vacation and play with just the kit and a laptop!)
  • Instructables with more information on why you’re doing what you’re doing would be nice. Mentioning that there’s a level shifter on the MicroSD breakout because MicroSD cards run at 3.3V, and not the 5V from an Arduino Nano, for example.
  • Including a part that requires a warning about you (the experts) having had a lot of problems with it in an introductory kit seems like a poor choice. A customer with flaky behavior won’t know if it’s their setup, their code, or the part.

Overall, I’m excited to see so much going into STEM education and the maker movement, and I’m happy that it’s still growing. I want to thank HackerBoxes for being a part of that and wish them success even if I don’t turn out to be their ideal demographic.

on August 11, 2017 07:00 AM

August 10, 2017

Akademy 2017

Akademy 2017


This year's Akademy, held in Almería, Spain, was a great success.
We (the Neon team) have decided to move to using the snappy container format for KDE applications in KDE Neon.
This will begin in the dev/unstable builds while we sort out the kinks and heavily test it. We still have some roadblocks to overcome, but hope to work with the snappy team to resolve them.
We have also begun the transition of moving Plasma Mobile CI over to Neon CI. So between mobile (arm), snap and debian packaging, we will be very busy!
I attended several BoFs that brought great new ideas for the KDE community.
I was able to chat with the Kubuntu release manager (Valorie Zimmerman) and hope to work more closely with the Kubuntu and Debian teams to reduce duplicated work. I feel this is very important for all teams involved.

We had so many great talks, see some here:
https://www.youtube.com/watch?v=di1z_mahvf0&list=PLsHpGlwPdtMojYjH8sHRKSvyskPA4xtk6

Akademy is a perfect venue for KDE contributors to work face to face to tackle issues and create new ideas.
Please consider donating: https://ev.kde.org/akademy/

As usual, it was wonderful to see my KDE family again! See you all next year in Vienna!

on August 10, 2017 07:57 PM

S10E23 – Important Fluffy Turn - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

This week we’re joined by a love bug and add more pixels to our computer. Red Hat abandons btrfs, Marcus Hutchins is arrested, Google did evil, and the podcast patent is overturned! We also have a large dose of Ubuntu community news and some events.

It’s Season Ten Episode Twenty-Three of the Ubuntu Podcast! Alan Pope, Mark Johnson and Dave Lee are connected and speaking to your brain.

In this week’s show:

Entroware Apollo laptop contest reminder

  • We kicked off a contest in Episode 22 to win an Entroware Apollo laptop, the very one Alan reviewed last week.
  • The contest is open until 3rd September 2017, so plenty of time to get your entries in!
  • Listen to Episode 22 for all the details.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on August 10, 2017 02:00 PM

The Book

Well, after nine months of hard work, the book is finally out! It's available both on Packt's site and Amazon.com. Getting up early every morning to write takes a lot of discipline, it takes even more to say "no" to enticing rabbit holes or herds of Yak with luxurious coats ripe for shaving ... (truth be told, I still did a bit of that).

The team I worked with at Packt was just amazing. Highly professional and deeply supportive, they were a complete pleasure with which to collaborate. It was the best experience I could have hoped for. Thanks, guys!

The technical reviewers for the book were just fantastic. I've stated elsewhere that my one regret was that the process with the reviewers did not have a tighter feedback loop. I would have really enjoyed collaborating with them from the beginning so that some of their really good ideas could have been integrated into the book. Regardless, their feedback as I got it later in the process helped make this book more approachable by readers, more consistent, and more accurate. The reviewers have bios at the beginning of the book -- read them, and look them up! These folks are all amazing!

The one thing that slipped in the final crunch was the acknowledgements, and I hope to make up for that here, as well as through various emails to everyone who provided their support, either directly or indirectly.

Acknowledgments

The first two folks I reached out to when starting the book were both physics professors who had published very nice matplotlib problems -- one set for undergraduate students and another from work at the National Radio Astronomy Observatory. I asked for their permission to adapt these problems to the API chapter, and they graciously granted it. What followed were some very nice conversations about matplotlib, programming, physics, education, and publishing. Thanks to Professor Alan DeWeerd, University of Redlands and Professor Jonathan W. Keohane, Hampden Sydney College. Note that Dr. Keohane has a book coming out in the fall from Yale University Press entitled Classical Electrodynamics -- it will contain examples in matplotlib.

Other examples adapted for use in the API chapter included one by Professor David Bailey, University of Toronto. Though his example didn't make it into the book, it gets full coverage in the Chapter 3 IPython notebook.

For one of the EM examples I needed to derive a particular equation for an electromagnetic field in two wires traveling in opposite directions. It's been nearly 20 years since my post-Army college physics, so I was very grateful for the existence and excellence of SymPy which enabled me to check my work with its symbolic computations. A special thanks to the SymPy creators and maintainers.

Please note that if there are errors in the equations, they are my fault! Not that of the esteemed professors or of SymPy :-)

Many of the examples throughout the book were derived from work done by the matplotlib and Seaborn contributors. The work they have done on the documentation in the past 10 years has been amazing -- the community is truly lucky to have such resources at their fingertips.

In particular, Benjamin Root is an astounding community supporter on the matplotlib mail list, helping users of every level with all of their needs. Benjamin and I had several very nice email exchanges during the writing of this book, and he provided some excellent pointers, as he was finishing his own title for Packt: Interactive Applications Using Matplotlib. It was geophysicist and matplotlib savant Joe Kington who originally put us in touch, and I'd like to thank Joe -- on everyone's behalf -- for his amazing answers to matplotlib and related questions on StackOverflow. Joe inspired many changes and adjustments in the sample code for this book. In fact, I had originally intended to feature his work in the chapter on advanced customization (but ran out of space), since Joe has one of the best examples out there for matplotlib transforms. If you don't believe me, check out his work on stereonets. There are many of us who hope that Joe will be authoring his own matplotlib book in the future ...

Olga Botvinnik, a contributor to Seaborn and PhD candidate at UC San Diego (and BioEng/Math double major at MIT), provided fantastic support for my Seaborn questions. Her knowledge, skills, and spirit of open source will help build the community around Seaborn in the years to come. Thanks, Olga!

While on the topic of matplotlib contributors, I'd like to give a special thanks to John Hunter for his inspiration, hard work, and passionate contributions which made matplotlib a reality. My deepest condolences to his family and friends for their tremendous loss.

Quite possibly the tool that had the single-greatest impact on the authoring of this book was IPython and its notebook feature. This brought back all the best memories from using Mathematica in school. Combined with the Python programming language, I can't imagine a better platform for collaborating on math-related problems or producing teaching materials for the same. These compliments are not limited to the user experience, either: the new architecture using ZeroMQ is a work of art. Nicely done, IPython community! The IPython notebook index for the book is available in the book's Github org here.

In Chapters 7 and 8 I encountered a bit of a crisis when trying to work with Python 3 in cloud environments. What was almost a disaster ended up being rescued by the work that Barry Warsaw and the rest of the Ubuntu team did in Ubuntu 15.04, getting Python 3.4.2 into the release and available on Amazon EC2. You guys saved my bacon!

Chapter 7's fictional case study examining the Landsat 8 data for part of Greenland was based on one of Milos Miljkovic's tutorials from PyData 2014, "Analyzing Satellite Images With Python Scientific Stack". I hope readers have just as much fun working with satellite data as I did. Huge thanks to NASA, USGS, the Landsat 8 teams, and the EROS facility in Sioux Falls, SD.

My favourite section in Chapter 8 was the one on HDF5. This was greatly inspired by Yves Hilpisch's presentation "Out-of-Memory Data Analytics with Python". Many thanks to Yves for putting that together and sharing with the world. We should all be doing more with HDF5.

Finally, and this almost goes without saying, the work that the Python community has done to create Python 3 has been just phenomenal. Guido's vision for the evolution of the language, combined with the efforts of the community, have made something great. I had more fun working on Python 3 than I have had in many years.

on August 10, 2017 04:12 AM

August 09, 2017

Welcome to the Ubuntu Weekly Newsletter. This is issue #515 for the week of August 1 – 7, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • Athul Muralidhar
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on August 09, 2017 04:44 AM

August 08, 2017

Status Quo

For over four years now, the Ubuntu Community Portal has been the 'welcome mat' for new people seeking to get involved in Ubuntu. In that time the site had seen some valuable but minor incremental changes; no major updates have occurred recently. I'd like us to fix this. We can also use this as an opportunity to improve our whole onboarding process.

I've spent a chunk of time recently chatting with active members of the Ubuntu Community about the community itself. A few themes came up in these conversations which can be summarised as:-

  • Our onboarding process for new contributors is not straightforward or easy to find
  • Contributors find it hard to see what's going on in the project
  • There is valuable documentation out there, but no launch pad to find it

To address these concerns, we have looked at each area to see how we can improve the situation.

Onboarding

A prospective contributor has a limited amount of spare time to get involved, and with a poorly documented or hard-to-find onboarding process, they will likely give up and walk away. They won't know where to go for the 'latest news' of what's happening in this development cycle, or how they can contribute their limited time to the project most effectively, and it is important that they get access to the community straight away.

Communication

Ubuntu has been around a long time, with teams using a range of different communication tools. Despite happening in the open, the quick-moving and scattered conversations lose transparency, so finding out what's 'current' is hard for new (and existing) contributors. Surfacing the gems of what's needed and the current strategic direction more obviously would help here, and having a place where all contributors can discuss topics is important.

Documentation

The wiki has served Ubuntu well, but it suffers from many of the problems wikis accumulate over time, namely out-of-date information, stale references, and bloat. We could undertake an effort to tidy up the wiki, but today we have other tools that could serve better. Sites such as http://tutorials.ubuntu.com and http://docs.ubuntu.com are much richer and easier to navigate, and they form the basis of our other official documentation and ways of working. Using these in conjunction with any new proposal makes much more sense.

So, what could we do to improve things?

Community Hub Proposal

I propose we replace the Community Portal with a dynamic and collaboratively maintained site. The site would raise the profile of conversations and content, to improve our onboarding and communication issues.

We could migrate existing high-value content over from the existing site to the new one, and encourage all contributors to Ubuntu, both within and outside Canonical to post to the site. We will work with teams to bring announcements and conversations to the site, to ensure content is fresh and useful.

In common with many other projects, we could use discourse for this. I don't expect this site to replace all existing tools used by the teams, but it could help to improve visibility of some.

The new Hub would contain pointers to the most relevant information for on-boarding, calls for participation (in translation, documentation, testing), event announcements, feature polls and other dynamic conversations. New & existing contributors alike should feel they can get up to date with what's going on in Ubuntu by visiting the site regularly. We should expect respectful and inclusive conversation as with any Ubuntu resource.

The Community Hub isn’t intended to replace all our other sites, but add to them. So this wouldn’t replace our existing well established Ask Ubuntu and Ubuntu Forums support sites, but would supplement them. The Community Hub could indeed link to interesting or trending content on the forums or unanswered Ask Ubuntu questions.

So ultimately the Community Hub would become a modern, welcoming environment for the community to learn about and join in with the Ubuntu project, talk directly with the people working on Ubuntu, and hopefully become contributors themselves.

Next steps

We’ll initially need to take a snapshot of the pages on the current site, and stand up an instance of discourse for the new site. We will need to migrate over the content which is most appropriate to keep, and archive anything which is no longer accurate or useful. I’d like us to have some well defined but flexible structure to the site categories. We can take inspiration from other community discourse sites, but I’d be interested in hearing feedback from the community on this.

While the site is being set-up, we’ll start planning a schedule of content, pulling from all teams in the Ubuntu project. We will reach out to Ubuntu project teams to get content lined up for the coming months. If you’re active in any team within the project, please contact me so we can talk about getting your teams work highlighted.

If you have any suggestions or want to get involved, feel free to leave a comment on this post or get in touch with me.

on August 08, 2017 05:45 PM

August 07, 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12 hours but I only managed to work for 7 hours (due to vacation and unanticipated customer work). I gave back the remaining hours to the pool as I didn’t want to carry them over for August which will be also short due to vacation (BTW I’m not attending Debconf). I spent my 7 hours doing CVE triaging during the week where I was in charge of the LTS frontdesk (I committed 22 updates to the security tracker). I did publish DLA-1010-1 on vorbis-tools but the package update had been prepared by Petter Reinholdtsen.

Misc Debian work

zim. I published an updated package in experimental (0.67~rc2-2) with the upstream bug fixes on the current release candidate. The final version has been released during my vacation and I will soon upload it to unstable.

Debian Handbook. I worked with Petter Reinholdtsen to finalize the paperback version of the Norwegian translation of the Debian Administrator’s Handbook (still covering Debian 8 Jessie). It’s now available.

Bug reports. I filed a few bugs related to my Kali work. #868678: autopkgtest’s setup-testbed script is not friendly to derivatives. #868749: aideinit fails with syntax errors when /etc/debian_version contains spaces.

debian-installer. I submitted a few d-i patches that I prepared for a customer who had some specific needs (using the hd-media image to boot the installer from an ISO stored in an LVM logical volume). I made changes to debian-installer-utils (#868848), debian-installer (#868852), and iso-scan (#868859, #868900).

Thanks

See you next month for a new summary of my activities.


on August 07, 2017 02:49 PM

In addition to taking stock of how things went at Hacker Summer Camp, I think it’s important to examine the lessons learned from the event. Some of these lessons will be introspective and reflect on myself and my career, but I think it’s important to share these to encourage others to also reflect on what they want and where they’re going.

Introspections

It’s still incredibly important to me to be doing hands-on technical work. I do a lot of other things, and they may have significant impact, but I can’t imagine taking a purely leadership/organizational role. I wouldn’t be happy, and unhappy people are not productive people. Finding vulnerabilities, doing technical research, building tools, are all areas that make me excited to be in this field and to continue to be in this field.

I saw so many highly-technical projects presented and demoed, and these were all the ones that made me excited to still be in this field. The IoT village, in particular, showed a rapidly-evolving highly technical area of security with many challenges left to be solved:

  • How do you configure devices that lack a user interface?
  • How do you update devices that users expect to run 24/7?
  • How do you build security into a device that users expect to be dirt cheap?
  • What are the tradeoffs between Bluetooth, WiFi, 802.15.4, and other radio techs?

Between these questions and my love of playing with hardware (my CS concentration was in embedded systems), it’s obvious why I’ve at least slightly gravitated towards IoT/embedded security.

This brings me to my next insight: I’m still very much a generalist. I’ve always felt that being a generalist has hamstrung me from working on cool things, but I’m beginning to think the only thing hamstringing me is me. Now I just need to get over the notion that 0x20 is too old of an age for cool security/vulnerability research. I’m focusing on IoT and I’ve managed to exclude certain areas of security in the interests of time management: for as fascinating as DFIR is, I’m not actively pursuing anything in that space because it turns out time is a finite quantity and spreading it too thin means getting nowhere with anything.

Observations

Outwardly, I’m happy that BSidesLV and DEF CON both appear to have had an increasingly diverse attendance, though I have no idea how accurate the numbers are given their methodology. (To be fair, I’m super happy someone is trying to even to figure this out in the chaos that is hacker summer camp.) The industry, and the conferences, may never hit a 50/50 gender split, but I think that’s okay if we can get to a point where we build an inclusive meritocracy of an environment. Ensuring that women, LGBTQ, and minorities who want to get into this industry can do so and feel included when they do is critical to our success. I’m a firm believer that the best security professionals draw from their life background when designing solutions, and having a diverse set of life backgrounds ensures a diverse set of solutions. Different experiences and different viewpoints avoids groupthink, so I’m very hopeful to see those numbers continue to rise each year.

I have zero data to back this up, but observationally, it seemed that more attendees brought their kids with them to hacker summer camp. I love this: inspiring the next generation of hackers, showing them that technology can be used to do cool things, and that it’s never too early to start learning about it will benefit both them (excel in the workforce, even if they take the hacker mindset to another industry) and society (more creative/critical thinkers, better understanding of future tech, and hopefully keeping them on the white hat side). I don’t know how much of this is a sign of the maturing industry (more hackers have kids now), more parents feel that it’s important to expose their kids to this community, or maybe just a result of the different layout of Caesar’s, leading to bad observations.

Logistics

There were a few things from my packing list this year that turned out to be really useful. I’m going to try to do an updated planning post pair (e.g., one far out and one shortly before con) for next year, but there’s a few things I really thought were useful and so I’ll highlight them here.

  • An evaporative cooling towel really helps with the Vegas heat. It’s super lightweight and takes virtually no space. Dry, it’s useful as a normal towel, but if you wet it slightly, the evaporating water actually cools off the towel (and you). Awesome for 108 degree weather.
  • An aluminum water bottle would’ve been nice. Again, fight the dehydration. In the con space, there’s lots of water dispensers with at least filtered water (Vegas tap water is terrible) plus the SIGG bottles are nice because you can use a carabiner to strap it to your bag. I like the aluminum better than a polycarbonate (aka Nalgene) because it won’t crack no matter how you abuse it. (Ok, maybe it’s possible to crack aluminum, but this isn’t the Hydraulic Press Channel.)
  • RFID sleeves. I mentioned these before. Yes, my room key was based on some RFID/proximity technology. Yes, a proxmark can clone it. Yes, I wanted to avoid that happening without my knowing.

For some reason, I didn’t get a chance to break out a lot of the hacking gear I brought with me, but I’ll probably continue to bring it to cons “just in case”. I’m usually checking a bag anyway, so a few pounds of gear is a better option than regretting it if I want to do something.

Conclusion

That concludes my Hacker Summer Camp blog series for this year. I hope it’s been useful, entertaining, or both. Agree with something I said? Disagree? Hit me up on Twitter or find me via other means of communications. :)

on August 07, 2017 07:00 AM

I have previously posted pieces about data.world, an Austin-based startup focused on providing a powerful platform for data preparation, analysis, and collaboration.

data.world were previously a client where I helped to shape their community strategy and I have maintained a close relationship with them ever since.

I am delighted to share that I have accepted an offer to join their Advisory Board. As with most advisory boards, this will be a part-time role where I will provide guidance and support to the organization as they grow.

Why I Joined

Without wishing to sound terribly egotistical, I often get offers to participate in an advisory capacity with various organizations. I am typically loath to commit too much as I am already rather busy, but I wanted to make an exception for data.world.

Why? There are a few reasons.

Firstly, the team are focusing on a really important problem. As our world becomes increasingly connected, we are generating more and more data. Sadly, much of this data is in different places, difficult to consume, and disconnected from other data sets.

data.world provides a place where data can be stored, sanitized/prepped, queried, and collaborated around. In fact, I believe that collaboration is the secret sauce: when we combine a huge variety of data sets, a consistent platform for querying, and a community with the ingenuity and creative flair for querying that data…we have a powerful enabler for data discovery.

data.world provides a powerful set of tools for storing, prepping, querying, and collaborating around data.

There is a particularly pertinent opportunity here. Buried inside individual data sets there are opportunities to make new discoveries, find new patterns/correlations, and use data as a means to make better decisions. When you are able to combine data sets, the potential for discovery exponentially grows, whether you are a professional researcher or an armchair enthusiast.

This is why the community is so important. In the same way GitHub provided a consistent platform for millions of developers to create, fork, share, and collaborate around code…both professionals and hobbyists…data.world has the same potential for data.

…and this is why I am excited to be a part of the data.world Advisory Board. Stay tuned for more!

The post Joining the data.world Advisory Board appeared first on Jono Bacon.

on August 07, 2017 04:28 AM

August 04, 2017

Interview on hostingadvice.com

Sebastian Kügler

How KDE's Open Source community has built reliable, monopoly-free computing for 20+ years
A few days ago, Hostingadvice.com’s lovely Alexandra Leslie interviewed me about my work in KDE. This interview has just been published on their site. The resulting article gives an excellent overview over what and why KDE is and does, along with some insights and personal stories from my own history in the Free software world.

At the time, Sebastian was only a student and was shocked that his work could have such a huge impact on so many people. That’s when he became dedicated to helping further KDE’s mission to foster a community of experts committed to experimentation and the development of software applications that optimize the way we work, communicate, and interact in the digital space.

“With enough determination, you can really make a difference in the world,” Sebastian said. “The more I realized this, the more I knew KDE was the right place to do it.”

on August 04, 2017 08:42 PM

ISO Image Writer Alpha 0.2

Jonathan Riddell

New release of ISO Image Writer

=== Alpha 0.2, 4 August 2017 ===

*Verification in a thread to not block UI
*Verifies Arch ISOs
*Build fixes
GPG verification:
 Key fingerprint = 2D1D 5B05 8835 7787 DE9E E225 EC94 D18F 7F05 997E
 Jonathan Riddell <jr@jriddell.org>

isoimagewriter 0.2 download

on August 04, 2017 04:53 PM

August 03, 2017

The third point release update to Kubuntu 16.04 LTS (Xenial Xerus) is out now. This contains all the bug-fixes added to 16.04 since its first release in April 2016. Users of 16.04 can run the normal update procedure to get these bug-fixes. In addition, we suggest adding the Backports PPA to update to Plasma 5.8.7. Read more about it:

http://kubuntu.org/news/latest-round-of-backports-ppa-updates-include-plasma-5-10-2-for-zesty-17-04/

Warning: 14.04 LTS to 16.04 LTS upgrades are problematic, and should not be attempted by the average user. Please install a fresh copy of 16.04.3 instead. To prevent messages about upgrading, change Prompt=lts to Prompt=normal or Prompt=never in the /etc/update-manager/release-upgrades file. As always, make a thorough backup of your data before upgrading.

See the Ubuntu 16.04.3 release announcement and Kubuntu Release Notes.

Download 16.04.3 images.

on August 03, 2017 10:08 PM
Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 16.04.3 LTS has been released! What Is Lubuntu? Lubuntu is an official Ubuntu flavor based on the Lightweight X11 Desktop Environment (LXDE). The project’s goal is to provide a lightweight yet functional distribution. Lubuntu specifically targets older machines with […]
on August 03, 2017 09:22 PM

I’ve recently embarked on a journey of trying new things in my homelab. I’ve been experimenting with new methods of managing my services at home, and I have tried a few combinations of methods and operating systems.

Now that I am more familiar with the cloud native landscape, I was wondering if I could wean myself off the most stateful snowflake in my life, the trusty home server.

I’ve been collecting hardware recently and decided to go out of my comfort zone and run some home services via different combinations. Here’s what I found out.

What do I need to run?

I have a few things I run in-house that I need to host: the Unifi controller, Pi-hole, and Plex.

The Unifi stuff depends on Java and MongoDB, and accesses hardware on the network. Pi-hole expects to basically be my DNS server, and Plex is the textbook definition of a stateful app: depending on the size of your video collection, it can grow a substantial database that lives entirely on disk, so moving it around means bringing gigs of stuff with it.

With this varied set of apps, we shall begin!

Old Reliable, traditional Ubuntu 16.04 with .debs

This has been working for me for years up to this point. As with anything involving third party packaging, maintenance can tend to get annoying.

For Unifi you need either an external Mongo repo(!) and/or an OpenJDK PPA(!) to get it to work.

Pi-hole wants me to pipe a script to bash, and Plex just publishes one-off debs with no repository, making that update a hassle.

There are some benefits here: once I configure unattended-upgrades, things generally run fine. It’s a well understood system, and having the large repository of software is always good. Over time it tends to accumulate crap though.
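For completeness, the unattended-upgrades bit boils down to a couple of commands; a sketch using the stock Ubuntu package and config file names:

sudo apt install unattended-upgrades
# enable the periodic run (sudo dpkg-reconfigure -plow unattended-upgrades does the same interactively)
printf 'APT::Periodic::Update-Package-Lists "1";\nAPT::Periodic::Unattended-Upgrade "1";\n' | \
  sudo tee /etc/apt/apt.conf.d/20auto-upgrades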

Ubuntu 16.04 with Docker containers

The main advantage to this setup is I can crowdsource the maintenance of these apps to people who do a real good job, like the awesome guys at linuxserver.io. I can keep a lean and mostly stock host, toss away broken containers if need be, and keep everything nice and isolated.

What ppa do I need for unifi? Is my plex up to date? What java symlink do I need to fix? I don’t need to care anymore!

docker-compose was really good for this and relatively simple to grok, especially by stealing other people’s configs from GitHub and modifying them to my needs.
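To give a flavour of what those configs look like, the Unifi service boils down to roughly the docker run below (a sketch only: the linuxserver/unifi image is the same one I use later with rkt, and the host path is just my own layout):

docker run -d \
  --name unifi \
  --net=host \
  --restart=always \
  -v /home/jorge/config/unifi:/config \
  linuxserver/unifi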

Why not LXD containers?

I ran with this config for a while, but it had one major issue for me. LXD containers are system containers, that is, they run a copy of the OS. So now instead of maintaining one host I am now maintaining one host OS and many client OSes. Going into each one and installing/configuring the services felt like I was adding complexity. LXD is great, just not for my specific use case.

Ubuntu Core 16.04 with Docker containers

Finally, something totally different. I am pretty sure “home server” doesn’t rank high on the use case here, but I figured I would give it a shot. I took the same docker-compose files from before, except this time I deploy on top of ubuntu-core.

This gives me some nice features over the mutable-buntu. First off, atomic upgrades. Every time it gets a new kernel it just reboots, and then on boot all the containers update and come back up.

This has a few teething issues. First off, if there’s a kernel update it’s just going to reboot; you can’t really control that. Another is that it really is small, so it’s missing tools. It can only install snaps, so no rsync, no git, no curl, no wget. I’m not going to run git out of a docker container. Also, I couldn’t figure out how to run docker as a non-root user, and I can’t seem to find documentation on how to do that anywhere.
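For the record, getting the engine onto Ubuntu Core is itself a snap install; a quick sketch, assuming the docker snap published in the store:

sudo snap install docker
sudo docker ps   # run as root for now, since I never sorted out the non-root setup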

Container Linux with Docker containers

A slim OS designed to only run containers. This one definitely has a more “work-related” slant to it. There’s no normal installer; the installer basically takes a cloud-init-like yaml file and then dd’s the disk. Or just fire up your home PXE server. :) Most of the docs don’t even talk about how to configure the OS; the entire “state” is kept in this yaml file in git, and it is expected that I can blow the machine away at any time and drop containers back onto it.

This comes with git, rsync, and curl/wget out of the box, so it’s nice to have these core tools there instead of them being totally missing. There’s also a toolbox command that will automagically fetch a traditional distro container (defaults to Fedora) and then mount the filesystem inside, so you can nano to your heart’s content.

This works really well. Container Linux lets me dictate the update policy as part of the config file, and if I have multiple servers I can cluster them together so that they will take turns rebooting without having a service go down. But as you can see, we quickly venture out of the “home server” use case with this one.

Container Linux with rkt/systemd

This is the setup I am enjoying the most. So instead of using the docker daemon, I create a systemd service file, like say /etc/systemd/system/unifi.service:

[Unit]
Description=Unifi
After=network.target
[Service]
Slice=machine.slice
Type=simple
ExecStart=/usr/bin/rkt --insecure-options=image run docker://linuxserver/unifi --volume config,kind=host,source=/home/jorge/config/unifi --mount volume=config,target=/config --net=host --dns=8.8.8.8
KillMode=mixed
Restart=always
[Install]
WantedBy=multi-user.target

Then I systemctl start unifi to start it, and systemctl enable unifi to enable it on boot. Container Linux is set to reboot on Thursdays at 4am, and containers update on boot. I can use journalctl and machinectl just like I can with “normal” services.
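Day-to-day management then looks like any other systemd service; a quick sketch of the commands I mean:

sudo systemctl enable unifi      # start the pod on every boot
sudo systemctl start unifi
journalctl -u unifi -f           # follow the container's logs
machinectl list                  # running rkt pods show up here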

Note that you can use this config on any OS that has systemd and rkt. Since Container Linux has a section in its yaml file for writing systemd services, I can have one well-maintained file that lets me spit out an entire configured server in one shot. Yeah!

This one “feels” like the most future-proof option: now that the OCI spec is finalized, it seems like over time every tool will just be able to consume these images. I don’t know what that means for rkt and/or docker, but you can similarly use docker in this manner as well.

What about state?

So far I’ve only really talked about the “installation problem” and have continually left out the hard part: the state of the applications themselves. Reinstalling Container Linux will get me all the apps back but no data, and that doesn’t sound very cloud native!

If you look at the systemd service file above, you see I keep the state in /home/jorge/config/unifi, and I do the same for each of the services (e.g. /home/jorge/config/plex for Plex). I need to find a way to make sure that state is saved somewhere other than just the local disk.

Saving this onto an NFS share earned an overwhelming NOPE NOPE NOPE from the straw poll I took (and one person even threatened to come over and fight me). And that’s kind of cheating anyway, since it just moves the problem.

So right now this is still up in the air; I fired up a quick Duplicati instance to keep a copy on S3.

I really don’t want to add a Ceph cluster to my home administration tasks. Suggestions here would be most welcome.
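In the meantime, the Duplicati-to-S3 approach amounts to something like the following done by hand (purely a sketch, with the bucket name made up and the aws CLI assumed to be installed and configured):

tar czf /tmp/config-$(date +%F).tar.gz -C /home/jorge config
aws s3 cp /tmp/config-$(date +%F).tar.gz s3://my-homelab-backups/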

How have they been running?

I know what you’re thinking. Self rebooting servers and auto updating containers? You’re high.

Surprisingly in the last three months I have been running all five of these things side by side and they’ve all been rock solid.

I don’t know what to say here, some combination of great OS and container maintainers, or maybe not so much churn. I am going to try to keep these up and running as long as possible just to see what happens.

What’s left to try?

The obvious hole here is the Project Atomic stack, which will be next on the list.

And of course, it’s only a matter of time until one of these reboots breaks something or a container has a bad day.

If you think this blog post isn’t crazy enough, Chuck and I will be delving into Kubernetes for this later on as this is all just a warm up.

on August 03, 2017 04:00 AM

August 02, 2017

Plasma rocks Akademy published

Sebastian Kügler

Marco and Sebas talk Plasma: State of the Union
I published an article about the recent Akademy conference on KDE’s news site, dot.kde.org. This article discusses the presentation Marco and I gave (Plasma: State of the Union), Wayland, Kirigami, Plasma Mobile and our award-winning colleague Kai. Enjoy the read!

on August 02, 2017 10:43 AM

git ubuntu clone

Nish Aravamudan

This is the second post in a collaborative series between Robie Basak and myself to introduce (more formally) git ubuntu to a broader audience. There is an index of all our planned posts in the first post. As mentioned there, it is important to keep in mind that the tooling and implementation are still highly experimental.

In this post, we will introduce the git ubuntu clone subcommand and take a brief tour of what an imported repository looks like. git ubuntu clone will be the entry point for most users to interact with Ubuntu source packages, as it answers a common request on IRC: “Where is the source for package X?”. As Robie alluded to in his introductory post, one of the consequences of the git ubuntu importer is that there is now a standard way to obtain the source of any given source package: git ubuntu clone [1].

Getting git ubuntu clone

git-ubuntu is distributed as a “classic” snap. To install it on Ubuntu 16.04 or later:
sudo snap install --classic git-ubuntu. Help is available via git-ubuntu --help, and man pages are currently in development [2].

Using git ubuntu clone

Let’s say we are interested in looking at the state of PHP 7.0 in Ubuntu. First, we obtain a local copy of the repository [3]: git ubuntu clone php7.0


With that one command, we now have the entire publishing history for php7.0 in ./php7.0. Anyone who has tried to find the source for an Ubuntu package before will recognize this as a significant simplification and improvement.

With git, we would expect to be on a ‘master’ branch after cloning. git ubuntu clone defaults to a local branch ‘ubuntu/devel’, which represents the current tip of development in Ubuntu. ‘ubuntu/devel’ is branched from the remote-tracking branch ‘pkg/ubuntu/devel’.
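Concretely, the first few commands after a clone look something like this (a sketch; the exact branch listing will vary by package and by your Launchpad ID):

git ubuntu clone php7.0
cd php7.0
git branch              # the local 'ubuntu/devel' branch, checked out by default
git branch -r | head    # remote-tracking branches live under 'pkg/' (and your Launchpad ID)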


You might now be wondering, “What is ‘pkg/’?”

The default remotes

Running git remote, we see two remotes are already defined: ‘pkg’ and ‘nacc’.


‘pkg’ will be the same for all users and is similar to the ‘origin’ remote that git users will be familiar with. The second is a derived remote name based upon a Launchpad ID. As shown above, the first time git ubuntu runs, it will prompt for a Launchpad ID that will be cached for future use in ~/.gitconfig. Much like ‘origin’, the ‘pkg’ branches will keep moving forward via the importer, and running git fetch pkg will keep your local remote-tracking branches up to date. While not strictly enforced by git or git ubuntu, we should treat the ‘pkg/’ namespace as reserved and read-only to avoid any issues.

The importer branches

The tip of ‘pkg/ubuntu/devel’ reflects the latest version of this package in Ubuntu. This will typically correspond to the development release and often will be the version in the ‘-proposed’ pocket for that release. As mentioned earlier, a local branch ‘ubuntu/devel’ is created by default, which starts at ‘pkg/ubuntu/devel’, much like ‘master’ typically starts at ‘origin/master’ by default when using git. Just like the tip of ‘ubuntu/devel’ is the latest version in Ubuntu for a given source package, there are series-‘devel’ branches for the latest in a given series, e.g., the tip of ‘pkg/ubuntu/xenial-devel’ is the latest version uploaded to 16.04. There are also branches tracking each ‘pocket’ of every series, e.g. ‘pkg/ubuntu/xenial-security’ is the latest version uploaded to the security pocket of 16.04.

Finally, there is a distinct set of branches which correspond to the exact same histories, but with quilt patches applied. Going into the reasoning behind this is beyond the scope of this post, but it will be covered in a future post. It is sufficient for now to be aware that this is what ‘pkg/applied/*’ is for.

What else can we do?

All of these branches have history, like one would expect, reflecting the exact publishing history of php7.0 within the context of that branch’s semantics, e.g., the history of ‘pkg/ubuntu/xenial-security’ shows all uploads to the security pocket of 16.04 and what those uploads, in turn, are based off of, etc. As another example, git log ubuntu/devel shows you the long history of the latest upload to Ubuntu.

With this complete imported history, we can not only see the history of the current version and any given series, but also what is different between the php7.0 versions in the 16.04 and 17.04 releases!
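For example, comparing the 16.04 and 17.04 packaging is a plain git operation against the series branches (a sketch; ‘zesty-devel’ as the 17.04 branch name is my assumption based on the naming scheme above):

git log --oneline pkg/ubuntu/xenial-devel                                   # uploads to 16.04
git diff pkg/ubuntu/xenial-devel pkg/ubuntu/zesty-devel -- debian/changelog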


For other source packages that have existed much longer, you would be able to compare LTS to LTS, and do all the other normal git-ish things you might like, such as git blame to see what introduced a specific change to a file.

We can also see all remote-tracking branches with the normal git branch -r:


This shows us a few of the namespaces in use currently:

  • pkg/ubuntu/* — patches-unapplied Ubuntu series branches
  • pkg/debian/* — patches-unapplied Debian series branches
  • pkg/applied/ubuntu/* — patches-applied Ubuntu series branches
  • pkg/applied/debian/* — patches-applied Debian series branches
  • pkg/importer/* — importer-internal branches

As Robie mentioned in the first post, we are currently using a whitelist to constrain the importer to a small subset of source packages. What happens if you request to clone a source package that has not yet been imported?

While many details (particularly why the repository looks the way it does) have been glossed over in this post, we now have a starting point for cloning any source package (if it has been imported) and a way to request an import of any source package.

Using git directly (for advanced users)

Technically, git ubuntu clone is equivalent in functionality to git clone, and git clone could be used directly. In fact, one of our goals is to not impede “pure” git usage in any way. But again, as Robie mentioned in his introductory post, there are some caveats to both using git and the structure of our repositories that git ubuntu is aware of. The “well-defined URLs” just mentioned are still being worked on, but for PHP 7.0, for instance, one could follow the instructions at the top of the Launchpad code page for the php7.0 source package. The primary differences we would notice in this usage are “origin” instead of “pkg”, and the absence of a remote for your personal Launchpad space for this source package.

Conclusion

In this post, we have seen a new way to get the source for any given package, git ubuntu clone.

Robie’s next post will discuss where the imported repositories are and what they look like. My next post will continue discussing the git ubuntu tooling, by looking at another relatively simple subcommand “tag”.


  1. Throughout this post, we are assuming an automatically updated repository. This is true for the whitelisted set of packages currently auto-imported, but not true generally (yet).
  2. All commands are available as both git-ubuntu … and git ubuntu …. However, for --help to work in the latter form, a few simple tweaks to ~/.gitconfig are necessary (see LP: #1699526, https://bugs.launchpad.net/usd-importer/+bug/1699526) until some additional snap functionality is available generally.
  3. Currently, git ubuntu clone is rather quiet while it works, and can take a long time (the history of a source package can be long!); we have received feedback and opened a bug (https://bugs.launchpad.net/usd-importer/+bug/1707225) to make it a bit more like git clone from a UX perspective.

on August 02, 2017 12:31 AM

August 01, 2017

Secure Boot signing


The whole concept of Secure Boot requires that there exists a trust chain, from the very first thing loaded by the hardware (the firmware code), all the way through to the last things loaded by the operating system as part of the kernel: the modules. In other words, not just the firmware and bootloader require signatures, the kernel and modules too. People don't generally change firmware or bootloader all that much, but what of rebuilding a kernel or adding extra modules provided by hardware manufacturers?

The Secure Boot story in Ubuntu includes the fact that you might want to build your own kernel (but we do hope you can just use the generic kernel we ship in the archive), and that you may install your own kernel modules. This means signing UEFI binaries and the kernel modules, which can be done with its own set of tools.

But first, more on the trust chain used for Secure Boot.


Certificates in shim


To begin with signing things for UEFI Secure Boot, you need to create a X509 certificate that can be imported in firmware; either directly though the manufacturer firmware, or more easily, by way of shim.

Creating a certificate for use in UEFI Secure Boot is relatively simple; openssl can do it with a few commands. Now, we need to create an SSL certificate for module signing.

First, let's create some config to let openssl know what we want to create (let's call it 'openssl.cnf'):

# This definition stops the following lines choking if HOME isn't
# defined.
HOME                    = .
RANDFILE                = $ENV::HOME/.rnd 
[ req ]
distinguished_name      = req_distinguished_name
x509_extensions         = v3
string_mask             = utf8only
prompt                  = no

[ req_distinguished_name ]
countryName             = CA
stateOrProvinceName     = Quebec
localityName            = Montreal
0.organizationName      = cyphermox
commonName              = Secure Boot Signing
emailAddress            = example@example.com

[ v3 ]
subjectKeyIdentifier    = hash
authorityKeyIdentifier  = keyid:always,issuer
basicConstraints        = critical,CA:FALSE
extendedKeyUsage        = codeSigning,1.3.6.1.4.1.311.10.3.6,1.3.6.1.4.1.2312.16.1.2
nsComment               = "OpenSSL Generated Certificate"
Either update the values under "[ req_distinguished_name ]" or get rid of that section altogether (along with the "distinguished_name" field) and remove the "prompt" field. Then openssl would ask you for the values you want to set for the certificate identification.

The identification itself does not matter much, but some of the later values are important: for example, we do want to make sure "1.3.6.1.4.1.2312.16.1.2" is included in extendedKeyUsage, and it is that OID that will tell shim this is meant to be a module signing certificate.

Then, we can start the fun part: creating the private and public keys.

openssl req -config ./openssl.cnf \
        -new -x509 -newkey rsa:2048 \
        -nodes -days 36500 -outform DER \
        -keyout "MOK.priv" \
        -out "MOK.der"
This command will create both the private and public parts of the certificate used to sign things. You need both files to sign, but only the public part (MOK.der) to enroll the key in shim.


Enrolling the key


Now, let's enroll that key we just created in shim. That makes it so it will be accepted as a valid signing key for any module the kernel wants to load, as well as a valid key should you want to build your own bootloader or kernels (provided that you don't include that '1.3.6.1.4.1.2312.16.1.2' OID discussed earlier).

To enroll a key, use the mokutil command:
sudo mokutil --import MOK.der
Follow the prompts to enter a password that will be used to make sure you really do want to enroll the key in a minute.

Once this is done, reboot. Just before loading GRUB, shim will show a blue screen (which is actually another piece of the shim project called "MokManager"). Use that screen to select "Enroll MOK" and follow the menus to finish the enrolling process. You can also look at some of the properties of the key you're trying to add, just to make sure it's indeed the right one, using "View key". MokManager will ask you for the password we typed in earlier when running mokutil, then save the key, and the system will reboot again.
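To double-check the enrollment after rebooting, mokutil can list and test the keys shim knows about (assuming the mokutil shipped in Ubuntu):

mokutil --list-enrolled          # dumps the certificates shim currently trusts
mokutil --test-key MOK.der       # reports whether this particular key is enrolled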


Let's sign things


Before we sign, let's make sure the key we added really is seen by the kernel. To do this, we can go look at /proc/keys:

$ sudo cat /proc/keys
0020f22a I--Q---     1 perm 0b0b0000     0     0 user      invocation_id: 16
0022a089 I------     2 perm 1f0b0000     0     0 keyring   .builtin_trusted_keys: 1
003462c9 I--Q---     2 perm 3f030000     0     0 keyring   _ses: 1
00709f1c I--Q---     1 perm 0b0b0000     0     0 user      invocation_id: 16
00f488cc I--Q---     2 perm 3f030000     0     0 keyring   _ses: 1
[...]
1dcb85e2 I------     1 perm 1f030000     0     0 asymmetri Build time autogenerated kernel key: eae8fa5ee6c91603c031c81226b2df4b135df7d2: X509.rsa 135df7d2 []
[...]

Just make sure a key exists there with the attributes (commonName, etc.) you entered earlier.

To sign kernel modules, we can use the kmodsign command:
kmodsign sha512 MOK.priv MOK.der module.ko
module.ko should be the file name of the kernel module file you want to sign. The signature will be appended to it by kmodsign, but if you would rather keep the signature separate and concatenate it to the module yourself, you can do that too (see 'kmodsign --help').

You can validate that the module is signed by checking that it includes the string '~Module signature appended~':

$ hexdump -Cv module.ko | tail -n 5
00002c20  10 14 08 cd eb 67 a8 3d  ac 82 e1 1d 46 b5 5c 91  |.....g.=....F.\.|
00002c30  9c cb 47 f7 c9 77 00 00  02 00 00 00 00 00 00 00  |..G..w..........|
00002c40  02 9e 7e 4d 6f 64 75 6c  65 20 73 69 67 6e 61 74  |..~Module signat|
00002c50  75 72 65 20 61 70 70 65  6e 64 65 64 7e 0a        |ure appended~.|
00002c5e

You can also use hexdump this way to check that the signing key is the one you created.
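Another quick check is modinfo, which reports the signer and hash algorithm for a module with an appended signature (field names can vary slightly between kmod versions):

modinfo module.ko | grep -i sig

Look for a 'signer' field matching the commonName of your certificate.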


What about kernels and bootloaders?


To sign a custom kernel or any other EFI binary you want to have loaded by shim, you'll need to use a different command: sbsign. Unfortunately, we'll need the certificate in a different format in this case.

Let's convert the certificate we created earlier into PEM:

openssl x509 -in MOK.der -inform DER -outform PEM -out MOK.pem

Now, we can use this to sign our EFI binary:

sbsign --key MOK.priv --cert MOK.pem my_binary.efi --output my_binary.efi.signed
As long as the signing key is enrolled in shim and does not contain the OID from earlier (since that limits the use of the key to kernel module signing), the binary should be loaded just fine by shim.
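To verify the result, sbverify (from the same sbsigntool package as sbsign) can check the signature against the PEM certificate we just converted:

sbverify --cert MOK.pem my_binary.efi.signed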


Doing signatures outside shim


If you don't want to use shim to handle keys (though I do recommend that you use it), you will need to create different certificates, one of which is the PK (Platform Key) for the system; you can enroll it in the firmware directly via KeyTool or some firmware tool provided with your system. I will not go into the steps to enroll the keys in firmware, as they tend to vary from system to system, but the main idea is to put the system in Secure Boot "Setup Mode", run KeyTool (which is its own EFI binary you can build yourself and run), and enroll the keys -- first installing the KEK and DB keys, and finishing with the PK. These files need to be available from some FAT partition.

I do have a script to generate the right certificates and files, which I can share (it is itself adapted from somewhere I can no longer remember):

#!/bin/bash
echo -n "Enter a Common Name to embed in the keys: "
read NAME

# Create self-signed certificates (and private keys) for the Platform Key (PK),
# the Key Exchange Key (KEK) and the signature database key (DB).
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME PK/" -keyout PK.key \
        -out PK.crt -days 3650 -nodes -sha256
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME KEK/" -keyout KEK.key \
        -out KEK.crt -days 3650 -nodes -sha256
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=$NAME DB/" -keyout DB.key \
        -out DB.crt -days 3650 -nodes -sha256

# Also produce DER (.cer) copies for firmware key managers that expect them.
openssl x509 -in PK.crt -out PK.cer -outform DER
openssl x509 -in KEK.crt -out KEK.cer -outform DER
openssl x509 -in DB.crt -out DB.cer -outform DER

# Generate an owner GUID and build the EFI signature lists (.esl) from the certs.
GUID=$(python3 -c 'import uuid; print(uuid.uuid1())')
echo "$GUID" > myGUID.txt
cert-to-efi-sig-list -g "$GUID" PK.crt PK.esl
cert-to-efi-sig-list -g "$GUID" KEK.crt KEK.esl
cert-to-efi-sig-list -g "$GUID" DB.crt DB.esl

# Sign the PK signature list with its own key, plus an empty list (noPK.auth)
# that can later be used to remove the PK and return to Setup Mode.
rm -f noPK.esl
touch noPK.esl
sign-efi-sig-list -t "$(date --date='1 second' +'%Y-%m-%d %H:%M:%S')" \
                  -k PK.key -c PK.crt PK PK.esl PK.auth
sign-efi-sig-list -t "$(date --date='1 second' +'%Y-%m-%d %H:%M:%S')" \
                  -k PK.key -c PK.crt PK noPK.esl noPK.auth

chmod 0600 *.key
echo ""
echo ""
echo "For use with KeyTool, copy the *.auth and *.esl files to a FAT USB"
echo "flash drive or to your EFI System Partition (ESP)."
echo "For use with most UEFIs' built-in key managers, copy the *.cer files."
echo ""
The same logic as earlier applies: sign things using sbsign or kmodsign as required (use the .crt files with sbsign, and the .cer files with kmodsign). As long as the keys are properly enrolled in the firmware or in shim, the signed binaries and modules will load successfully.
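For example, assuming you generated the files with the script above, signing a custom kernel image and a kernel module with the DB key would look something like this (the file names are just placeholders):

sbsign --key DB.key --cert DB.crt my_kernel.efi --output my_kernel.efi.signed
kmodsign sha512 DB.key DB.cer module.ko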


What's coming up for Secure Boot in Ubuntu


Signing things is complex -- you need to create SSL certificates and enroll them in firmware or shim, and you need a fair amount of prior knowledge of how Secure Boot works and which commands to use. It's rather obvious that this isn't within everybody's reach, and it makes for a somewhat bad experience in the first place. For that reason, we're working on making key creation, enrollment and signing easier when installing DKMS modules.

update-secureboot-policy should soon let you generate and enroll a key; and DKMS will be able to sign things by itself using that key.
on August 01, 2017 02:48 PM

More KDE Twits

Jonathan Riddell

After reading up on some Bootstrap, I managed to move the Twitter feeds to the side on Planet KDE, so you can get suitably distracted by the #KDE and @kdecommunity feeds while reading your blog posts.

I also stepped down from Dot and KDE promo work after getting burnt out from doing it for many years, hoping others would fill in, which I hope they now will.

 

on August 01, 2017 10:22 AM

July 31, 2017

tl;dr: GStreamer bindings for Rust can be found here: https://github.com/sdroege/gstreamer-rs

In the last few months since RustFest in Kiev, I was working (as promised during the talk Luis and I gave at the conference) on creating (mostly autogenerated) Rust bindings for GStreamer, with a nice API that integrates properly with gtk-rs and other software. The main goal was to autogenerate as much as possible with the help of GObject-Introspection to keep the maintenance effort low, and to re-use all the GLib/GObject infrastructure that already exists in the gtk-rs ecosystem.

There are already other GStreamer bindings available, which can hopefully be replaced by these very soon.

Most of the time spent on this in the last months went into making the code generator work well with GStreamer, adding lots of missing features, and adding missing bits and pieces to the GLib/GObject bindings. That should all be in place now, and at GUADEC I had some time to finish the last pieces of the GStreamer core API. Everything needed from the GStreamer core API should be covered now, as well as lots of pieces that are not needed very often. The code can currently be found here.

Example code

There is currently no real documentation available. The goal is to autogenerate that too, from the existing C documentation. Until then, the main GStreamer documentation, like the Application Development Manual and the Core API Reference, should be helpful; almost everything maps 1:1.

I’ve written some example applications though, which can be found in the same GIT repository.

GTK

The first example is a GTK application, using the gtkglsink (or gtksink if no GL support is available) GStreamer elements to show a test video in a GTK window and the current playback position in a GTK label below the video. It uses the GLib event loop for handling GTK events (closing the window), timeouts (querying the position every few hundred ms), and the GStreamer messages (errors, etc.).

The code can be found here.

Tokio

The next example uses Tokio instead of the GLib event loop for handling the GStreamer messages. The same should also work with anything else based on futures-rs. Tokio is a Rust library for asynchronous IO, based on Futures.

For this to work, a BusStream struct is defined that implements the Stream trait (from futures-rs) and produces all messages that are available on the bus. This stream is then passed to the Tokio Core (basically the event loop), and each message is handled from there.

The code can be found here.

Pipeline building, Pad Probes, Buffers, Events, Queries, …

Various other examples also exist.

There is one example that is dynamically building a pipeline with decodebin for decoding a file with audio and/or video.

Another example builds a minimal version of the gst-launch-1.0 tool (with no features other than creating a pipeline from a string and running it). It does not use the GLib main loop for handling the GStreamer messages, but there is a slightly modified variant of the same example that uses the GLib main loop instead.

Yet another example builds a small pipeline with an audio test source producing a 440Hz sine wave and calculates the RMS of the audio. For this, a pad probe is installed on the source pad of the audio source, and the samples of each buffer are analyzed.

And the three last examples show how to work with queries and events, plus a rather boring one that just uses playbin.

What next?

Overall, I have to say that using the bindings is already much more convenient and fun than writing GStreamer code in C. So if you're thinking of writing a new GStreamer application and would consider Rust for it: I can highly recommend that 🙂

So what comes next? First of all, the next step is to make sure that the bindings work for everybody. So if you are using the old bindings, or if you want to use GStreamer from Rust, now is a good time to start testing: let me know of any problems, any inconveniences, anything that seems ugly, or anything you need that is still missing.

In the meantime I’m going to make the bindings a full replacement of the old ones. For this various non-core GStreamer libraries have to be added too: gstreamer-app, gstreamer-video, gstreamer-audio and gstreamer-pbutils. Hopefully this can all be done over the next few weeks, but at this point all API that is needed to use GStreamer is already available. Only some convenience/helper API is missing.

And of course my efforts to make it easy to write GStreamer plugins in Rust haven't ended yet. I'll continue working on that, and will also move that work over to these bindings to remove a lot of code duplication.

on July 31, 2017 03:21 PM

July 28, 2017

The deadline for the CFP for the containers microconference at Linux Plumbers is coming up next week. See https://discuss.linuxcontainers.org/t/containers-micro-conference-at-linux-plumbers-2017/262 for more information.


on July 28, 2017 10:51 PM

Hello MAASters! The MAAS development summaries are back!

Over the past three weeks the team has made good progress on three main areas: the development of 2.3, maintenance for 2.2, and our new and improved Python library (libmaas).

MAAS 2.3 (current development release)

The first official MAAS 2.3 release has been prepared. It is currently undergoing a heavy round of testing and will be announced separately once completed. In the past three weeks, the team has:

  • Completed Upstream Proxy UI
    • Improved the UI to better configure the different proxy modes.
    • Added the ability to configure an upstream proxy.
  • Network beaconing & better network discovery
  • Started Hardware Testing Phase 2
    • UX team has completed the initial wireframes and gathered feedback.
    • Started changes to collect and gather better test results.
  • Started Switch modeling
    • Started changes to support switch and switch port modeling.
  • Bug fixes
    • LP: #1703403 – regiond workers can use too many postgres connections
    • LP: #1651165 – Unable to change disk name using maas gui
    • LP: #1702690 – [2.2] Commissioning a machine prefers minimum kernel over commissioning global
    • LP: #1700802 – [2.x] maas cli allocate interfaces=<label>:ip=<ADDRESS> errors with unknown interfaces constraint
    • LP: #1703713 – [2.3] Devices don’t have a link from the DNS page
    • LP: #1702976 – Cavium ThunderX lacks power settings after enlistment, apparently due to missing kernel
    • LP: #1664822 – Enable IPMI over LAN if disabled
    • LP: #1703713 – Fix missing link on domain details page
    • LP: #1702669 – Add index on family(ip) for each StaticIPAddress to improve execution time of the maasserver_routable_pairs view.
    • LP: #1703845 – Set the re-check interval for rack to region RPC connections to the lowest value when an RPC connection is closed or lost.

MAAS 2.2 (current stable release)

  • Last week, MAAS 2.2 was SRU’d into the Ubuntu Archives and to our latest LTS release, Ubuntu Xenial, replacing the MAAS 2.1 series.
  • This week, a new MAAS 2.2 point release has also been prepared. It is currently undergoing heavy testing. Once testing is completed, it will be released in a separate announcement.

Libmaas

Last week, the team worked on extending libmaas:

  • Added ability to create machines.
  • Added ability to commission machines.
  • Added ability to manage MAAS networking definitions, including Subnets, Fabrics, Spaces, VLANs, IP Ranges, Static Routes and DHCP.
on July 28, 2017 08:46 AM