July 25, 2017

Plasma’s Vision

Sebastian Kügler

Plasma — Durable, Usable, Elegant.
Over the past weeks, we (KDE’s Plasma team) have distilled the reasons why we do what we do, and what we want to achieve into a vision statement. In this article, I’d like to present Plasma’s vision and explain a bit what is behind it. Let’s start with the statement itself, though:

Plasma is a cross-device work environment by the KDE Community where trust is put on the user’s capacity to best define her own workflow and preferences.

Plasma is simple by default, a clean work area for real-world usage which intends to stay out of your way.
Plasma is powerful when needed, enabling the user to create the workflow that makes her more effective to complete her tasks.

Plasma never dictates the user’s needs, it only strives to solve them. Plasma never defines what the user is allowed to do, it only ensures that she can.

Our motivation is to enable actual work to happen, across devices, across different platforms, using any application needed.

We build to be durable, we create to be usable, we design to be elegant.

I’ve marked a few especially important bits in bold; let’s get into a bit more detail:

Cross-device — Plasma is a work environment for different classes of devices; it adapts to the form-factor and offers a user interface which is suitable for the device’s characteristics (input methods such as touchscreen, mouse, keyboard) and constraints (screen size, memory and CPU capabilities, etc.).

Define the workflow — Plasma is a flexible tool that can be set up as the user wishes and needs to make her more effective, to get the job done. Plasma is not a purpose in itself, it rather enables and gets out of the way. This isn’t to say that we’ll introduce every single option one can think of, but we strive to serve many users’ use cases.

Simple by default means that Plasma is self-explanatory to new users, and that it offers a clean and sober interface in its default state. We don’t want to overwhelm the user, but present a serene and friendly environment.

Powerful when needed on the other hand means that under the hood, Plasma offers tremendous power that allows almost any job to be done efficiently and without flailing.

We build to be durable, we create to be usable, we design to be elegant. — The end result is a stable workspace that the user can trust, that is friendly and easy to use, and that is beautiful and elegant in how it works.

on July 25, 2017 10:52 AM
This is a suite of blog posts explaining how we snapped Ubuntu Make, which is a complex software study case with deep interactions with the system. For more background on this, please refer to our first blog post. For how we got here, have a look at the last blog post.

Testing on 14.04 LTS

Ok, we now have a working classic snap on 16.04 LTS. However, we saw that a word of caution is warranted due to the way classic snaps give us access to the system.
on July 25, 2017 07:46 AM

July 24, 2017

GNOME

We’re working on adding captive portal detection to Artful. We think it would be good if there were an option in the privacy settings to enable or disable this. We’ve done some initial work as preparation: some patches have been submitted upstream to NetworkManager to allow such an option to be added. They are currently awaiting review.

ISO cleanups

Work continues to move deprecated components out of the desktop ISO and to universe. This week’s list includes indicator-application and indicator-messages.

Snaps

Updated versions of gedit, gnome-sudoku, quadrapassel, gnome-dictionary, gnome-calculator, and gnome-clocks are now in the edge channel of the store, built using the gnome-3-24 platform snap and content interface.
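
If you want to try one of these, the edge channel can be requested explicitly when installing or refreshing the snap; for example (these builds are still in testing, and the channel layout may change):

    sudo snap install gnome-calculator --edge
    # or move an already installed snap over to the edge channel
    sudo snap refresh gnome-calculator --edge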

Video & Audio

A workaround has been uploaded for GDM, which was blocking the A2DP high-quality Bluetooth profile from being activated in the user session.

Updates

  • LibreOffice 5.3.4, with a workaround for an i386 kernel issue which had been blocking the previous version.
  • GStreamer 1.12.2
  • Evolution 3.24.2
  • The nplan autopkgtest got fixed, unblocking the network-manager update.

Unity 7

16.04 continues to be supported, and we’ve been working on some further improvements for Unity that are targeted there. In particular there are improvements currently being developed to the low graphics mode that will benefit users of low powered systems and VMs.

QA

GNOME Software has a documented test plan now.

on July 24, 2017 07:09 PM

git ubuntu illustration

Back in 2014, I published some information and tooling on using git for Ubuntu development, even though most Ubuntu development wasn’t done with git at the time.

Three years on, this work has expanded significantly. Most of the server team is using git for daily work when touching Ubuntu packaging. We have expanded our tooling. Given the significant interest we’ve received, we now want to develop this work further and make git the way of working with Ubuntu’s source code. Our plan is to do this with no disruption to existing developer workflows.

This post is part of a blog series. Here’s an index of all our planned posts. We’ll amend this index and update links as we go.

  • Developing Ubuntu using git (this post)
  • git ubuntu clone
  • The imported repositories
  • Available branches
  • History and parenting
  • Repository objects
  • Rich history
  • Wrapper subcommands

Why is this so hard?

Most Free Software development projects already use git. So why has Ubuntu taken so long?

Unlike most software projects, Ubuntu (like other distributions) derives its sources from upstreams, rather than being the originator of the software. This means that we do not “control” the git repositories. Repository elements such as branch names, tag names, branching policies and so forth are not up to us; and nor is the choice to use git in the first place.

For git use in Ubuntu development to be effective, we need these repository elements to follow the same schemes across all our packages. But upstream projects use different schemes for their branches, tags and merge workflows. So our task isn’t as trivial as just adopting upstreams’ git trees.

Existing packaging repositories

While Ubuntu makes key changes to packages as needed, the long tail of packages that we ship are derived from Debian with no changes. Debian package maintainers use the VCS system of their choice, which may be nothing, git, or something else like Mercurial or Bazaar. These repositories are arranged in the manner of the package maintainers’ choices. They may be derived from their upstreams’ respective repositories, or they may instead be based on wholesale upstream code “import” commits done when the packaging is updated to a latest upstream release.

Right now, 68% of source packages in Ubuntu are listed as having their packaging maintained in git (whether in Debian or Ubuntu):

Note: the data I have used cannot tell us the difference between a package not being maintained in a VCS and the package maintainer not having declared in metadata that a particular VCS is in use.

Choices for Ubuntu and git

We’re not in a position to mandate that everyone uses git. Even if we did do that for Ubuntu, we cannot expect to mandate it in Debian, and certainly not in every upstream project in our repositories.

One of the problems we want to solve is to be able to answer the question “where do I git clone from to get the Ubuntu source for package X?”. We don’t want to be forced to say “ah, but for package X, it’s the same as Debian, and they’re using Mercurial in this case, so you can’t git clone; you have to use hg clone from this other place”, and then have the answer be different for package Y and different again for package Z. We know this will happen for 3 out of 10 packages. We want to eliminate all the edge cases so that, for git clone against any Ubuntu package, the repository structures are all consistent and all subsequent developer expectations always work.

To achieve this consistency, we need to find a way to use git for all Ubuntu packages: regardless of what VCS Debian or upstreams use for each package and project; and regardless of their different branching, tagging and merging models.

We think we’ve achieved this with our design; more on this in a future post.

Ubuntu, Bazaar, and UDD

Some readers may be familiar with a previous effort in Ubuntu, UDD, which was largely a similar effort but with Bazaar. Nine years later, git has largely won the “VCS wars”, and appears to be preferred by the majority of developers. Our current effort could be seen as “UDD, but with git”, if you wish.

Project goals

We’d like to avoid flag days and forced workflow changes. Ubuntu git integration will develop over time, but we don’t expect Ubuntu developers to be forced to switch to it. We’d prefer for developers to choose to use our integration on its own merits, switching over if and when they feel it appropriate. If, after further consultation with users and Launchpad developers, we did switch to git as the primary source of truth from Launchpad, we expect to be able to provide a wrapper for backwards compatibility with dput.

Being central to all code, “moving to git” can be somewhat all-encompassing in terms of desirable use cases. Our original goal was very specific: to make what we call “Ubuntu package merges” easier. I achieved this back in 2014, and the server team has since made big improvements to this particular use case. Now we want to use git for much more, so this necessarily encompasses a wide range of use cases. We have accepted the following use cases as falling within the scope of our project:

For drive-by contributors and new developers

  • Provide a single place from which any developer can git clone to see the current state of a package across all our releases, and to provide branches against which pull requests can be received.

  • Make automatic checking of contributions possible via a linter, for contributors to run locally but also run by a bot automatically against pull requests, to tighten the feedback loop where automatic advice is possible.

  • Simplify and flatten the learning curve by eliminating the need to use some of the arcane tooling that has built up over the decades in the case of simple contributions. Most developers either know git or have many others readily available to teach them git, so we can take advantage of this instead of requiring them to learn pull-lp-source, debdiff, etc.

For routine Ubuntu development

  • Faster and more accurate “Ubuntu package merges” by using git (already achieved).

  • Collaborative working for sets of complex package changes, such as SRUs and backports, so that planned changes can be shared, reviewed and amended before upload.

For experienced Ubuntu developers

  • Automatic linting of contributions to allow contributors to fix some issues directly and immediately themselves, to relieve sponsor and review workload.

  • All publication history available for debugging, bisection and other general software archeology tasks.

  • git push to upload to the Ubuntu archives.

Current status

We have an importer running that automatically imports new source package publications into git, so the entire publication history of that package becomes available to git users. Until we’re ready to scale this up, we’re importing a subset of packages from a whitelist, with other packages imported on request for interested developers. You can also run the importer yourself locally on any package.

Tooling is available as an extension to git providing a set of subcommands (git ubuntu ...). The CLI is still experimental and subject to change, and we have a set of high-level subcommands planned that we have yet to write.
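
As a rough sketch of how this might look in practice (the snap name, channel, and exact invocation below are assumptions on my part and may change while the CLI is experimental):

    # install the experimental tooling; packaging may differ
    sudo snap install --classic --edge git-ubuntu

    # clone the imported publication history of a source package
    git ubuntu clone some-source-package
    cd some-source-package

    # from here on it is ordinary git
    git log --oneline
    git branch -a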

Experimental status

If you’re interested, please do take a look! We’d appreciate feedback. However, note that we aren’t “production ready” yet:

  • There are a number of developer UX issues we’d like to fix before declaring the CLI “stable”.

  • For scaling reasons, Launchpad needs improved git shared object support before we’re ready for developers to push cloned package repositories en-masse.

  • We expect to re-run the importer on all packages before declaring ourselves ready, so git commit hashes for our published branches will change until we declare them stable.

What would make a 1.0

  1. Launchpad shared object support.
  2. Hash stability declared.
  3. Developer UX issues fixed.
  4. Anything else? Please let us know what you think should be essential.

The wrapper

On our way, we hit a bunch of edge cases which may confuse developers. Some examples:

  • An upstream may have placed a .gitattributes file that will unexpectedly “modify” the upstream source ($Id$ etc) as we add packaging commits.

  • git will by default convert line endings and suchlike for you; but in packaging work, we want to leave the upstream sources untouched except where we have some reason to explicitly patch them.

  • The build may depend on empty directories, which git cannot currently represent.

These edge cases can be worked around, often automatically, but this won’t happen when new developers use git clone directly; a rough manual workaround is sketched below.
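
For illustration, this is roughly what neutralizing the first two issues by hand could look like in a plain clone; the wrapper automates equivalents of this, though the exact attributes it sets are an assumption here:

    cd some-source-package
    # override any in-tree .gitattributes: disable ident ($Id$) expansion and
    # end-of-line conversion so upstream files stay byte-for-byte untouched
    echo '* -text -eol -ident' >> .git/info/attributes
    # (empty directories still need separate handling, since git cannot record them)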

To avoid having to introduce too much at once, we have written a wrapper that handles these edge cases automatically, or at least warns you about them on the occasions that they are relevant. There are also some common repetitive actions that are specific to our workflows; the wrapper also composes these for convenience to save developer time.

We don’t want to mandate use of our wrapper. To better suit advanced developers, we’ve designed everything to be directly accessible without the wrapper, and we consider this method of access to be a first class citizen in our work. We’ll talk more about the wrapper and its capabilities in a future post.

Next

In the next post, we’ll cover details of where the imported repositories are and what they look like.

on July 24, 2017 04:29 PM

One of the most challenging components of building a community is how to (a) determine what to measure, (b) measure it effectively, and (c) interpret those measurements in a way that drives improvements.

Of course, what complicates this is that communities are a mixture of tangible metrics (things we can measure with a computer), and intangible (things such as “enablement”, “happiness”, “satisfaction” etc).

Here is a presentation I delivered recently that provides an overview of the topic and plenty of pragmatic guidance for how you put this into action:

If you can’t see the video, click here.

The post Video: Measuring Community Health appeared first on Jono Bacon.

on July 24, 2017 03:46 AM

July 22, 2017

Software from the Source

Sebastian Kügler

In this article, I am outlining an idea for an improved process of deploying software to Linux systems. It combines the advantages of traditional, package-management-based systems with containerized software delivered through systems such as Flatpak, Snap, or AppImage. An improved process allows us to make software deployment more efficient across the whole Free software community, have better-supported software on users’ systems, and allow for better quality at the same time.

Where we are going
In today’s Linux and Free software ecosystems, users usually receive all their software from one source. This usually means that software is well integrated with the system, can be tested in combination, and support comes from a single vendor. Compared to systems in which individual software packages are downloaded from their respective vendors and then installed manually, this has huge advantages, as it makes it easy to get updates for everything installed on your system with a single command. The base system and the software come from the same hands and can be tested as a whole. This ease of upgrading is almost mind-boggling to people who are used to the Windows world, where you’d download 20 .exe installer files post-OS-install and have to update them individually, a hugely time-consuming process and at times outright dangerous, as software easily gets out of date.

Traditional model of software deployment
There are also downsides to how we currently handle software deployment and installation, and most of them revolve around update cycles. There is always a middle man who decides when and what to upgrade. This results in applications getting out of date, which leads to a number of real problems, as security and bug fixes do not make it to users in a timely fashion:

  • It’s not unusual that software installed on a “supported” Linux system is outdated and not at all supported upstream anymore on the day it reaches the user. Worse, policies employed by distributions (or more generally, operating system vendors) will prevent some software packages from ever getting an update other than the most critical security fix within the whole support cycle.
  • Software out in the wild with its problems isn’t supported upstream; bug reports reaching the upstream developers are often invalid and have already been fixed in newer versions, or users are asked to test the latest version, which most of the time isn’t available for their OS. This makes it harder to address problems with the software and it’s frustrating for both users and developers.
  • Even if bugs are fixed in a timely fashion, the likelihood of users of traditional distributions actually receiving these updates without manually installing them is small, especially if users are not aware of it.
  • Packaging software for a variety of different distributions is a huge waste of time. While this can be automated to some extent, it’s still less than ideal as work is duplicated, packaging bugs do happen simply because distribution packagers do not fully understand how a specific piece of software is built and best deployed (there’s a wide variety of software after all) and software stacks aren’t often well-aligned. (More on that later!)
  • Support cycles differ, leading to two problems:
    • Distros need to guarantee support for software they didn’t produce
    • Developers are not sure how much value there is in shipping a release and subsequent bugfix releases, since it takes usually at least months until many users upgrade their OS and receive the new version.
    • Related to that, it can take a long time until a user confirms a bug fix.
  • There is only a small number of distributions who can package every single piece of useful software available. This essentially limits the user’s choice because his niche distro of choice may simply not have all needed software available.

The value of downstreams

One argument that has been made is that downstreams do important work, too. An example: legal or licensing problems are often found during reviews at SUSE, one of KDE’s downstream partners. These are often fed back to KDE’s developers, where the problems can be fixed and made part of upstream. This doesn’t have to change at all; in fact, with a quicker deployment process, we’re actually able to ship these fixes to users more quickly. Likewise, QA that currently happens downstream should actually shift more to upstream, so fixes get integrated and deployed quicker.

One big problem that we are currently facing is the variety of software stacks our downstreams use. An example that often bites us is that Linux distributions are combining applications with different versions of Qt. This is not only problematic on desktop form-factors, but has been a significant problem on mobile as well. Running an application against the same version of Qt that developers developed or tested it against means fewer bugs due to a smaller matrix of software stacks, resulting in less user-visible bugs.

In short: We’d be better off if work happening downstream happens more upstream, anyway.

Upstream as software distributor

Software directly from its source
So, what’s the idea? Let me explain what I have in mind. This is a bit of a radical idea, but given my above train of thoughts, it may well solve a whole range of problems that I’ve explained.

Linux distributors supply a base system, but most of the UI layers, i.e. the user-visible parts, come directly from KDE (or other vendors, but let’s assume KDE for now). The user gets to run a stable base that boots a system that supports all his hardware and gets updated according to the user’s flavor, but the apps and relevant libraries come from upstream KDE, and are maintained, tested and deployed from there. For many of the applications, the middle-man is cut out.

This leads to

  • vastly reduced packaging efforts of distros as apps are only packaged once, not once per distro.
  • much, much shorter delays until a bug fix reaches the user
  • stacks that are carefully put together by those that know the apps’ requirements best

Granted, for a large part of the user’s system that stays relatively static, the current way of using packaged software works just fine. What I’m talking about are the bits and pieces that the user relies on for her productivity, the apps that are fast moving, where fixes are more critical to the user’s productivity, or simply where the user wants to stay more up to date.

Containerization allows systemic improvements

In practice, this can be done by making containerized applications more easily available to the users. Discover, Plasma’s software center, can allow the user to install software directly supplied by KDE and keep it up to date. Users can pick where to get software from, but distros can make smart choices for users as well. Leaner distros could even entirely rely on KDE (or other upstreams) shipping applications and fully concentrate on the base system and advancing that part.
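
To make this concrete, here is a minimal sketch of what that could look like with Flatpak on the command line; the remote name and URL are placeholders, not an official KDE repository:

    # add a repository hosted directly by the upstream vendor (URL is hypothetical)
    flatpak remote-add --if-not-exists kdeapps https://example.org/kdeapps.flatpakrepo

    # install an application from that source and keep it up to date later
    flatpak install kdeapps org.kde.okular
    flatpak update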

Luckily, containerization technologies now allow us to rethink how we supply users with our software and provide opportunities to let native apps on Linux systems catch up with much shorter deployment cycles and less variety in the stack, resulting in higher quality software on our users’ systems.

on July 22, 2017 04:04 PM

July 21, 2017

Back in March, we asked the HackerNews community, “What do you want to see in Ubuntu 17.10?”: https://ubu.one/AskHN

A passionate discussion ensued, the results of which are distilled into this post: http://ubu.one/thankHN

In fact, you can check that link, http://ubu.one/thankHN and see our progress so far this cycle.  We already have beta code in 17.10 available for your testing for several of those:

And several others have excellent work in progress, and will be complete by 17.10:

In summary -- your feedback matters!  There are hundreds of engineers and designers working for *you* to continue making Ubuntu amazing!

Along with the switch from Unity to GNOME, we’re also reviewing some of the desktop applications we package and ship in Ubuntu.  We’re looking to crowdsource input on your favorite Linux applications across a broad set of classic desktop functionality.

We invite you to contribute by listing the applications you find most useful in Linux in order of preference. To help us parse your input, please copy and paste the following bullets with your preferred apps in Linux desktop environments.  You’re welcome to suggest multiple apps, please just order them prioritized (e.g. Web Browser: Firefox, Chrome, Chromium).  If some of your functionality has moved entirely to the web, please note that too (e.g. Email Client: Gmail web, Office Suite: Office360 web).  If the software isn’t free/open source, please note that (e.g. Music Player: Spotify client non-free).  If I’ve missed a category, please add it in the same format.  If your favorites aren’t packaged for Ubuntu yet, please let us know, as we’re creating hundreds of new snap packages for Ubuntu desktop applications, and we’re keen to learn what key snaps we’re missing.

  • Web Browser: ???
  • Email Client: ???
  • Terminal: ???
  • IDE: ???
  • File manager: ???
  • Basic Text Editor: ???
  • IRC/Messaging Client: ???
  • PDF Reader: ???
  • Office Suite: ???
  • Calendar: ???
  • Video Player: ???
  • Music Player: ???
  • Photo Viewer: ???
  • Screen recording: ???

In the interest of opening this survey as widely as possible, we’ve cross-posted this thread to HackerNews, Reddit, and Slashdot.  We very much look forward to another friendly, energetic, collaborative discussion.

Or, you can fill out the survey here: https://ubu.one/apps1804

Thank you!
On behalf of @Canonical and @Ubuntu
on July 21, 2017 10:05 PM

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

cloud-init

  • cloud-init now supports python 3.6
  • modify Depends such that cloud-init no longer brings ifupdown into an image (LP: #1705639)
  • IPv6 Networking and Gateway fixes (LP: #1694801, #1701097)
  • Other networking fixes (LP: #1695092, #1702513)
  • Numerous CentOS networking commits (LP: #1682014, #1701417, #1686856, #1687725)

Git Ubuntu

  • Added initial linting tool (LP: #1702954)

Bug Work and Triage

IRC Meeting

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Uploads to the Development Release (Artful)

asterisk, 1:13.14.1~dfsg-2ubuntu2, vorlon
billiard, 3.5.0.2-1, None
cloud-init, 0.7.9-221-g7e41b2a7-0ubuntu3, smoser
cloud-init, 0.7.9-221-g7e41b2a7-0ubuntu2, smoser
cloud-init, 0.7.9-221-g7e41b2a7-0ubuntu1, smoser
cloud-init, 0.7.9-212-g865e941f-0ubuntu1, smoser
cloud-init, 0.7.9-210-ge80517ae-0ubuntu1, smoser
freeradius, 3.0.15+dfsg-1ubuntu1, nacc
libvirt, 3.5.0-1ubuntu2, paelzer
multipath-tools, 0.6.4-5ubuntu1, paelzer
multipath-tools, 0.6.4-3ubuntu6, costamagnagianfranco
multipath-tools, 0.6.4-3ubuntu5, jbicha
nginx, 1.12.1-0ubuntu1, teward
ocfs2-tools, 1.8.5-2, None
openldap, 2.4.44+dfsg-8ubuntu1, costamagnagianfranco
puppet, 4.10.4-2ubuntu1, nacc
python-tornado, 4.5.1-2.1~build1, costamagnagianfranco
samba, 2:4.5.8+dfsg-2ubuntu4, mdeslaur
spice, 0.12.8-2.1ubuntu0.1, mdeslaur
tgt, 1:1.0.71-1ubuntu1, paelzer
Total: 20

Uploads to Supported Releases (Trusty, Xenial, Yakkety, Zesty)

freeipmi, yakkety, 1.4.11-1.1ubuntu4~0.16.10, dannf
golang-1.6, xenial, 1.6.2-0ubuntu5~16.04.3, mwhudson
heimdal, zesty, 7.1.0+dfsg-9ubuntu1.1, sbeattie
heimdal, yakkety, 1.7~git20150920+dfsg-4ubuntu1.16.10.1, sbeattie
heimdal, xenial, 1.7~git20150920+dfsg-4ubuntu1.16.04.1, sbeattie
heimdal, trusty, 1.6~git20131207+dfsg-1ubuntu1.2, sbeattie
heimdal, xenial, 1.7~git20150920+dfsg-4ubuntu1.16.04.1, sbeattie
heimdal, trusty, 1.6~git20131207+dfsg-1ubuntu1.2, sbeattie
heimdal, yakkety, 1.7~git20150920+dfsg-4ubuntu1.16.10.1, sbeattie
heimdal, zesty, 7.1.0+dfsg-9ubuntu1.1, sbeattie
iscsitarget, xenial, 1.4.20.3+svn502-2ubuntu4.4, apw
iscsitarget, xenial, 1.4.20.3+svn502-2ubuntu4.3, smb
iscsitarget, trusty, 1.4.20.3+svn499-0ubuntu2.3, smb
iscsitarget, trusty, 1.4.20.3+svn499-0ubuntu2.3, smb
iscsitarget, xenial, 1.4.20.3+svn502-2ubuntu4.3, smb
maas, xenial, 2.2.0+bzr6054-0ubuntu2~16.04.1, andreserl
maas, yakkety, 2.2.0+bzr6054-0ubuntu2~16.10.1, andreserl
maas, zesty, 2.2.0+bzr6054-0ubuntu2~17.04.1, andreserl
mysql-5.5, trusty, 5.5.57-0ubuntu0.14.04.1, mdeslaur
mysql-5.5, trusty, 5.5.57-0ubuntu0.14.04.1, mdeslaur
mysql-5.7, zesty, 5.7.19-0ubuntu0.17.04.1, mdeslaur
mysql-5.7, xenial, 5.7.19-0ubuntu0.16.04.1, mdeslaur
mysql-5.7, xenial, 5.7.19-0ubuntu0.16.04.1, mdeslaur
mysql-5.7, zesty, 5.7.19-0ubuntu0.17.04.1, mdeslaur
nagios-images, zesty, 0.9.1ubuntu0.1, nacc
ntp, yakkety, 1:4.2.8p8+dfsg-1ubuntu2.2, paelzer
postfix, yakkety, 3.1.0-5ubuntu1, vorlon
samba, zesty, 2:4.5.8+dfsg-0ubuntu0.17.04.4, sbeattie
samba, yakkety, 2:4.4.5+dfsg-2ubuntu5.8, sbeattie
samba, xenial, 2:4.3.11+dfsg-0ubuntu0.16.04.9, sbeattie
samba, trusty, 2:4.3.11+dfsg-0ubuntu0.14.04.10, sbeattie
samba, xenial, 2:4.3.11+dfsg-0ubuntu0.16.04.9, sbeattie
samba, trusty, 2:4.3.11+dfsg-0ubuntu0.14.04.10, sbeattie
samba, yakkety, 2:4.4.5+dfsg-2ubuntu5.8, sbeattie
samba, zesty, 2:4.5.8+dfsg-0ubuntu0.17.04.4, sbeattie
spice, zesty, 0.12.8-2ubuntu1.1, mdeslaur
spice, xenial, 0.12.6-4ubuntu0.3, mdeslaur
spice, trusty, 0.12.4-0nocelt2ubuntu1.5, mdeslaur
spice, xenial, 0.12.6-4ubuntu0.3, mdeslaur
spice, trusty, 0.12.4-0nocelt2ubuntu1.5, mdeslaur
spice, zesty, 0.12.8-2ubuntu1.1, mdeslaur
sssd, xenial, 1.13.4-1ubuntu1.6, slashd
walinuxagent, zesty, 2.2.14-0ubuntu1~17.04.1, sil2100
walinuxagent, yakkety, 2.2.14-0ubuntu1~16.10.1, sil2100
walinuxagent, xenial, 2.2.14-0ubuntu1~16.04.1, sil2100
walinuxagent, trusty, 2.2.14-0ubuntu1~14.04.1, sil2100
xen, zesty, 4.8.0-1ubuntu2.2, mdeslaur
xen, yakkety, 4.7.2-0ubuntu1.3, mdeslaur
xen, xenial, 4.6.5-0ubuntu1.2, mdeslaur
xen, trusty, 4.4.2-0ubuntu0.14.04.12, mdeslaur
xen, xenial, 4.6.5-0ubuntu1.2, mdeslaur
xen, trusty, 4.4.2-0ubuntu0.14.04.12, mdeslaur
xen, yakkety, 4.7.2-0ubuntu1.3, mdeslaur
xen, zesty, 4.8.0-1ubuntu2.2, mdeslaur
Total: 54

Contact the Ubuntu Server team

on July 21, 2017 06:42 PM

Alex Ellis has an excellent tutorial on how to install Kubernetes in 10 minutes. It is a summarized version of what you can find in the official documentation. Read those first; this is an even shorter version with my choices mixed in.

We’ll install 16.04 on some machines; I’m using three. I just chose to use Weave instead of sending you to a choose-your-own-network page, as you have other stuff to learn before you dive into an opinion on a networking overlay. We’re also in a lab environment, so we assume some things, like your machines being on the same network.

Prep the Operating System

First let’s take care of the OS. I set up automatic updates, ensure the latest kernel is installed, and then ensure we’re all up to date, whatever works for you:

sudo -s
dpkg-reconfigure unattended-upgrades
apt install linux-generic-hwe-16.04
apt update
apt dist-upgrade
reboot

Prep each node for Kubernetes:

This is just installing docker and adding the kubernetes repo; we’ll be root for these steps:

sudo -s
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main  
EOF

apt update
apt install -qy docker.io kubelet kubeadm kubernetes-cni

On the master:

Pick a machine to be a master, then on that one:

kubeadm init

And then follow the directions to copy your config file to your user account. We only have a few commands left that need sudo, so you can safely exit out and continue with your user account:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Let’s install the network, and then allow workloads to be scheduled on the master (for a lab we want to use all our hardware for workloads!):

kubectl apply -f https://git.io/weave-kube-1.6
kubectl taint nodes --all node-role.kubernetes.io/master-

On each worker node:

On each machine you want to be a worker (yours will be different; the output of kubeadm init will tell you what to do):

sudo kubeadm join --token 030b75.21ca2b9818ca75ef 192.168.1.202:6443 

You might need to tack on a --skip-preflight-checks, see #347, sorry for the inconvenience.

Ensuring your cluster works

It shouldn’t take long for the nodes to come online, just check em out:

$ kubectl get nodes
NAME       STATUS     AGE       VERSION
dahl       Ready      45m       v1.7.1
hyperion   NotReady   16s       v1.7.1
tediore    Ready      32m       v1.7.1

$ kubectl cluster-info
Kubernetes master is running at https://192.168.1.202:6443
KubeDNS is running at https://192.168.1.202:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'
$

Ok, your cluster is rocking. Now:

Set up your laptop

I don’t like to be ssh’ed into my cluster unless I’m doing maintenance, so now that we know stuff is working let’s copy the kubernetes config from the master node to our local workstation. You should know how to copy files around systems already, but here’s mine for reference:

 sudo scp /etc/kubernetes/admin.conf jorge@ivory.local:/home/jorge/.kube/config

I don’t need the entire Kubernetes repo on my laptop, so we’ll just install the snap for kubectl and check that I can access the server:

  sudo snap install kubectl --classic
  kubectl get nodes
  kubectl cluster-info

Don’t forget to turn on autocompletion!
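
For bash, that’s typically a one-liner; a quick sketch:

  # enable kubectl completion in the current shell
  source <(kubectl completion bash)
  # and make it stick for future shells
  echo 'source <(kubectl completion bash)' >> ~/.bashrc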

Deploy your first application

Let’s deploy the Kubernetes dashboard:

   kubectl create -f https://git.io/kube-dashboard
   kubectl proxy

Then hit up http://localhost:8001/ui.

That’s it, enjoy your new cluster!

Joining the Community

kubeadm is brought to you by SIG Cluster Lifecycle, they have regular meetings that anyone can attend, and you can give feedback on the mailing list. I’ll see you there!

on July 21, 2017 11:00 AM

A huge THANK YOU to the entire HackerNews community, from the Ubuntu community!  Holy smokes...wow...you are an amazing bunch!  Your feedback in the thread, "Ask HN: What do you want to see in Ubuntu 17.10?" is almost unbelievable!

We're truly humbled by your response.

I penned this thread, somewhat on a whim, from the Terminal 2 lounge at London Heathrow last Friday morning before flying home to Austin, Texas.  I clicked "submit", closed my laptop, and boarded an 11-hour flight, wondering if I'd be apologizing to my boss and colleagues later in the day, for such a cowboy approach to Product Management...

When I finally signed onto the in-flight WiFi some 2 hours later, I saw this post at the coveted top position of HackerNews page 1, with a whopping 181 comments (1.5 comments per minute) in the first two hours.  Impressively, it was only 6am on the US west coast by that point, so SFO/PDX/SEA weren't even awake yet.  I was blown away!

This thread is now among the most discussed threads ever in the history of HackerNews, with some 1115 comments and counting at the time of this blog post.

2530 comments 3125 points 2016-06-24 UK votes to leave EU dmmalam
2215 comments 1817 points 2016-11-09 Donald Trump is the president-elect of the U.S. introvertmac
1448 comments 1330 points 2016-05-31 Moving Forward on Basic Income dwaxe
1322 comments 1280 points 2016-10-18 Shame on Y Combinator MattBearman
1215 comments 1905 points 2015-06-26 Same-Sex Marriage Is a Right, Supreme Court Rules imd23
1214 comments 1630 points 2016-12-05 Tell HN: Political Detox Week – No politics on HN for one week dang
1121 comments 1876 points 2016-01-27 Request For Research: Basic Income mattkrisiloff
*1115 comments 1333 points 2017-03-31 Ask HN: What do you want to see in Ubuntu 17.10? dustinkirkland
1090 comments 1493 points 2016-10-20 All Tesla Cars Being Produced Now Have Full Self-Driving Hardware impish19
1088 comments 2699 points 2017-03-07 CIA malware and hacking tools randomname2
1058 comments 1188 points 2014-03-16 Julie Ann Horvath Describes Sexism and Intimidation Behind Her GitHub Exit dkasper
1055 comments 2589 points 2017-02-28 Ask HN: Is S3 down? iamdeedubs
1046 comments 2123 points 2016-09-27 Making Humans a Multiplanetary Species [video] tilt
1030 comments 1558 points 2017-01-31 Welcome, ACLU katm
1013 comments 4107 points 2017-02-19 Reflecting on one very, very strange year at Uber grey-area
1008 comments 1990 points 2014-04-10 Drop Dropbox PhilipA

Rest assured that I have read every single one, and many of my colleagues have followed along closely as well.

In fact, to read and process this thread, I first attempted to print it out -- but cancelled the job before it fully buffered, when I realized that it's 105 pages long!  Here's the PDF (1.6MB), if you're curious, or want to page through it on your e-reader.

So instead, I wrote the following Python script, using the HackerNews REST API, to download the thread from Google Firebase into a JSON document, and import it into MongoDB for item-by-item processing.  Actually, this script will work against any HackerNews thread, and it recursively grabs nested comments.  Next time you're asked to write a recursive function on a whiteboard for a Google interview, hopefully you'll remember this code!  :-)

$ cat ~/bin/hackernews.py
#!/usr/bin/python3

import json
import requests
import sys

#https://hacker-news.firebaseio.com/v0/item/14002821.json?print=pretty

def get_json_from_url(item):
    url = "https://hacker-news.firebaseio.com/v0/item/%s.json" % item
    data = json.loads(requests.get(url=url).text)
    #print(json.dumps(data, indent=4, sort_keys=True))
    if "kids" in data and len(data["kids"]) > 0:
        for k in data["kids"]:
            data[k] = json.loads(get_json_from_url(k))
    return json.dumps(data)


data = json.loads(get_json_from_url(sys.argv[1]))
print(json.dumps(data, indent=4, sort_keys=False))

It takes 5+ minutes to run, so you can just download a snapshot of the JSON blob from here (768KB), or if you prefer to run it yourself...

$ hackernews.py 14002821 | tee 14002821.json

First some raw statistics...

  • 1109 total comments
  • 713 unique users contributed a comment
  • 211 users contributed more than 1 comment
    • 42 comments/replies contributed by dustinkirkland (that's me)
    • 12 by vetinari
    • 11 by JdeBP
    • 9 by simosx and jnw2
  • 438 top level comments
    • 671 nested/replies
  • 415 properly formatted uses of "Headline:"
    • Thank you!  That was super useful in my processing of these!
  • 519 mentions of Desktop
  • 174 mentions of Server
  • 69 + 64 mentions of Snaps and Core

I'll try to summarize a few of my key interpretations of the trends, having now processed the entire discussion.  Sincere apologies in advance if I've (a) misinterpreted a theme, (b) skipped your favorite theme, or (c) conflated concepts.  If any of these are the case, well, please post your feedback in the HackerNews thread associated with this post :-)

First, grouped below are some of the Desktop themes, with some fuzzy, approximate "weighting" by the number of pertinent discussions/mentions/vehemence.
  • Drop MIR/Unity for Wayland/Gnome (351 weight) [Beta available, 17.10]
    • Release/GA Unity 8 (15 weight)
    • Easily the most heavily requested major change in this thread was for Ubuntu to drop MIR/Unity in favor of Wayland/Gnome.  And that's exactly what Mark Shuttleworth announced in an Ubuntu Insights post here today.  There were a healthy handful of Unity 8 fans, calling for its GA, and more than a few HackerNews comments lamenting the end of Unity in this thread.
  • Improve HiDPI, 4K, display scaling, multi-monitor (217 weight) [Beta available, 17.10]
    • For the first time in a long time, I feel like a laggard in the technology space!  I own a dozen or so digital displays but not a single 4K or HiDPI monitor.  So while I can't yet directly relate, the HackerNews community is keen to see better support for multiple, high resolution monitors and world class display scaling.  And I suspect you're just a short year or so ahead of much of the rest of the world.
  • Make track pad, touch gestures great (129 weight) [Beta available, 17.10]
    • There's certainly an opportunity to make the track pad and touch gestures in the Ubuntu Desktop "more Apple-like".
  • Improve Bluetooth, WiFi, Wireless, Network Manager (97 weight) [Beta available, 17.10]
    • This item captures some broad, general requests to make Bluetooth and Wireless more reliable in Ubuntu.  It's a little tough to capture an exact work item, but the relevant teams at Canonical have received the feedback.
  • Better mouse settings, more options, scroll acceleration (89 weight) [Beta available, 17.10]
    • Similar to the touch/track pad request, there was a collection of similar feedback suggesting better mouse settings out-of-the-box, and more fine grained options. 
  • Better NVIDIA, GPU support (87 weight) [In-progress, 17.10]
    • NVIDIA GPUs are extensively used in both Ubuntu Desktops and Servers, and the feedback here was largely around better driver availability, more reliable upgrades, CUDA package access.  For my part, I'm personally engaged with the high end GPU team at NVIDIA and we're actively working on a couple of initiatives to improve GPU support in Ubuntu (both Desktop and Server).
  • Clean up Network Manager, easier VPN (71 weight) [Beta available, 17.10]
    • There were several requests around both Network Manager, and a couple of excellent suggestions with respect to easier VPN configuration and connection.  Given the recent legislation in the USA, I for one am fully supportive of helping Ubuntu users do more than ever before to protect their security and privacy, and that may entail better VPN support.
  • Easily customize, relocate the Unity launcher (53 weight) [Deprecated, 17.10]
    • This thread made it abundantly clear that it's important to people to be able to move, hide, resize, and customize their launcher (Unity or Gnome).  I can certainly relate, as I personally prefer my launcher at the bottom of the screen.
  • Add night mode, redshift, f.lux (42 weight)  [Beta available, 17.10]
    • This request is one of the real gems of this whole exercise!  This seems like a nice, little, bite-sized feature that we may be able to include with minimal additional effort.  Great find.
  • Make WINE and Windows apps work better (10 weight)
    • If Microsoft can make Ubuntu on Windows work so well, why can't Canonical make Windows on Ubuntu work?  :-)  If it were only so easy...  For starters, the Windows Subsystem for Linux "simply" needs to implement a bunch of Linux syscalls, whose source is entirely available.  So there's that :-)  Anyway, this one is really going to be a tough one for us to move the needle on...
  • Better accessibility for disabled users, children (9 weight)
    • As a parent, and as a friend of many Ubuntu users with special needs, this is definitely a worthy cause.  We'll continue to try and push the envelope on accessibility in the Linux desktop.
  • LDAP/ActiveDirectory integration out of the box (7 weight)
    • This is actually a regular request of Canonical's corporate Ubuntu Desktop customers.  We're generally able to meet the needs of our enterprise customers around LDAP and ActiveDirectory authentication.  We'll look at what else we can do natively in the distro to improve this.
  • Add support for voice commands (5 weight)
    • Excellent suggestion.  We've grown so accustomed to "Okay Google...", "Alexa...", "Siri..."  How long until we can, "Hey you, Ubuntu..."  :-)

Grouped below are some themes, requests, and suggestions that generally apply to Ubuntu as an OS, or specifically as a cloud or server OS.
  • Better, easier, safer, faster, rolling upgrades (153 weight)
    • The ability to upgrade from one release of Ubuntu to the next has long been one of our most important features.  A variety of requests have identified a few ways that we should endeavor to improve: snapshots and rollbacks, A/B image based updates, delta diffs, easier with fewer questions, super safe rolling updates to new releases.  Several readers suggested killing off the non-LTS releases of Ubuntu and only releasing once a year, or every 2 years (which is the LTS status quo).  We're working on a number of these, with much of that effort focused on Ubuntu Core.  You'll see some major advances around this by Ubuntu 18.04 LTS.
  • Official hardware that just-works, Nexus-of-Ubuntu (130 weight)
    • This is perhaps my personal favorite suggestion of this entire thread -- for us to declare a "Nexus-of-each-Ubuntu-release", much like Google does for each major Android release.  Hypothetically, this would be an easily accessible, available, affordable hardware platform, perhaps designed in conjunction with an OEM, to work perfectly with Ubuntu out of the box.  That's a new concept.  We do have the Ubuntu Hardware Certification Programme, where we clearly list all hardware that's tested and known to work well with Ubuntu.  And we do work with major manufacturers on some fantastic desktops and laptops -- the Dell XPS and System76 both immediately come to mind.  But this suggestion is a step beyond that.  I'm set to speak to a few trusted partners about this idea in the coming weeks.
  • Lighter, smaller, more minimal (113 weight) [Beta Available, 17.10]
    • Add x-y-z-favorite-package to default install (105 weight)
    • For every Ubuntu user that wants to remove stuff from Ubuntu, to make it smaller/faster/lighter/secure, I'll show you another user who wants to add something else to the default install :-)  This is a tricky one, and one that I'm always keen to keep an eye on.  We try very hard to strike a delicate balance between minimal-but-usable.  When we have to err, we tend (usually, but not always) on the side of usability.  That's just the Ubuntu way.  That said, we're always evaluating our Ubuntu Server, Cloud, Container, and Docker images to ensure that we minimize (or at least justify) any bloat.  We'll certainly take another hard look at the default package sets at both 17.10 and 18.04.  Thanks for bringing this up and we'll absolutely keep it in mind!
  • More QA, testing, stability, general polish (99 weight) [In-progress, 17.10]
    • The word "polish" is used a total of 24 times, with readers generally asking for more QA, more testing, more stability, and more "polish" to the Ubuntu experience.  This is a tough one to quantify.  That said, we have a strong commitment to quality, and CI/CD (continuous integration, continuous delivery) testing at Canonical.  As your Product Manager, I'll do my part to ensure that we invest more resources into Ubuntu quality.
  • Fix /boot space, clean up old kernels (92 weight) [In-progress, 17.10]
    • Ouch.  This is such an ugly, nasty problem.  It personally pissed me off so much, in 2010, that I created a script, "purge-old-kernels" (a usage sketch follows this list).  And it personally pissed me off again so much in 2014, that I jammed it into the Byobu package (which I also authored and maintain), for the sole reason to get it into Ubuntu.  That being said, that's the wrong approach.  I've spoken with Leann Ogasawara, the amazing manager and team lead for the Ubuntu kernel team, and she's committed to getting this problem solved once and for all in Ubuntu 17.10 -- and ideally getting those fixes backported to older releases of Ubuntu.
  • ZFS supported as a root filesystem (84 weight)
    • This was one of the more surprising requests I found here, and another real gem.  I know that we have quite a few ZFS fans in the Ubuntu community (of which, I'm certainly one) -- but I had no idea so many people want to see ZFS as a root filesystem option.  It makes sense to me -- integrity checking, compression, copy-on-write snapshots, clones.  In fact, we have some skunkworks engineering investigating the possibility.  Stay tuned...
  • Improve power management, battery usage (73 weight)
    • Longer batteries for laptops, lower energy bills for servers -- an important request.  We'll need to work closer with our hardware OEM/ODM partners to ensure that we're leveraging their latest and greatest energy conservation features, and work with upstream to ensure those changes are integrated into the Linux kernel and Gnome.
  • Security hardening, grsecurity (72 weight)
    • More security!  There were several requests for "extra security hardening" as an option, and the grsecurity kernel patch set.  So the grsecurity Linux kernel is a heavily modified, patched Linux kernel that adds a ton of additional security checks and features at the lowest level of the OS.  But the patch set is huge -- and it's not upstream in the Linux kernel.  It also only applies against the last LTS release of Ubuntu.  It would be difficult, though not necessarily impossible, to offer a supported grsecurity kernel in the Ubuntu archive.  As for "extra security hardening", Canonical is working with IBM on a number of security certification initiatives, around FIPS, CIS Benchmarks, and DISA STIG documentation.  You'll see these becoming available throughout 2017.
  • Dump Systemd (69 weight)
    • Fun.  All the people fighting for Wayland/Gnome, and here's a vocal minority pitching a variety of other init systems besides Systemd :-)  So frankly, there's not much we can do about this one at this point.  We created, and developed, and maintained Upstart over the course of a decade -- but for various reasons, Red Hat, SUSE, Debian, and most of the rest of the Linux community chose Systemd.  We fought the good fight, but ultimately, we lost graciously, and migrated Ubuntu to Systemd.
  • Root disk encryption, ext4 encryption, more crypto (47 weight) [In-progress, 17.10]
    • The very first Ubuntu feature I created when I started working for Canonical in 2008 was the Home Directory Encryption feature, introduced in late 2008, so yes -- this feature has been near and dear to my heart!  But as one of the co-authors and co-maintainers of eCryptfs, we're putting our support behind EXT4 Encryption for the future of per-file encryption in Ubuntu.  Our good friends at Google (hi Mike, Ted, and co!) have created something super modern, efficient, and secure with EXT4 Encryption, and we hope to get there in Ubuntu over the next two releases.  Root disk encryption is still important, even more now than ever before, and I do hope we can do a bit better to make root disk encryption easier to enable in the Desktop installer.
  • Fix suspend/resume (24 weight)
    • These were a somewhat general set of bugs or issues around suspend/resume not working as well as it should.  If these are a closely grouped set of corner cases (e.g. multiple displays, particular hardware), then we should be able to shake these out with better QA, bug triage, and upstream fixes.  That said, I remember when suspend/resume never worked at all in Linux, so pardon me while I'm a little nostalgic about how far we've come :-)  Okay...now, yes, you're right.  We should do better.
  • New server installer (19 weight) [Beta available, 17.10]
    • Well aren't you in for a surprise :-)  There's a new server installer coming soon!  Stay tuned.
  • Improve swap space management (12 weight)
    • Another pet peeve of mine -- I feel you!  So I filed this blueprint in 2009, and I'm delighted to say that as of this month (8 years later), Ubuntu 17.04 (Zesty Zapus) will use swap files, rather than swap partitions, by default.  Now, there's a bit more to do -- we should make these a bit more dynamic, tune the swappiness sysctl, etc.  But this is a huge step in the right direction!
  • Reproducible builds (7 weight)
    • Ensuring that builds are reproducible is essential for the security and the integrity of our distribution.  We've been working with Debian upstream on this over the last few years, and will continue to do so.
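
As referenced in the /boot cleanup item above, here is a rough sketch of the current stopgap; purge-old-kernels ships with the byobu package, and the exact options may vary by release:

    sudo apt install byobu
    # keep the two most recent kernels and purge the rest from /boot
    sudo purge-old-kernels --keep 2
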
Ladies and gentlemen, again, a most sincere "thank you", from the Ubuntu community to the HackerNews community.  We value openness -- open source code, open design, open feedback -- and this last week has been a real celebration of that openness for us.  We appreciate the work and effort you put into your comments, and we hope to continue our dialog throughout our future together, and most importantly, that Ubuntu continues to serve your needs and occasionally even exceed your expectations ;-)

Cheers,
:-Dustin
on July 21, 2017 10:56 AM

Recently, I posted a piece about distributions consolidating around a consistent app store. In it I mentioned Flatpak as a potential component and some people wondered why I didn’t recommend Snappy, particularly due to my Canonical heritage.

To be clear (and to clear up my inarticulateness): I am a fan of both Snappy and Flatpak: they are both important technologies solving important problems and they are both driven by great teams. To be frank, my main interest and focus in my post was the notion of a consolidated app store platform as opposed to what the specific individual components would be (other people can make a better judgement call on that). Thus, please don’t read my single-line mention of Flatpak as any criticism of Snappy. I realize that this may have been misconstrued as me suggesting that Snappy is somehow not up to the job, which was absolutely not my intent.

Part of the reason I mentioned Flatpak is that I feel there is a natural center of gravity forming around the GNOME Shell and platform, which many distros are shipping. Within the context of that platform I have seen Flatpak commonly mentioned as a component, hence why I mentioned it. Of course, there is no reason why Snappy couldn’t be that component too, and the Snappy team have been doing great work. I was also under the impression (entirely incorrectly) that Snappy is focusing more on the cloud/server market. It has become clear that the desktop is very much within the focus and domain of Snappy, and I apologize for the confusion.

So, to clear up any potential confusion (I can be an inarticulate clod at times), I am a big fan of Snappy, big fan of Flatpak, and an even bigger fan of a consolidated app store that multiple distros use. My view is simple: competition is healthy, and we have two great projects and teams vying to make app installation and management on Linux easier. Viva la desktop!

The post Clarification: Snappy and Flatpak appeared first on Jono Bacon.

on July 21, 2017 12:26 AM

July 20, 2017

This is a follow-up to the End of Life warning sent earlier this month to confirm that as of today (July 20, 2017), Ubuntu 16.10 is no longer supported. No more package updates will be accepted to 16.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The original End of Life warning follows, with upgrade instructions:

Ubuntu announced its 16.10 (Yakkety Yak) release almost 9 months ago, on October 13, 2016. As a non-LTS release, 16.10 has a 9-month support cycle and, as such, the support period is now nearing its end and Ubuntu 16.10 will reach end of life on Thursday, July 20th.

At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 16.10.

The supported upgrade path from Ubuntu 16.10 is via Ubuntu 17.04. Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/ZestyUpgrades

Ubuntu 17.04 continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Thu Jul 20 23:23:31 UTC 2017 by Adam Conrad, on behalf of the Ubuntu Release Team

on July 20, 2017 11:35 PM

S10E20 – Wry Mindless Ice - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

We discuss tormenting Mycroft, review the Dell Precision 5520, give you some USB resetting command line lurve and go over your feedback.

It’s Season Ten Episode Twenty of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on July 20, 2017 02:00 PM

Welcome to the Ubuntu Weekly Newsletter. This is issue #513 for the weeks of July 3 – 17, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • Athul Muralidhar
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License (CC BY-SA).

on July 20, 2017 01:38 PM

Akademy 2017

Jonathan Riddell

Time to fly off to the sun to meet KDE friends old and new and plan out the next year of freedom fighting. See you in Almería!

 

Facebooktwittergoogle_pluslinkedinby feather
on July 20, 2017 11:11 AM
Graph of subscribers and moderators over time in /r/NoSleep. The image is taken from our 2016 CHI paper.

Last year at CHI 2016, my research group published a qualitative study examining the effects of a large influx of newcomers to the /r/nosleep online community in Reddit. Our study began with the observation that most research on sustained waves of newcomers focuses on the destructive effect of newcomers and frequently invokes Usenet’s infamous “Eternal September.” Our qualitative study argued that the /r/nosleep community managed its surge of newcomers gracefully through strategic preparation by moderators, technological systems to rein in norm violations, and a shared sense of protecting the community’s immersive environment among participants.

We are thrilled that, less than a year after the publication of our study, Zhiyuan “Jerry” Lin and a group of researchers at Stanford have published a quantitative test of our study’s findings! Lin analyzed 45 million comments and upvote patterns from 10 Reddit communities that experienced a massive inundation of newcomers like the one we studied on /r/nosleep. Lin’s group found that these communities retained their quality, aside from a slight dip during the initial growth period.

Our team discussed doing a quantitative study like Lin’s at some length and our paper ends with a lament that our findings merely reflected “propositions for testing in future work.” Lin’s study provides exactly such a test! Lin et al.’s results suggest that our qualitative findings generalize and that a sustained influx of newcomers need not doom a community to a descent into an “Eternal September.” Through strong moderation and the use of a voting system, the subreddits analyzed by Lin appear to retain their identities despite the surge of new users.

There are always limits to what any research project, quantitative or qualitative, can show. We think Lin’s paper complements ours beautifully, we are excited that Lin built on our work, and we’re thrilled that our propositions seem to have held up!

This blog post was written with Charlie Kiene. Our paper about /r/nosleep, written with Charlie Kiene and Andrés Monroy-Hernández, was published in the Proceedings of CHI 2016 and is released as open access. Lin’s paper was published in the Proceedings of ICWSM 2017 and is also available online.

on July 20, 2017 12:12 AM

July 19, 2017

I’ve recently made some changes to how do-release-upgrade, called by update-manager when you choose to upgrade releases, behaves and thought it’d be a good time to clarify how things work and the changes made.

When do-release-upgrade is called it reads a meta-release file from changelogs.ubuntu.com to determine what releases are supported and to which release to upgrade. The exact meta-release file changes depending on what arguments, --proposed or --devel-release, are passed to do-release-upgrade. The meta-release file is used to determine which tarball to download and use to actually perform the upgrade. So if you are upgrading from Ubuntu 17.04 to Artful then you are actually using the ubuntu-release-upgrader code from Artful.

One change implemented some time ago was support for the release upgrade process to skip unsupported releases if you are running a supported release. For example, when Ubuntu 16.10 (Yakkety Yak) becomes end of life and you upgrade from Ubuntu 16.04 (Xenial Xerus) with “Prompt=normal” (found in /etc/update-manager/release-upgrades) then Ubuntu 16.10 will be skipped and you will be upgraded to Ubuntu 17.04 (Zesty Zapus). This ensures that you are running a supported release and helps to test the next LTS upgrade path i.e. from Ubuntu 16.04 to Ubuntu 18.04. Similarly, when Ubuntu 17.04 becomes end of life an upgrade from Ubuntu 16.04, with “Prompt=normal”, will upgrade you to Ubuntu 17.10.
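For reference, the Prompt setting mentioned above lives in /etc/update-manager/release-upgrades; a rough sketch of the relevant part of that file (comments trimmed, your values may differ):

[DEFAULT]
# 'normal' prompts for any newer release, 'lts' only for LTS releases,
# and 'never' disables release upgrade prompts entirely.
Prompt=normal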

I’ve also just modified the documentation regarding the ‘-d’ switch for ubuntu-release-upgrader and update-manager to make it clear that ‘-d’ is for upgrading from the latest supported release (Ubuntu 17.04 right now) to the development release of Ubuntu. The documentation used to incorrectly imply that any release could be upgraded to the development release, something that would be an unsafe upgrade path. Additionally, the meta-release-development and meta-release-lts-development files were modified to only contain information about releases relevant to the upgrade path. So meta-release-lts-development is currently empty and meta-release-development only contains information about Ubuntu 17.04 and Artful Aardvark, which will become Ubuntu 17.10.

I hope this makes things a bit clearer!

on July 19, 2017 09:52 PM

July 18, 2017

The latest release of stress-ng V0.08.09 incorporates new stressors and a handful of bug fixes. So what is new in this release?
  • memrate stressor to exercise and measure memory read/write throughput
  • matrix yx option to swap order of matrix operations
  • matrix stressor size can now be 8192 x 8192 in size
  • radixsort stressor (using the BSD library radixsort) to exercise CPU and memory
  • improved job script parsing and error reporting
  • faster termination of rmap stressor (this was slow inside VMs)
  • icache stressor now calls cacheflush()
  • anonymous memory mappings are now private allowing hugepage madvise
  • fcntl stressor exercises the 4.13 kernel F_GET_FILE_RW_HINT and F_SET_FILE_RW_HINT
  • stream and vm stressors have new madvise options
The new memrate stressor performs 64/32/16/8 bit reads and writes to a large memory region.  It will attempt to get some statistics on the memory bandwidth for these simple reads and writes.  One can also specify the read/write rates in terms of MB/sec using the --memrate-rd-mbs and --memrate-wr-mbs options, for example:

 stress-ng --memrate 1 --memrate-bytes 1G \  
--memrate-rd-mbs 1000 --memrate-wr-mbs 2000 -t 60
stress-ng: info: [22880] dispatching hogs: 1 memrate
stress-ng: info: [22881] stress-ng-memrate: write64: 1998.96 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read64: 998.61 MB/sec
stress-ng: info: [22881] stress-ng-memrate: write32: 1999.68 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read32: 998.80 MB/sec
stress-ng: info: [22881] stress-ng-memrate: write16: 1999.39 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read16: 999.66 MB/sec
stress-ng: info: [22881] stress-ng-memrate: write8: 1841.04 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read8: 999.94 MB/sec
stress-ng: info: [22880] successful run completed in 60.00s (1 min, 0.00 secs)

...the memrate stressor will attempt to limit the memory rates but due to scheduling jitter and other memory activity it may not be 100% accurate.  By careful setting of the size of the memory being exercised with the --memrate-bytes option one can exercise the L1/L2/L3 caches and/or the entire memory.

By default, matrix stressor will perform matrix operations with optimal memory access to memory.  The new --matrix-yx option will instead perform matrix operations in a y, x rather than an x, y matrix order, causing more cache stalls on larger matrices.  This can be useful for exercising cache misses.
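For example, a run along these lines should make the effect visible (a hedged example pieced together from the options described in this post; check the manual page for exact usage):

 stress-ng --matrix 1 --matrix-yx --matrix-size 8192 --metrics-brief -t 60

Comparing the bogo-ops rate with and without --matrix-yx gives a rough feel for the cost of the extra cache misses.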

To complement the heapsort, mergesort and qsort memory/CPU exercising sort stressors I've added the BSD library radixsort stressor to exercise sorting of hundreds of thousands of small text strings.

Finally, while exercising various hugepage kernel configuration options I was inspired to make stress-ng mmaps work better with hugepage madvise hints, so where possible all anonymous memory mappings are now private to allow hugepage madvise to work.  The stream and vm stressors also have new madvise options to allow one to choose hugepage, nohugepage or normal hints.

No big changes as per normal, just small incremental improvements to this all purpose stress tool.
on July 18, 2017 11:20 AM

My hacker summer camp planning posts are among the most-viewed on my blog, and I was recently reminded I hadn’t done one for 2017 yet, despite it being just around the corner!

Though many tips will be similar, feel free to check out the two posts from last year as well:

If you don’t know, Hacker Summer Camp is a nickname for 3 information security conferences in one week in Las Vegas every July/August. This includes Black Hat, BSides Las Vegas, and DEF CON.

Black Hat is the most “corporate” of the 3 events, with a large area of vendor booths, great talks (though not all are super-technical) and a very corporate/organized feel. If you want a serious, straight-edge security conference, Black Hat is for you. Admission is several thousand dollars, so most attendees are either self-employed and writing it off, or paid by their employer.

BSides Las Vegas is a much smaller (~1000 people) conference, that’s heavily community-focused. With tracks intended for those new to the industry, getting hired, and a variety of technical talks, it has something for everyone. It also has my favorite CTF: Pros vs Joes. You can donate for admission, or get in line for one of ~450 free admissions. (Yes, the line starts early. Yes, it quickly sells out.)

DEF CON is the biggest of the conferences. (And, in my opinion, the “main event”.) I think of DEF CON as the Burning Man of hacker conferences: yes, there’s tons of talks, but it’s also a huge opportunity for members of the community to show off what they’re doing. It’s also a huge party at night: tons of music, drinking, pool parties. At DEF CON, there is more to do than can be done, so you’ll need to pick and choose.

Hopefully you already have your travel plans (hotel/airfare/etc.) sorted. It’s a bit late for me to provide advice there this year. :)

What To Do

Make sure you do things. You only get out of Hacker Summer Camp what you put into it. You can totally just go and sit in conference rooms and listen to talks, but you’re not going to get as much out of it as you otherwise could.

Black Hat has excellent classes, so you can get into significantly more depth than a 45 minute talk would allow. If you have the opportunity (they’re expensive), you should take one.

If you’re not attending Black Hat, come over to BSides Las Vegas. They go on in parallel, so it’s a good opportunity for a cheaper option and for a more community feel. At BSides, you can meet some great members of the community, hear some talks in a smaller intimate setting (you might actually have a chance to talk to the speaker afterwards), and generally have a more laid-back time than Black Hat.

DEF CON is entirely up to you: go to talks, or don’t. Go to villages and meet people, see what they’re doing, get hands on with things. Go to the vendor area and buy some lockpicks, WiFi pineapples, or more black t-shirts. Drink with some of the smartest people in the industry. You never know who you’ll meet. Whatever you choose, you can have a blast, but you need to make sure you manage your energy. I’ve made myself physically sick by trying to do it all – just accept that you can’t and take it easy.

I’m particularly excited to check out the IoT village again this year. (As regular readers know, I have a soft spot for the Insecurity of Things.) Likewise, I look forward to seeing small talks in the villages.

Whatever you do, be an active participant. I’ve personally spent too much time not participating: not talking, not engaging, not doing. You won’t get the most out of this week by being a wallflower.

Digital Security

DEF CON has a reputation for being the most dangerous network in the world, but I believe that title depends on how you look at it. In my experience, it’s a matter of quality vs quantity. While I have no doubt that the open WiFi at DEF CON probably has far more than its fair share of various hijinks (sniffing, ARP spoofing, HTTPS downgrades, fake APs, etc.), I genuinely don’t anticipate seeing high-value 0-days being deployed on this network. Using an 0-day on the DEF CON network is going to burn it: someone will see it and your 0-day is over. Some of the best malware reversers and forensics experts in the world are present; I don’t anticipate someone using a high-quality bug in modern software on this network and wasting it like that.

Obviously, I can’t make any guarantees, but the following advice approximately matches my own threat model. If you plan to connect to shady networks or CTF-type networks, you probably want to take additional precautions. (Like using a separate laptop, which is the approach I’m taking this year.)

That being said, you should take reasonable precautions against more run of the mill attacks:

  • Use Full Disk Encryption (in case your device gets lost/stolen)
  • Be fully updated on a modern OS (putting off patches? might be the time to fix that)
  • Don’t use open WiFi
  • Turn off any radios you’re not using (WiFi, BT)
  • Disable 3G downgrade on your phone if you can (LTE only)
  • Don’t accept updates offered while you’re in Vegas
  • Don’t run random downloads :)
  • Run a local firewall dropping all unexpected traffic

Using a current, fully patched iOS or Android device should be relatively safe. ChromeOS is a good choice if you just need internet from a laptop-style device. Fully patched Windows/Linux/OS X are probably okay, but you have somewhat larger attack surface and less protection against drive-by malware.

Your single biggest concern on any network (DEF CON or not) should be sending plaintext over the network. Use a VPN. Use HTTPS. Be especially wary of phishing. Use 2-Factor. (Ideally U2F, which is cryptographically designed to be unphishable.)

Personal Security & Safety

This is Vegas. DEF CON aside, watch what you’re doing. There are plenty of pick pockets, con men, and general thieves in Las Vegas. They’re there to prey on tourists, and whether you’re there for a good time or for a con, you’re their prey. Keep your wits about you.

Check ATMs for skimmers. (This is a good life pro tip.) Don’t use the ATMs near the con. If you’re not sure if you can tell if an ATM has a skimmer: bring enough cash in advance. Lock it in your in-room safe.

Does your hotel use RFID-based door locks? May I suggest RFID-blocking sleeves?

Planning to drink? (I am.) Make sure you drink water too. Vegas is super-hot, and dehydration will make you very sick (or worse). I try to drink 1/2 a liter of water for every drink I have, but I rarely meet that goal. It’s still a good goal to have.

FAQ

Are you paranoid?

Maybe. I get paid to execute attacks and think like an attacker, so it comes with the territory. I’m going to an event to see other people who do the same thing. I’m not convinced the paranoia is unwarranted.

Will I get hacked?

Probably not, if you spend a little time preparing.

Should I go to talks?

Are they interesting to you? Go to talks if they’re interesting and timely. Note that most talks are recorded and will be posted online a couple of months after the conferences (or can be bought sooner from Source of Knowledge). A notable exception is that SkyTalks are not recorded. And don’t try to record them yourself – you’ll get bounced from the room.

What’s the 3-2-1 rule?

3 hours of sleep, 2 meals, and 1 shower. Every day. I prefer 2 showers myself – Vegas is pretty hot.

on July 18, 2017 07:00 AM

Clipped Wings

Stephen Michael Kellat

Well, scratch the last plan I had.1 I will not be able to go to OggCamp 17 as I had planned. Due to a member of my immediate family having been put on the docket for open heart surgery, family wants me to stay on-continent and within six counties of Northeast Ohio if at all possible.2 I do not have a date yet for when that family member will be going into surgery but recovery will be tough.

Yes, I was looking forward to the trip to be able to meet up with everybody in-person. Other relations have indicated to me that, if there is an event next year, they may help me plan for travel. Continuing uncertainty about the status of my job due to proposed cutbacks by the departmental offices in their budget submission to the Congress has not helped things either. I am certainly not happy about this but I will have to soldier on.

Efforts to unravel some of the mysteries behind Outernet continue. Eventually I will be able to put together some sort of paper. My preference was to have presented something at the gathering in Canterbury. I will be having to review plans instead. Learning more about LaTeX may prove useful, I suppose.

The trip was going to be nicely timed before the Fall 2017 semester started at Lakeland Community College. I missed one class during the Spring 2017 semester due to workload constraints at my day job.3 If I had taken that class I could have graduated. If I can talk the program director into getting the capstone offered off-cycle in the fall I may be able to graduate from the program by December 2017.

Somehow the work as an evangelist at West Avenue Church of Christ is continuing.4 It is hard preaching to residents at a nursing home. Shut-in populations still deserve to have the opportunity to hear the Word if they so choose, though.

If you want to talk about anything contained here, I don't have comments on this blog. Use something like Telegram to contact me at https://t.me/smkellat or via Mastodon/GNUSocial/StatusNet/Fediverse at https://quitter.se/alpacaherder/. I've been off IRC for too long so I cannot be found on freenode at the moment. Others have more special, rather direct ways of reaching me.

Have a beautiful day!


  1. It is reasonable to ask exactly which plan at this time as there are so many, though... 

  2. Northern Ireland has 5,460 square miles. Ashtabula, Lake, Geauga, Cuyahoga, Trumbull, and Portage counties come to a mere 2,935 square miles. One is just slightly over half the size of the other. 

  3. It is rare to be in a workplace where you actually have "All Hands On Deck" called and that is an actual operating condition but I digress. 

  4. Eventually the congregation may get a website. That is triaged low. Getting audio issues sorted out in the sanctuary is a higher priority problem right now. 

on July 18, 2017 03:00 AM

July 17, 2017

Ubuntu Artful Desktop July Shakedown

We’re mid-way through the Ubuntu Artful development cycle, with the 17.10 release rapidly approaching on the horizon. Now is a great time to start exercising the new GNOME goodness that’s landed on our recent daily images! Please download the ISO, test it out on your own hardware, and file bugs where appropriate.

If you’re lucky enough to find any new bugs, please tag them with ‘julyshakedown’, so we can easily find them from this testing session.

Ubuntu Artful Desktop

We recently switched the images to GDM as the login manager instead of LightDM, and GNOME Shell is now the default desktop, replacing Unity. These would be great parts of the system to exercise this early in the cycle. It’s also a good time to test out the Ubuntu on Wayland session to see how it performs in your use cases.

Get started

Suggested tests

This early in the cycle we’re not yet recommending full ISO testing, but some exploratory tests on a diverse range of set-ups would be appropriate. There’s enough new and interesting stuff in these ISOs to make it worthwhile giving everything a good exercise. Here are some examples of things you might want to run through to get started.

Ubuntu on Wayland

  • Logging in using the ‘Ubuntu on Wayland’ session for your normal day to day activities
  • Suspend & resume and check everything still functions as expected
  • Attach to, and switch between wired and wireless networks you have nearby
  • Connect any bluetooth devices you have, especially audio devices, and make sure they work as expected
  • Plug in external displays if you have them, and ensure they work as usual

Reporting issues

The Ubuntu Desktop Team are happy to help you with these ISO images. The team are available in #ubuntu-desktop on freenode IRC. If nobody is about in your timezone, you may need to wait until the European work day to find active developers.

Bugs are tracked in Launchpad, so you’ll need an account there to get started.

If you report defects that occur only when running a wayland session please add the tag ‘wayland’ to the bug report.

Remember to use the 'julyshakedown' tag on your bugs so we can easily find them!
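If you are unsure where to start, filing the report from the affected machine is usually easiest since apport attaches the relevant logs for you; for example, for a shell problem (a suggestion assuming the default apport tooling, which is installed out of the box):

ubuntu-bug gnome-shell

Once the bug is open in Launchpad, add the 'julyshakedown' tag (and 'wayland' where relevant).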

Known issues

There is a known issue with using Bluetooth audio devices from the greeter.  This means that people won’t be able to use screenreaders over Bluetooth at the greeter.  Once in the session this should all work as normal though.

Issues specific to wayland:

We look forward to receiving your feedback, and results!

:)

on July 17, 2017 07:00 AM

July 16, 2017

Carla Sella


Packaging up a Go app as a snap


After building my first Python snap I was asked to try to build a Go snap.
There is a video about building Go snaps with snapcraft here, so after watching it I gave Kurly a try. 

"Kurly is an alternative to the widely popular curl program and is designed to operate in a similar manner to curl, with select features. Notably, kurly is not aiming for feature parity, but common flags and mechanisms particularly within the HTTP(S) realm are to be expected.".

First of all I got familiar with the code and got it on my PC:

$ git clone https://github.com/davidjpeacock/kurly.git

I entered the kurly directory:

$ cd kurly

 I created a snap directory and entered it:

$ mkdir snap
$ cd snap
I created a snapcraft.yaml file with the go plugin (plugin: go):

name: kurly
version: master
summary: kurly is an alternative to the widely popular curl program.
description: |
                     kurly is designed to operate in a similar manner to curl, with select features. Notably, kurly is not aiming for feature parity, but common flags and mechanisms particularly within the HTTP(S) realm are to be expected.

confinement: devmode

apps:
  kurly:
     command: kurly

parts:
  kurly:
     source: .                                              
     plugin: go
     go-importpath: github.com/davidjpeacock/kurly

The go-importpath keyword is important: it tells snapcraft where the checked-out source should live within the 'GOPATH'. This is required for absolute imports and path checking to work.


I went back to the root of the project and launched the snapcraft command to build the snap:

$ cd ..
$ snapcraft

Once snapcraft has finished building you will find a kurly_master_amd64.snap file in the root directory of the project.

I installed the kurly snap in devmode to test it and see if it worked well in non-confined mode, so that I could then run it in confined mode and add the plugs needed for the snap to work properly:

$ sudo snap install --dangerous --devmode kurly_master_amd64.snap


If you run:

$ snap list

you will see the kurly snap installed in devmode:

Name    Version  Rev   Developer  Notes
core    16-2     2312  canonical  -
kurly   master   x1               devmode

Now I tried kurly out a bit to see if it worked well, for instance:

$ kurly -v https://httpbin.org/ip
$  kurly -R -O -L http://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-8.7.1-amd64-netinst.iso

Ok fine, it worked so now I tried to install it in confined mode changing the snapcraft.yaml file accordingly (confinement: strict).

I ran snapcraft again and installed the snap:

$ snapcraft
$ sudo snap install --dangerous kurly_master_amd64.snap

You can see from the snap list command that the app is no longer installed in devmode:
$ snap list

Name    Version  Rev   Developer  Notes
core    16-2     2312  canonical  -
kurly   master   x2               -

I tried out kurly again and got some errors:

$ kurly -v https://httpbin.org/ip
> GET /ip HTTP/1.1
> User-Agent [Kurly/1.0]
> Accept [*/*]
> Host [httpbin.org]
*Error: Unable to get URL; Get https://httpbin.org/ip: dial tcp: lookup httpbin.org: Temporary failure in name resolution

From the error I could understand that kurly needs the network plug (plugs: [network]), so I changed the snapcraft.yaml file like this:

name: kurly
version: master
summary: kurly is an alternative to the widely popular curl program.
description: |
                     kurly is designed to operate in a similar manner to curl, with select features. Notably, kurly is not aiming for feature parity, but common flags and mechanisms particularly within the HTTP(S) realm are to be expected.

confinement: strict

apps:
  kurly:
     command: kurly
     plugs: [network]

parts:
  kurly:
     source: .
     plugin: go
     go-importpath: github.com/davidjpeacock/kurly


 I ran snapcraft and installed the kurly snap again:

$ snapcraft
$ sudo snap install --dangerous kurly_master_amd64.snap
But when I ran a kurly command for downloading a file I got another error:

$kurly -R -O -L http://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-8.7.1-amd64-netinst.iso
*Error: Unable to create file 'debian-8.7.1-amd64-netinst.iso' for output
Kurly could not write the file to my home directory, so I added the home plug to the snapcraft.yaml file, ran snapcraft and installed the snap again.
This time kurly worked fine.

So here's the final snapcraft.yaml file ready for a PR on GitHub:
 
name: kurly
version: master
summary: kurly is an alternative to the widely popular curl program.
description: |
                     kurly is designed to operate in a similar manner to curl, with select features. Notably, kurly is not aiming for feature parity, but common flags and mechanisms particularly within the HTTP(S) realm are to be expected.

confinement: strict

apps:
  kurly:
     command: kurly
     plugs: [network, home]

parts:
  kurly:
     source: .
     plugin: go
     go-importpath: github.com/davidjpeacock/kurly

That's it.
The Go snap is done!



 
on July 16, 2017 01:36 PM

July 15, 2017

Carla Sella


My first Snap 


I have been testing for Ubuntu for quite a while, so I decided to change things a bit and give packaging apps a go. Here I am writing about how I managed to create my first Python snap.

Snapcraft is a new way to package apps, so I thought it would be nice to learn about it. I went to the Snapcraft site https://snapcraft.io/ and found out that with Snapcraft you can:
"Package any app for every Linux desktop, server, cloud or device, and deliver updates directly".

"A snap is a fancy zip file containing an application together with its dependencies, and a description of how it should safely run on your system, especially the different ways it should talk to other software.
Snaps are designed to be secure, sandboxed, containerised applications isolated from the underlying system and from other applications. Snaps allow the safe installation of apps from any vendor on mission critical devices and desktops."

So if you have an app that is too new for the Ubuntu archive, you can get it into the snap store and install it on Ubuntu or any other Linux distribution that supports snaps.

I started by getting in touch with the guys in the Snapcraft channel on Rocket Chat (https://rocket.ubuntu.com/channel/snapcraft), who told me how to start.

First of all I read the "Snap a Python App" tutorial and then applied what I learned to Lbryum, a lightweight lbrycrd client and a fork of the Electrum bitcoin client.

I couldn't believe how easy it was; I am not a developer, but I know how to code and I know a bit of Python.

First of all you need to get familiar with the code of the app you want to snap, so I got the Lbryum code from GitHub:

$ sudo apt install git
$ git clone https://github.com/lbryio/lbryum.git


Once I got familiar with the code I installed Snapcraft:

 $ sudo apt install snapcraft

I generated a Snapcraft project in the lbryum root directory with:

$ snapcraft init

If everything works, you will get this output:

Created snap/snapcraft.yaml.
Edit the file to your liking or run `snapcraft` to get started
 
Now if you check the content of the project's directory, it has been populated with  a "snap" folder containing the snapcraft.yaml file  that I modified  for creating the Lbryum app snap:


name: lbryum
version: 'master'
summary: Lightweight lbrycrd client
description: |
  Lightweight lbrycrd client, a fork of the Electrum bitcoin client
grade: stable
confinement: devmode

apps:
  lbryum:
    command: lbryum

parts:
  lbryum:
    source: .
    plugin: python

Here is the documentation so you can find the meaning of the fields in the snapcraft.yaml file (the fields are quite self-explanatory): 



To find out what plugs or parts your app needs, you need to run snapcraft and debug it until you find out all that's needed, so I tried to build it at this stage to make sure that I had the basic definition correct. 

I ran this command from the root of the lbryum-snap directory:

$ snapcraft prime

Obviously I had some errors that made me make some changes to the snapcraft.yaml file. I found out that the app needs Python 2, so I added "python-version: python2", and I specified that the requirements.txt file of the Lbryum project should be used for the packages needed during install (requirements: requirements.txt):
 
name: lbryum
version: 'master'
summary: Lightweight lbrycrd client
description: |
  Lightweight lbrycrd client, a fork of the Electrum bitcoin client
grade: stable
confinement: devmode

apps:
  lbryum:
    command: lbryum

parts:
  lbryum:
    source: .
    plugin: python
    requirements: requirements.txt
    python-version: python2

I ran:

$ snapcraft clean

and

$ snapcraft prime

again.



Success!!!! :)

Ok, so now I tried the snap with:

$ sudo snap try --devmode prime/
$ lbryum daemon start
$ lbryum version
$ lbryum commands

I played around with it a bit to see if the snap worked well.

Now before shipping the snap or opening a PR on GitHub, we need to turn confinement on and see if the snap works or if it needs further changes to the snapcraft.yaml file.

So I changed the confinement from devmode to strict (confinement: strict) and then ran:

$ snapcraft

I got this output:

Skipping pull lbryum (already ran)
Skipping build lbryum (already ran)
Skipping stage lbryum (already ran)
Skipping prime lbryum (already ran)
Snapping 'lbryum' -                                                           
Snapped lbryum_master_amd64.snap
I installed the snap:

$ sudo snap install --dangerous lbryum_master_amd64.snap

When I ran lbryum I started getting a lot of errors that made me understand that lbryum needs network access to work, so I added the network plug (plugs: [network]):

name: lbryum
version: 'master'
summary: Lightweight lbrycrd client
description: |
  Lightweight lbrycrd client, a fork of the Electrum bitcoin client
grade: stable
confinement: strict

apps:
  lbryum:
    command: lbryum
    plugs: [network]

parts:
  lbryum:
    source: .
    plugin: python
    requirements: requirements.txt
    python-version: python2

I ran:

$ snapcraft

again and installed the snap again:

$ sudo snap install --dangerous lbryum_master_amd64.snap
$ lbryum daemon start
$ lbryum version
$ lbryum commands

Works!

Fine, so I opened a PR on GitHub proposing my snapcraft.yaml file so that they could use it for creating a Lbryum snap.



If you need to debug your snap for finding what is wrong there is also a debugging tool for debugging confined apps:

https://snapcraft.io/docs/build-snaps/debugging


That's it. End of my first snap adventure :).
on July 15, 2017 04:16 PM

July 13, 2017

Welcome to the fourth Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

OpenStack Distribution

Stable Releases

We still have a few SRU’s in-flight from the June SRU cadence:

Swift: swift-storage processes die if rsyslog is restarted (Kilo, Mitaka)
https://bugs.launchpad.net/ubuntu/trusty/+source/swift/+bug/1683076

Ocata Stable Point Releases
https://bugs.launchpad.net/ubuntu/+bug/1696139

Hopefully those should flush through to updates in the next week; in the meantime we’re preparing to upload fixes for:

Keystone: keystone-manage mapping_engine federation rule testing
https://bugs.launchpad.net/ubuntu/+bug/1655182

Neutron: router host binding id not updated after failover
https://bugs.launchpad.net/ubuntu/+bug/1694337

Development Release

The first Ceph Luminous RC (12.1.0) has been uploaded to Artful and will be backported to the Ubuntu Cloud Archive for Pike soon.

OpenStack Pike b3 is due towards the end of July; we’ve done some minor dependency updates to support progression towards that goal. It’s also possible to consume packages built from the tip of the upstream git repository master branches using:

sudo add-apt-repository ppa:openstack-ubuntu-testing/pike

Packages are automatically built for Artful and Xenial.

OpenStack Snaps

Refactoring to support the switch back to strict mode snaps has been completed. Corey posted last week on ‘OpenStack in a Snap’ so we’ll not cover too much in this update; have a read to get the full low down.

Work continues on snapstack (the CI test tooling for OpenStack snap validation and testing), with changes landing this week to support Class-based setup/cleanup for the base cloud and a logical step/plan method for creating tests.

The move of snapstack to a Class-based setup/cleanup approach for the base cloud enables flexibility where the base cloud required to test a snap can easily be updated. By default this will provide a snap’s tests with a default OpenStack base cloud, however this can now easily be manipulated to add or remove services.

The snapstack code has also been updated to use a step/plan method for creating tests. These objects provide a simple and logical process for creating tests. The developer can now define the snap being tested, and its scripts/tests, in a step object. Each base snap and its scripts/tests are also defined in individual step objects. All of these steps are then put together into a plan object, which is executed to kick off the deployment and tests.

For more details on snapstack you can check out the snapstack code here.

Nova LXD

The refactoring of the VIF plugging codebase to provide support for Linuxbridge and Open vSwitch + the native OVS firewall driver has been landed for Pike; this corrects a number of issues in the VIF plugging workflow between Neutron and Nova(-LXD) for these specific tenant networking configurations.

The nova-lxd subteam have also done some much needed catch-up on pull requests for pylxd (the underlying Python binding for LXD that nova-lxd uses); pylxd 2.2.4 is now up on pypi and includes fixes for improved forward compatibility with new LXD releases and support for passing network timeout configuration for API calls.

Work is ongoing to add support for LXD storage pools into pylxd.

OpenStack Charms

New Charms

Work has started on the new Gnocchi and GlusterFS charms; These should be up and consumable under the ‘openstack-charmers-next’ team on the charm store in the next week.

Gnocchi will support deployment with MySQL (for indexing), Ceph (for storage) and Memcached (for coordination between Gnocchi metricd workers). We’re taking the opportunity to review and refresh the telemetry support across all of the charms, ensuring that the charms are using up-to-date configuration options and are fully integrated for telemetry reporting via Ceilometer (with storage in Gnocchi). This includes adding support for the Keystone, Rados Gateway and Swift charms. We’ll also be looking at the Grafana Gnocchi integration and hopefully coming up with some re-usable sets of dashboards for OpenStack resource metric reporting.

Deployment Guide

Thanks to help from Graham Morrison in the Canonical docs team, we now have a first cut of the OpenStack Charms Deployment Guide – you can take a preview look in its temporary home until we complete the work to move it up under docs.openstack.org.

This is very much a v1, and the team intends to iterate on the documentation over time, adding coverage for things like high-availability and network space usage both in the charms and in the tools that the charms rely on (MAAS and Juju).

IRC (and meetings)

As always, you can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.

EOM


on July 13, 2017 03:19 PM

We discuss playing Tomb Raider, OEMs “making distros” is so hot right now, RED make a smartphone from the future, Skype gets an update and users hate it, Gangnam style loses its YouTube crown.

It’s Season Ten Episode Nineteen of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on July 13, 2017 02:00 PM

July 12, 2017

The Ubuntu Error Tracker is really good at presenting information about the versions of packages affected by a crash. Additionally, it has current information about crashes regarding stable releases of Ubuntu in addition to the development release. Subsequently, it can be a great resource for verifying that a crash is fixed for the development release or for stable releases.

As a member of the Stable Release Updates team I am excited to see an SRU which includes a bug report either generated from a crash in the Ubuntu Error Tracker (identifiable by either the bug description or errors.ubuntu.com being in the list of subscribers) or with a link to a crash in the Ubuntu Error Tracker.

One example of this is a systemd-resolved crash which, while the bug report was not created by the bug bridge, does contain a link to a bucket in the Error Tracker. Using the bucket in the Error Tracker we were able to confirm that the new version of the package did not appear there and subsequently was no longer experiencing the same crash.

Two crashes about libgweather, bug 1688208 and bug 1695567, are less perfect examples because libgweather ended up causing gnome-shell to crash and the Error Tracker buckets for these crashes only show the version of gnome-shell. But fortunately apport gathers information about the package’s (gnome-shell in this case) dependencies and as the maintainer of the Error Tracker I can query its database. Using that ability I was able to confirm, by querying individual instances in the bucket, that the new version of libgweather did in fact fix both crashes.

So whether you are fixing crashes in the development release of Ubuntu or a stable release, keep in mind that it’s possible to use the Error Tracker to verify that your fix works.

on July 12, 2017 05:48 PM

After quite some time, the first release candidate for the Exo 0.12.x series is ready for some serious testing!

What’s New in Exo 0.11.4?

This release completes the GTK+ 3 port and can now be used for GTK+ 2 or 3 Xfce application development.

New Features

Bug Fixes

  • Removed --disable-debug flag from make distcheck (Xfce #11556)

Icons

  • Replaced non-standard gnome-* icons
  • Replaced non-existent “missing-image” icon

Deprecations

  • Dropped gdk_window_process_updates for GTK+ 3.22
  • Replaced gdk_pixbuf_new_from_inline usage
  • Replaced gdk_screen_* usage
  • Replaced gtk_style_context_get_background_color usage
  • Removed warnings for gtk_dialog_get_action_area and GioScheduler

Translation Updates

Arabic, Catalan, Chinese (China), Danish, Dutch, French, German, Hebrew, Indonesian, Korean, Lithuanian, Portuguese (Brazil), Russian, Spanish, Swedish

Downloads

The latest version of Exo can always be downloaded from the Xfce archives. Grab version 0.11.4 from the below link.

http://archive.xfce.org/src/xfce/exo/0.11/exo-0.11.4.tar.bz2

  • SHA-256: 54fc6d26eff4ca0525aed8484af822ac561cd26adad4a2a13a282b2d9f349d84
  • SHA-1: 49e0fdf6899eea7aa1050055c7fe2dcddd0d1d7a
  • MD5: 7ad88a19ccb4599fd46b53b04325552c
on July 12, 2017 10:34 AM
This is a suite of blog posts explaining how we snapped Ubuntu Make which is a complex software study case with deep interactions with the system. For more background on this, please refer to our previous blog post giving a quick introduction on the topic. Creating the snap skeleton The snap skeleton was pretty easy to create. Galileo from our community got a first stance at it. We can notice multiple things:
on July 12, 2017 09:27 AM

I'm a Quality Assurance Engineer. A big part of my job is to find problems, then make sure that they are fixed and automated so they don't regress. If I do my job well, then our process will identify new and potential problems early without manual intervention from anybody in the team. It's like trying to automate myself, everyday, until I'm no longer needed and have to jump to another project.

However, as we work in the project, it's unavoidable that many small manual tasks accumulate on my hands. This happens because I set up the continuous integration infrastructure, so I'm the one who knows the most about it and has easier access, or because I'm the one who requested access to the build farm, so I'm the one with the password, or because I configured the staging environment and I'm the only one who knows the details. This is a great way to achieve job security, but it doesn't lead us to higher quality. It's a job half done, and it's terribly boring to be a bottleneck and a silo of information about testing and the release process. All of these tasks should be shared by the whole team, as with all the other tasks in the project.

There are two problems. First, most of these tasks involve delicate credentials that shouldn't be freely shared with everybody. Second, even if the task itself is simple and quick to execute, it's not very simple to document how to set up the environment to be able to execute them, nor how to make sure that the right task is executed in the right moment.

Chatops is how I like to solve all of this. The idea is that every task that requires manual intervention is implemented in a script that can be executed by a bot. This bot joins the communication channel where the entire team is present, and it will execute the tasks and report about their results as a response to external events that happen somewhere in the project infrastructure, or as a response to the direct request of a team member in the channel. The credentials are kept safe, they only have to be shared with the bot and the permissions can be handled with access control lists or membership to the channel. And the operative knowledge is shared with all the team, because they are all listening in the same channel with the bot. This means that anybody can execute the tasks, and the bot assists them to make it simple.

In snapcraft we started writing our bot not so long ago. It's called snappy-m-o (Microbe Obliterator), and it's written in python with errbot. We, of course, packaged it as a snap so we have automated delivery every time we change its source code, and the bot is also autoupdated in the server, so in the chat we are always interacting with the latest and greatest.

Let me show you how we started it, in case you want to get your own. But let's call this one Baymax, and let's make a virtual environment with errbot, to experiment.

drawing of the Baymax bot

$ mkdir -p ~/workspace/baymax
$ cd ~/workspace/baymax
$ sudo apt install python3-venv
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install errbot
$ errbot --init

The last command will initialize this bot with a super simple plugin, and will configure it to work in text mode. This means that the bot won't be listening on any channel, you can just interact with it through the command line (the ops, without the chat). Let's try it:

$ errbot
[...]
>>> !help
All commands
[...]
!tryme - Execute to check if Errbot responds to command.
[...]
>>> !tryme
It works !
>>> !shutdown --confirm

tryme is the command provided by the example plugin that errbot --init created. Take a look at the file plugins/err-example/example.py, errbot is just lovely. In order to define your own plugin you will just need a class that inherits from errbot.BotPlugin, and the commands are methods decorated with @errbot.botcmd. I won't dig into how to write plugins, because they have amazing documentation about Plugin development. You can also read the plugins we have in our snappy-m-o, one for triggering autopkgtests on GitHub pull requests, and the other for subscribing to the results of the pull requests tests.
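To give a flavour of that, here is a minimal sketch of a plugin (a toy example of my own, not code from snappy-m-o; the class and command names are invented):

from errbot import BotPlugin, botcmd

class Greeter(BotPlugin):
    """A tiny example plugin."""

    @botcmd
    def hello(self, msg, args):
        """Reply to !hello with a greeting."""
        return 'Hello, {}!'.format(msg.frm)

Drop a file like this, together with its .plug metadata file, into the plugins directory and the bot will pick up the new !hello command.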

Let's change the config of Baymax to put it in an IRC chat:

$ pip install irc

And in the config.py file, set the following values:

BACKEND = 'IRC'
BOT_IDENTITY = {
    'nickname' : 'baymax-elopio',  # Nicknames need to be unique, so append your own.
                                   # Remember to replace 'elopio' with your nick everywhere
                                   # from now on.
    'server' : 'irc.freenode.net',
}
CHATROOM_PRESENCE = ('#snappy',)

Run it again with the errbot command, but this time join the #snappy channel in irc.freenode.net, and write in there !tryme. It works ! :)

screenshot of errbot on IRC

So, this is very simple, but let's package it now to start with the good practice of continuous delivery before it gets more complicated. As usual, it just requires a snapcraft.yaml file with all the packaging info and metadata:

name: baymax-elopio
version: '0.1-dev'
summary: A test bot with errbot.
description: Chat ops bot for my team.
grade: stable
confinement: strict

apps:
  baymax-elopio:
    command: env LC_ALL=C.UTF-8 errbot -c $SNAP/config.py
    plugs: [home, network, network-bind]

parts:
  errbot:
    plugin: python
    python-packages: [errbot, irc]
  baymax:
    source: .
    plugin: dump
    stage:
      - config.py
      - plugins
    after: [errbot]

And we need to change a few more values in config.py to make sure that the bot is relocatable, that we can run it in the isolated snap environment, and that we can add plugins after it has been installed:

import os

BOT_DATA_DIR = os.environ.get('SNAP_USER_DATA')
BOT_EXTRA_PLUGIN_DIR = os.path.join(os.environ.get('SNAP'), 'plugins')
BOT_LOG_FILE = BOT_DATA_DIR + '/err.log'

One final try, this time from the snap:

$ sudo apt install snapcraft
$ snapcraft
$ sudo snap install baymax*.snap --dangerous
$ baymax-elopio

And go back to IRC to check.

Last thing would be to push the source code we have just written to a GitHub repo, and enable the continuous delivery in build.snapcraft.io. Go to your server and install the bot with sudo snap install baymax-elopio --edge. Now every time somebody from your team makes a change in the master repo in GitHub, the bot in your server will be automatically updated to get those changes within a few hours without any work from your side.

If you are into chatops, make sure that every time you do a manual task, you also plan for some time to turn that task into a script that can be executed by your bot. And get ready to enjoy tons and tons of free time, or just keep going through those 400 open bugs, whichever you prefer :)

on July 12, 2017 04:31 AM

July 11, 2017

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, about 161 work hours have been dispatched among 11 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased slightly with one new bronze sponsor and another silver sponsor is in the process of joining.

The security tracker currently lists 49 packages with a known CVE and the dla-needed.txt file 54. The number of open issues is close to last month’s.

Thanks to our sponsors

New sponsors are in bold.

One comment | Liked this article? Click here. | My blog is Flattr-enabled.

on July 11, 2017 02:49 PM

I love playing with my prototyping boards. Here at Ubuntu we are designing the core operating system to support every single-board computer, and keep it safe, updated and simple. I've learned a lot about physical computing, but I always have a big problem when my prototype is done, and I want to deploy it. I am working with a Raspberry Pi, a DragonBoard, and a BeagleBone. They are all very different, with different architectures, different pins, onboard capabilities and peripherals, and they can have different operating systems. When I started learning about this, I had to write 3 programs that were very different, if I wanted to try my prototype in all my boards.

picture of the three different SBCs

Then I found Gobot, a framework for robotics and IoT that supports my three boards, and many more. With the added benefit that you can write all the software in the lovely and clean Go language. The Ubuntu store supports all their architectures too, and packaging Go projects with snapcraft is super simple. So we can combine all of this to make a single snap package that with the help of Gobot will work on every board, and deploy it to all the users of these boards through the snaps store.

Let's dig into the code with a very simple example to blink an LED, first for the Raspberry PI only.

package main

import (
  "time"

  "gobot.io/x/gobot"
  "gobot.io/x/gobot/drivers/gpio"
  "gobot.io/x/gobot/platforms/raspi"
)

func main() {
  adaptor := raspi.NewAdaptor()
  led := gpio.NewLedDriver(adaptor, "7")

  work := func() {
    gobot.Every(1*time.Second, func() {
      led.Toggle()
    })
  }

  robot := gobot.NewRobot("snapbot",
    []gobot.Connection{adaptor},
    []gobot.Device{led},
    work,
  )

  robot.Start()
}

In there you will see some of the Gobot concepts. There's an adaptor for the board, a driver for the specific device (in this case the LED), and a robot to control everything. In this program, there are only two things specific to the Raspberry Pi: the adaptor and the name of the GPIO pin ("7").

picture of the Raspberry Pi prototype

It works nicely in one of the boards, but let's extend the code a little to support the other two.

package main

import (
  "log"
  "os/exec"
  "strings"
  "time"

  "gobot.io/x/gobot"
  "gobot.io/x/gobot/drivers/gpio"
  "gobot.io/x/gobot/platforms/beaglebone"
  "gobot.io/x/gobot/platforms/dragonboard"
  "gobot.io/x/gobot/platforms/raspi"
)

func main() {
  out, err := exec.Command("uname", "-r").Output()
  if err != nil {
    log.Fatal(err)
  }
  var adaptor gobot.Adaptor
  var pin string
  kernelRelease := string(out)
  if strings.Contains(kernelRelease, "raspi2") {
    adaptor = raspi.NewAdaptor()
    pin = "7"
  } else if strings.Contains(kernelRelease, "snapdragon") {
    adaptor = dragonboard.NewAdaptor()
    pin = "GPIO_A"
  } else {
    adaptor = beaglebone.NewAdaptor()
    pin = "P8_7"
  }
  digitalWriter, ok := adaptor.(gpio.DigitalWriter)
  if !ok {
    log.Fatal("Invalid adaptor")
  }
  led := gpio.NewLedDriver(digitalWriter, pin)

  work := func() {
    gobot.Every(1*time.Second, func() {
      led.Toggle()
    })
  }

  robot := gobot.NewRobot("snapbot",
    []gobot.Connection{adaptor},
    []gobot.Device{led},
    work,
  )

  robot.Start()
}

We are basically adding in there a block to select the right adaptor and pin, depending on which board the code is running. Now we can compile this program, throw the binary in the board, and give it a try.

picture of the Dragonboard prototype

But we can do better. If we package this in a snap, anybody with one of the boards and an operating system that supports snaps can easily install it. We also open the door to continuous delivery and crowd testing. And as I said before, super simple, just put this in the snapcraft.yaml file:

name: gobot-blink-elopio
version: master
summary:  Blink snap for the Raspberry Pi with Gobot
description: |
  This is a simple example to blink an LED in the Raspberry Pi
  using the Gobot framework.

confinement: devmode

apps:
  gobot-blink-elopio:
    command: gobot-blink

parts:
  gobot-blink:
    source: .
    plugin: go
    go-importpath: github.com/elopio/gobot-blink

To build the snap, here is a cool trick thanks to the work that kalikiana recently added to snapcraft. I'm writing this code in my development machine, which is amd64. But the raspberry pi and beaglebone are armhf, and the dragonboard is arm64; so I need to cross-compile the code to get binaries for all the architectures:

snapcraft --target-arch=armhf
snapcraft clean
snapcraft --target-arch=arm64

That will leave two .snap files in my working directory that I can then upload to the store with snapcraft push. Or I can just push the code to GitHub and let build.snapcraft.io take care of building and pushing for me.

Here is the source code for this simple example: https://github.com/elopio/gobot-blink

Of course, Gobot supports many more devices that will let you build complex robots. Just take a look at the documentation in the Gobot site, and at the guide about deployable packages with Gobot and snapcraft.

picture of the BeagleBone prototype

If you have one of the boards I'm using here to play, give it a try:

sudo snap install gobot-blink-elopio --edge --devmode
sudo gobot-blink-elopio

Now my experiments will be to try make the snap more secure, with strict confinement. If you have any questions or want to help, we have a topic in the forum.

on July 11, 2017 02:30 PM

Hello MAASters!

The purpose of this update is to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS and bugs fixes in release MAAS versions.

MAAS 2.3 (current development release)

  • Completed Django 1.11 transition
      • MAAS 2.3 snap will use Django 1.11 by default.
      • Ubuntu package will use Django 1.11 in Artful+
  • Network beaconing & better network discovery
      • MAAS now listens for [unicast and multicast] beacons on UDP port 5240. Beacons are encrypted and authenticated using a key derived from the MAAS shared secret. Upon receiving certain types of beacons, MAAS will reply, confirming to the sender that the existing MAAS on the network has the same shared key. In addition, records are kept about which interface each beacon was received on, and what VLAN tag (if any) was in use on that interface. This allows MAAS to determine which interfaces observed the same beacon (and thus must be on the same fabric). This information can also determine if [what would previously have been assumed to be] a separate fabric is actually an alternate VLAN in an existing fabric.
      • The maas-rack send-beacons command is now available to test the beacon protocol. (This command is intended for testing and support, not general use.) The MAAS shared secret must be installed before the command can be used. By default, it will send multicast beacons out all possible interfaces, but it can also be used in unicast mode.
      • Note that while IPv6 support is planned, support for receiving IPv6 beacons in MAAS is not yet available. The maas-rack send-beacons command, however, is already capable of sending IPv6 beacons. (Full IPv6 support is expected to make beacons more flexible, since IPv6 multicast can be sent out on interfaces without a specific IP address assignment, and without resorting to raw sockets.)
      • Improvements to rack registration are now under development, so that users will see a more accurate representation of fabrics upon initial installation or registration of a MAAS rack controller.
  • Bug fixes
    • LP: #1701056: Show correct information for a device details page as a normal user
    • LP: #1701052: Do not show the controllers tab as a normal user
    • LP: #1683765: Fix format when devices/controllers are selected to match those of machines
    • LP: #1684216 – Update button label from ‘Save selection’ to ‘Update selection’
    • LP: #1682489 – Fix Cancel button on add user dialog, which caused the user to be added anyway
    • LP: #1682387 – Unassigned should be (Unassigned)

MAAS 2.2.1

The past week the team was also focused on preparing and QA’ing the new MAAS 2.2.1 point release, which was released on Friday June the 30th. For more information about the bug fixes please visit the following https://launchpad.net/maas/+milestone/2.2.1 .

MAAS 2.2.1 is available in:

  • ppa:maas/stable
on July 11, 2017 12:39 PM

Our team and Microsoft released Ubuntu as a Windows app store application today!

Ubuntu in the Windows store

If you, like me, toyed with the "bash on Ubuntu on Windows" version before and would like to try the latest and greatest, read on!

Saving your data

If your "Bash on Ubuntu on Windows" environment has some data you'd like to keep, make sure you back it up now. The uninstall process will nuke everything.
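
One minimal way to do that from inside the old environment, assuming your Windows C: drive is mounted at /mnt/c (the default), is to archive your home directory onto the Windows side:

# Archive your home directory onto the Windows filesystem before uninstalling
tar czf /mnt/c/Users/Public/wsl-home-backup.tar.gz -C ~ .
# Restore later in the new environment with: tar xzf /mnt/c/Users/Public/wsl-home-backup.tar.gz -C ~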

Uninstall the old way

This is relatively simple. Opening a command line (cmd.exe) terminal and typing

lxrun /uninstall

will destroy your previous environment.

Installing the new way

To find the app in the Microsoft store, just type "Ubuntu" in the search bar.

Click install. Voilà!

on July 11, 2017 09:40 AM

July 10, 2017

Previously: v4.11.

Here’s a quick summary of some of the interesting security things in last week’s v4.12 release of the Linux kernel:

x86 read-only and fixed-location GDT
With kernel memory base randomization, it was still possible to figure out the per-cpu base address via the “sgdt” instruction, since it would reveal the per-cpu GDT location. To solve this, Thomas Garnier moved the GDT to a fixed location. And to solve the risk of an attacker targeting the GDT directly with a kernel bug, he also made it read-only.

usercopy consolidation
After hardened usercopy landed, Al Viro decided to take a closer look at all the usercopy routines and then consolidated the per-architecture uaccess code into a single implementation. The per-architecture implementations were functionally very similar to one another, so it made sense to remove the redundancy. In the process, he uncovered a number of unhandled corner cases in various architectures (which got fixed by the consolidation), and made hardened usercopy available on all remaining architectures.

ASLR entropy sysctl on PowerPC
Continuing to expand architecture support for the ASLR entropy sysctl, Michael Ellerman implemented the calculations needed for PowerPC. This lets userspace choose to crank up the entropy used for memory layouts.
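
If you want to see or adjust the entropy yourself, the knob is an ordinary sysctl; for example (the accepted values and their ranges depend on the architecture and kernel configuration):

sysctl vm.mmap_rnd_bits              # read the current mmap ASLR entropy, in bits
sudo sysctl -w vm.mmap_rnd_bits=32   # example value only; check the valid range for your kernel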

LSM structures read-only
James Morris used __ro_after_init to make the LSM structures read-only after boot. This removes them as a desirable target for attackers. Since the hooks are called from all kinds of places in the kernel, this was a favorite method for attackers to use to hijack execution of the kernel. (A similar target used to be the system call table, but that has long since been made read-only.)

KASLR enabled by default on x86
With many distros already enabling KASLR on x86 with CONFIG_RANDOMIZE_BASE and CONFIG_RANDOMIZE_MEMORY, Ingo Molnar felt the feature was mature enough to be enabled by default.
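
To check whether a given kernel was built with these options, most distros ship the build config in /boot, so something like this works:

grep -E 'CONFIG_RANDOMIZE_(BASE|MEMORY)' /boot/config-$(uname -r)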

Expand stack canary to 64 bits on 64-bit systems
The stack canary value used by CONFIG_CC_STACKPROTECTOR is most powerful on x86 since it is different per task. (Other architectures run with a single canary for all tasks.) While the first canary chosen on x86 (and other architectures) was a full unsigned long, the subsequent canaries chosen per-task for x86 were being truncated to 32 bits. Daniel Micay fixed this, so now x86 (and future architectures that gain per-task canary support) have significantly increased entropy for stack-protector.

Expanded stack/heap gap
Hugh Dickins, with input from many other folks, improved the kernel’s mitigation against having the stack and heap crash into each other. This is a stop-gap measure to help defend against the Stack Clash attacks. Additional hardening needs to come from the compiler to produce “stack probes” when doing large stack expansions. Any Variable Length Arrays on the stack or alloca() usage needs to have machine code generated to touch each page of memory within those areas to let the kernel know that the stack is expanding, but with single-page granularity.
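
For reference, the size of the new gap can be tuned with the stack_guard_gap= boot parameter (given in pages); a quick way to check whether a non-default value is in use on a running system:

grep -o 'stack_guard_gap=[0-9]*' /proc/cmdline   # no output means the default gap is in use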

That’s it for now; please let me know if I missed anything. The v4.13 merge window is open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

on July 10, 2017 08:24 AM

Watching Sites Disappear

Stephen Michael Kellat

At this point there seem to be some systems problems arising on the web. I have had more than minor difficulty attempting to access http://identi.ca as well as http://quitter.se. Identica has frequently been down as of late. As for Quitter.se, it seems that a site in the so-called fediverse is providing a proof-of-concept demonstration of how to disrupt the federation between social sites. The last I saw, there was discussion of implementing a routing "blackhole" against the miscreant to keep the rest of the federation operating.1

Between this and other matters, I do have pause to wonder as of late.2 How does our connected world survive? Frankly, I do not know the answer.

I have noticed lately that the amount of physical media that I own has increased. If there is a DVD version of a movie that I want to watch again, I may in fact own it. I have a decent catalog of books that I own. That you can search that library catalog at https://www.librarycat.org/lib/alpacaherder is something left unfinished.3 Unlike Sheldon Cooper of The Big Bang Theory, I do not have things barcoded and I do not have the circulation module fired up. It is more an attempt to just track what I own.4

Much of the Internet could easily be classed as ephemera. It is full of things that are here today and gone tomorrow. That is why there is often a push to host your own servers, so that when servers run by others disappear, you may in fact still be online. To a certain extent, much of what I try to do is an attempt to hang on to a sense of permanence.

I am still learning about the morehype package on CTAN to see how it can be used to take LaTeX2e documents intended for print and convert them for the web.5 Not everything needs to be a PDF, although I am finding that I really like the style of the Paratype font. Quite a bit of what I have had to do lately for church-related matters has been intended as "Large Print" for the elderly, so I've been working my way through the font catalog.

Change is hard. I like stability. With as bizarre as things have been at work lately, I know I will have plenty of change and little stability there. Watching social sites disappear suddenly also helps contribute to that feeling of lost stability. The threat of a possible reduction-in-force at work coming in late September/early October does not really help either.

My upcoming big goal is to book the tickets, find accommodations, and cross the Atlantic for OggCamp. This is a major undertaking. There are still matters in play on my part. Strangely enough, Canterbury won't be the farthest I have ever traveled into Europe if I manage to pull this off.

These aren't the greatest musings to start a week. For those interested in contact, you can reach out via Telegram at http://t.me/smkellat perhaps. I may inhabit other channels you are already in as well.


  1. An attempt to view the site in question immediately resulted in a possible malware download, so that site could easily be branded "anti-social media" perhaps.

  2. The electromagnetic pulse threat posed by the government of the Democratic People's Republic of Korea to North America makes many scary possibilities become imminent.

  3. I have an earned master's degree in library science, but it has been a rather long time since I have been a proper librarian.

  4. LibraryThing does not support DVDs yet. I should make a support request, perhaps.

  5. See: https://ctan.org/pkg/morehype

on July 10, 2017 02:51 AM

July 08, 2017

In this final episode of the first season, we talk about the Fairphone phone and the Free Software Foundation.

The complete first season is available at:

Ubuntu y otras hierbas S01E07

Featuring: Francisco Molinero, Francisco Javier Teruelo, Fernando Lanero and Marcos Costales.
on July 08, 2017 12:00 PM

July 07, 2017

If you follow DEF CON news at all, you’ll know that there’s been some kind of issue with the badges. But don’t worry, DEF CON will have badges, but so will the community!

What do I mean by this? Well, badge hacking has long been a DEF CON tradition, but in the past few years, we’ve seen more and more unofficial badges appearing at DEF CON. This year seems to be a massive upswing, and while I’m sure some of that was in progress before the badge announcement, I believe at least some of it is the community response to the DEF CON badge issue. (Edit: All of the listed badges were apparently in the works before the DEF CON announcement. Thanks to @wbm312 for setting me straight.)

I’ve tried to collect information about all the unofficial badges I can find, but I’d imagine there are many more that I haven’t heard about, or whose creator just isn’t talking about it. I know for a fact at least one such private badge exists!

Know of another badge? Ping me on Twitter (@Matir) and I’ll update. Sorry I have so many unknowns, but lots of the badges are keeping quiet!

Available for Sale

This includes badges that were available for sale at some point, even if now sold out. Basically, if at any point you could exchange cash, credit, bitcoin, litecoin, ethereum, gold ingots, or any other form of value for the badge, I’m putting it here. (I’d call it “commercial”, but most of these are a labor of love and the money just helps the creator not go broke with their labors.)

AND!XOR DEF CON 25 Indie Badge

2017 WiFi Badge

Mr Robot Badge

Puffy

The Ides of DEF CON

  • Link: https://dc25spqr.com/
  • Features: Sub-1GHz Radio, Blinky Lights, Sound, LED Screen
  • Availability: Sold Out, Kickstarter Only, Open Source
  • Price: $120

Queercon 14 Badge

Beyond Binaries Badge

DEF CON Furs

DEF CON Darknet

DC 801

Cryptovillage

Hacker Warehouse

NulliBadge

  • Link: http://nu.llify.com
  • Features: LEDs, IR Tag, Open Source
  • Availability: Onsite, limited pre-reg
  • Price: $60

Private Projects/Little Detail

@noidd

on July 07, 2017 07:00 AM

I recently attended a Snappy Sprint in London, UK. As well as the Canonical people attending (including me) with experience in the whole Snappy stack (Snapcraft, the Snap store, snapd, snapd-glib) we had great representation from the Elementary, Fedora, GNOME, MATE and KDE communities. My goal was to help improve the Snap experience for desktop apps both on Ubuntu and other distributions.

We spent a lot of time working on improving snap metadata for use with desktop apps. Improvements included:
  • Exposing the title field from the store down to clients.
  • A plan to get standard license information (using SPDX) attached to snaps.
  • We made progress on a solution that lets projects which use AppStream easily build snaps and pass AppStream data that doesn't fit the Snap metadata model through to clients.
  • Fixing of many small issues in GNOME Software so it is suitable to work in Fedora and other distributions.
  • Plans for a tool that allows graphical configuration of snap interfaces.
  • A plan to solve the limitation on desktop clients able to install / remove snaps without a store login.
  • Discussions around metadata translations.
I helped the MATE Software Boutique and KDE Discover developers make use of snapd-glib, using the GIR bindings in Python and the Qt bindings, to make their stores work. It was great to see snapd-glib working in these different use cases, and I got back some great feedback and a few patches.

Thanks to all the community for attending, I found it very productive to work in-person with them all. If you're interested in following Snappy development check out the Snapcraft Forum where you'll find discussions about what I've described above and much more.
on July 07, 2017 03:41 AM

July 05, 2017

Canonical’s webteam manage over 18 websites as well as many supporting projects and frameworks. These projects are built with any combination of Python, Ruby, NodeJS, Go, PostgreSQL, MongoDB or OpenStack Swift.

We have 9 full-time developers – half the number of websites we have. And naturally some of our projects get a lot of time spent on them (like www.ubuntu.com), and others only get worked on once every few months. Most devs will touch most projects at some point, and some may work on a few of them in any given day.

Before any developer can start a new piece of work, they need to get the project running on their computer. These computers may be running any flavour of Linux or macOS (thankfully we don’t yet need to support Windows).

A focus on tooling

If you’ve ever tried to get up and running on a new software project, you’ll certainly appreciate how difficult that can be. Sometimes developers can spend days simply working out how to install a dependency.

XKCD 1742: Will it work?

Given the number and diversity of our projects, and how often we switch between them, this is a delay we simply cannot afford.

This is why we’ve invested a lot of time into refining and standardising the local development tooling, making it as easy as possible for any of our devs, or any contributors, to get up and running as simply as possible.

The standard interface

We needed a simple, standardised set of commands that could be run across all projects, to achieve predictable results. We didn’t want our developers to have to dig into the README or other documentation every time they wanted to get a new project running.

This is the standard interface we chose to implement in all our projects, to cover the basic functions common to almost all our projects:

./run        # An alias for "./run serve"
./run serve  # Prepare the project and run the local server
./run build  # Build the project, ready for distribution or release
./run watch  # Watch local files for changes and rebuild as necessary
./run test   # Check code syntax and run unit tests
./run clean  # Remove any temporary or built files or local databases

We decided on using a single run executable as our single entry-point into all our projects only after previously trying and eventually rejecting a number of alternatives:

  • A Makefile: The syntax can be confusing. Makefiles are really made for compiling system binaries, which doesn’t usually apply to our projects
  • gulp, or NPM scripts: Not all our projects need NodeJS, and NodeJS isn’t always available on a developer’s system
  • docker-compose: Although we do ultimately run everything through Docker (see below), the docker-compose entrypoint alone wasn’t powerful enough to achieve everything we needed

In contrast to all these options, the run script allows us to perform whatever actions we choose, using any interpreter that’s available on the local system. The script is currently written in Bash because it’s available on all Linux and macOS systems. As an additional bonus, ./run is quicker to type than the other options, saving our devs crucial nanoseconds.
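
As an illustration, here is a simplified sketch (not the actual script from any of our repositories) of how such a dispatcher can be written in Bash:

#!/usr/bin/env bash
# ./run -- minimal illustrative dispatcher, not the real script
set -euo pipefail

command="${1:-serve}"  # plain "./run" defaults to "serve"

case "$command" in
  serve)  echo "prepare the project and run the local server" ;;
  build)  echo "build the project, ready for distribution or release" ;;
  watch)  echo "watch local files for changes and rebuild as necessary" ;;
  test)   echo "check code syntax and run unit tests" ;;
  clean)  echo "remove any temporary or built files or local databases" ;;
  *)      echo "Usage: ./run [serve|build|watch|test|clean]" >&2; exit 1 ;;
esac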

The single dependency that developers need to install to run the script is Docker, for reasons outlined below.

Knowing we can run or build our projects through this standard interface is not only useful for humans, but also for supporting services – like our build jobs and automated tests. We can write general solutions, and know they’ll be able to work with any of our projects.

Using ./run is optional

All our website projects are openly available on GitHub. While we believe the ./run script offers a nice easy way of running our projects, we are mindful that people from outside our team may want to run the project without installing Docker, want to have more fine-grained control over how the project is run, or just not trust our script.

For this reason, we have tried to keep the addition of the ./run script from affecting the wider shape of our projects. It remains possible to run each of our projects using standard methods, without ever knowing or caring about the ./run script or Docker.

  • Django projects can still be run with pip install -r requirements.txt; ./manage.py runserver
  • Jekyll projects can still be run with bundle install; bundle exec jekyll serve
  • NodeJS projects can still be run with npm install; npm run serve

While the documentation in our READMEs recommend the ./run script, we also try to mention the alternatives, e.g. www.ubuntu.com’s HACKING.md.

Using Docker for encapsulation

Although we strive to keep our projects as simple as possible, every software project relies on dependent libraries and programs. These dependencies pose 2 problems for us:

  • We need to install and run these dependencies in a predictable way – which may be difficult in some operating systems
  • We must keep these dependencies from affecting the developer’s wider system – there’s nothing worse than having a project break your computer

For a while now, developers have been solving this problem by running applications within virtual machines running Linux (e.g. with VirtualBox and Vagrant), which is a great way of encapsulating software within a predictable environment.

Linux containers offer light-weight encapsulation

More recently, containers have entered the scene.


A container is a part of the existing system with carefully controlled permissions and an encapsulated filesystem, to make it appear and behave like a separate operating system. Containers are much lighter and quicker to run than a full virtual machine, and yet provide similar benefits.

The easiest and most direct way to run containers is probably LXD, but unfortunately there’s no easy way to run LXD on macOS. By contrast, Docker CE is trivial to install and use on macOS, and so this became our container manager of choice. When it becomes easier to run LXD on macOS, we’ll revisit this decision.

Each project uses a number of Docker images


Running containers through Docker helps us to carefully manage our projects’ dependencies, by:

  • Keeping all our software, from Python modules to databases, from affecting the wider system
  • Logically grouping our dependencies into separate light-weight containers: one for the database, and a separate one for each technology stack (Python, Ruby, Node etc.)
  • Easily cleaning up a project by simply deleting its associated containers

So the ./run script in each project starts the project by running the necessary commands inside the relevant Docker images; partners.ubuntu.com, for example, is run this way.
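
As a rough sketch of the idea (not the actual script from any of our projects), serving a Django project through Docker without installing anything locally might look like this, using the stock python:2.7 image as a stand-in for whatever images the real scripts use:

# Illustrative only -- the real ./run scripts use project-specific images and options
docker run --rm -it \
  --volume "$(pwd)":/srv --workdir /srv \
  --publish 8001:8001 \
  python:2.7 \
  bash -c "pip install -r requirements.txt && ./manage.py runserver 0.0.0.0:8001"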

Docker is the only dependency

By using Docker images in this way, the developer doesn’t need to install any of the project dependencies on their local system (NodeJS, Python, PostgreSQL etc.). Docker – which should be trivial to install on both Linux and macOS – is the single dependency they need to run any of our projects.

Keeping the ./run script up-to-date across projects

A key feature of our solution is that it provides a consistent interface across all of our projects. However, the script itself will vary between projects, as different projects have different requirements. So we needed a way of sharing relevant parts of the script while keeping the ability to customise it locally.

It is also important that we don’t add significant bloat to the project’s dependencies. This script is just meant to be a useful shorthand way of running the project, but we don’t want it to affect the shape of the project at large, or add too much extra complexity.

However, we still need a way of making improvements to the script in a centralised way and easily updating the script in existing projects.

A yeoman generator

To achieve these goals, we maintain a yeoman generator called canonical-webteam. This generator contains a few ways of adding the ./run architecture, for some common types of projects we use:

$ yo canonical-webteam:run            # Add ./run for a basic node-only project
$ yo canonical-webteam:run-django     # Add ./run for a databaseless Django project
$ yo canonical-webteam:run-django-db  # Add ./run for a Django project with a database
$ yo canonical-webteam:run-jekyll     # Add ./run for a Jekyll project

These generator scripts can be used either to add the ./run script to a project that doesn’t have it, or to replace an existing ./run script with the latest version. It will also optionally update .gitignore and package.json with some of our standard settings for our projects.
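
If you want to try the generator yourself, yeoman generators are normally published to npm with a generator- prefix, so (assuming ours follows that convention -- check the repository for the exact package name) installation would look roughly like:

npm install -g yo generator-canonical-webteam   # package name assumed from the yeoman convention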

Try it out!

To see this ./run tooling in action, first install Docker by following the official instructions.
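
Before going further, it's worth confirming that Docker itself is working; this pulls and runs a tiny test image:

docker run --rm hello-world   # on Linux you may need sudo, or to add your user to the docker group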

Run the www.ubuntu.com website

You should now be able to run a version of the www.ubuntu.com website on your computer:

  • Download the www.ubuntu.com codebase, e.g.:

    curl -L https://github.com/canonical-websites/www.ubuntu.com/archive/master.zip > www.ubuntu.com-master.zip
    unzip www.ubuntu.com-master.zip
    cd www.ubuntu.com-master
    
  • Run the site!

    $ ./run
    # Wait a while (the first time) for it to download and install dependencies. Until:
    Starting development server at http://0.0.0.0:8001/
    Quit the server with CONTROL-C.
    
  • Visit http://127.0.0.1:8001 in your browser, and you should see the latest version of the https://www.ubuntu.com website.

Forking or improving our work

We have documented this standard interface in our team practices repository, and we keep the central code in our canonical-webteam Yeoman generator.

Feel free to fork our code, or if you’d like to suggest improvements please submit an issue or pull-request against either repository.


Also published on Medium.

on July 05, 2017 06:45 PM

OpenStack in a Snap

Corey Bryant

OpenStack is complex and many OpenStack community members are working hard to make the deployment and operation of OpenStack easier. Much of this time is focused on tools such as Ansible, Puppet, Kolla, Juju, Triple-O, Chef (to name a few). But what if we step down a level and also make the package experience easier?

With snaps we’re working on doing just that. Snaps are a new way of delivering software. The following description from snapcraft.io provides a good summary of the core benefits of snaps:

“Snaps are quick to install, easy to create, safe to run, and they update automatically and transactionally so your app is always fresh and never broken.”

Bundled software

A single snap can deliver multiple pieces of software from different sources to provide a solution that gets you up and running fast.  You’ll notice that installing a snap is quick. That’s because when you install a snap, that single snap bundles all of its dependencies.  That’s a bit different from installing a deb, where all of the dependencies get pulled down and installed separately.

Snaps are easy to create

In my time working on Ubuntu, I’ve spent much of it working on Debian packaging for OpenStack. It’s a niche skill that takes quite a bit of time to understand the nuances of.  When compared with snaps, the difference in complexity between deb packages and snaps is like night and day. Snaps are just plain simple to work on, and even quite fun!

A few more features of snaps

  • Each snap is installed in its own read-only squashfs filesystem.
  • Each snap is run in a strict environment sandboxed by AppArmor and seccomp policy.
  • Snaps are transactional. New versions of a snap install to a new read-only squashfs filesystem. If an upgrade fails, it will roll-back to the old version.
  • Snaps will auto-refresh when new versions are available.
  • OpenStack Snaps are guaranteed to be aligned with OpenStack’s upper-constraints. Packagers no longer need to maintain separate packages for the OpenStack dependency chain. Woo-hoo!
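
A couple of standard snapd commands correspond to the transactional behaviour described above (nothing OpenStack-specific here):

sudo snap refresh keystone   # pull the newest revision available in the tracked channel
sudo snap revert keystone    # roll back to the previously installed revision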

Introducing the OpenStack Snaps!

We currently have the following projects snapped:

  • Keystone – This snap provides the OpenStack identity service.
  • Glance – This snap provides the OpenStack image service.
  • Neutron – This snap specifically provides the ‘neutron-server’ process as part of a snap based OpenStack deployment.
  • Nova – This snap provides the Nova controller component of an OpenStack deployment.
  • Nova-hypervisor – This snap provides the hypervisor component of an OpenStack deployment, configured to use Libvirt/KVM + Open vSwitch which are installed using deb packages.  This snap also includes nova-lxd, allowing for use of nova-lxd instead of KVM.

This is enough to get a minimal working OpenStack cloud.  You can find the source for all of the OpenStack snaps on github.  For more details on the OpenStack snaps please refer to the individual READMEs in the upstream repositories. There you can find more details for managing the snaps, such as overriding default configs, restarting services, setting up aliases, and more.
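
As a small example of the kind of management covered in the READMEs, a snapped service can be restarted through its systemd unit; unit names follow the snap.<snap>.<app> pattern, and the exact app names are defined by each snap, so list them first:

systemctl list-units 'snap.keystone.*'       # see which units the keystone snap installed
sudo systemctl restart snap.keystone.uwsgi   # app name here is only an example -- use one from the listing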

Want to create your own OpenStack snap?

Check out the snap cookie cutter.

I’ll be writing a blog post soon that walks you through using the snap cookie cutter. It’s really simple and will help get the creation of a new OpenStack snap bootstrapped in no time.

Testing the OpenStack snaps

We’ve been using a simple script for initial testing of the OpenStack snaps. The script installs the snaps on a single node and provides additional post-install configuration for services. To try it out:

git clone https://github.com/openstack-snaps/snap-test
cd snap-test
./snap-deploy

At this point we’ve been doing all of our testing on Ubuntu Xenial (16.04).  Also note that this will install and configure quite a bit of software on your system so you’ll likely want to run it on a disposable machine.

Tracking OpenStack

Today you can install snaps from the edge channel of the snap store. For example:

sudo snap install --edge keystone

The OpenStack team is working toward getting CI/CD in place to enable publishing snaps across tracks for OpenStack releases (i.e. a track for ocata, another track for pike, etc.). Within each track will be 4 different channels. The edge channel for each track will contain the tip of the OpenStack project’s corresponding branch, with the beta, candidate and stable channels being reserved for released versions.  This should result in an experience such as:

sudo snap install --channel=ocata/stable keystone
sudo snap install --channel=pike/edge keystone

Poking around

Snaps have various environment variables available to them that simplify the creation of the snap. They’re all documented here.  You probably won’t need to know much about them to be honest, however there are a few locations that you’ll want to be familiar with once you’ve installed a snap:

$SNAP == /snap/<snap-name>/current

This is where the snap and all of its files are mounted. Everything here is read-only. In my current install of keystone, $SNAP is /snap/keystone/91. Fortunately you don’t need to know the current version number, as there’s a symlink to that directory at /snap/keystone/current.

$ ls /snap/keystone/current/
bin                     etc      pysqlite2-doc        usr
command-manage.wrapper  include  snap                 var
command-nginx.wrapper   lib      snap-openstack.yaml
command-uwsgi.wrapper   meta     templates

$ ls /snap/keystone/current/bin/
alembic                oslo-messaging-send-notification
convert-json           oslo-messaging-zmq-broker
jsonschema             oslo-messaging-zmq-proxy
keystone-manage        oslopolicy-checker
keystone-wsgi-admin    oslopolicy-list-redundant
keystone-wsgi-public   oslopolicy-policy-generator
lockutils-wrapper      oslopolicy-sample-generator
make_metadata.py       osprofiler
mako-render            parse_xsd2.py
mdexport.py            pbr
merge_metadata.py      pybabel
migrate                snap-openstack
migrate-repository     sqlformat
netaddr                uwsgi
oslo-config-generator

$ ls /snap/keystone/current/usr/bin/
2to3               idle     pycompile     python2.7-config
2to3-2.7           pdb      pydoc         python2-config
cautious-launcher  pdb2.7   pydoc2.7      python-config
compose            pip      pygettext     pyversions
dh_python2         pip2     pygettext2.7  run-mailcap
easy_install       pip2.7   python        see
easy_install-2.7   print    python2       smtpd.py
edit               pyclean  python2.7

$ ls /snap/keystone/current/lib/python2.7/site-packages/
...

$SNAP_COMMON == /var/snap/<snap-name>/common

This directory is used for system data that is common across revisions of a snap. This is where you’ll override default config files and access log files.

$ ls /var/snap/keystone/common/
etc  fernet-keys  lib  lock  log  run

$ sudo ls /var/snap/keystone/common/etc/
keystone  nginx  uwsgi

$ ls /var/snap/keystone/common/log/
keystone.log  nginx-access.log  nginx-error.log  uwsgi.log
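
For example, to follow the identity service's log as it runs (path taken from the listing above):

sudo tail -f /var/snap/keystone/common/log/keystone.log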

Strict confinement

The snaps all run under strict confinement, where each snap is run in a restricted environment that is sandboxed with seccomp and AppArmor policy.  More details on snap confinement can be viewed here.

New features/updates coming for snaps

There are a few features and updates coming for snaps that I’m looking forward to:

  • We’re working on getting libvirt AppArmor policy in place so that the nova-hypervisor snap can access qcow2 backing files.
    • For now, as a work-around, you can put virt-aa-helper in complain mode: sudo aa-complain /usr/lib/libvirt/virt-aa-helper
  • We’re also working on getting additional snapd interface policy in place that will enable network connectivity for deployed instances.
    • For now you can install the nova-hypervisor snap in devmode, which disables security confinement: snap install --devmode --edge nova-hypervisor
  • Auto-connecting nova-hypervisor interfaces. We’re working on getting the interfaces for the nova-hypervisor defined automatically at install time.
    • Interfaces define the AppArmor and seccomp policy that enables a snap to access resources on the system.
    • For now you can manually connect the required interfaces as described in the nova-hypervisor snap’s README.
  • Auto-alias support for commands. We’re working on getting auto-alias support defined for commands across the snaps, where aliases will be defined automatically at install time.
    • This enables use of the traditional command names. Instead of ‘nova.manage db sync‘ you’ll be able to issue ‘nova-manage db sync’ right after installing the snap.
    • For now you can manually enable aliases after the snap is installed, such as ‘snap alias nova.manage nova-manage’. See the snap READMEs for more details.
  • Auto-alias support for daemons.  Currently snappy only supports aliases for commands (not daemons).  Once alias support is available for daemons, we’ll set them up to be automatically configured at install time.
    • This enables use of the traditional unit file names. Instead of ‘systemctl restart snap.nova.nova-compute’ you’ll be able to issue ‘systemctl restart nova-compute’.
  • Asset tracking for snaps. This will enable tracking of the versions used to build the snap, which can be re-used in future builds.

If you’d like to chat more about snaps you can find us on IRC in #openstack-snaps on freenode. We welcome your feedback and contributions!

Thanks and have fun!

Corey


on July 05, 2017 06:37 PM