July 01, 2016

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community, because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian packaging

Django and Python. I uploaded Django 1.9.7 and filed an upstream ticket (#26755) for a failure seen in its DEP-8 tests.

I packaged/sponsored python-django-modeltranslation and python-paypal. I opened a pull request on django-modeltranslation to fix tests that were failing during the Debian package build.

I packaged a new python-django-jsonfield (1.0.0), filed a bug, and discovered a regression in its PostgreSQL support. I helped on the upstream ticket and have been granted commit rights. I used this opportunity to do some bug triage and push a few fixes. I also discussed the future of the module and ended up starting a discussion on Django’s developer list about the possibility of adding a JSONField to core.

CppUTest. I uploaded a new upstream version (3.8) incorporating more than a year of work. I found out that make install did not install a required header, so I opened a ticket with a patch. The package ended up not compiling on quite a few architectures, so I opened a ticket and prepared a fix for some of those failures with the help of the upstream developers. I also added DEP-8 tests, after having uploaded a broken (untested) package…

systemd support in net-snmp and postfix. I worked on adding native systemd service units to net-snmp (#782243) and postfix (#715188). In both cases, the maintainers have not been very responsive so far, so I uploaded my changes as delayed NMUs.

pkg-security team. The team that I quietly started a few months ago is now growing, both with new members and new packages. I created the required Teams/pkg-security wiki page, sponsored xprobe and hydra, and made an upload of medusa to merge Kali changes into Debian (submitting the patch upstream at the same time).

fontconfig. After having read Jonathan McDowell’s analysis of a bug that I experienced multiple times (and that many Kali users hit too), I opened bug #828037 to get it fixed once and for all. Unfortunately, nothing has happened yet.

DebConf 16

I spent some time preparing the two talks and the BoF that I will give/run in Cape Town next week:

  • Kali Linux’s Experience https://debconf16.debconf.org/talks/39/
  • 2 Years of Work of Paid Contributors in the Debian LTS Project https://debconf16.debconf.org/talks/40/
  • Using Debian Money to Fund Debian Projects https://debconf16.debconf.org/talks/41/

Distro Tracker

I continued to mentor Vladimir Likic who managed to finish his first patch. He is now working on documentation for new contributors based on his recent experience.

I enhanced the tox configuration to run the tests with Django 1.8 LTS with fatal warnings (python -Werror), so as to ensure that I’m not relying on any deprecated feature and that the codebase will work on the next Django LTS release (1.11). Thanks to this, I discovered quite a few places where I had been using deprecated APIs, and I fixed them all (the JSONField update to 1.0.0 mentioned above was precisely to fix such a warning).
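For the curious, making deprecation warnings fatal boils down to running the test suite with Python warnings escalated to errors; a minimal sketch (the direct manage.py invocation is an assumption about the project layout):

python -Werror manage.py test      # run the suite with warnings as errors
PYTHONWARNINGS=error tox           # the equivalent through tox's environment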

I also fixed a few more issues: one with folded mail headers that could not be injected back into a new Message object, and one with messages lacking a Subject field. All of those were caught through real (spam) emails generating exceptions, which are then mailed to me.

Kali related work

I uploaded a new live-boot (5.20160608) to Debian to fix a bug where the boot process was blocking on some timeout.

I forwarded a Kali bug against libatk-wrapper-java (#827741) which turned out to be an OpenJDK bug.

I filed #827749 against reprepro to request a way to remove selected internal file references. This is required if you want to be able to make a file disappear while that file is part of a snapshot that you want to keep anyway. But in truth, my real need is to be able to replace the .orig.tar.gz used by Kali with the .orig.tar.gz used by Debian… those conflicts break the mirroring/import script.

Salt

I have been using Salt to deploy a new service, and I developed patches for a few issues in salt formulas. I also created a new letsencrypt-sh formula to manage TLS certificates with the letsencrypt.sh ACME client.
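As an illustration, wiring a formula like that into a Salt setup is mostly a matter of listing it in the top file; a minimal sketch (the target pattern is hypothetical):

# /srv/salt/top.sls
base:
  'web*':              # hypothetical target: all web servers
    - letsencrypt-sh   # apply the letsencrypt-sh formula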

Thanks

See you next month for a new summary of my activities.


on July 01, 2016 01:14 PM

June’s reading list

Canonical Design Team

Here are the best links shared by the design team over the last month:

  1. SuperHi Summer School
  2. Cartographic ethics: Oceania, the truncated continent
  3. New Google Fonts
  4. The True Size Of …
  5. Cartography Comparison: Google Maps & Apple Maps
  6. Web Accessibility: Developing with Empathy
  7. Improving UX For Color-Blind Users
  8. Invisible Design: Co-Designing with Machines. Airbnb Design
  9. Great Apps Timeline

Thank you to Ant, Jamie, Joe, Karl, Luca, Pierre and me for the links this month!

on July 01, 2016 08:23 AM

From the “I should have posted this months ago” vault…

When I led technology development at One Laptop per Child Australia, I maintained two golden rules:

  1. everything that we release must ‘just work’ from the perspective of the user (usually a child or teacher), and
  2. no special technical expertise should ever be required to set up, use or maintain the technology.

In large part, I believe that we were successful.

Once the more obvious challenges have been identified and cleared, some more fundamental problems become evident. Our goal was to improve educational opportunities for children as young as possible, but using a computer proficiently to input information requires a degree of literacy.

Sugar Labs have done stellar work in questioning the relevance of the desktop metaphor for education, and in coming up with a more suitable alternative. This proved to be a remarkable platform for developing a touch-screen laptop, in the form of the XO-4 Touch: the icons-based user interface meant that we could add touch capabilities with relatively few user-visible tweaks. The screen can be swivelled and closed over the keyboard as with previous models, meaning that this new version can be easily converted into a pure tablet at will.

Revisiting Our Assumptions

Still, a fundamental assumption has long gone unchallenged on all computers: the default typeface and keyboard. Neither represents how young children learn the English alphabet or literacy. Moreover, at OLPC Australia we were often dealing with children who were behind on learning outcomes and who were attending school with almost no exposure to English (since they speak other languages at home). How are they supposed to learn the curriculum when they can barely communicate in the classroom?

Looking at a standard PC keyboard, you’ll see that the keys are printed with upper-case letters. And yet, that is not how letters are taught in Australian schools. Imagine that you’re a child who still hasn’t grasped his/her ABCs. You see a keyboard full of unfamiliar symbols. You press one, and a completely different-looking letter pops up on the screen! The keyboard may be in upper-case, but by default you’ll get the lower-case variants on the screen.

A standard PC keyboard

Unfortunately, the most prevalent touch-screen keyboard on the market isn’t any better. Given its parent company’s large education market, I’m astounded that this has not been a priority.

The Apple iOS keyboard

Better alternatives exist on other platforms, but I still was not satisfied.

A Re-Think

The solution required an examination of how children learn, and the challenges that they often face when doing so. The end result is simple, yet effective.

The standard OLPC XO mechanical keyboard (above) versus the OLPC Australia Literacy keyboard (below)

This image contrasts the standard OLPC mechanical keyboard with the OLPC Australia Literacy keyboard that we developed. Getting there required several considerations:

  1. a new typeface, optimised for literacy
  2. a cleaner design, omitting characters that are not common in English (they can still be entered with the AltGr key)
  3. an emphasis on lower-case
  4. upper-case letters printed on the same keys, with the Shift arrow angled to indicate the relationship
  5. better use of symbols to aid instruction

One interesting user story I came across with the old keyboard was in a remote Australian school, where Aboriginal children were trying to play the Maze activity by pressing the opposite arrows to the ones they were supposed to. Apparently they thought that the arrows represented birds’ feet! You’ll see that we changed the arrow heads on the literacy keyboard as a result.

We explicitly chose not to change the QWERTY layout. That’s a different debate for another time.

The Typeface

The abc123 typeface is largely the result of work I did with John Greatorex. It is freely downloadable (in TrueType and FontForge formats) and open source.

After much research and discussions with educators, I was unimpressed with the other literacy-oriented fonts available online. Characters like ‘a’ and ‘9’ (just to mention a couple) are not rendered in the way that children are taught to write them. Young children are also susceptible to confusion over letters that look similar, including mirror-images of letters. We worked to differentiate, for instance, the lower-case L from the upper-case i, and the lower-case p from the lower-case q.

Typography is a wonderfully complex intersection of art and science, and it would have been foolhardy for us to have started from scratch. We used as our base the high-quality DejaVu Sans typeface. This gave us a foundation that worked well on screen and in print. Importantly for us, it maintained legibility at small point sizes on the 200dpi XO display.

On the Screen

abc123 is a suitable substitute for DejaVu Sans. I have been using it as the default user interface font in Ubuntu for over a year.
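If you want to try it yourself, installing the TrueType file for a single user is straightforward on any fontconfig-based Linux desktop; a quick sketch (the abc123.ttf file name is an assumption based on the font’s name):

mkdir -p ~/.local/share/fonts
cp abc123.ttf ~/.local/share/fonts/
fc-cache -f            # rebuild the fontconfig cache
fc-match abc123        # verify that the font now resolves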

It looks great in Sugar as well. The letters are crisp and easy to differentiate, even at small point sizes. We made abc123 the default font for both the user interface and in activities (applications).

The abc123 font in Sugar’s Write activity, on an XO laptop screen

Likewise, the touch-screen keyboard is clear and simple to use.

The abc123 font on the XO touch-screen keyboard, on an XO laptop screen

The end result is a more consistent literacy experience across the whole device. What you press on the hardware or touch-screen keyboard will be reproduced exactly on the screen. What you see on the user interface is also what you see on the keyboards.

on July 01, 2016 07:26 AM

Linkdump 26/2016 ...

Dirk Deimeke

Don’t be alarmed, this batch includes a lot of short articles.

Yes, they do exist, those team members who, measured by performance, contribute almost nothing: Learning from Lukas Podolski.

We have a different view of people than LinkedIn: we just plaster everything with ads ...

I could not agree more. Even though SELinux is a very important tool, it is close to being unusable: blog/linux/SELinuxBeyondSaving.

A juicy comparison between Switzerland and Germany; I would be quite interested to know whether Switzerland uses the same tricks to polish its statistics as Germany does.

I am more of an RSS feeds person and think every forum should offer them: Email wins over Forums.

Social networks are passé. Messaging is the new thing. Take two. Agreed. Worth reading.

Why is “career” always used as a synonym for management responsibility? Now or never? When is the best time for a career move?

Four things we should not learn from Silicon Valley; but everything is supposedly so great over there ...

A bit “Captain Obvious”: how to cope with the email flood. Productively.

I love Taskwarrior, therefore I love Free Software, I love my wife. ;-)

A psychological test by staying away: what your out-of-office message reveals about you.

How to enforce better technical debt practices - use Git to get rid of garbage before you commit.

I like it with very little colour: Syntax Highlighting Off.

New research reveals surprising truths about why some work groups thrive and others falter is my “read of the week”. What makes one team more successful than another?

on July 01, 2016 03:37 AM

June 30, 2016

"I want to figure out a way to not be stupid with money, then make a whole bunch of it, then I want to move to Outer Mongolia. I want to milk a yak. Maybe I’ll just settle for a cow."
– Dave Matthews

The first alpha of the Yakkety Yak (to become 16.10) has now been released!

This milestone features images for Lubuntu, Ubuntu MATE and Ubuntu Kylin.

Pre-releases of the Yakkety Yak are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this release ready.

Alpha 1 includes a number of software updates that are ready for wider testing. This is quite an early set of images, so you should expect some bugs.

While these Alpha 1 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Yakkety Yak. In particular, once newer daily images are available, system installation bugs identified in the Alpha 1 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 16.10 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.

Lubuntu

Lubuntu is a flavour of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

The Lubuntu 16.10 Alpha 1 images can be downloaded from:

More information about Lubuntu 16.10 Alpha 1 can be found here:

Ubuntu MATE

Ubuntu MATE is a flavour of Ubuntu featuring the MATE desktop environment for people who just want to get stuff done.

The Ubuntu MATE 16.10 Alpha 1 images can be downloaded from:

More information about Ubuntu MATE 16.10 Alpha 1 can be found here:

Ubuntu Kylin

Ubuntu Kylin is a flavour of Ubuntu that is more suitable for Chinese users.

The Ubuntu Kylin 16.10 Alpha 1 images can be downloaded from:

More information about Ubuntu Kylin 16.10 Alpha 1 can be found here:

If you’re interested in following the changes as we further develop Yakkety, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a week) carrying announcements of approved specifications, policy changes, alpha releases and other interesting events.

A big thank you to the developers and testers for their efforts to pull together this Alpha release!

Originally posted to the ubuntu-release mailing list on Thu Jun 30 19:09:16 UTC 2016 by Martin Wimpress and Simon Quigley, on behalf of the Ubuntu Release Team

on June 30, 2016 09:02 PM
Lubuntu Yakkety Yak Alpha 1 (soon to be 16.10) has been released! We have a couple papercuts listed in the release notes, so please take a look. We currently have a bug (with a workaround) that breaks creating BTRFS and XFS partitions using the desktop installer. A big thanks to Brendan Perrine and Nio Wiklund […]
on June 30, 2016 06:27 PM

Last week I was invited to Beijing to take part in the China Launch Sprint. The focus of the sprint was to identify action items in our product roadmap for the next devices that will ship Ubuntu Touch in the Chinese market later this year.


I am a lead UX designer in the product strategy team, currently doing many exciting things such as designing the convergence experience across the Ubuntu platform. I was invited to offer design support and participate in the planning of the work we will be doing with our industry partner, China Mobile, after reviewing the CTA test results.

What is CTA?

CTA stands for China type approval which is a certificate granted to a product that meets a set of regulatory, technical and safety requirements. Generally, type approval is required before a product is allowed to be sold in a particular country.

Topics covered:

  • We reviewed the CTA Level 1-4 test cases and developed a new testing tool for pre-installed applications.
  • We reviewed the content and proposed design for all five Migu scopes with the design team’s input.
  • We discussed the new RCS (Rich Communication Suite) integration with our Messaging app and prepared demos [link] for MWC Shanghai, Asia’s biggest mobile event happening at the end of this month.
  • We explored ideas around the design of mCloud service integration with our storage framework.

Achievements

The sprint was very productive and a great experience to sync up with old and new faces. We were all excited to explore ideas and work together on the next steps for China Mobile and Ubuntu.

Downtown in Beijing

I had some downtime to explore the city and have a taste of Beijing’s most interesting local dishes and potions with people I met from the sprint…


Michi creatively named this one “snake juice”.


A large team dinner.


The famous Great Wall of China.


The city lights of Beijing :)

on June 30, 2016 04:15 PM

Next week on Tuesday, 5th July, we want to have our next Snappy Playpen event. As always we are going to work together on snapping software for our repository on github. Whatever app, service or piece of software you bring is welcome.

The focus of last week was ironing out issues and documenting what we currently have. Some outcomes of this were:

We want to continue this work, but add a new side to this: upstreaming our work. It is great that we get snaps working, but it is much better if the upstream project in question can take over the ownership of snaps themselves. Having snapcraft.yaml in their source tree will make this a lot easier. To kick off this work, we started some documentation on how to best do that and track this effort.

You are all welcome to the event and we look forward to working together with you. Coordination is happening on #snappy on Freenode and on Gitter. We will make sure all our experts are around to help you if you have questions.

Looking forward to seeing you there!

on June 30, 2016 03:39 PM

S09E18 – Suspicious Package - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Eighteen of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again!

In this week’s show:

  • We discuss the snap packaging format.

  • We also discuss going to Download Festival and discovering Open Store.

  • We share a Command Line Lurve: Clonezilla, an amazing way to copy bits from one hard disk to another.

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

    • Andy Smith at the brilliant Bitfolk upgraded our VPS data transfer allowance without us even asking! Go buy your VPS from them!
    • Entroware have released another beast of a laptop, worth looking into.
    • David Wolski told us how to monitor progress using dd itself. Here are the three examples he gave in the show:
      sudo dd if=raspbian.img of=/dev/sdb bs=512 conv=noerror,sync status=progress
      pv bigfile.iso | md5sum
    • Asa similarly emailed. Here are those examples too:
      killall -USR1 dd
      watch -n 5 "killall -USR1 dd"
  • This week’s cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on June 30, 2016 02:00 PM

June 29, 2016

Podcast fans will know that we were struck down by lucky show thirteen. Google Hangouts crashed out twice, and we lost the live stream. We ended up half an hour late, with no Hangouts, and a makeshift YouTube live stream hastily hooked together in record time by the #awesome Ovidiu-florin Bogdan.

The upside of this being that we were rescued again by the amazing Big Blue Button.

We have decided that we are going to move to using Big Blue Button permanently for the Podcast show, which is great news for you in the audience.

Why ?

It means that you can join us on the show live. That’s right: you too can join us in the Big Blue Button conference server whilst we are making and recording the show. Maybe you just want to listen in live and watch, or perhaps ask questions and make comments in the built-in chat system.

Of course, you can take it a step further and join our audio conference bridge to interact, chat, make comments and ask questions, provided you use the “Hand Up” feature to grab our attention first.

So come and join us in Room 1 of the Kubuntu Big Blue Button Conference Server. The password is “welcome”.

Wednesday 6th July at 19:00 UTC

To get the access details drop by IRC a few minutes before the show starts, at freenode.net #kubuntu-podcast. Or you can join IRC directly from this website via the embedded IRC client on our Podcast page.

on June 29, 2016 09:32 PM
Since my last blog post about stress-ng, I've pushed out several more small releases that incorporate new features and (as ever) a bunch more bug fixes. I've been eyeballing gcov kernel coverage stats to find more regions of the kernel that stress-ng needs to exercise. Also, testing on a range of hardware (arm64, s390x, etc.) and a range of kernels has eked out some bugs and helped me to improve stress-ng. So what's new?

New stressors:
  • ioprio  - exercises ioprio_get(2) and ioprio_set(2) (I/O scheduling classes and priorities)
  • opcode - generates random object code and executes it, generating and catching illegal instructions, bus errors, segmentation faults, traps and floating point errors.
  • stackmmap - allocates a 2MB stack that is memory mapped onto a temporary file. A recursive function works down the stack and flushes dirty stack pages back to the memory mapped file using msync(2) until the end of the stack is reached (stack overflow). This exercises dirty page and stack exception handling.
  • madvise - applies random madvise(2) advise settings on pages of a 4MB file backed shared memory mapping.
  • pty - exercise pseudo terminal operations.
  • chown - trivial chown(2) file ownership exerciser.
  • seal - fcntl(2) file SEALing exerciser.
  • locka - POSIX advisory locking exerciser.
  • lockofd - fcntl(2) F_OFD_SETLK/GETLK open file description lock exerciser.
Improved stressors:
  • msg: add in IPC_INFO, MSG_INFO, MSG_STAT msgctl calls
  • vecmath: add more ops to make vecmath more demanding
  • socket: add --sock-type socket type option, e.g. stream or seqpacket
  • shm and shm-sysv: add msync'ing on the shm regions
  • memfd: add hole punching
  • mremap: add MAP_FIXED remappings
  • shm: sync, expand, shrink shm regions
  • dup: use dup2(2)
  • seek: add SEEK_CUR, SEEK_END seek options
  • utime: exercise UTIME_NOW and UTIME_OMIT settings
  • userfaultfd: add zero page handling
  • cache:  use cacheflush() on systems that provide this syscall
  • key:  add request_key system call
  • nice: add some randomness to the delay to unsync niceness changes
If any new features land in Linux 4.8 I may add stressors for them, but for now I suspect that's about it for the big changes to stress-ng for the Ubuntu Yakkety 16.10 release.
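As a quick illustration, each stressor is exposed as its own option taking a worker count, so a short mixed run of some of the new stressors looks something like this (the worker counts and duration here are arbitrary):

stress-ng --ioprio 2 --pty 2 --chown 2 --locka 2 --timeout 60s --metrics-brief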
on June 29, 2016 04:46 PM

Snapcraft 2.12 is here and is making its way to your 16.04 machines today.

This release takes Snapcraft to a whole new level. For example, instead of defining your own project parts, you can now use and share them from a common, open repository. This feature was already available in previous versions, but it is now much more visible, with the repository searchable and locally cached.

Without further ado, here is a tour of what’s new in this release.

Commands

2.12 introduces ‘snapcraft update’, ‘search’ and ‘define’, which bring more visibility to the Snapcraft parts ecosystem. Parts are pieces of code for your app that can also help you bundle libraries, set up environment variables and handle other tedious tasks app developers are familiar with.

They are literally parts you aggregate and assemble to create a functional app. The benefits of using a common tool is that these parts can be shared amongst developers. Here is how you can access this repository.

  • snapcraft update : refresh the list of remote parts
  • snapcraft search : list and search remote parts
  • snapcraft define : display information and content about a remote part

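For example, a typical session with these commands looks like this (using the curl part that is also walked through in detail further down this page; output shortened):

$ snapcraft update
Updating parts list... |
$ snapcraft search curl
PART NAME  DESCRIPTION
curl       A tool and a library (usable from many languages) for client side URL tra...
$ snapcraft define curl
Maintainer: 'Sergio Schvezov <sergio.schvezov@ubuntu.com>'
Description: 'A tool and a library (usable from many languages) for client side URL transfers, supporting FTP, FTPS, HTTP, HTTPS, TELNET, DICT, FILE and LDAP.'
...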

To get a sense of how these commands are used, have a look at the example above; then you can dive into the details of what we mean by “ecosystem of parts”.

Snap name registration

Another command you will find useful is the new ‘register’ one. Registering a snap name reserves that name on the store.

  • snapcraft register


As a vendor or upstream, you can secure snap names when you are the publisher of what most users expect to see under this name.
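From the command line, the reservation is a single call; for instance, with the ‘my-cool-app’ name used in the walkthrough below:

$ snapcraft register my-cool-app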

Of course, this process can be reverted and disputed. Here is what the store workflow looks like when I try to register an already registered name:


On the name registration page of the store, I’m going to try to register ‘my-cool-app’, which already exists.


I’m informed that the name has already been registered, but I can dispute this or use another name.


I can now start a dispute process to retrieve ownership of the snap name.

Plugins and sources

Two new plugins have been added for parts building: qmake and gulp.

qmake

The qmake plugin has been requested since the advent of the project, and we have seen many custom versions filling this gap. Here is what the default qmake plugin allows you to do:

  • Pass a list of options to qmake
  • Specify a Qt version
  • Declare list of .pro files to pass to the qmake invocation
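Put together, a part using the plugin might look like the following sketch (the part name and .pro file are hypothetical, and the keyword spellings are assumptions matching the option list above):

parts:
  my-qt-app:
    plugin: qmake
    source: .
    qt-version: qt5                    # Qt version to build against
    options: ['CONFIG+=release']       # extra options passed to qmake
    project-files: ['my-qt-app.pro']   # .pro files given to the qmake invocation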

gulp

The hugely popular nodejs builder is now a first class citizen in Snapcraft. It inherits from the existing nodejs plugin and allows you to:

  • Declare a list of gulp tasks
  • Request a specific nodejs version
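A corresponding sketch for a gulp-built part (the part name, task and Node.js version are hypothetical; the keyword spellings are assumptions matching the list above):

parts:
  my-web-app:
    plugin: gulp
    source: .
    gulp-tasks: ['build']   # gulp tasks to run
    node-engine: 4.4.4      # Node.js version, inherited from the nodejs plugin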

Subversion

SVN is still a major version control system, and thanks to Simon Quigley from the Lubuntu project, you can now use svn: URIs in the source field of your parts.
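In practice that just means a source line like this one (the repository URL is hypothetical):

parts:
  legacy-tool:
    plugin: autotools
    source: svn://svn.example.com/legacy-tool/trunk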

Highlights

Many other fixes made their way into the release, with two highlights:

  • You can now use hidden .snapcraft.yaml files
  • ‘snapcraft cleanbuild’ now creates ephemeral LXC containers and won’t clutter your drive anymore

The full changelog for this milestone is available here and the list of bugs in sight for 2.13 can be found here. Note that this list will probably change until the next release, but if you have a Snapcraft itch to scratch, it’s a good list to pick your first contribution from.

Install Snapcraft

On Ubuntu

Simply open up a terminal with Ctrl+Alt+T and run these commands to install Snapcraft from the Ubuntu archives on Ubuntu 16.04 LTS:

sudo apt update
sudo apt install snapcraft

On other platforms

Get the Snapcraft source code ›

Get snapping!

There is a thriving community of developers who can give you a hand getting started or unblock you when creating your snap. You can participate and get help in multiple ways:

on June 29, 2016 03:20 PM

Plasma 5.6.5

1. sudo apt-add-repository ppa:kubuntu-ppa/backports
2. sudo apt update
3. sudo apt full-upgrade -y

on June 29, 2016 02:43 PM

June 28, 2016

SELF 2016

Aaron Honeycutt

This post has been sitting in my drafts since the 19th; I just got a bit lazy about finishing it up and posting it, sorry!

This SELF (SouthEast LinuxFest) was as great as the one before… ok, maybe a little bit better, with all the beer sponsored by my favorite VPS provider, Linode, and by Google. I mean A LOT of beer!


There was also a ton of Ubuntu devices at the booth, from gaming to convergence, plus a surprise visit from the UbuntuFL LoCo penguin!


I even found a BQ M10 Ubuntu Tablet out in the wild!

We also had awesome booth neighbors: System76 and Linode!

I loved this trip, from exploring the city again to making new friends!

on June 28, 2016 11:31 PM

Are you running i386 (32-bit) Ubuntu?   We need your help to decide how much longer to build i386 images of Ubuntu Desktop, Server, and all the flavors.

There is a real cost to supporting i386, and the benefits have fallen as more software goes 64-bit only.

Please fill out the survey here ONLY if you currently run i386 on one of your machines.  64-bit users will NOT be affected by this, even if you run 32-bit applications.
http://goo.gl/forms/UfAHxIitdWEUPl5K2

on June 28, 2016 08:04 PM

The /etc/os-release zoo

Zygmunt Krynicki

If you've ever wanted to do something differently depending on /etc/os-release but weren't in the mood to install every common distribution under the sun, look no further: I give you the /etc/os-release zoo project.
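As a reminder of why such a zoo is handy, branching on /etc/os-release from a shell script typically looks like this (the distribution IDs shown are just examples):

# /etc/os-release is a shell-sourceable list of KEY=value pairs
. /etc/os-release
case "$ID" in
    ubuntu|debian) echo "apt-based system: $PRETTY_NAME" ;;
    fedora)        echo "dnf-based system: $PRETTY_NAME" ;;
    *)             echo "something else: $ID" ;;
esac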

A project like this is never complete, so please feel free to contribute additional distribution bits there.
on June 28, 2016 12:16 PM

New Ubuntu SDK Beta Version

Ubuntu App Developer Blog

A few days ago we released the first Beta of the Ubuntu SDK IDE, using the LXD container solution to build and execute applications.

The first reports were positive, however one big problem was discovered pretty quickly:

Applications would not start on machines using the proprietary Nvidia drivers. The reason is that indirect GLX is not allowed by default when using those drivers. The applications need to have access to:

  1. The glx libraries for the currently used driver
  2. The DRI and Nvidia device files

Luckily the snappy team had already tackled a similar problem, so thanks to Michael Vogt (a.k.a. mvo) we had a first idea of how to solve it: reuse the Nvidia binaries and device files from the host by mounting them into the container.

However, it is a bit more complicated in our case, because once the devices and directories are mounted into the containers they stay there permanently. This is a problem because the Nvidia binary directory carries a version number, e.g. /usr/lib/nvidia-315, which changes with the currently loaded module: the container would stop booting after the driver was changed and the old directory on the host was gone, or it would use the wrong Nvidia directory if the old one had not been removed from the host.

The situation gets worse with Optimus graphics cards, where the user can switch between an integrated and a dedicated graphics chip, which means device files in /dev can come and go between reboots.

Our solution to the problem is to check the integrity of the containers on every start of the Ubuntu SDK IDE and if problems are detected, the user is informed and asked for the root password to run automatic fixes. Those checks and fixes are implemented in the “usdk-target” tool and can be used from the CLI as well.

As a bonus, this work will enable direct rendering for other graphics chips as well; however, since we do not have access to all possible chips, there might still be special cases that we could not catch.

So please report all problems to us on one of those channels:

We have released the new tool into the Tools-development PPA where the first beta was released too. However, existing containers might not be fixed completely automatically; they are better recreated or fixed manually. To manually fix an existing container, use the maintain mode from the options menu and add the current user to the “video” group.
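For those who prefer the command line, that manual fix amounts to the usual group change from a root shell inside the container (the account name below is a placeholder):

usermod -a -G video myuser   # 'myuser' stands in for your container user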

To get the new version of the IDE please update the installed Ubuntu SDK IDE package:

$ sudo apt-get update && sudo apt-get install ubuntu-sdk-ide ubuntu-sdk-tools

on June 28, 2016 05:53 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #471 for the week June 20 – 26, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Chris Sirrs
  • Leonard Viator
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on June 28, 2016 02:19 AM

June 27, 2016

Today I am going to be discussing parts. This is one of the pillars of snapcraft (together with plugins and the lifecycle).

For those not familiar, snapcraft’s general-purpose landing page is http://snapcraft.io/, but if you are a developer and have already been introduced to this new world of snaps, you probably want to just hop on over to http://snapcraft.io/create/

If you go over this snapcraft tour you will notice the many uses of parts, and may start to wonder how to get started, or whether you are duplicating work done by others, or even better, by an upstream. This is where we start to think about the idea of sharing parts, and this is exactly what we are going to go over in this post.

To be able to reproduce what follows, you need to have snapcraft 2.12 installed.

An overview of using remote parts

So imagine I am someone wanting to use libcurl. Normally I would write the part definition from scratch and get on with my own business, but surely I might be missing out on the optimal switches used to configure or build the package. I would also need to research how to use the specific plugin required. So instead, I’ll see if someone has already done the work for me, hence I will,

$ snapcraft update
Updating parts list... |
$ snapcraft search curl
PART NAME  DESCRIPTION
curl       A tool and a library (usable from many languages) for client side URL tra...

Great, there’s a match, but is this what I want?

$ snapcraft define curl
Maintainer: 'Sergio Schvezov <sergio.schvezov@ubuntu.com>'
Description: 'A tool and a library (usable from many languages) for client side URL transfers, supporting FTP, FTPS, HTTP, HTTPS, TELNET, DICT, FILE and LDAP.'

curl:
  configflags:
  - --enable-static
  - --enable-shared
  - --disable-manual
  plugin: autotools
  snap:
  - -bin
  - -lib/*.a
  - -lib/pkgconfig
  - -lib/*.la
  - -include
  - -share
  source: http://curl.haxx.se/download/curl-7.44.0.tar.bz2
  source-type: tar

Yup, it’s what I want.

An example

There are a few ways to use these parts in your snapcraft.yaml. Say this is your parts section:

parts:
    client:
        plugin: autotools
        source: .

My client part, which uses sources that sit alongside this snapcraft.yaml, will hypothetically fail to build as it depends on the curl library I don’t yet have. There are some options here to get this going: one is to use after in the part definition implicitly, another involves composing, and last but not least, one can just copy/paste what snapcraft define curl returned for the part.

Implicitly

The implicit path is really straightforward. It only involves making the part look like:

parts:
    client:
       plugin: autotools
       source: .
       after: [curl]

This will use the cached definition of the part, which may potentially be updated by running snapcraft update.

Composing

What if we like the part, but want to try out a new configure flag or source release? Well we can override pieces of the part; so for the case of wanting to change the source:

parts:
    client:
        plugin: autotools
        source: .
        after: [curl]
    curl:
        source: http://curl.haxx.se/download/curl-7.45.0.tar.bz2

And we will still get to build curl, but using a newer version. The trick is that the part definition here is missing the plugin entry, thereby instructing snapcraft to look for the full part definition in the cache.

Copy/Pasting

This is the path one would take to have full control over the part. It is as simple as copying the part definition we got from running snapcraft define curl into our own. For the sake of completeness, here’s how it would look:

parts:
    client:
        plugin: autotools
        source: .
        after: [curl]
    curl:
        configflags:
            - --enable-static
            - --enable-shared
            - --disable-manual
        plugin: autotools
        snap:
            - -bin
            - -lib/*.a
            - -lib/pkgconfig
            - -lib/*.la
            - -include
            - -share
        source: http://curl.haxx.se/download/curl-7.44.0.tar.bz2
        source-type: tar

Sharing your part

Now what if you have a part and want to share it with the rest of the world? It is rather simple really: just head over to https://wiki.ubuntu.com/snapcraft/parts and add it.

In the case of curl, I would write a yaml document that looks like:

origin: https://github.com/sergiusens/curl.git
maintainer: Sergio Schvezov <sergio.schvezov@ubuntu.com>
description:
  A tool and a library (usable from many languages) for
  client side URL transfers, supporting FTP, FTPS, HTTP,
  HTTPS, TELNET, DICT, FILE and LDAP.
project-part: curl

What does this mean? Well, the part itself is not defined on the wiki, just a pointer to it with some metadata; the part is really defined inside a snapcraft.yaml living in the origin we just told it to use.

The full extent of the keywords is explained in the documentation (that is an upstream link to it).

The core idea is that a maintainer decides they want to share a part. Such a maintainer adds a description that provides an idea of what that part (or collection of parts) does. Then, last but not least, the maintainer declares which parts to expose to the world, as maybe not all of them should be. The main part is exposed in project-part and will carry a top-level name; the maintainer can expose more parts from snapcraft.yaml using the general parts keyword. These parts will be namespaced under the project-part.

on June 27, 2016 03:57 PM

 

Island of Ventotene – Roman harbour

There once was a Kingdom strongly United, built on the honours of the people of Wessex, Mercia, Northumbria and East Anglia, who knew how to deal with the invasions of the Vikings from the east and of the Normans from the south, and who came to unify the territory under an umbrella of common intents. Today, 48% of them, while keeping solid traditions, still know how to look forward to the future, joining horizons and commercial developments with the rest of Europe. The remaining 52%, however, look back and can see nothing in front of them but a desire for isolation, breaking the European dream born on the shores of the island of Ventotene in 1944, when Altiero Spinelli, Ernesto Rossi and Ursula Hirschmann wrote the “Manifesto for a free and united Europe”. An incurable fracture was born in the country with the referendum of 23 June, in which just over half of the population asked to terminate its marriage to the great European family, setting the UK back 43 years in history.

<Read More…[by Fabio Marzocca]>

on June 27, 2016 07:54 AM

Hello, Sense!

Paul Tagliamonte

A while back, I saw a Kickstarter for one of the most well designed and pretty sleep trackers on the market. I fell in love with it, and it has stuck with me since.

A few months ago, I finally got my hands on one and started to track my data. Naturally, I now want to store this new data with the rest of the data I have on myself in my own databases.

I went in search of an API, but I found that the Sense API hasn't been published yet, and is being worked on by the team. Here's hoping it'll land soon!

After some subdomain guessing, I hit on api.hello.is. So, naturally, I went to take a quick look at their Android app and its network traffic, and lo and behold, there was a pretty nicely designed API.

This API is clearly an internal API, and as such, it's something that should not be considered stable. However, I'm OK with a fragile API, so I've published a quick and dirty API wrapper for the Sense API on my GitHub.

I've published it because I've found it useful, but I can't promise the world (since I'm not a member of the Sense team at Hello!), so here are a few ground rules for this wrapper:

  • I make no claims about its stability or completeness.
  • I have no documentation or assurances.
  • I will not provide the client secret and ID. You'll have to find them on your own.
  • This may stop working without any notice, and there may even be really nasty bugs that result in your alarm going off at 4 AM.
  • Send PRs! This is a side-project for me.

This module is currently Python 3 only. If someone really needs Python 2 support, I'm open to minimally invasive patches to the codebase using six to support Python 2.7.

Working with the API:

First, let's go ahead and log in using python -m sense.

$ python -m sense
Sense OAuth Client ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Sense OAuth Client Secret: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Sense email: paultag@gmail.com
Sense password: 
Attempting to log into Sense's API
Success!
Attempting to query the Sense API
The humidity is **just right**.
The air quality is **just right**.
The light level is **just right**.
It's **pretty hot** in here.
The noise level is **just right**.
Success!

Now, let's see if we can pull up information on my Sense:

>>> from sense import Sense
>>> sense = Sense()
>>> sense.devices()
{'senses': [{'id': 'xxxxxxxxxxxxxxxx', 'firmware_version': '11a1', 'last_updated': 1466991060000, 'state': 'NORMAL', 'wifi_info': {'rssi': 0, 'ssid': 'Pretty Fly for a WiFi (2.4 GhZ)', 'condition': 'GOOD', 'last_updated': 1462927722000}, 'color': 'BLACK'}], 'pills': [{'id': 'xxxxxxxxxxxxxxxx', 'firmware_version': '2', 'last_updated': 1466990339000, 'battery_level': 87, 'color': 'BLUE', 'state': 'NORMAL'}]}

Neat! Pretty cool. Look, you can even see my WiFi AP! Let's try some more and pull some trends out.

>>> values = [x.get("value") for x in sense.room_sensors()["humidity"]][:10]
>>> min(values)
45.73904
>>> max(values)
45.985928
>>> 

I plan to keep maintaining it as long as it's needed, so I welcome co-maintainers, and I'd love to see what people build with it! So far, I'm using it to dump my room data into InfluxDB and pull information on my room into Grafana. Hopefully more to come!

Happy hacking!

on June 27, 2016 01:42 AM

SNAPs are the cross-distro, cross-cloud, cross-device Linux packaging format of the future.  And we're already hosting a fantastic catalog of SNAPs in the SNAP store provided by Canonical.  Developers are welcome to publish their software for distribution across hundreds of millions of Ubuntu servers, desktops, and devices.

Several people have asked the inevitable open source software question, "SNAPs are awesome, but how can I stand up my own SNAP store?!?"

The answer is really quite simple... SNAP stores are just HTTP web servers!  Of course, you can get fancy with branding, authentication, and certificates.  But if you just want to host SNAPs and enable downstream users to fetch and install software, well, it's pretty trivial.

In fact, Bret Barker has published an open source (Apache License) SNAP store on GitHub.  We're already looking at how to flesh out his proof-of-concept and bring it into snapcore itself.

Here's a little HOWTO install and use it.

First, I launched an instance in AWS.  Of course I could have launched an Ubuntu 16.04 LTS instance, but actually, I launched a Fedora 24 instance!  In fact, you could run your SNAP store on any OS that currently supports SNAPs, really, or even just fork this GitHub repo and run it standalone.  See snapcraft.io.



Now, let's find and install a snapstore SNAP.  (Note that in this AWS instance of Fedora 24, I also had to 'sudo yum install squashfs-tools kernel-modules'.)


At this point, you're running a SNAP store (webserver) on port 5000.


Now, let's reconfigure snapd to talk to our own SNAP store, and search for a SNAP.
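A rough sketch of that reconfiguration, assuming the store runs on localhost:5000 as above (the SNAPPY_FORCE_API_URL environment override is an assumption based on the proof-of-concept's approach, and may change as snapd evolves):

# Point snapd at the local store and restart it
sudo sh -c 'echo SNAPPY_FORCE_API_URL=http://localhost:5000 >> /etc/environment'
sudo systemctl restart snapd

# Search the local store
snap find hello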


Finally, let's install and inspect that SNAP.
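Installing and inspecting then work exactly as with the main store (the 'hello' snap name is just an example of something our store could serve):

sudo snap install hello
snap list hello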


How about that?  Easy enough!

Cheers,
Dustin
on June 27, 2016 01:09 AM

June 26, 2016

You can have LXD containers on your home computer, and you can also have them on your Virtual Private Server (VPS). If you have any further questions on LXD, see https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

Here we see how to configure LXD on a VPS at DigitalOcean (yeah, referral). We go cheap and select the 512MB RAM and 20GB disk VPS for $5/month. Containers are quite lightweight, so it’s interesting to see how many we can squeeze in. We are going to use ZFS for the storage of the containers, stored in a file rather than on a block device. Here is what we are doing today:

  1. Set up LXD on a 512MB RAM/20GB diskspace VPS
  2. Create a container with a web server
  3. Expose the container service to the Internet
  4. Visit the webserver from our browser

Set up LXD on DigitalOcean

[Screenshot: creating the DigitalOcean droplet]

When creating the VPS, it is important to change these two options: we need 16.04 (the default is 14.04) so that ZFS is pre-installed as a kernel module, and we try out the cheapest VPS offering with 512MB RAM.

Once we create the VPS, we connect with

$ ssh root@128.199.41.205    # change with the IP address you get from the DigitalOcean panel
The authenticity of host '128.199.41.205 (128.199.41.205)' can't be established.
ECDSA key fingerprint is SHA256:7I094lF8aeLFQ4WPLr/iIX4bMs91jNiKhlIJw3wuMd4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '128.199.41.205' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com/

0 packages can be updated.
0 updates are security updates.

root@ubuntu-512mb-ams3-01:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Hit:2 http://ams2.mirrors.digitalocean.com/ubuntu xenial InRelease 
Get:3 http://security.ubuntu.com/ubuntu xenial-security/main Sources [24.9 kB]
...
Fetched 10.2 MB in 4s (2,492 kB/s)
Reading package lists... Done
Building dependency tree 
Reading state information... Done
13 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@ubuntu-512mb-ams3-01:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 dnsmasq-base initramfs-tools initramfs-tools-bin initramfs-tools-core
 libexpat1 libglib2.0-0 libglib2.0-data lshw python3-software-properties
 shared-mime-info snapd software-properties-common wget
13 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 6,979 kB of archives.
After this operation, 78.8 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
update-initramfs: Generating /boot/initrd.img-4.4.0-24-generic
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
Processing triggers for libc-bin (2.23-0ubuntu3) ...

We update the package list and then upgrade any packages that need upgrading.

root@ubuntu-512mb-ams3-01:~# apt policy lxd
lxd:
 Installed: 2.0.2-0ubuntu1~16.04.1
 Candidate: 2.0.2-0ubuntu1~16.04.1
 Version table:
 *** 2.0.2-0ubuntu1~16.04.1 500
 500 http://mirrors.digitalocean.com/ubuntu xenial-updates/main amd64 Packages
 500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
 100 /var/lib/dpkg/status
 2.0.0-0ubuntu4 500
 500 http://mirrors.digitalocean.com/ubuntu xenial/main amd64 Packages

The lxd package is already installed, all the better. Nice touch 🙂

root@ubuntu-512mb-ams3-01:~# apt install zfsutils-linux
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed
Suggested packages:
 default-mta | mail-transport-agent samba-common-bin nfs-kernel-server
 zfs-initramfs
The following NEW packages will be installed:
 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed
 zfsutils-linux
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 881 kB of archives.
After this operation, 2,820 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
zed.service is a disabled or a static unit, not starting it.
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for systemd (229-4ubuntu6) ...
Processing triggers for ureadahead (0.100.0-19) ...
root@ubuntu-512mb-ams3-01:~# _

We installed zfsutils-linux in order to be able to use ZFS as storage for our containers. In this tutorial we are going to use a file as storage (still, ZFS filesystem) instead of a block device. If you subscribe to the DO Beta for block storage volumes, you can get a proper block device for the storage of the containers. Currently free to beta members, available only on the NYC1 datacenter.

root@ubuntu-512mb-ams3-01:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda1  20G  1.1G 18G     6% /
root@ubuntu-512mb-ams3-01:~# _

We have 18GB of free disk space, so let’s allocate 15GB for LXD.

root@ubuntu-512mb-ams3-01:~# lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 15
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
(we accept the default settings for the bridge configuration)
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
LXD has been successfully configured.
root@ubuntu-512mb-ams3-01:~# _

What we did,

  • we initialized LXD with the ZFS storage backend,
  • we created a new pool and gave a name (here, lxd-pool),
  • we do not have a block device, so we get a (sparse) image file that contains the ZFS filesystem
  • we do not want now to make LXD available over the network
  • we want to configure the LXD bridge for the inter-networking of the containers

Let’s create a new user and add them to the lxd group,

root@ubuntu-512mb-ams3-01:~# adduser ubuntu
Adding user `ubuntu' ...
Adding new group `ubuntu' (1000) ...
Adding new user `ubuntu' (1000) with group `ubuntu' ...
Creating home directory `/home/ubuntu' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: ********
Retype new UNIX password: ********
passwd: password updated successfully
Changing the user information for ubuntu
Enter the new value, or press ENTER for the default
 Full Name []: <ENTER>
 Room Number []: <ENTER>
 Work Phone []: <ENTER>
 Home Phone []: <ENTER>
 Other []: <ENTER>
Is the information correct? [Y/n] Y
root@ubuntu-512mb-ams3-01:~# _

The username is ubuntu. Make sure you set a good password, since we are not covering security best practices in this tutorial. Many people run scripts against these VPSs that try common usernames and passwords. When you create a VPS, it is worth having a look at /var/log/auth.log for those failed attempts to get into your VPS. Here are a few lines from this VPS,

Jun 26 18:36:15 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2
Jun 26 18:36:15 digitalocean sshd[16320]: Connection closed by 123.59.134.76 port 49378 [preauth]
Jun 26 18:36:17 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2
Jun 26 18:36:20 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2

We add the ubuntu user into the lxd group in order to be able to run commands as a non-root user.

root@ubuntu-512mb-ams3-01:~# adduser ubuntu lxd
Adding user `ubuntu' to group `lxd' ...
Adding user ubuntu to group lxd
Done.
root@ubuntu-512mb-ams3-01:~# _

We are now good to go. Log in as user ubuntu and run an LXD command to list images.

[Screenshot: listing LXD images as the ubuntu user]

Create a Web server in a container

We launch (init and start) a container named c1.

[Screenshot: lxc launch and lxc list output]

The ubuntu:x in the screenshot is an alias for Ubuntu 16.04 (Xenial), that resides in the ubuntu: repository of images. You can find other distributions in the images: repository.

As soon as the launch action completed, I ran the list action. Then, after a few seconds, I ran it again. You can notice that it took a few seconds before the container actually booted and got an IP address.
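For reference, the two commands behind that screenshot are the following (c1 and the ubuntu:x alias as described above):

ubuntu@ubuntu-512mb-ams3-01:~$ lxc launch ubuntu:x c1
ubuntu@ubuntu-512mb-ams3-01:~$ lxc list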

Let’s enter into the container by executing a shell. We update and then upgrade the container.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec c1 -- /bin/bash
root@c1:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [94.5 kB]
...
Fetched 9819 kB in 2s (3645 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
13 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@c1:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 dnsmasq-base initramfs-tools initramfs-tools-bin initramfs-tools-core libexpat1 libglib2.0-0 libglib2.0-data lshw python3-software-properties shared-mime-info snapd
 software-properties-common wget
13 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 6979 kB of archives.
After this operation, 3339 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools all 0.122ubuntu8.1 [8602 B]
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
root@c1:~#

Let’s install nginx, our Web server.

root@c1:~# apt install nginx
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 fontconfig-config fonts-dejavu-core libfontconfig1 libfreetype6 libgd3 libjbig0 libjpeg-turbo8 libjpeg8 libtiff5 libvpx3 libxpm4 libxslt1.1 nginx-common nginx-core
Suggested packages:
 libgd-tools fcgiwrap nginx-doc ssl-cert
The following NEW packages will be installed:
 fontconfig-config fonts-dejavu-core libfontconfig1 libfreetype6 libgd3 libjbig0 libjpeg-turbo8 libjpeg8 libtiff5 libvpx3 libxpm4 libxslt1.1 nginx nginx-common nginx-core
0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.
Need to get 3309 kB of archives.
After this operation, 10.7 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libjpeg-turbo8 amd64 1.4.2-0ubuntu3 [111 kB]
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@c1:~#

Is the Web server running? Let’s check with the ss command (preinstalled, from the iproute2 package).

root@c1:~# ss -tula 
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port 
udp UNCONN 0 0 *:bootpc *:* 
tcp LISTEN 0 128 *:http *:* 
tcp LISTEN 0 128 *:ssh *:* 
tcp LISTEN 0 128 :::http :::* 
tcp LISTEN 0 128 :::ssh :::*
root@c1:~#

The parameters mean

  • -t: Show only TCP sockets
  • -u: Show only UDP sockets
  • -l: Show listening sockets
  • -a: Show all sockets (makes no difference because of the previous options; it just makes an easier word to remember, tula)

Of course, there is also lsof with the parameter -i (IPv4/IPv6).

root@c1:~# lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
dhclient 240 root 6u IPv4 45606 0t0 UDP *:bootpc 
sshd 306 root 3u IPv4 47073 0t0 TCP *:ssh (LISTEN)
sshd 306 root 4u IPv6 47081 0t0 TCP *:ssh (LISTEN)
nginx 2034 root 6u IPv4 51636 0t0 TCP *:http (LISTEN)
nginx 2034 root 7u IPv6 51637 0t0 TCP *:http (LISTEN)
nginx 2035 www-data 6u IPv4 51636 0t0 TCP *:http (LISTEN)
nginx 2035 www-data 7u IPv6 51637 0t0 TCP *:http (LISTEN)
root@c1:~#

From both commands we verify that the Web server is indeed running inside the container, along with an SSH server.

Let’s change the default Web page a bit:

root@c1:~# nano /var/www/html/index.nginx-debian.html

[Screenshot: do-lxd-nginx-page]
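You can also check the edited page without a browser, by fetching it from the VPS with curl against the container’s private IP (a sketch; take the IP from lxc list, yours will differ):

ubuntu@ubuntu-512mb-ams3-01:~$ lxc list
ubuntu@ubuntu-512mb-ams3-01:~$ curl http://10.160.152.184/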

Expose the container service to the Internet

Now, if we try to visit the public IP of our VPS at http://128.199.41.205/, we notice that there is no Web server there. We need to expose the container service to the world, since the container only has a private IP address.

The following iptables line exposes the container service at port 80. Note that we run this as root on the VPS (root@ubuntu-512mb-ams3-01:~#), NOT inside the container (root@c1:~#).

iptables -t nat -I PREROUTING -i eth0 -p TCP -d 128.199.41.205/32 --dport 80 -j DNAT --to-destination 10.160.152.184:80

Adapt the public IP of your VPS and the private IP of your container (10.x.x.x) accordingly. Since we are exposing a Web server, the port is 80.

We have not made this firewall rule persistent, as that is outside our scope; see the iptables-persistent package on how to make it persistent.
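If you do want the rule to survive reboots, a minimal sketch with the iptables-persistent package (assuming an Ubuntu 16.04 VPS; again run as root on the VPS, not in the container):

apt install iptables-persistent
netfilter-persistent save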

Visit our Web server

Here is the URL, http://128.199.41.205/ so let’s visit it.

[Screenshot: do-lxd-welcome-nginx]

That’s it! We created an LXD container with the nginx Web server, then exposed it to the Internet.

 

on June 26, 2016 11:57 PM

June 25, 2016

Post-Brexit - The What Now?

Dimitri John Ledkov

Out of an electorate of 46,500,001, 17,410,742 voted to leave, which is a mere 37.4%, or just over a third [source]. In my books this is not a clear expression of the UK's wishes.

The reactions that the results have caused are devastating. The Scottish First Minister has announced plans for a 2nd Scottish Independence referendum [source]. Londoners are filing petitions calling for an Independent London [source, source]. The Prime Minister announced his resignation [source]. Things are not stable.

I do not believe that a super majority of the electorate is in favor of leaving the EU. I don't even believe that those who voted to leave considered the break-up of the UK as the inevitable outcome of the leave vote. There are numerous videos on the internet about that, impossible to quantify or reliably cite, but see for example this [source].

So What Now?

P R O T E S T

I urge everyone to start protesting the outcome of the mistake that happened last Thursday. The 4th of July is a good symbolic date to show your discontent with the UK government and the tiny minority who are about to cause the country to fall apart with no benefits to show for it. Please stand up and make yourself heard.
  • General Strikes 4th & 5th of July
There are 64,100,000 people living in the UK according to the World Bank; maybe the government should fear and listen to the unheard third. The current "majority" parliament was elected by only 24% of the electorate.

It is time for people to actually take control: we can fix our parliament, we can stop austerity, we can prevent the break-up of the UK, and we can stay in the EU. Over to you.

PS. How to elect the next PM?

Electing the next PM will be done within the Conservative Party, and that's kind of a bummer, given the desperate state the country is currently in. It is not that hard to predict that Boris Johnson is a front-runner. If you wish to elect a different PM, I urge you to splash out 25 quid and register as a member of the Conservative Party for just one year =) This way you will get a chance to directly elect the new Leader of the Conservative Party and thus the new Prime Minister. You can backdoor the Conservative election here.
on June 25, 2016 07:24 PM

This post is about containers, a construct similar to virtual machines (VMs) but so lightweight that you can easily create a dozen on your Ubuntu desktop!

A VM virtualizes a whole computer, and you then install the guest operating system in it. In contrast, a container reuses the host Linux kernel and simply contains the root filesystem (aka runtime) of our choice. The Linux kernel has several features that rigidly separate a running Linux container from our host computer (i.e. our desktop Ubuntu).

By themselves, Linux containers would need some manual work to be managed directly. Fortunately, there is LXD (pronounced Lex-deeh), a service that manages Linux containers for us.

We will see how to

  1. setup our Ubuntu desktop for containers,
  2. create a container,
  3. install a Web server,
  4. test it a bit, and
  5. clear everything up.

Set up your Ubuntu for containers

If you have Ubuntu 16.04, then you are ready to go. Just install a couple of extra packages that we see below. If you have Ubuntu 14.04.x or Ubuntu 15.10, see LXD 2.0: Installing and configuring LXD [2/12] for some extra steps, then come back.

Make sure the package list is up-to-date:

sudo apt update
sudo apt upgrade

Install the lxd package:

sudo apt install lxd

If you have Ubuntu 16.04, you can enable the feature to store your container files in a ZFS filesystem. The Linux kernel in Ubuntu 16.04 includes the necessary kernel modules for ZFS. For LXD to use ZFS for storage, we just need to install a package with ZFS utilities. Without ZFS, the containers would be stored as separate files on the host filesystem. With ZFS, we have features like copy-on-write which makes the tasks much faster.

Install the zfsutils-linux package (if you have Ubuntu 16.04.x):

sudo apt install zfsutils-linux

Once you have installed the LXD package on your desktop Ubuntu, the package installation scripts should have added you to the lxd group. If your desktop account is a member of that group, then it can manage containers with LXD and can avoid prepending sudo to every command. The way Linux works, you would need to log out from the desktop session and then log in again to activate the lxd group membership. (If you are an advanced user, you can avoid the re-login by running newgrp lxd in your current shell.)
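To check whether the lxd membership is already active in your current shell, a quick sketch:

$ groups | grep lxd     # if lxd does not appear in the output,
$ newgrp lxd            # activate it for this shell (or log out and back in)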

Before use, LXD should be initialized with our storage choice and networking choice.

Initialize lxd for storage and networking by running the following command:

$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 30
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes 
> You will be asked about the network bridge configuration. Accept all defaults and continue.
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
 LXD has been successfully configured.
$ _

We created the ZFS pool as a filesystem inside a (single) file, not on a block device (i.e. in a partition), so there is no need for extra partitioning. In the example I specified 30GB, and this space will come from the root (/) filesystem. If you want to look at this file, it is at /var/lib/lxd/zfs.img.
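If you are curious, you can inspect both the loop file and the pool itself (a sketch; lxd-pool is the name we chose during lxd init):

$ ls -lh /var/lib/lxd/zfs.img
$ sudo zpool list lxd-pool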

 

That’s it! The initial configuration has been completed. For troubleshooting or background information, see https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/

Create your first container

All management commands with LXD are available through the lxc command. We run lxc with some parameters and that’s how we manage containers.

lxc list

to get a list of installed containers. Obviously, the list will be empty, but it verifies that everything is fine.

lxc image list

shows the list of (cached) images that we can use to launch a container. Again, the list will be empty, but it verifies that everything is fine.

lxc image list ubuntu:

shows the list of available remote images that we can use to download and launch as containers. This specific list shows Ubuntu images.

lxc image list images:

shows the list of available remote images for various distributions that we can use to download and launch as containers. This specific list shows all sort of distributions like Alpine, Debian, Gentoo, Opensuse and Fedora.

Let’s launch a container with Ubuntu 16.04 and call it c1:

$ lxc launch ubuntu:x c1
Creating c1
Starting c1
$ _

We used the launch action, then selected the image ubuntu:x (x is an alias for the Xenial/16.04 image), and lastly we used the name c1 for our container.

Let’s view our first installed container,

$ lxc list

+------+---------+----------------------+------+------------+-----------+
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
| c1   | RUNNING | 10.173.82.158 (eth0) |      | PERSISTENT | 0         |
+------+---------+----------------------+------+------------+-----------+

Our first container c1 is running and it has an IP address (accessible locally). It is ready to be used!

Install a Web server

We can run commands in our container. The action for running commands is exec.

$ lxc exec c1 -- uptime
 11:47:25 up 2 min, 0 users, load average: 0.07, 0.05, 0.04
$ _

After the action exec, we specify the container, and finally we type the command to run inside it. The uptime is just 2 minutes; it’s a fresh container :-).

The -- on the command line has to do with the parameter processing of our shell. If our command does not have any parameters, we can safely omit the --.

$ lxc exec c1 -- df -h

This is an example that requires the --, because our command uses the parameter -h. If you omit the --, you get an error.
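In short, a sketch:

$ lxc exec c1 uptime       # the command has no parameters, so -- can be omitted
$ lxc exec c1 -- df -h     # without --, lxc would try to parse -h itself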

Let’s get a shell in the container, and update the package list already.

$ lxc exec c1 bash
root@c1:~# apt update
Ign http://archive.ubuntu.com trusty InRelease
Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
Get:2 http://security.ubuntu.com trusty-security InRelease [65.9 kB]
...
Hit http://archive.ubuntu.com trusty/universe Translation-en 
Fetched 11.2 MB in 9s (1228 kB/s) 
Reading package lists... Done
root@c1:~# apt upgrade
Reading package lists... Done
Building dependency tree 
...
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
Setting up dpkg (1.17.5ubuntu5.7) ...
root@c1:~# _

We are going to install nginx as our Web server. nginx is somewhat cooler than the Apache Web server.

root@c1:~# apt install nginx
Reading package lists... Done
Building dependency tree
...
Setting up nginx-core (1.4.6-1ubuntu3.5) ...
Setting up nginx (1.4.6-1ubuntu3.5) ...
Processing triggers for libc-bin (2.19-0ubuntu6.9) ...
root@c1:~# _

Let’s view our Web server with a browser. Remember the IP address you got (10.173.82.158); I enter it into my browser.

[Screenshot: lxd-nginx]

Let’s make a small change in the text of that page. Back inside our container, we enter the directory with the default HTML page.

root@c1:~# cd /var/www/html/
root@c1:/var/www/html# ls -l
total 2
-rw-r--r-- 1 root root 612 Jun 25 12:15 index.nginx-debian.html
root@c1:/var/www/html#

We can edit the file with nano, then save it:

[Screenshot: lxd-nginx-nano]

Finally, let’s check the page again,

[Screenshot: lxd-nginx-modified]

Clearing up

Let’s clear up the container by deleting it. We can easily create new ones when we need them.

$ lxc list
+------+---------+----------------------+------+------------+-----------+
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
| c1   | RUNNING | 10.173.82.169 (eth0) |      | PERSISTENT | 0         |
+------+---------+----------------------+------+------------+-----------+
$ lxc stop c1
$ lxc delete c1
$ lxc list
+------+---------+----------------------+------+------------+-----------+
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
+------+---------+----------------------+------+------------+-----------+

We stopped (shut down) the container, then we deleted it.

That’s all. There are many more ideas on what to do with containers; these were the first steps of setting up our Ubuntu desktop and trying out one such container.

on June 25, 2016 12:26 PM

June 24, 2016

Akademy! and fundraising

Valorie Zimmerman

https://qtcon.org/


Akademy is approaching! And I can hardly wait. This spring has been personally difficult, and meeting with friends and colleagues is the perfect way to end the summer. This year will be special because it's in Berlin, and because it is part of QtCon, with a lot of our freedom-loving friends, such as Qt, VideoLAN, Free Software Foundation Europe and KDAB. As usual, Kubuntu will also be having our annual meetup there.

Events are expensive! KDE needs money to support Akademy itself, to subsidize travel and lodging for those who need it, and to support other events such as our Randa Meetings, which just ended successfully. We're still raising money to support the sprints:

https://www.kde.org/fundraisers/randameetings2016/

Of course that money supports Akademy too, which is our largest annual meeting.

Ubuntu helps here too! The Ubuntu Community Fund sends many of the Kubuntu team, and often funds a shared meal as well. Please support the Ubuntu Community Fund too if you can!

I'm going!

I can't seem to make the image a link, so go to https://qtcon.org/ for more information.
on June 24, 2016 11:07 PM

SNAPs are the cross-distro, cross-cloud, cross-device Linux packaging format of the future.  And we’re already hosting a fantastic catalog of SNAPs in the SNAP store provided by Canonical.  Developers are welcome to publish their software for distribution across hundreds of millions of Ubuntu servers, desktops, and devices.

Several people have asked the inevitable open source software question, “SNAPs are awesome, but how can I stand up my own SNAP store?!?”

The answer is really quite simple…  SNAP stores are really just HTTP web servers!  Of course, you can get fancy with branding, and authentication, and certificates.  But if you just want to host SNAPs and enable downstream users to fetch and install software, well, it’s pretty trivial.

In fact, Bret Barker has published an open source (Apache License) SNAP store on GitHub.  We’re already looking at how to flesh out his proof-of-concept and bring it into snapcore itself.

Here’s a little HOWTO install and use it.

First, I launched an instance in AWS.  Of course I could have launched an Ubuntu 16.04 LTS instance, but actually, I launched a Fedora 24 instance!  In fact, you could run your SNAP store on any OS that currently supports SNAPs, really, or even just fork this GitHub repo and install it standalone.  See snapcraft.io.

Now, let’s find and install a snapstore SNAP.  (Note that in this AWS instance of Fedora 24, I also had to ‘sudo yum install squashfs-tools kernel-modules’.)
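On the snap side, that search-and-install step would look something like this (a sketch; the exact snap name may vary):

$ snap find snapstore
$ sudo snap install snapstore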

At this point, you’re running a SNAP store (webserver) on port 5000.

Now, let’s reconfigure snapd to talk to our own SNAP store, and search for a SNAP.
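At the time, one way to do this was via snapd’s SNAPPY_FORCE_API_URL environment variable, set through a systemd override (a hedged sketch; paths and variable behavior may have changed since, and localhost:5000 assumes the store runs on the same machine):

$ sudo mkdir -p /etc/systemd/system/snapd.service.d
$ printf '[Service]\nEnvironment=SNAPPY_FORCE_API_URL=http://localhost:5000\n' | sudo tee /etc/systemd/system/snapd.service.d/store.conf
$ sudo systemctl daemon-reload && sudo systemctl restart snapd
$ snap find hello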

Finally, let’s install and inspect that SNAP.

How about that?  Easy enough!

Original article

on June 24, 2016 07:58 PM

Linkdump 25/2016 ...

Dirk Deimeke

Nice finds from the past week.

One of the best summaries of why you should start your own blog: Starte (d)ein Blog – heute!

This is not a drill (Dies ist keine Übung): in the worst case, by the way, external hard drives get encrypted too. In any case, you should notice the encryption in time; otherwise you can forget about a restore.

The effort of getting to know yourself is worth it. You might even turn out to be quite a nice person. :-) Positives Selbstwertgefühl, on how best to use your personal strengths on the job.

The Life-Changing Magic Of Shorter Emails comes from the "Captain Obvious Department" ... (sorry!).

Yes, we know, but training is still sorely needed: Der Fachkräftemangel ist ein Phantom.

The Epic Story of Dropbox’s Exodus From the Amazon Cloud Empire - nice one about a major infrastructure change without interruption of service.

Cheap is not always a bargain, and even when something looks very simple, expertise is still needed: Auf diese Aspekte müssen Admins achten.

on June 24, 2016 03:36 AM

June 23, 2016

To celebrate Xubuntu’s tenth birthday*, the Xubuntu team is glad to announce a new campaign and competition!

We’re looking for your most memorable and fun Xubuntu story. In order to participate, submit the story to xubuntu-contacts@lists.ubuntu.com. Or you may share an image (photo, drawing, painting, etc.) with Elizabeth K. Joseph <lyz@ubuntu.com> and Pasi Lallinaho <pasi@shimmerproject.org>; please restrict your file size to a maximum of 5MB.

For example, have you shared Xubuntu with a friend or family member, and had them react in a memorable way? Or have you created Xubuntu-themed cookies, cakes or artwork? No story or experience is too simple to share and don’t be restricted by these examples, surprise us!

Bonus: Share it on Twitter with the hashtag #LoveXubuntu, and during the competition the Xubuntu team will retweet posts on the Xubuntu Twitter account. Additionally, we encourage you to share your stories all over social media!

At the end of the competition, we will select 5 finalists. All finalists will receive a set of Xubuntu stickers from UnixStickers! We will pick 2 winners from the finalists, who will also receive a Xubuntu t-shirt! We will be in touch with the finalists and winners after the contest has ended to check their address details and, for the winners, preferred t-shirt size and color.

Notes on licensing: Submissions to the #LoveXubuntu campaign will be accepted under the CC BY-SA 4.0 license and available for use for Xubuntu marketing in the future without further consent from the participants. That said, we’re friendly folks and will try to communicate with you before using your story or image!

* The first official Xubuntu release was 6.06, released on June 1, 2006.
on June 23, 2016 09:12 PM

Want to get deeper under the hood with Kubuntu?

Interested in becoming a developer?

Then come and join us in the Kubuntu Dojo:

Thursday 30th June 2016 – 18:00 UTC

Packaging is one of the primary development tasks in a large Linux distribution project. Packaging is the essential way of getting the latest and best software to the user.

We continue our Kubuntu Dojo and Ninja developers training courses. These courses are free to attend, and take place on the last Thursday of the month at 18:00 UTC.

This is course number 2, where we will look at Debian and Ubuntu packaging. Candidates will create their first packages, including uploading them to their own PPA on Launchpad, all delivered inside our online video classroom.

Details for accessing the Kubuntu Big Blue Button Conference server will be announced in the G+ event stream, and on IRC: irc://irc.freenode.net

#kubuntu
#kubuntu-devel

Why it rocks

All the cool kids are doing it.
Packagers know everyone.
Not only will you be part of an elite group, you will also get to know Debian’s finest, as well as KDE developers and other application developers.

For more details about the Kubuntu Ninjas programme, see our wiki:

https://wiki.kubuntu.org/Kubuntu/GettingInvolved/Development

on June 23, 2016 06:22 PM

It’s Episode Seventeen of Season Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope, Laura Cowen, Martin Wimpress, and Mycroft are here and speaking to your brain.

We’re here – all of us!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on June 23, 2016 02:00 PM

June 21, 2016

KDE neon User Edition 5.6 came out a couple of weeks ago; let’s have a look at the commentary.

Phoronix stuck to their reputation by announcing it a day early, but redeemed themselves with a follow-up article, KDE neon: The Rock & Roll Distribution: “KDE neon feels amazing. There’s simply no other way to say it.”

CIO had an exclusive interview with moi, “It is a continuously updated installable image that can be used not just for exploration and testing but as the main operating system for people enthusiastic about the latest desktop software.”

For the Spanish speaker MuyLinux wrote KDE Neon lanza su primera versión para usuarios. “La primera impresión ha sido buena.” or “The first impression was good”.

On YouTube we got a review from Jeff Linux Turner: “This thing’s actually pretty good. I like it.” Meanwhile, Wooden User gives an unvoiced tour with funky music, and Riba Linux has the same but with more of an indie soundtrack.

Reddit had several threads on it, including a review by luxitanium, which I’ll selectively quote: “Is it ready for consumers? It is definitely getting there, oh yes”.

The award winning Spanish language KDE Blog covered Probando KDE Neon User Edition 5.6. “Estamos ante un gran avance para la Comunidad KDE” or “We are facing a breakthrough for the KDE Community“.

Meanwhile on Twitter:

Want to meet the genius behind the neon light? Harald is giving a talk at the openSUSE conference on Thursday. Do drop by in Nürnberg.

on June 21, 2016 10:29 PM

Please welcome our newest Member, vasa1.

Not only has vasa1 been a long-time contributor to the forums, he’s also a member of the Forums Staff.

vasa1’s application thread can be viewed here.

Congratulations from the Forums Council!

If you have been a contributor to the forums and wish to apply to Ubuntu Membership, please follow the process outlined here.


on June 21, 2016 05:23 PM

June 20, 2016

Classic Ubuntu 16.04 LTS, on an rpi2
Hopefully by now you're well aware of Ubuntu Core -- the snappiest way to run Ubuntu on a Raspberry Pi...

But have you ever wanted to run classic (apt/deb) Ubuntu Server on a RaspberryPi2?


Well, you're in luck!  Follow these instructions, and you'll be up and running in minutes!

First, download the released image (214MB):

$ wget http://cdimage.ubuntu.com/releases/16.04/release/ubuntu-16.04-preinstalled-server-armhf+raspi2.img.xz

Next, uncompress it:

$ unxz *xz

Now, write it to a microSD card using dd.  I'm using the card reader built into my Thinkpad, but you might use a USB adapter.  You'll need to figure out the block device of your card (see the sketch just below), and perhaps unmount it, if necessary.  Then, you can write the image to disk:

$ sudo dd if=ubuntu-16.04-preinstalled-server-armhf+raspi2.img of=/dev/mmcblk0 bs=32M
$ sync
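If you're not sure which block device is your card, a quick check before running dd (a sketch; mmcblk0 is my device, yours may differ):

$ lsblk                        # run before and after inserting the card; the new device is yours
$ sudo umount /dev/mmcblk0p1   # unmount any auto-mounted partitions first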

Now, pop it into your rpi2, and power it on.

If it's connected to a USB keyboard and an HDMI monitor, then you'll land in a console where you can login with the username 'ubuntu' and password 'ubuntu', and then you'll be forced to choose a new password.

Assuming it has an Ethernet connection, it should DHCP.  You might need to check your router to determine what IP address it got; note that it sets its hostname to 'ubuntu'.  In my case, I could automatically resolve it on my network, as ubuntu.canyonedge with IP address 10.0.0.113, and ssh to it:

$ ssh ubuntu@ubuntu.canyonedge

Again, you can login on first boot with password 'ubuntu' and you're required to choose a new password.

On first boot, it will automatically resize the filesystem to use all of the available space on the MicroSD card -- much nicer than having to resize2fs yourself in some offline mode!

Now, you're off and running.  Have fun with sudo, apt, byobu, lxd, docker, and everything else you'd expect to find on a classic Ubuntu server ;-)  Heck, you'll even find the snap command, where you'll be able to install snap packages, right on top of your classic Ubuntu Server!  And if that doesn't just bake your noodle...

Cheers,
Dustin
on June 20, 2016 07:35 PM


Doxyqml 0.3.0 released

Aurélien Gâteau

The master branch of Doxyqml, a QML input filter for Doxygen, had been waiting for a release for a long time. Olivier Churlaud, the new KApidox hero, reported that it did not work with Python 3 and submitted a patch to fix this. I integrated his patch, fixed a few other things, set up Travis to test future commits and finally released Doxyqml 0.3.0, featuring the following changes:

  • Port to Python 3 (Olivier Churlaud, Aurélien Gâteau)
  • Skip directory imports (Aurélien Gâteau)
  • Support comment after class declaration (Cédric Cabessa)
  • Find qmldir for relative paths (Mathias Hasselmann)
  • Read import statements to help base class lookup (Mathias Hasselmann)
  • Generate qualified component names (Mathias Hasselmann)
  • Handle singleton pragmas (Mathias Hasselmann)

Note that this new version is Python 3 only; I think it is safe to assume that Python 3 is widespread enough nowadays that this should not be a problem.

on June 20, 2016 05:59 PM

This year a significant number of students are working on RTC-related projects as part of Google Summer of Code, under the umbrella of the Debian Project. You may have already encountered some of them blogging on Planet or participating in mailing lists and IRC.

WebRTC plugins for popular CMS and web frameworks

There are already a range of pseudo-WebRTC plugins available for CMS and blogging platforms like WordPress. Unfortunately, many of them either do not release all their source code, lock users into their own servers, or require users to download potentially untrustworthy browser plugins (also without any source code) to use them.

Mesut is making plugins for genuinely free WebRTC with open standards like SIP. He has recently created the WPCall plugin for WordPress, based on the highly successful DruCall plugin for WebRTC in Drupal.

Keerthana has started creating a similar plugin for MediaWiki.

What is great about these plugins is that they don't require any browser plugins and they work with any server-side SIP infrastructure that you choose. Whether you are routing calls into a call center or simply using them on a personal blog, they are quick and convenient to install. Hopefully they will be made available as packages, like the DruCall packages for Debian and Ubuntu, enabling even faster installation with all dependencies.

Would you like to try running these plugins yourself and provide feedback to the students? Would you like to help deploy them for online communities using Drupal, WordPress or MediaWiki to power their web sites? Please come and discuss them with us in the Free-RTC mailing list.

You can read more about how to run your own SIP proxy for WebRTC in the RTC Quick Start Guide.

Finding all the phone numbers and ham radio callsigns in old emails

Do you have phone numbers and other contact details such as ham radio callsigns in old emails? Would you like a quick way to data-mine your inbox to find them and help migrate them to your address book?

Jaminy is working on Python scripts to do just that. Her project takes some inspiration from the Telify plugin for Firefox, which detects phone numbers in web pages and converts them to hyperlinks for click-to-dial. The popular libphonenumber from Google, used to format numbers on Android phones, is being used to help normalize any numbers found. If you would like to test the code against your own mailbox and address book, please make contact in the #debian-data channel on IRC.

A truly peer-to-peer alternative to SIP, XMPP and WebRTC

The team at Savoir Faire Linux has been busy building the Ring softphone, a truly peer-to-peer solution based on the OpenDHT distributed hash table technology.

Several students (Simon, Olivier, Nicolas and Alok) are actively collaborating on this project; some of them have been fortunate enough to participate at SFL's offices in Montreal, Canada. These GSoC projects have also provided a great opportunity to raise Debian's profile in Montreal ahead of DebConf17 next year.

Linux Desktop Telepathy framework and reSIProcate

Another group of students, Mateus, Udit and Balram have been busy working on C++ projects involving the Telepathy framework and the reSIProcate SIP stack. Telepathy is the framework behind popular softphones such as GNOME Empathy that are installed by default on the GNU/Linux desktop.

I previously wrote about starting a new SIP-based connection manager for Telepathy based on reSIProcate. Using reSIProcate means more comprehensive support for all the features of SIP, better NAT traversal, IPv6 support, NAPTR support and TLS support. The combined impact of all these features is much greater connectivity and much greater convenience.

The students are extending that work, completing the buddy list functionality, improving error handling and looking at interaction with XMPP.

Streamlining provisioning of SIP accounts

Currently there is some manual effort for each user to take the SIP account settings from their Internet Telephony Service Provider (ITSP) and transpose these into the account settings required by their softphone.

Pranav has been working to close that gap, creating a JAR that can be embedded in Java softphones such as Jitsi, Lumicall and CSipSimple to automate as much of the provisioning process as possible. ITSPs are encouraged to test this client against their services and will be able to add details specific to their service through Github pull requests.

The project also hopes to provide streamlined provisioning mechanisms for privately operated SIP PBXes, such as the Asterisk and FreeSWITCH servers used in small businesses.

Improving SIP support in Apache Camel and the Jitsi softphone

Apache Camel's SIP component and the widely known Jitsi softphone both use the JAIN SIP library for Java.

Nik has been looking at issues faced by SIP users in both projects, adding support for the MESSAGE method in camel-sip and looking at why users sometimes see multiple password prompts for SIP accounts in Jitsi.

If you are trying either of these projects, you are very welcome to come and discuss them on the mailing lists, Camel users and Jitsi users.

GSoC students at DebConf16 and DebConf17 and other events

Many of us have been lucky to meet GSoC students attending DebConf, FOSDEM and other events in the past. From this year, Google expects the students to complete GSoC before they become eligible for any travel assistance. Some of the students will still be at DebConf16 next month, assisted by the regular travel budget and the diversity funding initiative. Nik and Mesut were already able to travel to Vienna for the recent MiniDebConf / LinuxWochen.at.

As mentioned earlier, several of the students and the mentors at Savoir Faire Linux are based in Montreal, Canada, the destination for DebConf17 next year and it is great to see the momentum already building for an event that promises to be very big.

Explore the world of Free Real-Time Communications (RTC)

If you are interested in knowing more about the Free RTC topic, you may find the following resources helpful:

RTC mentoring team 2016

We have been very fortunate to build a large team of mentors around the RTC-themed projects for 2016. Many of them are first time GSoC mentors and/or new to the Debian community. Some have successfully completed GSoC as students in the past. Each of them brings unique experience and leadership in their domain.

Helping GSoC projects in 2016 and beyond

Not everybody wants to commit to being a dedicated mentor for a GSoC student. In fact, there are many ways to help without being a mentor and many benefits of doing so.

Simply looking out for potential applicants for future rounds of GSoC and referring them to the debian-outreach mailing list or an existing mentor helps ensure we can identify talented students early and design projects around their capabilities and interests.

Testing the projects on an ad-hoc basis, greeting the students at DebConf and reading over the student wikis to find out where they are and introduce them to other developers in their area are all possible ways to help the projects succeed and foster long term engagement.

Google gives Debian a US$500 grant for each student who completes a project successfully this year. If all 2016 students pass, that is over $10,000 to support Debian's mission.

on June 20, 2016 03:02 PM

A little while back I shared that I decided to leave GitHub. Firstly, thanks to all of you for your incredible support. I am blessed to have such wonderful people in my life.

Since that post I have been rather quiet about what my next adventure is going to be, and some of the speculation has been rather amusing. Now I am finally ready to share more details.

In a nutshell, I have started a new consultancy practice to provide community management, innersourcing, developer workflow/relations, and other related services. To keep things simple right now, this new practice is called Jono Bacon Consulting (original, eh?).

As some of you know, I have actually been providing community strategy and management consultancy for quite some time. Previously I have worked with organizations such as Deutsche Bank, Sony Mobile, ON.LAB, Open Networking Foundation, Intel and others. I am also an active advisor for organizations such as AlienVault, Open Networking Foundation, Open Cloud Consortium, Mycroft AI and I also advise some startup accelerators.

I have always loved this kind of work. My wider career ambitions have always been to help organizations build great communities and to further the wider art and science of collaboration and community development. I love the experience and insight I gain with each new client.

When I made the decision to move on from GitHub I was fortunate to have some compelling options on the table for new roles. After spending some time thinking about what I love doing and these wider ambitions, it became clear that consulting was the right step forward. I would have shared this news earlier but I have already been busy traveling and working with clients. 😉

I am really excited about this new chapter. While I feel I have a lot I can offer my clients today, I am looking forward to continuing to broaden my knowledge, expertise, and diversity of community strategy and leadership. I am also excited to share these learnings with you all in my writing, presentations, and elsewhere. This has always been a journey, and each new road opens up interesting new questions and potential, and I am thirsty to discover and explore more.

So, if you are interested in building a community, either inside or outside (or both) your organization, feel free to discover more and get in touch and we can talk more.

on June 20, 2016 02:45 PM

It takes a special kind of person to enjoy being among the first in a new community. It’s a time when there’s a lot of empty canvas, wide landscapes to uncover, lots of dragons still on the map; I guess you already see what I mean. It takes some pioneer spirit to feel comfortable when the rules are not all figured out yet and stuff is still a bit harder than it should be.

The last occurrence where I saw this live was the Snappy Playpen. A project where all the early snap contributors hang out, figure out problems, document best-practices and have fun together.

We use GitHub and Gitter/IRC to coordinate things. We have been going for a bit more than two weeks now, and I’m quite happy with where we’ve got to: about 60 people in the Gitter channel, more than 30 snaps contributed, and about the same number or more in the works.

[Screenshot: playpen]

But it’s not just the number of snaps. It’s also the level of helping each other out and figuring out bigger problems together. Here’s just a (very) few things as an example:

  • David Planella wrote a common launcher for GTK apps and we could move snaps like leafpad, galculator and ristretto off of their own custom launchers today. It’s available as a wiki part, so it’s quite easy to consume today.
  • Simon Quigley and Didier Roche figured out better contribution guidelines and moved the existing snaps to use them instead.
  • With new interfaces landing in snapd, it was nice to see how they were picked up in existing snaps and formerly existing issues resolved. David Callé for example fixed the vlc and scummvm snaps this way.
  • Sometimes it takes perseverance to get your snap landed. It took Andy Keech quite a while to get imagemagick (both stable and from git) to build and work properly, but thanks to Andy’s hard work and collaboration with the Snapcraft developers they’re included now.
  • The docs are good, but they don’t cover all use-cases yet and we’re finding new ways to use the tools every day.

As I said earlier: it takes some pioneer spirit to be happy in such circumstances, and all the folks above (and many others) have been working together as a team over the last days. For me, as somebody who’s supporting the project, this was very nice to see, particularly seeing people from all over the open source spectrum (users of cloud tools, GTK and Qt apps, Python scripts, upstream developers, Java tools and many more).

Tomorrow we are going to have our kickoff event for week 3 of the Snappy Playpen. As I said in the mail, one area of focus is going to be server apps and Electron-based apps, but feel free to bring whatever you enjoy working on.

I’d like to thank each and every one of you who is participating in this initiative (not just the people who committed something). The atmosphere is great, we’re solving problems together, and we’re excited to bring a more complete, easier-to-digest and better-to-use snap experience to new users.

on June 20, 2016 02:41 PM

June 19, 2016

Go Debian!

Paul Tagliamonte

As some of the world knows full well by now, I've been noodling with Go for a few years, working through its pros, its cons, and thinking a lot about how humans use code to express thoughts and ideas. Go's got a lot of neat use cases, suited to particular problems, and used in the right place, you can see some clear massive wins.

I've started writing Debian tooling in Go, because it's a pretty natural fit. Go's fairly tight, and overhead shouldn't be taken up by your operating system. After a while, I wound up hitting the usual blockers, and started to build up abstractions. They became pretty darn useful, so this blog post is announcing a (still incomplete, year-old and perhaps API-changing) Debian package for Go. The Go importable name is pault.ag/go/debian. This contains a lot of utilities for dealing with Debian packages, and will become an edited-down "toolbelt" for working with or on Debian packages.

Module Overview

Currently, the package contains five major sub-packages: a changelog parser, a control file parser, a deb file format parser, a dependency parser and a version parser. Together, these are a set of powerful building blocks which can be used to create higher-order systems with reliable understandings of the world.

changelog

The first (and perhaps most incomplete and least tested) is the changelog file parser. This provides the programmer with the ability to pull out the suite being targeted in the changelog, when each upload happened, and the version of each upload. For example, let's look at how we can pull out when all the uploads of Docker to sid took place:

package main

import (
    "fmt"
    "net/http"

    "pault.ag/go/debian/changelog"
)

func main() {
    resp, err := http.Get("http://metadata.ftp-master.debian.org/changelogs/main/d/docker.io/unstable_changelog")
    if err != nil {
        panic(err)
    }
    allEntries, err := changelog.Parse(resp.Body)
    if err != nil {
        panic(err)
    }
    for _, entry := range allEntries {
        fmt.Printf("Version %s was uploaded on %s\n", entry.Version, entry.When)
    }
}

The output of which looks like:

Version 1.8.3~ds1-2 was uploaded on 2015-11-04 00:09:02 -0800 -0800
Version 1.8.3~ds1-1 was uploaded on 2015-10-29 19:40:51 -0700 -0700
Version 1.8.2~ds1-2 was uploaded on 2015-10-29 07:23:10 -0700 -0700
Version 1.8.2~ds1-1 was uploaded on 2015-10-28 14:21:00 -0700 -0700
Version 1.7.1~dfsg1-1 was uploaded on 2015-08-26 10:13:48 -0700 -0700
Version 1.6.2~dfsg1-2 was uploaded on 2015-07-01 07:45:19 -0600 -0600
Version 1.6.2~dfsg1-1 was uploaded on 2015-05-21 00:47:43 -0600 -0600
Version 1.6.1+dfsg1-2 was uploaded on 2015-05-10 13:02:54 -0400 EDT
Version 1.6.1+dfsg1-1 was uploaded on 2015-05-08 17:57:10 -0600 -0600
Version 1.6.0+dfsg1-1 was uploaded on 2015-05-05 15:10:49 -0600 -0600
Version 1.6.0+dfsg1-1~exp1 was uploaded on 2015-04-16 18:00:21 -0600 -0600
Version 1.6.0~rc7~dfsg1-1~exp1 was uploaded on 2015-04-15 19:35:46 -0600 -0600
Version 1.6.0~rc4~dfsg1-1 was uploaded on 2015-04-06 17:11:33 -0600 -0600
Version 1.5.0~dfsg1-1 was uploaded on 2015-03-10 22:58:49 -0600 -0600
Version 1.3.3~dfsg1-2 was uploaded on 2015-01-03 00:11:47 -0700 -0700
Version 1.3.3~dfsg1-1 was uploaded on 2014-12-18 21:54:12 -0700 -0700
Version 1.3.2~dfsg1-1 was uploaded on 2014-11-24 19:14:28 -0500 EST
Version 1.3.1~dfsg1-2 was uploaded on 2014-11-07 13:11:34 -0700 -0700
Version 1.3.1~dfsg1-1 was uploaded on 2014-11-03 08:26:29 -0700 -0700
Version 1.3.0~dfsg1-1 was uploaded on 2014-10-17 00:56:07 -0600 -0600
Version 1.2.0~dfsg1-2 was uploaded on 2014-10-09 00:08:11 +0000 +0000
Version 1.2.0~dfsg1-1 was uploaded on 2014-09-13 11:43:17 -0600 -0600
Version 1.0.0~dfsg1-1 was uploaded on 2014-06-13 21:04:53 -0400 EDT
Version 0.11.1~dfsg1-1 was uploaded on 2014-05-09 17:30:45 -0400 EDT
Version 0.9.1~dfsg1-2 was uploaded on 2014-04-08 23:19:08 -0400 EDT
Version 0.9.1~dfsg1-1 was uploaded on 2014-04-03 21:38:30 -0400 EDT
Version 0.9.0+dfsg1-1 was uploaded on 2014-03-11 22:24:31 -0400 EDT
Version 0.8.1+dfsg1-1 was uploaded on 2014-02-25 20:56:31 -0500 EST
Version 0.8.0+dfsg1-2 was uploaded on 2014-02-15 17:51:58 -0500 EST
Version 0.8.0+dfsg1-1 was uploaded on 2014-02-10 20:41:10 -0500 EST
Version 0.7.6+dfsg1-1 was uploaded on 2014-01-22 22:50:47 -0500 EST
Version 0.7.1+dfsg1-1 was uploaded on 2014-01-15 20:22:34 -0500 EST
Version 0.6.7+dfsg1-3 was uploaded on 2014-01-09 20:10:20 -0500 EST
Version 0.6.7+dfsg1-2 was uploaded on 2014-01-08 19:14:02 -0500 EST
Version 0.6.7+dfsg1-1 was uploaded on 2014-01-07 21:06:10 -0500 EST

control

Next is one of the most complex, and one of the oldest parts of go-debian, which is the control file parser (otherwise sometimes known as deb822). This module was inspired by the way that the json module works in Go, allowing for files to be defined in code with a struct. This tends to be a bit more declarative, but also winds up putting logic into struct tags, which can be a nasty anti-pattern if used too much.

The first primitive in this module is the concept of a Paragraph, a struct containing two values: the order of keys seen, and a map of string to string. All higher-order functions dealing with control files go through this type, which is a helpful interchange format to be aware of. All parsing of meaning from the Control file happens when the Paragraph is unpacked into a struct using reflection.

The idea behind this strategy is that you define your struct and let the Control parser handle unpacking the data from the IO into your container. This lets you maintain type safety: you never have to read and cast, since the conversion handles that and returns an unmarshaling error in the event of failure.

Additionally, Structs that define an anonymous member of control.Paragraph will have the raw Paragraph struct of the underlying file, allowing the programmer to handle dynamic tags (such as X-Foo), or at least, letting them survive the round-trip through go.
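As a sketch of that last point (BinaryIndex and its fields are hypothetical; only control.Paragraph comes from the library), such a struct might look like this:

// Hypothetical example: the anonymous control.Paragraph member keeps the
// raw key/value data, so dynamic fields such as X-Foo survive decoding.
type BinaryIndex struct {
    control.Paragraph

    Package string
    Version string
}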

The default decoder takes an argument: an OpenPGP keyring used to verify the input control file, which is exposed to the programmer through the (*Decoder).Signer() function. If the passed keyring is nil, it will not check the input file signature (at all!); if one has been passed, signed data must be found or an error will fall out of the NewDecoder call. On the way out, the opposite happens: the struct is introspected, turned into a control.Paragraph, and then written out to the io.Writer.

Here's a quick (and VERY dirty) example showing the basics of reading and writing Debian Control files with go-debian.

package main

import (
    "fmt"
    "io"
    "net/http"
    "strings"

    "pault.ag/go/debian/control"
)

type AllowedPackage struct {
    Package     string
    Fingerprint string
}

func (a *AllowedPackage) UnmarshalControl(in string) error {
    in = strings.TrimSpace(in)
    chunks := strings.SplitN(in, " ", 2)
    if len(chunks) != 2 {
        return fmt.Errorf("Syntax sucks: '%s'", in)
    }
    a.Package = chunks[0]
    a.Fingerprint = chunks[1][1 : len(chunks[1])-1]

    return nil
}

type DMUA struct {
    Fingerprint     string
    Uid             string
    AllowedPackages []AllowedPackage `control:"Allow" delim:","`
}

func main() {
    resp, err := http.Get("http://metadata.ftp-master.debian.org/dm.txt")
    if err != nil {
        panic(err)
    }

    decoder, err := control.NewDecoder(resp.Body, nil)
    if err != nil {
        panic(err)
    }

    for {
        dmua := DMUA{}
        if err := decoder.Decode(&dmua); err != nil {
            if err == io.EOF {
                break
            }
            panic(err)
        }
        fmt.Printf("The DM %s is allowed to upload:\n", dmua.Uid)
        for _, allowedPackage := range dmua.AllowedPackages {
            fmt.Printf("   %s [granted by %s]\n", allowedPackage.Package, allowedPackage.Fingerprint)
        }
    }
}

Output (truncated!) looks a bit like:

...
The DM Allison Randal <allison@lohutok.net> is allowed to upload:
   parrot [granted by A4F455C3414B10563FCC9244AFA51BD6CDE573CB]
...
The DM Benjamin Barenblat <bbaren@mit.edu> is allowed to upload:
   boogie [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
   dafny [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
   transmission-remote-gtk [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
   urweb [granted by 3224C4469D7DF8F3D6F41A02BBC756DDBE595F6B]
...
The DM أحمد المحمودي <aelmahmoudy@sabily.org> is allowed to upload:
   covered [granted by 41352A3B4726ACC590940097F0A98A4C4CD6E3D2]
   dico [granted by 6ADD5093AC6D1072C9129000B1CCD97290267086]
   drawtiming [granted by 41352A3B4726ACC590940097F0A98A4C4CD6E3D2]
   fonts-hosny-amiri [granted by BD838A2BAAF9E3408BD9646833BE1A0A8C2ED8FF]
   ...
...

deb

Next up, we've got the deb module. This contains code to handle reading Debian 2.0 .deb files. It contains a wrapper that will parse the control member, and provide the data member through the archive/tar interface.

Here's an example of how to read a .deb file, access some metadata, and iterate over the tar archive, and print the filenames of each of the entries.

package main

import (
    "fmt"
    "io"
    "os"

    "pault.ag/go/debian/deb"
)

func main() {
    path := "/tmp/fluxbox_1.3.5-2+b1_amd64.deb"
    fd, err := os.Open(path)
    if err != nil {
        panic(err)
    }
    defer fd.Close()

    debFile, err := deb.Load(fd, path)
    if err != nil {
        panic(err)
    }

    version := debFile.Control.Version
    fmt.Printf(
        "Epoch: %d, Version: %s, Revision: %s\n",
        version.Epoch, version.Version, version.Revision,
    )

    for {
        hdr, err := debFile.Data.Next()
        if err == io.EOF {
            break
        }
        if err != nil {
            panic(err)
        }
        fmt.Printf("  -> %s\n", hdr.Name)
    }
}

Boringly, the output looks like:

Epoch: 0, Version: 1.3.5, Revision: 2+b1
  -> ./
  -> ./etc/
  -> ./etc/menu-methods/
  -> ./etc/menu-methods/fluxbox
  -> ./etc/X11/
  -> ./etc/X11/fluxbox/
  -> ./etc/X11/fluxbox/window.menu
  -> ./etc/X11/fluxbox/fluxbox.menu-user
  -> ./etc/X11/fluxbox/keys
  -> ./etc/X11/fluxbox/init
  -> ./etc/X11/fluxbox/system.fluxbox-menu
  -> ./etc/X11/fluxbox/overlay
  -> ./etc/X11/fluxbox/apps
  -> ./usr/
  -> ./usr/share/
  -> ./usr/share/man/
  -> ./usr/share/man/man5/
  -> ./usr/share/man/man5/fluxbox-style.5.gz
  -> ./usr/share/man/man5/fluxbox-menu.5.gz
  -> ./usr/share/man/man5/fluxbox-apps.5.gz
  -> ./usr/share/man/man5/fluxbox-keys.5.gz
  -> ./usr/share/man/man1/
  -> ./usr/share/man/man1/startfluxbox.1.gz
...

dependency

The dependency package provides an interface to parse and compute dependencies. This package is a bit odd in that, well, there's no other library that does this. The issue is that there are actually two different parsers that compute our Dependency lines, one in Perl (as part of dpkg-dev) and another in C (in dpkg).

To date, this has resulted in me filing three different bugs. I also found a broken package in the archive, which actually resulted in another bug being (totally accidentally) already fixed. I hope to continue to run the archive through my parser in hopes of finding more bugs! This package is a bit complex, but it basically just returns what amounts to an AST for our Dependency lines. I'm positive there are bugs, so file them!

package main

import (
    "fmt"

    "pault.ag/go/debian/dependency"
)

func main() {
    dep, err := dependency.Parse("foo | bar, baz, foobar [amd64] | bazfoo [!sparc], fnord:armhf [gnu-linux-sparc]")
    if err != nil {
        panic(err)
    }

    anySparc, err := dependency.ParseArch("sparc")
    if err != nil {
        panic(err)
    }

    for _, possi := range dep.GetPossibilities(*anySparc) {
        fmt.Printf("%s (%s)\n", possi.Name, possi.Arch)
    }
}

Gives the output:

foo (<nil>)
baz (<nil>)
fnord (armhf)

version

Right off the bat, I'd like to thank Michael Stapelberg for letting me graft this out of dcs and into the go-debian package. This was nearly entirely his work (with a one or two line function I added later), and was amazingly helpful to have. Thank you!

This module implements Debian version comparisons and parsing, allowing for sorting in lists, checking to see if a version is native or not, and letting the programmer implement smart(er!) logic based on upstream (or Debian) version numbers.

This module is extremely easy to use and very straightforward, and not worth writing an example for.

Final thoughts

This is more of a "Yeah, OK, this has been useful enough to me at this point that I'm going to support this" post rather than a "It's stable!" or even "It's alive!" post. Hopefully folks can report bugs and help iterate on this module until we have some really clean building blocks to build solid higher-level systems on top of. Being able to have multiple libraries interoperate by relying on go-debian will make things massively easier. I'm in need of more documentation, and need to finalize some parts of the older sub-package APIs, but I'm hoping to be at a "1.0" real soon now.

on June 19, 2016 04:30 PM