July 23, 2014

The problem: Some time ago, I had a server “in the wild” from which I
wanted some data backed up to my rsync.net account. I didn’t want to
put sensitive credentials on this server in case it got compromised.

The awesome admins at rsync.net pointed out their subuid feature. For
no extra charge, they’ll give you another uid, which can have its own
ssh keys, whose home directory is symbolically linked under your main
uid’s home directory. So the server can rsync backups to the subuid,
and if it is compromised, attackers cannot get at any info which didn’t
originate from that server anyway.

Very nice.
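For illustration, the backup job on the untrusted server can be a plain rsync over ssh using a key dedicated to the subuid. The hostname, subuid username and paths below are made up; rsync.net's actual naming will differ:

  # one-off: create a passphrase-less key on the server and register its
  # public half in the subuid's authorized_keys
  $ ssh-keygen -f ~/.ssh/backup_key -N ""

  # nightly: push data to the subuid only; it has no access to the rest
  # of the main account's files
  $ rsync -az -e "ssh -i ~/.ssh/backup_key" /srv/data/ \
        1234-sub1@usw-s001.rsync.net:backup/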


on July 23, 2014 04:02 AM

July 22, 2014

Those of you who have been following this blog for some time know that the preferred language here is English (a small number of posts from the early days are the exception). Things are changing, though.

It’s not that difficult to understand: if you go to it.deshack.net you can see this website in Italian. I’ve been thinking about making a big change to this little place on the web for a while, as I want it to become more than a simple blog. I am working on a new theme for business websites, but I’ll let you know when it’s time. In the meantime, don’t be surprised if you see some small changes here.

Note

The main language will remain English. As mentioned above, you will find all the Italian content on it.deshack.net. Old posts will be translated only if someone asks.

Now it’s time for me to ask you something: do you think this is an interesting change? Let me know in a comment!

on July 22, 2014 05:21 PM

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140722 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to v3.16-rc6 and officially uploaded
to the archive. We (as in apw) have also completed a herculean config
review for Utopic and applied the appropriate changes. Please test
and let us know your results.
—–
Important upcoming dates:
Thurs Jul 24 – 14.04.1 (~2 days away)
Thurs Aug 07 – 12.04.5 (~2 weeks away)
Thurs Aug 21 – Utopic Feature Freeze (~4 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Jul. 22):

  • Lucid – Released
  • Precise – Released
  • Saucy – Released
  • Trusty – Released

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    14.04.1 cycle: 29-Jun through 07-Aug
    ====================================================================
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]

    cycle: 08-Aug through 29-Aug
    ====================================================================
    08-Aug Last day for kernel commits for this cycle
    10-Aug – 16-Aug Kernel prep week.
    17-Aug – 23-Aug Bug verification & Regression testing.
    24-Aug – 29-Aug Regression testing & Release to -updates.

    [1] This will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.

    [2] This will be the lts-backport-trusty kernel as the default in the precise point
    release iso.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

on July 22, 2014 05:12 PM

Box for Qt

Lubuntu Blog

Box's evolution continues. With the move to Qt, the main theme for Lubuntu must grow a bit more to cover more apps, devices and, of course, environments. Now it's Qt's turn, the toolkit for the next Lubuntu desktop, and this work will also allow the theme to be used with KDE5 and Plasma Next. For now it's just a project, but the Dolphin file manager already looks fine! Note: this is under heavy development, no
on July 22, 2014 04:11 PM

The Community Team

Rick Spencer



So, given Jono’s departure a few weeks back, I bet a lot of folks have been wondering about the Canonical Community Team. For a little background, the community team reports into the Ubuntu Engineering division of Canonical, which means that they all report into me. We have not been idle, and this post is to discuss a bit about the Community Team going forward.

What has Stayed the Same?

First, we have made some changes to the structure of the community team itself. However, one thing did not change. I kept the community team reporting directly into me, VP of Engineering, Ubuntu. I decided to do this so that there is a direct line to me for any community concerns that have been raised to anyone on the community team.

I had a call with the Community Council a couple of weeks ago to discuss the community team and get feedback about how it is functioning and how things could be improved going forward. I laid out the following for the team.

First, there were three key things that I wanted the Community Team to continue to focus on:
  • Continue to create and run innovative programs to facilitate ever more community contributions and growing the community.
  • Continue to provide good advice to me and the rest of Canonical regarding how to be the best community members we can be, given our privileged positions of being paid to work within that community.
  • Continue to assist with outward communication from Canonical to the community regarding plans, project status, and changes to those.
The Community Council was very engaged in discussing how this all works and should work in the future, as well as other goals and responsibilities for the community team.

What Has Changed?

In setting up the team, I had some realizations. First, there was no longer just one “Community Manager”. When the project was young and Canonical was small, we had only one, and the team slowly grew. However, the team now has four people dedicated solely to the Community Team, and there are others who spend almost all of their time working on Community Team projects.

Secondly, while individuals on the team had been hired to have specific roles in the community, every one of them had branched out to tackle new challenges as needed.

Thirdly, there is no longer just one “Community Spokesperson”. Everyone in Ubuntu Engineering can and should speak to/for Canonical and to/for the Ubuntu Community in the right contexts.
So, we made some small, but I think important changes to the Community Team.

First, we created the role of Community Team Manager. Notice the important inclusion of the word “Team”. This person’s job is not to “manage the community”, but rather to organize and lead the rest of the community team members. This includes things like project planning, HR responsibilities, strategic planning and everything else entailed in being a good line manager. After a rather competitive interview process, with some strong candidates, one person clearly rose to the top as the best candidate. So, I would like to formally introduce David Planella (lp, g+) as the Community Team Manager!

Second, I changed the other job titles from their rather specific titles to just “Community Manager” in order to reflect the reality that everyone on the community team is responsible for the whole community. So that means Michael Hall (lp, g+), Daniel Holbach (lp, g+), and Nicholas Skaggs (lp, g+) are all now “Community Managers”.

What's Next?

This is a very strong team, and a really good group of people. I know each of them personally, and have a lot of confidence in all of them. Combined as a team, they are amazing. I am excited to see what comes next.

In light of these changes, the most common question I get is, “Who do I talk to if I have a question or concern?” The answer to that is “anyone.” It’s understandable if you feel the most comfortable talking to someone on the community team, so please feel free to find David, Michael, Daniel, or Nicholas online and ask them your question. There are, of course, other stalwarts like Alan Pope (lp, g+) and Oliver Grawert (lp, g+) who seem to always be online :) By which, I mean to say that while the Community Managers are here to serve the Ubuntu Community, I hope that anyone in Ubuntu Engineering considers their role in the Ubuntu Community to include working with anyone else in the Ubuntu Community :)

Want to talk directly to the community team today? Easy: join their Ubuntu on Air Q&A session at 15:00 UTC :)

Finally, please note that I love to be "interrupted" by questions from community members :) The best way to get in touch with me is on freenode, where I go by rickspencer3. Otherwise, I am also on g+, and of course there is this blog :)
on July 22, 2014 12:56 PM

Box support for MATE

Lubuntu Blog

The Box theme support continues growing, covering more and more environments. Now we're celebrating that the MATE desktop environment, a GTK3 fork of the traditional GNOME 2, will have its own Ubuntu flavour, named Ubuntu MATE Remix. Once I tested it, I noticed I was missing something familiar: our beloved Lubuntu spirit. So here begins the (experimental) theme support. It'll be available to download
on July 22, 2014 10:24 AM

Yesterday’s autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

Cleanup of CLI options, and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) have now gone away. Now there is only -B/--no-built-binaries left, which disables building/using binaries for the subsequent unbuilt tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you will most probably never need it).

The --help output is now a lot easier to read, both due to the above cleanup and because it now shows several paragraphs for each group of related options, sorted in descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python’s argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:

$ cat adt_sid
--output-dir=/tmp/out
-s
---
schroot
sid

$ adt-run libpng @adt_sid

Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:

Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]

This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests; this release brings it to deb packages as well.

Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you just want to run your test with additional dependencies or changed restrictions, you can now avoid having to rebuild the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e. g. add the autopilot -v option to autopilot_module.
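For instance, pointing a run at a locally modified manifest could look something like the following (the ssh adb setup script is the one shipped with autopkgtest; the rest of the arguments are an illustration, not a definitive recipe):

$ adt-run ./ubuntu-calendar-app --click=com.ubuntu.calendar \
      --override-control=./manifest.json \
      --- ssh -s /usr/share/autopkgtest/ssh-setup/adb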

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in “unpack into temp dir” mode anyway).

Finally, I made the adb setup script more robust and also faster.

As usual, every change in control formats, CLI, etc. has been documented in the manpages and the various READMEs. Enjoy!

on July 22, 2014 06:16 AM

I picked up Zoe from Sarah this morning and dropped her at Kindergarten. Traffic seemed particularly bad this morning, or I'm just out of practice.

I spent the day powering through the last two parts of the registration block of my real estate licence training. I've got one more piece of assessment to do, and then it should be done. The rest is all dead-tree written stuff that I have to mail off to get marked.

Zoe's doing tennis this term as her extra-curricular activity, and it's on a Tuesday afternoon after Kindergarten at the tennis court next door.

I'm not sure what proportion of the class is continuing on from previous terms, and so how far behind the eight ball Zoe will be, but she seemed to do okay today, and she seemed to enjoy it. Megan's in the class too, and that didn't seem to result in too much cross-distraction.

After that, we came home and just pottered around for a bit and then Zoe watched some TV until Sarah came to pick her up.

on July 22, 2014 01:23 AM

July 21, 2014

Welcome to the Ubuntu Weekly Newsletter. This is issue #375 for the weeks July 7 – 20, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on July 21, 2014 11:42 PM

KDE Project:

When life gives you a sunny beach to live on, make a mojito and go for a swim. Since KDE has an office in Barcelona that all KDE developers are welcome to use, I decided to move to Barcelona until I get bored. So far there's an interesting language or two, hot weather to help my fragile head, and water polo in the sky. Do drop by next time you're in town.


Plasma 5 Release Party Drinks


Also new poll for Plasma 5. What's your favourite feature?
on July 21, 2014 07:22 PM

This past spring I had the great opportunity to work with Matthew Helmke, José Antonio Rey and Debra Williams of Pearson on the 8th edition of The Official Ubuntu Book.

Official Ubuntu Book, 8th Edition

In addition to the obvious task of updating content, one of our most important tasks was working to “future proof” the book by doing rewrites in a way that would make sure the content of the book remained useful until the next Long Term Support release, in 2016. This meant a fair amount of content refactoring, fewer specifics when it came to members of teams, and lots of goodies for folks looking to become power users of Unity.

Quoting the product page from Pearson:

The Official Ubuntu Book, Eighth Edition, has been extensively updated with a single goal: to make running today’s Ubuntu even more pleasant and productive for you. It’s the ideal one-stop knowledge source for Ubuntu novices, those upgrading from older versions or other Linux distributions, and anyone moving toward power-user status.

Its expert authors focus on what you need to know most about installation, applications, media, administration, software applications, and much more. You’ll discover powerful Unity desktop improvements that make Ubuntu even friendlier and more convenient. You’ll also connect with the amazing Ubuntu community and the incredible resources it offers you.

Huge thanks to all my collaborators on this project. It was a lot of fun to work with them and I already have plans to work with all three of them on other projects in the future.

So go pick up a copy! As my first published book, I’d be thrilled to sign it for you if you bring it to an event I’m at, upcoming events include:

And of course, monthly Ubuntu Hours and Debian Dinners in San Francisco.

on July 21, 2014 04:21 PM

Technically a fork is any instance of a codebase being copied and developed independently of its parent.  But when we use the word it usually encompasses far more than that. Usually when we talk about a fork we mean splitting the community around a project, just as much as splitting the code itself. Communities are not like code, however; they don’t always split in consistent or predictable ways. Nor are all forks the same, and both the reasons behind a fork, and the way it is done, will have an effect on whether and how the community around it will split.

There are, by my observation, three different kinds of forks that can be distinguished by their intent and method.  These can be neatly labeled as Convergent, Divergent and Emergent forks.

Convergent Forks

Most often when we talk about forks in open source, we’re talking about convergent forks. A convergent fork is one that shares the same goals as its parent, seeks to recruit the same developers, and wants to be used by the same users. Convergent forks tend to happen when a significant portion of the parent project’s developers are dissatisfied with the management or processes around the project, but otherwise happy with the direction of its development. The ultimate goal of a convergent fork is to take the place of the parent project.

Because they aim to take the place of the parent project, convergent forks must split the community in order to be successful. The community they need already exists, both the developers and the users, around the parent project, so that is their natural source when starting their own community.

Divergent Forks

Less common than convergent forks, but still well known by everybody in open source, are the divergent forks.  These forks are made by developers who are not happy with the direction of a project’s development, even if they are generally satisfied with its management.  The purpose of a divergent fork is to create something different from the parent, with different goals and most often different communities as well. Because they are creating a different product, they will usually be targeting a different group of users, one that was not well served by the parent project.  They will, however, quite often target many of the same developers as the parent project, because most of the technology and many of the features will remain the same, as a result of their shared code history.

Divergent forks will usually split a community, but to a much smaller extent than a convergent fork, because they do not aim to replace the parent for the entire community. Instead they often focus more on recruiting those users who were not served well, or not served at all, by the existing project, and will grow a new community largely from sources other than the parent community.

Emergent Forks

Emergent forks are not technically forks in the code sense, but rather new projects with new code that share the same goals and target the same users as an existing project.  Most of us know these as NIH, or “Not Invented Here”, projects. They come into being on their own, instead of splitting from an existing source, but with the intention of replacing an existing project for all or part of an existing user community. Emergent forks are not the result of dissatisfaction with either the management or direction of an existing project, but most often a dissatisfaction with the technology being used, or fundamental design decisions that can’t be easily undone with the existing code.

Because they share the same goals as an existing project, these forks will usually result in a split of the user community around an existing project, unless they differ enough in features that they can target users not already being served by those projects. However, because they do not share much code or technology with the existing project, they most often grow their own community of developers, rather than splitting them from the existing project as well.

All of these kinds of forks are common enough that we in the open source community can easily name several examples of them. But they are all quite different in important ways. Some, while forks in the literal sense, can almost be considered new projects in a community sense.  Others are not forks of code at all, yet result in splitting an existing community none the less. Many of these forks will fail to gain traction, in fact most of them will, but some will succeed and surpass those that came before them. All of them play a role in keeping the wider open source economy flourishing, even though we may not like them when they affect a community we’ve been involved in building.

on July 21, 2014 08:00 AM

I have a couple of virt-manager virtual machines for doing DHCP-related work. I have one for the DHCP server and one for the DHCP client, and I have a private network between the two so I can simulate DHCP requests without messing up anything else. It works nicely.

I got a bit carried away, and I use LVM snapshots for the work I do, so that when I'm done I can throw away the virtual machines' disks and work from a fresh snapshot next time I want to do something.
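The snapshots themselves are plain LVM snapshots, roughly along these lines (the volume group and volume names are made up for illustration):

  # take a throwaway copy-on-write snapshot of the master volume
  $ sudo lvcreate --snapshot --name dhcp-server-scratch --size 5G /dev/vg0/dhcp-server
  # ...point the VM at the snapshot, do the work, then discard it
  $ sudo lvremove /dev/vg0/dhcp-server-scratch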

I have a cron job that, on a good day, fires up the virtual machines using the master logical volumes and does a dist-upgrade on a weekly basis. It seems to have varying degrees of success though.

So I fired up my VMs to do some investigation of the problem for #749410 and discovered that they weren't booting, because the initramfs couldn't find the root filesystem.

Upon investigation, the problem seemed to be that the logical volumes weren't getting activated. I didn't get to the bottom of why, but a manual activation of the logical volumes allowed the instances to continue booting successfully, and after doing manual dist-upgrades and kernel upgrades, they booted cleanly again. I'm not sure if I got hit by a passing bug in unstable, or what the problem was. I did burn about 2.5 hours just fixing everything up though.
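For reference, activating everything by hand from the (initramfs) prompt and then continuing the boot is roughly the following (a sketch; the exact prompt and steps depend on the initramfs in use):

  (initramfs) lvm vgchange -ay
  (initramfs) exit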

Then I realised that there'd been more activity on the bug since I'd last read it while I was on vacation, and half the investigation I needed to do wasn't necessary any more. Lesson learned.

I haven't got to the bottom of the bug yet, but I had a fun day anyway.

on July 21, 2014 01:23 AM

July 20, 2014

Plymouth Bootsplashes

Paul Tagliamonte

Why oh why are they so hard to write?

Even using the built in modules it is insanely hard to debug. Playing a bootsplash in X sucks and my machine boots too fast to test it on reboot.

Basically, euch. All I wanted was a hacker's zebra on boot :(

on July 20, 2014 09:02 PM

July 19, 2014

Transition tracker

Jo Shields

Friday was my last day at Collabora, the awesome Open Source consultancy in Cambridge. I’d been there more than three years, and it was time for a change.

As luck would have it, that change came in the form of a job offer 3 months ago from my long-time friend in Open Source, Miguel de Icaza. Monday morning, I fly out to Xamarin’s main office in Boston, for just over a week of induction and face time with my new co-workers, as I take on the title of Release Engineer.

My job is to make sure Mono on Linux is a first-class citizen, rather than the best-effort it’s been since Xamarin was formed from the ashes of the Attachmate/Novell deal. I’m thrilled to work full-time on what I do already as community work – including making Mono great on Debian/Ubuntu – and hope to form new links with the packager communities in other major distributions. And I’m delighted that Xamarin has chosen to put its money where its mouth is and fund continued Open Source development surrounding Mono.

If you’re in the Boston area next week or the week after, ping me via the usual methods!


on July 19, 2014 07:35 PM

July 18, 2014

So it's that time of year again: my ex-smoker-versary. Okay, I'll come up with a better name for next year, but for now you'll have to make do with my reflections on smoking and why it's really not that hard to quit, as well as a few silly numbers.

Thirty-six-score-and-ten days ago I stopped smoking.

  • I stopped picking them up.
  • I stopped buying them.
  • I stopped doing things that made me want to smoke.
  • I stopped cold turkey. No NRT, no e-cigs.

I just stopped and braced for the worst.
And I was expecting the worst.

It took me quitting to realise that it was probably that fear of unworldly cravings that kept me smoking for 10 years. When I got to 4 weeks without any nicotine I realised it hadn't been that bad at all... Anything that says otherwise is probably either trying to keep you smoking or is trying to sell you something to do instead.

Quitting is easy, just stop smoking and you'll realise that

And this isn't a silly confidence trick. I'm not going to get all happy-clappy and woosah about this. Just stop smoking and you'll see that after a week you won't physically crave (the worst bit), after two or three you stop thinking about them, and after four weeks you're awesome...

Just don't start smoking again. A sober, smoke-free mind is jubilant you're not smoking, and under its sole influence you'll do anything to avoid clouds of smoke... But have a couple of drinks and you can very quickly find yourself drifting intimately close to smokers.

I've also heard from more than a couple of people who "tried to quit" but were still surrounded by cigarettes. Quitting does take willpower and few have enough to resist that "emergency packet" especially in the first couple of weeks. Chuck them all, avoid your triggers and make it easy on yourself.

You have to be vigilant. And consistent.
One cigarette is the end of the world. No, you cannot have a cigar.

Two years smoke-free in numbers

Now onto the fun stuff. There's a silly little tray application called QuitCount in the Ubuntu repos that I set up when I quit. It just keeps track of the number of days, an accumulated number of cigarettes (based on my rate of ~13 a day) and works out how much that would cost, as well as using some formula to work out how much less dead I'm going to be.

  • 9490 cigarettes have gone un-smoked
  • 94.9g of tar not in my lungs
  • An extra £3368.95 cluttering up my bank account (I wish), which is good because I also get:
  • An extra 66 days cluttering up the planet.

And I wasn't a heavy smoker. If you're on 20 or 40 a day, those numbers could be a whole lot higher if you quit today.

Photo credit: mendhak

on July 18, 2014 07:47 AM
Vladimir Levenshtein
This post should have a subtitle: "Using Team Analysis and Levenshtein Distance to Reveal said Structure." It's the first part of that subtitle that is the secret, though: being able to correctly analyze and classify individual teams. Without that, using something clever like Levenshtein distance isn't going to be very much help.

But that's coming in towards the end of the story. Let's start at the beginning.

What You're Going to See

This post is a bit long. Here are the sections I've divided it into:

  • What You're Going to See
  • Premise
  • Introducing ACME
  • Categorizing Teams
  • Category Example
  • Calculating the Levenshtein Distance of Teams
  • Sorting and Interpretation
  • Conclusion

However, you don't need to read the whole thing to obtain the main benefits. You can get the Cliff Notes version by reading the Premise, Categorizing Teams, Interpretation, and the Conclusion.

Premise

Companies grow. Teams expand. If you're well-placed in your industry and providing in-demand services or products, this is happening to you. Individuals and small teams tend to deal with this sort of change pretty well. At an organizational level, however, this sort of change tends to have an impact that can bring a group down, or rocket it up to the next level.

Of the many issues faced by growing companies (or rapidly growing orgs within large companies), the structuring one can be most problematic: "Our old structures, though comfortable, won't scale well with all these new teams and all the new hires joining our existing teams. How do we reorganize? Where do we put folks? Are there natural lines along which we can provide better management (and vision!) structure?"

The answer, of course, is "yes" -- but! It requires careful analysis and a deep understanding of every team in your org.

The remainder of this post will set up a scenario and then figure out how to do a re-org. I use a software engineering org as an example, but that's just because I have a long and intimate knowledge of them and understand the ways in which one can classify such teams. These same methods could be applied to a Sales group, Marketing groups, etc., as long as you know the characteristics that define the teams of which these orgs are comprised.



Introducing ACME

ACME Corporation is the leading producer of some of the most innovative products of the 20th century. The CTO had previously tasked you, the VP of Software Development, to bring this product line into the digital age -- and you did! Your great ideas for the updated suite are the new hotness that everyone is clamouring for. As a result, the growth of your teams has been fast and, dare we say, exponential.

More details on the scenario: your Software Development Group has several teams of engineers, all working on different products or services, each of which supports ACME Corporation in different ways. In the past 2 years, you've built up your org by an order of magnitude in size. You've started promoting and hiring more managers and directors to help organize these teams into sensible encapsulating structures. These larger groups, once identified, would comprise the whole Development Group.

Ideally, the new groups would represent some aspect of the company, software development, engineering, and product vision -- in other words, some sensible clustering of teams doing related work. How would you group the teams in the most natural way?

Simply dividing along language or platform lines may seem like the obvious answer, but is it the best choice? There are some questions that can help guide you in figuring this out:
  • How do these teams interact with other parts of the company? 
  • Who are the stakeholders in feature development? 
  • Which sorts of customers does each team primarily serve?
There are many more questions you could ask (some are implicit in the analysis data linked below), but this should give a taste.

ACME Software Development has grown the following teams, some of which focus on products, some on infrastructure, some on services, etc.:
  • Digital Anvil Product Team
  • Giant Rubber Band App Team
  • Digital Iron Carrot Team
  • Jet Propelled Unicycle Service Team
  • Jet Propelled Pogo Stick Service Team
  • Ultimatum Dispatcher API Team
  • Virtual Rocket Powered Roller Skates Team
  • Operations (release management, deployments, production maintenance)
  • QA (testing infrastructure, CI/CD)
  • Community Team (documentation, examples, community engagement, meetups, etc.)

Early SW Dev team hacking the ENIAC

Categorizing Teams

Each of those teams started with 2-4 devs hacking on small skunkworks projects. They've now blossomed to the extent that each team has significant sub-teams working on new features and prototyping for the product they support. These large teams now need to be characterized using a method that will allow them to be easily compared. We need the ability to see how closely related one team is to another, across many different variables. (In the scheme outlined below, we end up examining 50 bits of information for each team.)

Keep in mind that each category should be chosen such that it would make sense for teams categorized similarly to be grouped together. A counterexample might be "Team Size"; you don't necessarily want all large teams together in one group, and all small teams in a different group. As such, "Team Size" is probably a poor category choice.

Here are the categories which we will use for the ACME Software Development Group:
  • Language
  • Syntax
  • Platform
  • Implementation Focus
  • Supported OS
  • Deployment Type
  • Product?
  • Service?
  • License Type
  • Industry Segment
  • Stakeholders
  • Customer Type
  • Corporate Priority
Each category may be either single-valued or multi-valued. For instance, the categories ending in question marks will be booleans. In contrast, multiple languages might be used by the same team, so the "Language" category will sometimes have several entries.

Category Example

(Things are going to get a bit more technical at this point; for those who care more about the outcomes than the methods used, feel free to skip to the section at the end: Sorting and Interpretation.)

In all cases, we will encode these values as binary digits -- this allows us to very easily compare teams using Levenshtein distance, since the total of all characteristics we are filtering on can be represented as a string value. An example should illustrate this well.

(The Levenshtein distance between two words is the minimum number of single-character edits -- such as insertions, deletions or substitutions -- required to change one word into the other. It is named after Vladimir Levenshtein, who defined this "distance" in 1965 when exploring the possibility of correcting deletions, insertions, and reversals in binary codes.)

Let's say the Software Development Group supports the following languages, with each one assigned a binary value:
  • LFE - #b0000000001
  • Erlang - #b0000000010
  • Elixir - #b0000000100
  • Ruby - #b0000001000
  • Python - #b0000010000
  • Hy - #b0000100000
  • Clojure - #b0001000000
  • Java - #b0010000000
  • JavaScript - #b0100000000
  • CoffeeScript - #b1000000000
A team that used LFE, Hy, and Clojure would obtain its "Language" category value by XOR'ing the three supported languages, and would thus be #b0001100001. In LFE, that could be done by entering the following code into the REPL:


We could then compare this to a team that used just Hy and Clojure (#b0001100000), which has a Levenshtein distance of 1 with the previous language category value. A team that used Ruby and Elixir (#b0000001100) would have a Levenshtein distance of 5 with the LFE/Hy/Clojure team (which makes sense: a total of 5 languages between the two teams with no languages shared in common). 
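For readers without an LFE shell handy, here is a minimal Python sketch of the same idea, using the language flag values and example teams from above. This is only an illustration; the original analysis was done in LFE with the lfe-utils distance functions.

  # Bit flags for the "Language" category, as in the table above.
  LANGS = {"LFE": 0b0000000001, "Erlang": 0b0000000010, "Elixir": 0b0000000100,
           "Ruby": 0b0000001000, "Python": 0b0000010000, "Hy": 0b0000100000,
           "Clojure": 0b0001000000, "Java": 0b0010000000,
           "JavaScript": 0b0100000000, "CoffeeScript": 0b1000000000}

  def encode(languages, width=10):
      """XOR a team's language flags into a fixed-width binary string."""
      value = 0
      for lang in languages:
          value ^= LANGS[lang]
      return format(value, "0{}b".format(width))

  def levenshtein(a, b):
      """Classic dynamic-programming edit distance."""
      prev = list(range(len(b) + 1))
      for i, ca in enumerate(a, 1):
          cur = [i]
          for j, cb in enumerate(b, 1):
              cur.append(min(prev[j] + 1,                  # deletion
                             cur[j - 1] + 1,               # insertion
                             prev[j - 1] + (ca != cb)))    # substitution
          prev = cur
      return prev[-1]

  team_a = encode(["LFE", "Hy", "Clojure"])   # '0001100001'
  team_b = encode(["Hy", "Clojure"])          # '0001100000'
  team_c = encode(["Ruby", "Elixir"])         # '0000001100'
  print(levenshtein(team_a, team_b))          # 1
  print(levenshtein(team_a, team_c))          # 5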


Calculating the Levenshtein Distance of Teams

As a VP who is keen on deeply understanding your teams, you have put together a spreadsheet with a break-down of not only languages used in each team, but lots of other categories, too. For easy reference, you've put a "legend" for the individual category binary values at the bottom of the linked spreadsheet.

In the third table on that sheet, all of the values for each column are combined into a single binary string. This (or a slight modification of this) is what will be the input to your calculations. Needless to say, as a complete fan of LFE, you will be writing some Lisp code :-)

Partial view of the spreadsheet's first page.
(If you would like to try the code out yourself while reading, and you have lfetool installed, simply create a new project and start up the REPL: $ lfetool new library ld; cd ld && make-shell
That will download and compile the dependencies for you. In particular, you will then have access to the lfe-utils project -- which contains the Levenshtein distance functions we'll be using. You should be able to copy-and-paste functions, vars, etc., into the REPL from the Github gists.)

Let's create a couple of data structures that will allow us to more easily work with the data you collected about your teams in the spreadsheet:


We can use a quick copy and paste into the LFE REPL for two of those numbers to do a sanity check on the distance between the Community Team and the Digital Iron Carrot Team:


That result doesn't seem unreasonable, given that at a quick glance we can see both of these strings have many differences in their respective character positions.

It looks like we're on solid ground, then, so let's define some utility functions to more easily work with our data structures:


Now we're ready to roll; let's try sorting the data based on a comparison with a one of the teams:


It may not be obvious at first glance, but what the levenshtein-sort function did for us is compare our "control" string to every other string in our data set, providing both the distance and the string that the control was compared to. The first entry in the results is our control string, and we see what we would expect: the Levenshtein distance with itself is 0 :-)

The result above is not very easily read by most humans ... so let's define a custom sorter that will take human-readable text and then output the same, after doing a sort on the binary strings:


(If any of that doesn't make sense, please stop in and say "hello" on the LFE mail list -- ask us your questions! We're a friendly group that loves to chat about LFE and how to translate from Erlang, Common Lisp, or Clojure to LFE :-) )



Sorting and Interpretation

Before we try out our new function, we should ponder which team will be compared to all the others -- the sort results will change based on this choice. Looking at the spreadsheet, we see that the "Digital Iron Carrot Team" (DICT) has some interesting properties that make it a compelling choice:
  • it has stakeholders in Sales, Engineering, and Senior Leadership;
  • it has a "Corporate Priority" of "Business critical"; and
  • it has both internal and external customers.
Of all the products and services, it seems to be the biggest star. Let's try a sort now, using our new custom function -- inputting something that's human-readable: 


Here we're making the request "Show me the sorted results of each team's binary string compared to the binary string of the DICT." Here are the human-readable results:


For a better visual on this, take a look at the second tab of the shared spreadsheet. The results have been applied to the collected data there, and then colored by major groupings. The first group shares these things in common:
  • Lisp- and Python-heavy
  • Middleware running on BSD boxen
  • Mostly proprietary
  • Externally facing
  • Focus on apps and APIs
It would make sense to group these three together.

A sort (and thus grouping) by comparison to critical team.
Next on the list is Operations and QA -- often a natural pairing, and this process bears out such conventional wisdom. These two are good candidates for a second group.

Things get a little trickier at the end of the list. Depending upon the number of developers in the Java-heavy Giant Rubber Band App Team, they might make up their own group. However, both that one and the next team on the list have frontend components written in Angular.js. They both are used internally and have Engineering as a stakeholder in common, so let's go ahead and group them.

The next two are cloud-deployed Finance APIs running on the Erlang VM. These make a very natural pairing.

Which leaves us with the oddball: the Community Team. The Levenshtein distance for this team is the greatest for all the teams ... but don't be misled. Because it has something in common with all teams (the Community Team supports every product with docs, example code, Sales and TAM support, evangelism for open source projects, etc.), it will have many differing bits with each team. This really should be in a group all its own so that structure represents reality: all teams depend upon the Community Team. A good case could also probably be made for having the manager of this team report directly up to you.

The other groups should probably have directors that the team managers report to (keeping in mind that the teams have grown to anywhere from 20 to 40 per team). The director will be able to guide these teams according to your vision for the Software Group and the shared traits/common vision you have uncovered in the course of this analysis.

Let's go back to the Community Team. Perhaps in working with them, you have uncovered a hidden fact: the community interactions your devs have are seriously driving market adoption through some impressive and passionate service and open source docs+evangelism. You are curious how your teams might be grouped if sorted from the perspective of the Community Team.

Let's find out!


As one might expect, most of the teams remain grouped in the same way ... the notable exception being the split-up of the Anvil and Rubber Band teams. Mostly no surprises, though -- the same groupings persist in this model.

A sort (and thus grouping) by comparison to highly-connected team.
To be fair, if this is something you'd want to fully explore, you should bump the "Corporate Priority" for the Community Team much higher, recalculate its overall bits, regenerate your data structures, and then resort. It may not change too much in this case, but you'd be applying consistent methods, and that's definitely the right thing to do :-) You might even see the Anvil and Rubber Band teams get back together (left as an exercise for the reader).

As a last example, let's throw caution and good sense to the wind and get crazy. You know, like the many times you've seen bizarre, anti-intuitive re-orgs done: let's do a sort that compares a team of middling importance and a relatively low corporate impact with the rest of the teams. What do we see then?


This ruins everything. Well, almost everything: the only group that doesn't get split up is the middleware product line (Jet Propelled and Iron Carrot). Everything else suffers from a bad re-org.

A sort (and thus grouping) by comparison to a non-critical team.

If you were to do this because a genuine change in priority had occurred, where the Giant Rubber Band App Team was now the corporate leader/darling, then you'd need to recompute the bit values and do re-sorts. Failing that, you'd just be falling into a trap that has beguiled many before you.


Conclusion

If there's one thing that this exercise should show you, it's this: applying tools and analyses from one field to fresh data in another -- completely unrelated -- field can provide pretty amazing results that turn mystery and guesswork into science and planning.

If we can get two things from this, the other might be: knowing the parts of the system may not necessarily reveal the whole (c.f. Complex Systems), but it may provide you with the data that lets you better predict emergent behaviours and identify patterns and structure where you didn't see them (or even think to look!) before.


on July 18, 2014 06:09 AM
A few weeks back -- the week of the PyCon sprints, in fact -- was the San Francisco Erlang conference. This was a small conference (I haven't been to one so small since PyCon was at GW in the early 2000s), and absolutely charming as a result. There were some really nifty talks and a lot of fantastic hallway and ballroom conversations... not to mention Robert Virding's very sweet Raspberry Pi Erlang-powered wall-sensing Lego robot.

My first Erlang Factory, the event lasted for two fun-filled days and culminated with a stroll in the evening sun of San Francisco down to the Rackspace office where we held a Meetup mini-conference (beer, food, and three more talks). Conversations lasted until well after 10pm with the remaining die-hards making a trek through the nighttime streets of SOMA and the Financial District back to their respective abodes.

Before the close of the conference, however, we managed to sneak a ride (4 of us in a Mustang) to Scoble's studio and conduct an interview with Joe Armstrong and Robert Virding. We covered some of the basics in order to provide a gentle overview for folks who may not have been exposed to Erlang yet and are curious about what it has to offer our growing multi-core world. This went up on the Rackspace blog as well as the Building 43 site (also on YouTube). We've got a couple of teams using Erlang at Rackspace; if you're interested, be sure to email Steve Pestorich and ask him what's available!


on July 18, 2014 05:49 AM

July 17, 2014

This is a follow-up to the End of Life warning sent last month to confirm that as of today (July 17, 2014), Ubuntu 13.10 is no longer supported. No more package updates will be accepted to 13.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The original End of Life warning follows, with upgrade instructions:

Ubuntu announced its 13.10 (Saucy Salamander) release almost 9 months ago, on October 17, 2013. This was the second release with our new 9 month support cycle and, as such, the support period is now nearing its end and Ubuntu 13.10 will reach end of life on Thursday, July 17th. At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 13.10.

The supported upgrade path from Ubuntu 13.10 is via Ubuntu 14.04 LTS. Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/TrustyUpgrades
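In practice the upgrade is normally started from Software Updater or, from a terminal, with the release upgrader; a minimal sketch, with the full instructions and caveats on the page above:

$ sudo do-release-upgrade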

Ubuntu 14.04 LTS continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Thu Jul 17 16:19:36 UTC 2014 by Adam Conrad

on July 17, 2014 09:10 PM

S07E16 – The One with the Race Car Bed

Ubuntu Podcast from the UK LoCo

We’re back with Season Seven, Episode Sixteen of the Ubuntu Podcast! Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are drinking tea and eating Battenberg cake in Studio L.

In this week’s show:

  • We interview David Hermann about his MiracleCast project…

  • We also discuss:

    • Getting a dashcam
    • Going to Bruges
    • Reading a Stephen King book (Pet Sematary) for the first time…
    • And something that we’ll never know now…
  • We share some Command Line Lurve: YouTube-Upload to upload videos to YouTube from the command line:

     $ youtube-upload --email=myemail@gmail.com --password=mypassword 
                 --title="A.S. Mutter" --description="$(< description.txt)" 
                 --category=Music --keywords="mutter, beethoven" anne_sophie_mutter.flv
  • And we read your feedback, including:

    • Simon’s link to Seafile

    Thanks for sending it in!

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on July 17, 2014 08:33 PM

KDE Project:

Barcelona Free Software Users & Hackers are having a meetup mañana, see you there!

on July 17, 2014 06:29 PM

The Ubucon LatinAmerica speakers list is here! This is the first post about our speakers.

1- Bhavani Shankar

He is coming from India and has an amazing background as an Ubuntu developer and a member of the Ubuntu LoCo Council.

He will talk about “Ubuntu Developing for Dummies” (Esp: “Desarrollo de Ubuntu para dummies”). We will learn about all the components of Ubuntu, coding, making our own software and much more!

You can find more information about him in his wikipage: https://wiki.ubuntu.com/BhavaniShankar
Information in Spanish: http://ubuntu-co.com/node/3233

2- Marcos Alvarez Costales

He is a Linux developer who works with the Ubuntu Spain community, and is the founder and developer of Gufw (http://gufw.org/), Pylang, Folder Color and the webapp for Telegram.

His talks will be about Linux security and how to create your own web apps in Ubuntu.

You can find more information about him in his wikipage: https://wiki.ubuntu.com/costales
Information in Spanish: http://ubuntu-co.com/node/3230

3- Fernando Lanero

A teacher and the administrator of http://ubuntuleon.com, he has worked on migrating education centres to Ubuntu. His talk is called “Linux is Education, Linux is Science” (Spanish: “Linux es Educación, Linux es Ciencia”).

Information in Spanish: http://ubuntu-co.com/node/3231

4- Fernando García Amen

Information in Spanish: http://ubuntu-co.com/node/3231

5- Darwin Proaño Orellana

An IT engineer from Universidad del Azuay and president of CloudIT Ecuador, his talks will be about “free clouds” and “How to migrate from Windows safely”.

As you know, Ubucon LatinAmerica will run from August 14th to 16th.

You can find all the information about the UbuconLA in our website or the wikipage.


on July 17, 2014 05:58 PM

Two years ago, I got appointed as chairman of the openSUSE Board. I was very excited about this opportunity, especially as it allowed me to keep contributing to openSUSE after having moved to work on the cloud a few months before. I remember how I wanted to find new ways to participate in the project, and this was just a fantastic match. I had been on the GNOME Foundation board for a long time, so I knew it would not always be easy and fun, but I also knew I would pretty much enjoy it. And I did.

Fast-forward to today: I'm still deeply caring about the project and I'm still excited about what we do in the openSUSE board. However, some happy event to come in a couple of months means that I'll have much less time to dedicate to openSUSE (and other projects). Therefore I decided a couple of months ago that I would step down before the end of the summer, after we'd have prepared the plan for the transition. Not an easy decision, but the right one, I feel.

And here we are now, with the official news out: I'm no longer the chairman :-) (See also this thread) Of course I'll still stay around and contribute to openSUSE, no worry about that! But as mentioned above, I'll have less time for that as offline life will be more "busy".

openSUSE Board Chairman at oSC14

Since I mentioned that we were working on a transition... First, knowing the current board, I have no doubt everything will keep being pushed in the right direction. But on top of that, my good friend Richard Brown has been appointed as the new chairman. Richard knows the project pretty well and he has been on the board for some time now, so he is aware of everything that's going on. I've been able to watch his passion for the project, and that's why I'm 100% confident that he will rock!

on July 17, 2014 02:40 PM

As many of you will know, I organize an event every year called the Community Leadership Summit. The event brings together community leaders, organizers and managers and the projects and organizations that are interested in growing and empowering a strong community.

The event kicks off this week on Thursday evening (17th July) with a pre-CLS gathering at the Doubletree Hotel at 7.30pm, and then we get started with the main event on Friday (18th July) and Saturday (19th July). For more details, see http://www.communityleadershipsummit.com/.

This year’s event is shaping up to be incredible. We have a fantastic list of registered attendees and I want to thank our sponsors, O’Reilly, Citrix, Oracle, Mozilla, Ubuntu, and LinuxFund.

Also, be sure to join the new Community Leadership Forum for discussing topics that relate to community management, as well as topics for discussion at the Community Leadership Summit event each year. The forum is designed to be a great place for sharing and learning tips and techniques, getting to know other community leaders, and having fun.

The forum is powered by Discourse, so it is a pleasure to use, and I want to thank discoursehosting.com for generously providing free hosting for us.

Speaking Events and Training at OSCON

I also have a busy OSCON schedule. Here is the summary:

Community Management Training

On Monday 21st July from 9am – 6pm in D135 I will be providing a full day of community management training at OSCON. This full day of training will include topics such as

  • The Core Mechanics Of Community
  • Planning Your Community
  • Building a Strategic Plan
  • Building Collaborative Workflow
  • Defining Community Governance
  • Marketing, Advocacy, Promotion, and Social Media
  • Measuring Your Community
  • Tracking and Measuring Community Management
  • Conflict Resolution

Office Hours

On Tues 22nd July at 10.40am in Expo Hall A I will be providing an Office Hours Meeting in which you can come and ask me about:

  • Building collaborative workflow and tooling
  • Conflict resolution and managing complex personalities
  • Building buzz and excitement around your community
  • Incentivized prizes and innovation
  • Hiring community managers
  • Anything else!

Dealing With Disrespect

Finally, on Wed 23rd July at 2.30pm in E144 I will be giving a presentation called Dealing With Disrespect that is based upon my free book of the same name for managing complex communications.

This is the summary of the talk:

In this new presentation from Jono Bacon, author of The Art of Community, founder of the Community Leadership Summit, and Ubuntu Community Manager, he discusses how to process, interpret, and manage rude, disrespectful, and non-constructive feedback in communities so the constructive criticism gets through but the hate doesn’t.

The presentation covers the three different categories of communications, how we evaluate and assess different attributes in each communication, the factors that influence all of our communications, and how to put in place a set of golden rules for handling feedback and putting it in perspective.

If you personally or your community has suffered rudeness, trolling, and disrespect, this presentation is designed to help.

I will also be available for discussions and meetings. Just drop me an email at jono@jonobacon.org if you want to meet.

I hope to see many of you in Portland this week!

on July 17, 2014 12:44 AM

July 16, 2014

The problem
How acceptance tests are packaged and run has morphed over time. When autopilot was originally conceived the largest user was the unity project and debian packaging was the norm. Now that autopilot has moved well beyond that simple view to support many types of applications running across different form factors, it was time to address the issue of how to run and package these high-level tests.

While I was helping develop testsuites for the core apps targeting Ubuntu Touch, it became increasingly difficult for developers to run their applications' testsuites. This gave rise to further integration points inside qtcreator, enhancements to click and its manifest files, and tools like the phablet-tools suite and click-buddy. All of these tools operate well within the confines for which they are intended, but none truly meets the needs for test provisioning and execution.

A solution?
With these thoughts in mind, I opened the floor for discussion a couple of months ago, detailing the need for a proper tool that could meet all of my needs, as well as those of the application developer, test author and CI folks. In a nutshell, what was needed was a workflow to set up a device and to properly manage and resolve dependencies.

Autopkg tests all the things
I'm happy to report that as of a couple of weeks ago such a tool now exists in autopkgtest. If the name sounds familiar, that's because it is. Autopkgtest already runs all of our automated testing at the archive level. New package uploads are tested utilizing its toolset.

So what does this mean? Utilizing the format laid out by autopkgtest, you can now run your autopilot testsuite on a phablet device in a sane manner. If you have test dependencies, they can be defined and added to the click manifest as specified. If you don't have any test dependencies, then you can run your testsuite today without any modifications to the click manifest.

Yes, but what does this really mean?
This means you can now run a testsuite with adt-run in a similar manner to how debian packages are tested. The runner will setup the device, copy the tests, resolve any dependencies, run them, and report the results back to you.

Some disclaimers
Support for running tests this way is still new. If you do find a bug, please file it!

To use the tool, first install autopkgtest. If you are running trusty, the version in the archive is old. For now, download the utopic deb file and install it manually. A proper backport still needs to be done.
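On utopic that is a simple package install; on trusty, something like the following should work (a sketch only; the .deb filename below is a placeholder for whatever version you downloaded):

sudo apt-get install autopkgtest                # utopic and later
sudo dpkg -i autopkgtest_<version>_all.deb      # trusty, using the manually downloaded utopic deb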

Also as of this writing, I must caution you that you may run into this bug. If the application fails to download dependencies (you see 404 errors during setup), update your device to the latest image and try again. Note, the latest devel image might be too old if a new image hasn't been promoted in a long time.

I want to see it!
Go ahead, give it a whirl with the calendar application (or your favorite core app). Plug in a device, then run the following on your pc.

bzr branch lp:ubuntu-calendar-app
adt-run ubuntu-calendar-app --click=com.ubuntu.calendar --- ssh -s /usr/share/autopkgtest/ssh-setup/adb

Autopkgtest will give you some output along the way about what is happening. The tests will be copied, and since --click= was specified, the runner will use the click from the device, install the click in our temporary environment, and read the click manifest file for dependencies and install those too. Finally, the tests will be executed with the results returned to you.

Feedback please!
Please try running your autopilot testsuites this way and give feedback! Feel free to contact myself, the upstream authors (thanks Martin Pitt for adding support for this!), or simply file a bug. If you run into trouble, utilize the -d and the --shell switches to get more insight into what is happening while running.
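For example, building on the calendar run above, a debugging invocation might look like this (a sketch; as I understand the switches, -d turns on debug output and --shell gives you an interactive shell on the testbed so you can poke around):

adt-run ubuntu-calendar-app --click=com.ubuntu.calendar -d --shell --- ssh -s /usr/share/autopkgtest/ssh-setup/adb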

on July 16, 2014 10:10 PM

Plasma 5 in the News

Kubuntu Wire

Plasma 5 was released yesterday and the internet is ablaze with praise and delight at the results.

Slashdot just posted their story KDE Releases Plasma 5 and the first comment is “Thank our KDE developers for their hard work. I’m really impressed by KDE and have used it a lot over the years.“  You’re welcome Slashdot reader.

They point to themukt’s Details Review of Plasma 5 which rightly says “Don’t be fooled, a lot of work goes down there“.

The science fictional name reflects the futuristic technology which is already ahead of its time (companies like Apple, Microsoft, Google and Canonical are using ideas conceptualized or developed by the KDE community).

With the release of Plasma 5, the community has once again shown that free software code developed without any dictator or president or prime-minister or chancellor can be one of the best software code.

LWN has a short story on KDE Plasma 5.0 which is as usual the place to go for detailed comments including several from KDE developers.

ZDNet’s article KDE Plasma 5 Linux desktop arrives says “I found this new KDE Plasma 5 to be a good, solid desktop” and concludes “I expect most, if not all, KDE users to really enjoy this new release. Have fun!” And indeed we do have lots of fun.

And Techage gets the nomenclature wrong in KDE 5 is Here: Introducing a Cleaner Frontend & An Overhauled Backend and says “On behalf of KDE fans everywhere, thank you, KDE dev team” aah, it’s nice to be thanked :)

Web UPD8 has How To Install KDE Plasma 5 In Kubuntu 14.10 Or 14.04 a useful guide to setting it up which even covers removing it when you want to go back to the old stuff.  Give it a try!

 

on July 16, 2014 03:14 PM

Today we announced that Brightbox has joined the Ubuntu Certified Cloud programme, and is our first European Cloud Partner.

If you’re wondering where you’ve heard the name Brightbox before, it might be because they contribute updated Ruby packages for all Ubuntu users.

Official images that stay up to date and updated Ruby stacks for developers: it’s pretty much a win-win for Ubuntu users!

on July 16, 2014 02:57 PM

Or yet another reason why it’s really important that we succeed with Debian LTS. Last year we heard of Dreamhost switching to Ubuntu because they can maintain a stable Ubuntu release for longer than a Debian stable release (and this despite the fact that Ubuntu only supports software in its main section, which misses a lot of popular software).

Spotify Logo

A few days ago, we just learned that Spotify took a similar decision:

A while back we decided to move onto Ubuntu for our backend server deployment. The main reasons for this was a predictable release cycle and long term support by upstream (this decision was made before the announcement that the Debian project commits to long term support as well.) With the release of the Ubuntu 14.04 LTS we are now in the process of migrating our ~5000 servers to that distribution.

This is just a supplementary proof that we have to provide long term support for Debian releases if we want to stay relevant in big deployments.

But the task is daunting and it’s difficult to find volunteers to do the job. That’s why I believe that our best answer is to get companies to contribute financially to Debian LTS.

We managed to convince a handful of companies already and July is the first month where paid contributors have joined the effort for a modest participation of 21 work hours (watch out for Thorsten Alteholz and Holger Levsen on debian-lts and debian-lts-announce). But we need to multiply this figure by 5 or 6 at least to do a proper job of maintaining Debian 6.

So grab the subscription form and have a chat with your management. It’s time to convince your company to join the initiative. Don’t hesitate to get in touch if you have questions or if you prefer that I contact a representative of your company. Thank you!


on July 16, 2014 08:07 AM

July 15, 2014

Plasma 5 Ingredients

Sebastian Kügler

Plasma 5.0 is out!

Plasma 5.0 is out. I’ve compiled a (non-exhaustive) list of ingredients that have been put into this release to give the reader an estimate of the dimensions of the project and the achievement of this milestone:

  • 46 kilo of espresso (pure arabica)
  • The milk of 3 cows
  • a Swiss mountain of chocolate
  • 140 sleepless nights mulling over code
  • 354 liters of pressurized air breathed during scuba dives
  • One encounter with a Mantis shrimp
  • The total length of 43 bathtubs full of tiger tails fixed in pixel-alignment problems
  • 817 hours spent in front of webcams
  • 189MB of irc lines written (compressed)
  • 80.000 automated builds to keep us in check
  • 2403 bugs in the code that had to die
  • A swimming-pool full of tears cried over graphics driver problems and crashers buried deep down in scripting engines, scenegraphs and (the pool allegedly was previously used for skateboarding by Greg KH)
  • 5 magic wands
  • 800 million pixels
  • 37843200000 frames rendered
  • Too many puppies
  • 7 virtual goats sacrificed during a total of 28 full moon ceremonies
  • 450 ml of holy water
  • 76 rock bands
  • 119 beats per minute
  • 8 bits alpha channels
  • 52 WTFs
  • The equivalent of 3 dead trees in recycled paper
  • 2 small branches of cedarwood for pencils
  • 1 box of crayons

Nothing like entirely made-up statistics.

tl;dr:

Plasma == ♥

… but also some really hard work, made possible by the sacrifices (see above) of many great people.

on July 15, 2014 12:54 PM

July 14, 2014

We're having our first hackfest of the utopic cycle this week on Tuesday, July 15th. You can catch us live in a hangout on ubuntuonair.com starting at 1900 UTC. Everything you need to know can be found on the wiki page for the event.

During the hangout, we'll be demonstrating writing a new manual testcase, as well as reviewing writing automated testcases. We'll also be answering any questions you have about contributing a testcase.

We need your help to write some new testcases! We're targeting both manual and automated testcases, so everyone is welcome to pitch in.

We are looking at writing and finishing some testcases for ubuntu studio and some other flavors. All you need is some basic tester knowledge and the ability to write in English.

If you know python, we are also going to be hacking on the toolkit helper for autopilot for the ubuntu sdk. That's a mouthful! Specifically it's the helpers that we use for writing autopilot tests against ubuntu-sdk applications. All app developers make use of these helpers, and we need more of them to ensure we have good coverage for all components developers use. 

Don't worry about getting stuck, we'll be around to help, and there are guides to, well, guide you!

Hope to see everyone there!
on July 14, 2014 06:09 PM

Content Hub to replace Friends API

Ubuntu App Developer Blog

As part of the continued development of the Ubuntu platform, the Content Hub has gained the ability to share links (and soon text) as a content type, just as it has been able to share images and other file-based content in the past. This allows applications to more easily, and more consistently, share things to a user’s social media accounts.

Consolidating APIs

facebook-sharing
Thanks to the collaborative work going on between the Content Hub and the Ubuntu Webapps developers, it is now possible for remote websites to be packaged with local user scripts that provide deep integration with our platform services. One of the first to take advantage of this is the Facebook webapp, which, while displaying remote content via a web browser wrapper, is also a Content Hub importer. This means that when you go to share an image from the Gallery app, the Facebook webapp is displayed as an optional sharing target for that image. If you select it, it will use the Facebook web interface to upload that image to your timeline, without having to go through the separate Friends API.

This work not only brings the social sharing user experience in line with the rest of the system’s content sharing experience, it also provides a much simpler API for application developers to use for accomplishing the same thing. As a result, the Friends API is being deprecated in favor of the new Content Hub functionality.

What it means for App Devs

Because this is an API change, there are things that you as an app developer need to be aware of. First, though the API is being deprecated immediately, it is not being removed from the device images until after the release of 14.10, which will continue to support the ubuntu-sdk-14.04 framework which included the Friends API. The API will not be included in the final ubuntu-sdk-14.10 framework, or any new 14.10-dev frameworks after -dev2.

After the 14.10 release in October, when device images start to build for utopic+1, the ubuntu-sdk-14.04 framework will no longer be on the images. So if you haven’t updated your Click package by then to use the ubuntu-sdk-14.10 framework, it won’t be available to install on devices with the new image. If you are not using the Friends API, this would simply be a matter of changing your package metadata to the new framework version.  For new apps, it will default to the newer version to begin with, so you shouldn’t have to do anything.

on July 14, 2014 04:52 PM

July 13, 2014

I wanted to take a look at all HTTP(S) traffic coming from an Android device, even if applications made direct connections without a proxy, so I set up a transparent Burp proxy. I decided to put the Proxy on my Kali VM on my laptop, but didn't want to run an AP on there, so I needed to get the traffic to there.

Network Setup

Network Topology Diagram

The diagram shows that my wireless lab is on a separate subnet from the rest of my network, including my laptop. The lab network is a NAT run by IPTables on the Virtual Router. While I certainly could've ARP poisoned the connection between the Internet Router and the Virtual Router, or even added a static route, I wanted a cleaner solution that would be easier to enable/disable.

Setting up the Redirect

I decided to use IPTables on the virtual router to redirect the traffic to my Kali Laptop. Furthermore, I decided to enable/disable the redirect based on logging in/out via SSH, but I needed to make sure the redirect would get torn down even if there's not a clean logout: i.e., the VM crashes, the SSH connection gets interrupted, etc. Enter pam_exec. By using the pam_exec module, we can have an arbitrary command run on log in/out, which can setup and reset the IPTables REDIRECT via an SSH tunnel to my Burp Proxy.

In order to get the command executed on any login/logout, I added the following line to /etc/pam.d/common-session:

session optional    pam_exec.so log=/var/log/burp.log   /opt/burp.sh

This launches the following script, which checks that it's being invoked for the right user and for SSH sessions, and then inserts or deletes the relevant IPTables rules.

#!/bin/bash

BURP_PORT=8080
BURP_USER=tap
LAN_IF=eth1

set -o nounset

function ipt_command {
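    # Print the two iptables commands: NAT-redirect LAN web traffic (ports 80/443) to the Burp port, and accept connections to that port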
    ACTION=$1
    echo iptables -t nat $ACTION PREROUTING -i $LAN_IF -p tcp -m multiport --dports 80,443 -j REDIRECT --to-ports $BURP_PORT\;
    echo iptables $ACTION INPUT -i $LAN_IF -p tcp --dport $BURP_PORT -j ACCEPT\;
}

if [ $PAM_USER != $BURP_USER ] ; then
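    # Not the dedicated tap user -- nothing to do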
    exit 0
fi

if [ $PAM_TTY != "ssh" ] ; then
    exit 0
fi

if [ $PAM_TYPE == "open_session" ] ; then
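    # Login: insert the redirect rules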
    CMD=`ipt_command -I`
elif [ $PAM_TYPE == "close_session" ] ; then
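    # Logout: remove them again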
    CMD=`ipt_command -D`
fi

date
echo $CMD

eval $CMD

This redirects all traffic incoming from $LAN_IF destined for ports 80 and 443 to local port 8080. This does have the downside of missing traffic on other ports, but this will get nearly all HTTP(S) traffic.
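As a quick sanity check (just a suggestion, not part of the original setup), you can list the NAT rules on the virtual router while a tap session is open:

iptables -t nat -L PREROUTING -n -v --line-numbers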

Of course, since the IPTables REDIRECT target still maintains the same interface as the original incoming connection, we need to allow our SSH Port Forward to bind to all interfaces. Add this line to /etc/ssh/sshd_config and restart SSH:

GatewayPorts clientspecified

Setting up Burp and SSH

Burp's setup is pretty straightforward, but since we're not configuring a proxy in our client application, we'll need to use invisible proxying mode. I actually put invisible proxying on a separate port (8081) so I have 8080 set up as a regular proxy. I also use the per-host certificate setting to get the "best" SSL experience.

Burp Setup

It turns out that there's an issue with OpenJDK 6 and SSL certificates. Apparently it will advertise algorithms not actually available, and then libnss will throw an exception, causing the connection to fail, and the client will retry with SSLv3 without SNI, preventing Burp from creating proper certificates. It can be worked around by disabling NSS in Java. In /etc/java-6-openjdk/security/java.security, comment out the line with security.provider.9=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg.
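If you would rather script that change than edit the file by hand, something along these lines should work (a sketch; double-check the exact provider line and path on your system first):

sudo sed -i 's|^security.provider.9=sun.security.pkcs11.SunPKCS11|#&|' /etc/java-6-openjdk/security/java.security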

Forwarding the port over to the wifilab server is pretty straightforward. You can either use the -R command-line option, or better, set things up in ~/.ssh/config.

Host wifitap
  User tap
  Hostname wifilab
  RemoteForward *:8080 localhost:8081

This logs in as user tap on host wifilab, forwarding local port 8081 to port 8080 on the wifilab machine. The * for a hostname is to ensure it binds to all interfaces (0.0.0.0), not just localhost.

Setting up Android

At this point, you should have a good setup for intercepting traffic from any client of the WiFi lab, but since I started off wanting to intercept Android traffic, let's optimize for that by installing our certificate. You can install it as a user certificate, but I'd rather do it as a system cert, and my testing tablet is already rooted, so it's easy enough.

You'll want to start by exporting the certificate from Burp and saving it to a file, say burp.der.

Android's system certificate store is in /system/etc/security/cacerts, and expects OpenSSL-hashed naming, like a0b1c2d3.0 for the certificate names. Another complication is that it's looking for PEM-formatted certificates, and the export from Burp is DER-formatted. We'll fix all that up in one chain of OpenSSL commands:

(openssl x509 -inform DER -outform PEM -in burp.der;
 openssl x509 -inform DER -in burp.der -text -fingerprint -noout
 ) > /tmp/`openssl x509 -inform DER -in burp.der -subject_hash -noout`.0

Android before ICS (4.0) uses OpenSSL versions below 1.0.0, so you'll need to use -subject_hash_old if you're using an older version of Android. Installing is a pretty simple task (replace HASH.0 with the filename produced by the command above):

$ adb push HASH.0 /tmp/HASH.0
$ adb shell
android$ su
android# mount -o remount,rw /system
android# cp /tmp/HASH.0 /system/etc/security/cacerts/
android# chmod 644 /system/etc/security/cacerts/HASH.0
android# reboot

Connect your Android device to your WiFi lab, ssh wifitap from your Kali install running Burp, and you should see your HTTP(S) traffic in Burp (excepting apps that use pinned certificates, that's another matter entirely). You can check your installed certificate from the Android Security Settings.

Good luck with your Android auditing!

on July 13, 2014 08:57 PM
While hoping to get a feature-complete stress-ng sooner rather than later, I found a few more ways to fiendishly stress a system.

Stress-ng 0.01.22 will be landing soon in Ubuntu 14.10 with three more stress mechanisms:
  • CPU affinity stressing; this rapidly changes the CPU affinity of the stress processes just to keep the scheduler busy with wasted effort.
  • Timer stressing using the real-time clock; this allows one to generate a large amount of timer interrupts, so it is a useful interrupt saturation test.
  • Directory entry thrashing; this creates and deletes a selectable number of zero length files and hence populates and destroys directory entries.
I have also removed the need to use rand() for random number generation for some of the stress tests and re-used the faster MWC "random" number generator to add in some well known and very simple math operations for CPU stressing.

Stress-ng now has 15 different simple stress mechanisms that exercise CPU, cache, memory, file system, I/O and CPU schedulers.  I could add more tests, but I think this is a large enough set to allow one to thrash a machine and see how well it performs under pressure.
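By way of illustration, exercising the new stressors might look something like the following (a sketch only; the option names are taken from how the stressors are described above and from later stress-ng releases, so check stress-ng --help for the exact spelling in 0.01.22):

stress-ng --cpu 4 --affinity 2 --timer 2 --dentry 2 --timeout 60s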
on July 13, 2014 04:47 PM

Hacking on launchpadlib

Dimitri John Ledkov

So here is a quick sample of my progress playing around with launchpadlib using lp-shell from lptools:
In [1]: lp
Out[1]: <launchpadlib.launchpad.Launchpad at 0x7f49ecc649b0>

In [2]: lp.distributions
Out[2]: <launchpadlib.launchpad.DistributionSet at 0x7f49ddf0e630>

In [3]: lp.distributions['ubuntu']
Out[3]: <distribution at https://api.launchpad.net/1.0/ubuntu>

In [4]: lp.distributions['ubuntu'].display_name
Out[4]: 'Ubuntu'

In [5]: lp.distributions['ubuntu'].summary
Out[5]: 'Ubuntu is a complete Linux-based operating system, freely available with both community and professional support.'

In [7]: import sys; print(sys.version)
3.4.1 (default, Jun 9 2014, 17:34:49)
[GCC 4.8.3]

There is not much yet, but it's a start. python3 port of launchpadlib is coming soon. It has been attempted a few times before and I am leveraging that work. Porting this stack has proven to be the most difficult python3 port I have ever done. But there is always python-libvirt that still needs porting ;-)

Some of the above is just merge proposals against launchpadlib & lazr.restfulclient, and requires modules that are not yet packaged in the archive. When trying it out, I'm still getting a lot of run-time asserts and things that haven't been picked up by e.g. pyflakes3 and have not been unit-tested yet.
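If you want an interactive session like the one above on the stock (Python 2) launchpadlib, lp-shell comes from the lptools package:

sudo apt-get install lptools
lp-shell    # drops you into an IPython session with a ready-made lp object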
on July 13, 2014 12:32 PM

July 12, 2014

My old desktop was seeing random drive errors on multiple drives, including a drive I only got a few months ago. And since my motherboard was about 5 years old, I decided it was time to replace it.

I asked the KWLUG mailing list if they had any advice on picking motherboards. The consensus seems to be pretty much “it’s still a crapshoot.” But I bit the bullet and reported back:

I bought a motherboard! An ASUS Z97-A

Mostly because I wanted Intel integrated graphics and I’ve got 3 monitors it needs to drive. And I was hoping the mSATA SSD card I got to replace the one in my Dell Mini 9 (that didn’t work) would fit in the m.2 slot. It doesn’t. Oh well.

I wanted to get it all set up while I was off for Canada Day. Except Canada Computers didn’t have any of my preferred CPU options. So I’ll be waiting for that to come in via NewEgg.

I gave myself a budget of about $500 for mobo, CPU and RAM and I’ll end up going over a little bit (mostly tax and shipping), and tried to build the best machine I could for that.

One of the things I did this time that I hadn’t done before was spec out a desktop machine at System76 and used that as a starting point. System76 is more explicit about things like chipsets for desktops than Zareason is. Which would be great, except they’re using the older H87 chipsets.

…Like the latest Ars System Guide Hot Rod. But that’s over 6 months old now. And they’re balancing their budget against having to buy a graphics card, which I don’t want to do.

I still have some unanswered questions about the Z97 chipset. It’s only been out for about a month. So who knows?

My laptop has mostly been my desktop for the last few years. But I want to knock that off because I’ve been developing back and neck problems. My desktop layout is okay ergonomically, at least better than anything I have for the laptop (including and especially my easy chair with a lapdesk, which is comfy, but kind of horrible on the neck). One of the things that’s holding me back is my desktop is 5 years old and was built cheap because I was mostly using it as a server by that point. I really want to make it something I want to use over the laptop (which is a very nice laptop). Which is why I ended up going somewhat upper-mid range.

That’s one of the nice things about building from parts, despite the lack of useful information: This is the 3rd motherboard I’ve put in this case. I replaced the PSU once a couple years ago so it’s quite sufficient to handle the new stuff. I’m keeping my old harddrives. I could keep the graphics card. I’ll need to buy an adapter for the DVD burner (and I’ve yet to decide if I’m going to do that, or buy a new SATA one or just go without). And I can keep my (frankly pretty awesome) monitors. So $500 gets me a kick-ass whole new machine.

Anyway, long story short, I still have a lot of questions about whether this was the best purchase, but I’m hopeful it’s a good one.

Aside: is Canada Computers really the only store in town that keeps desktop CPUs in stock anymore? I couldn’t get into the UW Tech Shop, but since they’re mostly iPads and crap now, I’m not optimistic. Computer XS doesn’t (at least the Waterloo one). Future Shop and Best Buy don’t. I even went into Neutron for the first time in over 15 years. Nope. Nobody.

It… didn’t go as well as I’d hoped:

So, anyway, I got the motherboard, CPU and put it all in my old case.

I booted up and all three monitors came up without any fuss, which has never happened for me. Awesome! This is great!

Then I tried to play a game.

Apparently the current snd_hda_intel ALSA drivers don’t like H97 and Z97 chipsets. The sound was staticky, crackly and distorted.

I’ve spent more than a few hours over the last week hunting around for a fix. I installed Windows on a spare harddrive to make sure it wasn’t a hardware problem (for which I needed to spend the $20 to get a new SATA DVD drive so I could run the Windows driver disk to get actual video, networking and sound support :P). And I found this thing on the Arch Wiki which, while not fixing the problem, did actually make it worse, leading me to conclude there was some sort of sound driver/pulseaudio problem.

Top tip: when trying to sort out sound driver problems for specific hardware the best thing to do is search for the hardware product id (in my case “8ca0”). That’s how I found this:

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1321421

Hurray! The workaround works great and now I’m back in business!

So I got burned by going with the bleeding edge, and I should know better. But, even though the information isn’t widely disseminated yet, there is a fix. And a workaround. I’m sure Ubuntu 14.10 will have no problem with it. It’s not as bad as the bleeding edge was years ago. If the fix was easier to find (and I’m going to work on that), it was easier getting going with Ubuntu than it was with Windows.

on July 12, 2014 05:31 AM

Saturday's the new Sunday

Paul Tagliamonte

Hello, World!

For those of you who enforce my Sundays on me (keep doing that, thank you!), I’ll be changing my Saturdays with my Sundays.

That’s right! In this new brave world, I’ll be taking Saturdays off, not Sundays. Feel free to pester me all day on Sunday, now!

This means, as a logical result, I will not be around tomorrow, Saturday.

Much love.

on July 12, 2014 12:41 AM

July 11, 2014

For those of you who haven’t seen Dekko in the software store, it’s a native IMAP email client for Ubuntu Touch. Dekko is essentially my development/ideas branch of my work on Trojita, which in the end is intended to replace Dekko in the store.

The reasoning behind publishing Dekko comes down to a few things. Trojita prides itself on being standards compliant: it already has a desktop client that uses QtWidgets, supports both Qt4 & Qt5, and also has a technical preview Harmattan QML front-end. That was great, as most of the initial work for the IMAP parts was already in place, so we didn’t need to “re-invent the wheel” (for the most part anyway). But we soon hit a point where we had surpassed what had previously been done, and the job became unwinding the intertwined style that QtWidget UIs naturally lead to, so that we can share the same business logic between all front-ends without losing standards compliance, keep supporting both Qt4 & Qt5, and maintain Trojita’s robust quality standards.

I am still relatively new to C++ so this is like one of those “in at the deep end” scenarios, resulting in the IETF RFC specifications and Qt’s documentation becoming the majority of my daily reading.  Dekko was born out of the need to understand the separation (call it a learning project) and to devise a way to create common components that can be shared between all front-ends. This “learning project” resulted in a functional but limited email client, so I decided to publish it in the hope of getting as much feedback, bug reports or design ideas as possible, and to use this to ensure Trojita becomes a rock solid native email client for Ubuntu.

A quick list of current features in Dekko,

  • Support for viewing of plain text messages. We cannot show HTML messages, due to not being able to block network requests with QtWebKit's custom URL scheme functionality (if you are an Oxide dev who happens to be reading this, "wink wink"  :-D ). But it is great for viewing all your Launchpad mail.
  • Navigating the mailbox hierarchy. It's not entirely obvious at first (open to new ideas here): if you see a progression arrow on a mailbox, tapping the arrow displays the nested mailboxes. Otherwise tapping elsewhere shows the messages within that mailbox.
  • Composing and replying to messages. This utilizes the bottom edge, so pulling up on an opened message will set up a reply to the opened message. One thing to note with replying to messages: at the moment it basically does a "reply all" action, so you need to delete or add recipients to the message manually until support for mailing lists and other reply modes is implemented.
  • Supports defining a single sender identity for mail submission.
  • Mark message as deleted, expunge mailbox and auto-expunge on marked for deletion options.
  • Mark all messages as read.
  • Offline, Online and Bandwidth saving mode, perfect for mobile data connections

There is a known bug with the message list view sometimes not updating properly, but it can usually be resolved by closing and reopening that mailbox.

So if you haven’t already, please give it a try, and if you have any design/implementation ideas, issues, bugs or anything else you wish to say, please report them to the Dekko project on Launchpad: https://launchpad.net/dekko.

Note: Please don’t file bugs against upstream Trojita, unless you are using a build of Trojita and not a Dekko build. 

And finally a few snaps to whet the appetite

Screenshots: Settings tabs; Message reply pulled from bottom edge; Message view; Message list view; Saved drafts page; Message composer opened; Message composer pulled from bottom edge.
on July 11, 2014 12:47 PM

The PHP Group has released new versions of the popular scripting language that fix a number of bugs, including two in OpenSSL. The flaws fixed in OpenSSL don’t rise to the level of the major bugs such as Heartbleed that have popped up in the last few months. But PHP 5.5.14 and 5.4.30 both contain fixes for the two vulnerabilities, one of which is related to the way that OpenSSL handles timestamps on some certificates, and the other of which also involves timestamps, but in a different way.

Source:

http://threatpost.com/php-fixes-openssl-flaws-in-new-releases/106908

Submitted by: Dennis Fisher

 

on July 11, 2014 07:00 AM

July 10, 2014

S07E15 – The One with the Thumb

Ubuntu Podcast from the UK LoCo

Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are in Studio L for Season Seven, Episode Fifteen of the Ubuntu Podcast!

In this week’s show:-

We’ll be back next week, when we’ll be interviewing David Hermann about his MiracleCast project, and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on July 10, 2014 08:02 PM
Transcoding video is a very resource intensive process.

It can take many minutes to process a small, 30-second clip, or even hours to process a full movie.  There are numerous, excellent, open source video transcoding and processing tools freely available in Ubuntu, including libav-tools, ffmpeg, mencoder, and handbrake.  Surprisingly, however, none of those support parallel computing easily or out of the box.  And disappointingly, I couldn't find any MPI support readily available either.

I happened to have an Orange Box for a few days recently, so I decided to tackle the problem and develop a scalable, parallel video transcoding solution myself.  I'm delighted to share the result with you today!

When it comes to commercial video production, it can take thousands of machines and hundreds of compute hours to render a full movie.  I had the distinct privilege some time ago to visit WETA Digital in Wellington, New Zealand and tour the render farm that processed The Lord of the Rings trilogy, Avatar, and The Hobbit, etc.  And just a few weeks ago, I visited another quite visionary, cloud savvy digital film processing firm in Hollywood, called Digital Film Tree.

While Windows and Mac OS may be the first platforms that come to mind when you think about front end video production, Linux is far more widely used for batch video processing, with Ubuntu, in particular, being used extensively at both WETA Digital and Digital Film Tree, among others.

While I could have worked with any of a number of tools, I settled on avconv (the successor(?) of ffmpeg), as it was the first one that I got working well on my laptop, before scaling it out to the cluster.

I designed an approach on my whiteboard, in fact quite similar to some work I did parallelizing and scaling the john-the-ripper password quality checker.

At a high level, the algorithm looks like this:
  1. Create a shared network filesystem, simultaneously readable and writable by all nodes
  2. Have the master node split the work into even sized chunks for each worker
  3. Have each worker process their segment of the video, and raise a flag when done
  4. Have the master node wait for each of the all-done flags, and then concatenate the result
And that's exactly what I implemented in a new transcode charm and transcode-cluster bundle.  It provides linear scalability and performance improvements as you add additional units to the cluster.  A transcode job that takes 24 minutes on a single node is down to 3 minutes on 8 worker nodes in the Orange Box, using Juju and MAAS against physical hardware nodes.


For the curious, the real magic is in the config-changed hook, which has decent inline documentation.



The trick, for anyone who might make their way into this by way of various StackExchange questions and (incorrect) answers, is in the command that splits up the original video (around line 54):

avconv -ss $start_time -i $filename -t $length -s $size -vcodec libx264 -acodec aac -bsf:v h264_mp4toannexb -f mpegts -strict experimental -y ${filename}.part${current_node}.ts

And the one that puts it back together (around line 72):

avconv -i concat:"$concat" -c copy -bsf:a aac_adtstoasc -y ${filename}_${size}_x264_aac.${format}
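To make the overall flow concrete, here is a rough, standalone sketch of how the split and join steps could fit together on a single machine. It is illustrative only, not the charm's actual config-changed hook (it omits the shared filesystem, the DONE flags, and all error handling), and it assumes avprobe from libav-tools is available to read the duration:

#!/bin/bash
# Illustrative sketch only -- not the transcode charm's config-changed hook.
filename=Code_Rush.mp4          # input video
size=1280x720                   # target frame size
nodes=8                         # number of workers

# Total duration in seconds (assumes avprobe from libav-tools is installed).
duration=$(avprobe -show_format "$filename" 2>/dev/null | awk -F= '/^duration=/{print int($2)}')
length=$(( (duration + nodes - 1) / nodes ))

# Each worker transcodes its own chunk (sequential here; the charm runs one chunk per node).
for current_node in $(seq 0 $((nodes - 1))); do
    start_time=$(( current_node * length ))
    avconv -ss $start_time -i "$filename" -t $length -s $size \
        -vcodec libx264 -acodec aac -bsf:v h264_mp4toannexb -f mpegts \
        -strict experimental -y "${filename}.part${current_node}.ts"
done

# Join the MPEG-TS chunks back into a single MP4, as in the concat command above.
concat=$(ls "${filename}".part*.ts | paste -sd'|')
avconv -i concat:"$concat" -c copy -bsf:a aac_adtstoasc -y "${filename}_${size}_x264_aac.mp4"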

I found this post and this documentation particularly helpful in understanding and solving the problem.

In any case, once deployed, my cluster bundle looks like this.  8 units of transcoders, all connected to a shared filesystem, and performance monitoring too.


I was able to leverage the shared-fs relation provided by the nfs charm, as well as the ganglia charm to monitor the utilization of the cluster.  You can see the spikes in the cpu, disk, and network in the graphs below, during the course of a transcode job.




For my testing, I downloaded the movie Code Rush, freely available under the CC-BY-NC-SA 3.0 license.  If you haven't seen it, it's an excellent documentary about the open source software around Netscape/Mozilla/Firefox and the dotcom bubble of the late 1990s.

Oddly enough, the stock, 746MB high quality MP4 video doesn't play in Firefox, since it's an mpeg4 stream, rather than H264.  Fail.  (Yes, of course I could have used mplayer, vlc, etc., that's not the point ;-)


Perhaps one of the most useful, intriguing features of HTML5 is its support for embedding multimedia, video, and sound into webpages.  HTML5 even supports multiple video formats.  Sounds nice, right?  If only it were that simple...  As it turns out, different browsers have, and lack, support for the different formats.  While there is no one format to rule them all, MP4 is supported by the majority of browsers, including the two that I use (Chromium and Firefox).  This matrix from w3schools.com illustrates the mess.

http://www.w3schools.com/html/html5_video.asp

The file format, however, is only half of the story.  The audio and video contents within the file also have to be encoded and compressed with very specific codecs, in order to work properly within the browsers.  For MP4, the video has to be encoded with H264, and the audio with AAC.

Among the various brands of phones, webcams, digital cameras, etc., the output format and codecs are seriously all over the map.  If you've ever wondered what's happening when you upload a video to YouTube or Facebook, and it's a while before it's ready to be viewed, it's being transcoded and scaled in the background.

In any case, I find it quite useful to transcode my videos to MP4/H264/AAC format.  And for that, a scalable, parallel computing approach to video processing would be quite helpful.

During the course of the 3 minute run, I liked watching the avconv log files of all of the nodes, using Byobu and Tmux in a tiled split screen format, like this:


Also, the transcode charm installs an Apache2 webserver on each node, so you can expose the service and point a browser to any of the nodes, where you can find the input, output, and intermediary data files, as well as the logs and DONE flags.



Once the job completes, I can simply click on the output file, Code_Rush.mp4_1280x720_x264_aac.mp4, and see that it's now perfectly viewable in the browser!


In case you're curious, I have verified the same charm with a couple of other OGG, AVI, MPEG, and MOV input files, too.


Beyond transcoding the format and codecs, I have also added configuration support within the charm itself to scale the video frame size, too.  This is useful to take a larger video, and scale it down to a more appropriate size, perhaps for a phone or tablet.  Again, this resource intensive procedure perfectly benefits from additional compute units.


File format, audio/video codec, and frame size changes are hardly the extent of video transcoding workloads.  There are hundreds of options and thousands of combinations, as the manpages of avconv and mencoder attest.  All of my scripts and configurations are free software, open source.  Your contributions and extensions are certainly welcome!

In the mean time, I hope you'll take a look at this charm and consider using it, if you have the need to scale up your own video transcoding ;-)

Cheers,
Dustin
on July 10, 2014 01:01 PM