November 26, 2014


Randall Ross

A while back, as part of my new role, I began looking for opportunities to:

  1. Challenge the status quo, and,
  2. Connect people who want to solve big problems.

(Luckily, the two are closely related.)

Recently, I was introduced to some fine folks at SiteOx in Franklin, TN (just outside of Nashville) who happen to have some really fast POWER8 systems providing infrastructure-as-a-service (IaaS).

I mentioned that previously unknown tidbit to some of my colleagues (who are awesome Juju Charmers) to see if/how the service could be used to speed up Juju Charm development.

As it turns out, it can! In case you missed it, Matt Bruzek, of Juju Charmer fame, figured it all out and then wrote a concise guide to doing just that. Check it out here, and then...

Click the button to feel the POWER!

Thanks Matt, and thanks SiteOx.

on November 26, 2014 12:03 AM

Pondering Contingencies

Stephen Michael Kellat

Preparedness is an odd topic. As people in the United States may recall from last week, snow abounded in certain parts of the country. Although not located in the New York State community of Buffalo, I am located down the Lake Erie shoreline in Ashtabula. I too am seasonally afflicted with Lake Effect Snow Storms.

Heck, I have even seen Thunder Snow!

Following the major snow, I got to see a "High Wind Warning". That was not fun, as it led to a blackout. The various UPS units around the house started screaming. Once that happened, I had multiple systems to shut down. The Xubuntu meeting log this week even shows me shutting things down while departing mid-way. As you might imagine, overhead electrical lines do not play nicely with 50 mile per hour wind gusts.

When using a computer, you never truly have an ideal environment for the bare metal to operate in. Although contemporary life leaves the impression that electricity and broadband service should be constant, let alone stable, bad things do happen. I already have multiple UPS units scattered around as it is.

Donald Rumsfeld, the former US Secretary of Defense, had a saying that fits:

As you know, you go to war with the army you have, not the army you might want or wish to have at a later time.

I live in what our census officials term a "Micropolitan Statistical Area" as opposed to a "Metropolitan Statistical Area", so I know it is small. I know our infrastructure is not the greatest. Planning ahead here means being ready to be without electricity for an extended period of time.

While the Buffalo Bills football team had to move their home game to Detroit due to their stadium filling with snow, imagine the flooding aftermath that may happen when that snow melts. Extreme cases like that are hard to plan for but at least the game is going to happen somewhere. What contingencies have you at least thought about working around?

on November 26, 2014 12:00 AM

November 25, 2014

The AWS Lambda Walkthrough 2 uses AWS Lambda to automatically resize images added to one bucket, placing the resulting thumbnails in another bucket. The walkthrough documentation has a mix of aws-cli commands, instructions for hand editing files, and steps requiring the AWS console.

For my personal testing, I converted all of these to command line instructions that can simply be copied and pasted, making them more suitable for adapting into scripts and for eventual automation. I share the results here in case others might find this a faster way to get started with Lambda.

These instructions assume that you have already set up and are using an IAM user / aws-cli profile with admin credentials.
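Before starting, it can save time to confirm that the profile actually works. Here is a minimal sketch (the `check_profile` helper and the "admin" profile name are my own, not part of the walkthrough; substitute your own profile name):

```shell
# Sanity-check an aws-cli profile before starting.
check_profile() {
  local profile=$1
  if aws --profile "$profile" iam get-user >/dev/null 2>&1; then
    echo "profile $profile is usable"
  else
    echo "profile $profile is not usable"
    return 1
  fi
}

check_profile admin || true
```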

The following is intended as a companion to the Amazon walkthrough documentation, simplifying the execution steps for command line lovers. Read the AWS documentation itself for more details explaining the walkthrough.

Set up

Set up environment variables describing the associated resources:

# Change to your own unique S3 bucket name:
source_bucket=your-unique-bucket-name

# Do not change this. Walkthrough code assumes this name:
target_bucket=${source_bucket}resized

# Names referenced by the rest of the commands below:
function=CreateThumbnail
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
lambda_invocation_role_name=lambda-$function-invocation
lambda_invocation_access_policy_name=lambda-$function-invocation-access
log_group_name=/aws/lambda/$function

Install some required software:

sudo apt-get install nodejs nodejs-legacy npm
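If you want to verify the toolchain before building the deployment package, a quick sketch (the `check_tools` helper is my own, not part of the walkthrough):

```shell
# Report which required tools are on the PATH; return non-zero if any
# are missing.
check_tools() {
  local tool missing=0
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "found $tool"
    else
      echo "missing $tool"
      missing=1
    fi
  done
  return $missing
}

check_tools node npm zip || echo "install the missing tools before continuing"
```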

Step 1.1: Create Buckets and Upload a Sample Object (walkthrough)

Create the buckets:

aws s3 mb s3://$source_bucket
aws s3 mb s3://$target_bucket

Upload a sample photo:

# by Hatalmas:
wget -q -O HappyFace.jpg \

aws s3 cp HappyFace.jpg s3://$source_bucket/

Step 2.1: Create a Lambda Function Deployment Package (walkthrough)

Create the Lambda function nodejs code:

# JavaScript code as listed in walkthrough
wget -q -O $function.js \

Install packages needed by the Lambda function code. Note that this is done under the local directory:

npm install async gm # aws-sdk is not needed

Put all of the required code into a ZIP file, ready for uploading:

zip -r $function.zip $function.js node_modules

Step 2.2: Create an IAM Role for AWS Lambda (walkthrough)

Create the IAM role that will be used by the Lambda function when it runs:

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
      }]
    }' \
  --output text \
  --query 'Role.Arn')
echo lambda_execution_role_arn=$lambda_execution_role_arn

Attach a policy defining what the Lambda function is allowed to do and access. This is slightly tighter than the generic role policy created with the IAM console:

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["logs:*"],
        "Resource": "arn:aws:logs:*:*:*"
      },
      {
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::'$source_bucket'/*"
      },
      {
        "Effect": "Allow",
        "Action": ["s3:PutObject"],
        "Resource": "arn:aws:s3:::'$target_bucket'/*"
      }
    ]
  }'

Step 2.3: Upload the Deployment Package and Invoke it Manually (walkthrough)

Upload the Lambda function, specifying the IAM role it should use and other attributes:

# Timeout increased from walkthrough based on experience
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --role "$lambda_execution_role_arn" \
  --mode event \
  --handler "$function.handler" \
  --timeout 30 \
  --runtime nodejs

Create fake S3 event data to pass to the Lambda function. The key here is the source S3 bucket and key:

cat > $function-data.json <<EOM
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "s3": {
        "bucket": { "name": "$source_bucket" },
        "object": { "key": "HappyFace.jpg" }
      }
    }
  ]
}
EOM

Invoke the Lambda function, passing in the fake S3 event data:

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-data.json"

Look in the target bucket for the converted image. It could take a while to show up since the Lambda function is running asynchronously:

aws s3 ls s3://$target_bucket
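Rather than re-running that `ls` by hand, you could poll until the thumbnail shows up. A sketch (the `wait_for_s3_key` helper and its retry count are my own, not part of the walkthrough):

```shell
# Poll an S3 bucket until the named key appears, or give up after a
# number of tries (2-second interval between attempts).
wait_for_s3_key() {
  local bucket=$1 key=$2 tries=${3:-30} i
  for ((i = 0; i < tries; i++)); do
    if aws s3 ls "s3://$bucket/$key" 2>/dev/null | grep -q "$key"; then
      echo "found $key in s3://$bucket"
      return 0
    fi
    sleep 2
  done
  echo "gave up waiting for $key" >&2
  return 1
}

# Example: wait_for_s3_key "$target_bucket" resized-HappyFace.jpg
```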

Look at the Lambda function log output in CloudWatch:

aws logs describe-log-groups \
  --output text \
  --query 'logGroups[*].[logGroupName]'

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName')
echo log_stream_names="'$log_stream_names'"
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Step 3.1: Create an IAM Role for Amazon S3 (walkthrough)

Create the IAM role that S3 may assume when invoking the Lambda function:

lambda_invocation_role_arn=$(aws iam create-role \
  --role-name "$lambda_invocation_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "s3.amazonaws.com"
          },
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringLike": {
              "sts:ExternalId": "arn:aws:s3:::*"
            }
          }
      }]
    }' \
  --output text \
  --query 'Role.Arn')
echo lambda_invocation_role_arn=$lambda_invocation_role_arn

Attach a policy allowing the role to invoke the Lambda function:

aws iam put-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": ["lambda:InvokeFunction"],
         "Resource": ["*"]
       }
     ]
   }'

Step 3.2: Configure a Notification on the Bucket (walkthrough)

Get the Lambda function ARN:

lambda_function_arn=$(aws lambda get-function-configuration \
  --function-name "$function" \
  --output text \
  --query 'FunctionARN')
echo lambda_function_arn=$lambda_function_arn

Tell the S3 bucket to invoke the Lambda function when new objects are created (or overwritten):

aws s3api put-bucket-notification \
  --bucket "$source_bucket" \
  --notification-configuration '{
    "CloudFunctionConfiguration": {
      "CloudFunction": "'$lambda_function_arn'",
      "InvocationRole": "'$lambda_invocation_role_arn'",
      "Event": "s3:ObjectCreated:*"
    }
  }'

Step 3.3: Test the Setup (walkthrough)

Copy your own jpg and png files into the source bucket:

aws s3 cp $myimages s3://$source_bucket/
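If `$myimages` holds a single file, that works as-is; to copy a whole directory of images, a small loop helps. A sketch (the `upload_images` helper and the example directory are my own, not part of the walkthrough):

```shell
# Copy every .jpg and .png in a directory to the given S3 bucket.
upload_images() {
  local dir=$1 bucket=$2 image
  for image in "$dir"/*.jpg "$dir"/*.png; do
    [ -e "$image" ] || continue   # skip patterns that matched nothing
    aws s3 cp "$image" "s3://$bucket/"
  done
}

# Example: upload_images "$HOME/Pictures" "$source_bucket"
```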

Look for the resized images in the target bucket:

aws s3 ls s3://$target_bucket

Check out the environment

These handy commands let you review the related resources in your account:

aws lambda list-functions \
  --output text \
  --query 'Functions[*].[FunctionName]'

aws lambda get-function \
  --function-name "$function"

aws iam list-roles \
  --output text \
  --query 'Roles[*].[RoleName]'

aws iam get-role \
  --role-name "$lambda_execution_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies  \
  --role-name "$lambda_execution_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws iam get-role \
  --role-name "$lambda_invocation_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies  \
  --role-name "$lambda_invocation_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws s3api get-bucket-notification \
  --bucket "$source_bucket"

Clean up

If you are done with the walkthrough, you can delete the created resources:

aws s3 rm s3://$target_bucket/resized-HappyFace.jpg
aws s3 rm s3://$source_bucket/HappyFace.jpg
aws s3 rb s3://$target_bucket/
aws s3 rb s3://$source_bucket/

aws lambda delete-function \
  --function-name "$function"

aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"

aws iam delete-role \
  --role-name "$lambda_execution_role_name"

aws iam delete-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name"

aws iam delete-role \
  --role-name "$lambda_invocation_role_name"

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  echo "deleting log-stream $log_stream_name"
  aws logs delete-log-stream \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name"
done

aws logs delete-log-group \
  --log-group-name "$log_group_name"

If you try these instructions, please let me know in the comments where you had trouble or experienced errors.

Original article:

on November 25, 2014 09:36 PM

Sorry I had neglected this for a bit, but the latest version of Galileo is now available in my PPA.  It has also been uploaded for 12.04, 14.04, 14.10, and vivid (15.04).  Please test and if you find any issues, shoot me an email at

on November 25, 2014 07:01 PM

Ubuntu Incubator

Michael Hall

The Ubuntu Core Apps project has proven that the Ubuntu community is not only capable of building fantastic software, but is capable of meeting the same standards, deadlines and requirements that are expected from projects developed by employees. One of the things that I think made Core Apps so successful was the project management support that they all received from Alan Pope.

Project management is common, even expected, for software developed commercially, but it’s just as often missing from community projects. It’s time to change that. I’m kicking off a new personal[1] project, I’m calling it the Ubuntu Incubator.

The purpose of the Incubator is to help community projects bootstrap themselves, obtain the resources they need to run their project, and put together a solid plan that will set them on a successful, sustainable path.

To that end I’m going to devote one month to a single project at a time. I will meet with the project members regularly (weekly or every-other week), help define a scope for their project, create a spec, define work items and assign them to milestones. I will help them get resources from other parts of the community and Canonical when they need them, promote their work and assist in recruiting contributors. All of the important things that a project needs, other than direct contributions to the final product.

I’m intentionally keeping the scope of my involvement very focused and brief. I don’t want to take over anybody’s project or be a co-founder. I will take on only one project at a time, so that project gets all of my attention during their incubation period. The incubation period itself is very short, just one month, so that I will focus on getting them setup, not on running them.  Once I finish with one project, I will move on to the next[2].

How will I choose which project to incubate? Since it’s my time, it’ll be my choice, but the most important factor will be whether or not a project is ready to be incubated. “Ready” means they are more than just an idea: they are both possible to accomplish and feasible to accomplish with the person or people already involved, the implementation details have been mostly figured out, and they just need help getting the ball rolling. “Ready” also means it’s not an existing project looking for a boost; while we need to support those projects too, that’s not what the Incubator is for.

So, if you have a project that’s ready to go, but you need a little help taking that first step, you can let me know by adding your project’s information to this etherpad doc[3]. I’ll review each one and let you know if I think it’s ready, needs to be defined a little bit more, or not a good candidate. Then each month I’ll pick one and reach out to them to get started.

Now, this part is important: don’t wait for me! I want to speed up community innovation, not slow it down, so even if I add your project to the “Ready” queue, keep on doing what you would do otherwise, because I have no idea when (or if) I will be able to get to yours. Also, if there are any other community leaders with project management experience who have the time and desire to help incubate one of these project, go ahead and claim it and reach out to that team.

[1] While this complements my regular job, it’s not something I’ve been asked to do by Canonical, and to be honest I have enough Canonical-defined tasks to consume my working hours. This is me with just my community hat on, and I’m inclined to keep it that way.

[2] I’m not going to forget about projects after their month is up, but you get 100% of the time I spend on incubation during your month, after that my time will be devoted to somebody else.

[3] I’m using Etherpad to keep the process as lightweight as possible, if we need something better in the future we’ll adopt it then.

on November 25, 2014 06:47 PM


  • Review ACTION points from previous meeting
    • None
  • V Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (psivaa)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair


Meeting Actions
  • matsubara to chase someone that can update release bugs report

Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)
  • smb reports: “I did a few stable uploads for Xen in Utopic and Trusty. Though zul, you may want to hold back doing cloud-archive versions. There is more to come. ;) Also from some email report on xen-devel there are a few things missing to make openstack and xen a better experience (bug #1396068 and bug #1394327 at least). I am working on getting things applied and SRUed.”

Agree on next meeting date and time

Next meeting will be on Tuesday, Dec 2nd at 16:00 UTC in #ubuntu-meeting. kickinz1 will chair.

on November 25, 2014 05:55 PM

There has been a lot of discussion recently where there is strong disagreement, even about how to discuss the disagreement. Here’s a few thoughts on the matter.

The thing I personally find the most annoying is this: when someone thinks what someone else says is inappropriate and says so, the inevitable response seems to be to scream censorship. When people do that, I’m pretty sure they don’t know what the word censorship actually means. Debian/Ubuntu/Insert Project Name Here resources are not public spaces, and no government is telling people what they can and can’t say.

When you engage in speech and people respond to that speech, even if you don’t feel all warm and fuzzy after reading the response, it’s not censorship. It’s called discussion.

When someone calls out speech that they think is inappropriate, the proper response is not to blame a Code of Conduct or some other set of rules. Projects that have a code, also have a process for dealing with claims the code has been violated. Unless someone invokes that process (which almost never happens), the code is irrelevant. What’s relevant is that someone is having a problem with what or how you are saying something and are in some way hurt by it.

Let’s focus on that. The rules are irrelevant, what matters is working together in a collegial way. I really don’t think project members actively want other project members to feel bad/unsafe, but it’s hard to get outside ones own defensive reaction to being called out. So please pay less attention to how you’re feeling about things and try to see things from the other side. If we can all do a bit more of that, then things can be better for all of us.

Final note: If you’ve gotten this far and thought “Oh, that other person is doing this to me”, I have news for you – it’s not just them.

on November 25, 2014 04:47 PM

Just in time for the end of the year holidays…

I have a new edition of Ubuntu Unleashed 2015 Edition (affiliate link), now available for preorder. This book is intended for intermediate to advanced users.

I also failed to mention on this blog the newest edition of The Official Ubuntu Book (another affiliate link), now in its eighth edition. The book continues to serve as a quality introduction for newcomers to Ubuntu, both the software and the community that surrounds it.

on November 25, 2014 04:27 PM

As some of you might know, I was appointed as the Xubuntu website lead after taking a 6-month break from leadership in Xubuntu.

Since this position was passed on from Lyz (who is, by the way, doing a fantastic job as our marketing lead!), I wouldn’t have wanted to be nominated unless I could actually bring something to the table. Thus, the xubuntu-v-website blueprint lists all the new (and old) projects that I am driving to finish during the Vivid cycle.

Now, please let me briefly introduce you to the field which I’m currently improving…

Responsive design!

Over the past few days, I have been preparing responsive stylesheets for the Xubuntu website. While Xubuntu isn’t exactly targeted at devices that would themselves have a great need for fully responsive design, we do think that it is important to be available for users browsing with those devices as well.

Currently, we have four stylesheets in addition to the regular ones. Two of these are actually useful even for people without small-resolution screens; they improve the user experience for situations when the browser viewport is simply limited.

In the first phase of building the responsive design, I have had three main goals. Maybe the most important aspect is to avoid horizontal scrolling. Accomplishing this already improves the browsing experience a lot especially on small screens. The two other goals are to make some of the typography adjust better to small resolutions while keeping it readable and keeping links, especially internal navigation, easily accessible by expanding their clickable area.

At this point, I’ve pretty much accomplished the first goal, but still have work to do with the other two. There are also some other visual aspects that I would like to improve before going public, but ultimately, they aren’t release-critical changes and can wait for later.

For now, the new stylesheets are only used in the staging site. Once we release them for the wider public, or if we feel like we need some broader beta testing, we will reach for people with mobile (and other small-resolution) devices on the Xubuntu development mailing list for testing.

If you can’t wait to have a preview and are willing to help testing, show up on our development IRC channel #xubuntu-devel on Freenode and introduce yourself. I’ll make sure to get a hold of you sooner than later.

What about Xubuntu documentation?

The Xubuntu documentation main branch has responsive design stylesheets applied already. This change has yet to make it to any release (including the development version), but will land at least in Vivid soon enough.

Once I have prepared the responsive stylesheets for the Xubuntu online documentation frontpage, I will coordinate an effort to get the online documentation to use the responsive design as soon as possible. Expect some email about this on the development mailing list as well.

While we are at it… Paperspace

On a similar note… Last night I released the responsive design that I had been preparing for quite some time for Paperspace, or in other words, the WordPress theme for this blog (and the other blogs in this domain). That said, if you see anything that looks off in any browser resolution below 1200 pixels wide, be in touch. Thank you!

on November 25, 2014 02:49 PM

In a presentation to my colleagues last week, I shared a few tips I've learned over the past 8 years, maintaining a reasonably active and read blog.  I'm delighted to share these with you now!

1. Keep it short and sweet

Too often, we spend hours or days working on a blog post, trying to create an epic tome.  I have dozens of draft posts I'll never finish, as they're just too ambitious, and I should really break them down into shorter, more manageable articles.

Above, you can see Abraham Lincoln's Gettysburg Address, from November 19, 1863.  It's merely 3 paragraphs, 10 sentences, and less than 300 words.  And yet it's one of the most powerful messages ever delivered in American history.  Lincoln wrote it himself on the train to Gettysburg, and delivered it as a speech in less than 2 minutes.

2. Use memorable imagery

Particularly, you need one striking image at the top of your post.  This is what most automatic syndicates or social media platforms will pick up and share, and will make the first impression on phones and tablets.

3. Pen a catchy, pithy title

More people will see or read your title than the post itself.  It's sort of like the chorus to that song you know, but you don't know the rest of the lyrics.  A good title attracts readers and invites re-shares.

4. Publish midweek

This is probably more applicable for professional, rather than hobbyist, topics, but the data I have on my blog (1.7 million unique page views over 8 years), is that the majority of traffic lands on Tuesday, Wednesday, and Thursday.  While I'm writing this very post on a rainy Saturday morning over a cup of coffee, I've scheduled it to publish at 8:17am (US Central time) on the following Tuesday morning.

5. Share to your social media circles

My posts are generally professional in nature, so I tend to share them on G+, Twitter, and LinkedIn.  Facebook is really more of a family-only thing for me, but you might choose to share your posts there too.  With the lamentable death of the Google Reader a few years ago, it's more important than ever to share links to posts on your social media platforms.

6. Hope for syndication, but never expect it

So this is the one "tip" that's really out of your control.  If you ever wake up one morning to an overflowing inbox, congratulations -- your post just went "viral".  Unfortunately, this either "happens", or it "doesn't".  In fact, it almost always "doesn't" for most of us.

7. Engage with comments only when it makes sense

If you choose to use a blog platform that allows comments (and I do recommend you do), then be a little careful about when and how to engage in the comments.  You can easily find yourself overwhelmed with vitriol and controversy.  You might get a pat on the back or two.  More likely, though, you'll end up under a bridge getting pounded by a troll.  Rather than waste your time fighting a silly battle with someone who'll never admit defeat, start writing your next post.  I ignore trolls entirely.

A Case Study

As a case study, I'll take as an example the most successful post I've written: Fingerprints are Usernames, Not Passwords, with nearly a million unique page views.

  1. The entire post is short and sweet, weighing in at under 500 words and about 20 sentences
  2. One iconic, remarkable image at the top
  3. A succinct, expressive title
  4. Published on Tuesday, October 1, 2013
  5. 1561 +1's on G+, 168 retweets on Twitter
  6. Shared on Reddit and HackerNews (twice)
  7. 434 comments, some not so nice

on November 25, 2014 02:17 PM

Welcome to the Ubuntu Weekly Newsletter. This is issue #392 for the week November 10 – 16, 2014, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on November 25, 2014 05:33 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #393 for the week November 17 – 23, 2014, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on November 25, 2014 05:33 AM

Verifying Verification

Stephen Michael Kellat

Please remember that this is written by myself alone. Any reference to "we" below either refers to the five human beings that currently comprise the LoCo Council that I am part of or to the Ubuntu Realm in general. I apologize for any difficulties or consternation caused.

From the perspective of a community team, it can seem daunting when a "case management" bug is imposed relative to Verification or Re-Verification of a team. Many people wonder what that may mean. It might seem like a lot of work. It truly isn't.

In the Verification process, LoCo Council is checking to see if a community team has taken care of setting up some bare minimums. There is a basic expectation of some baseline things that all community teams should possess. Those items include:

  • A "Point of Contact" is set as the team's owner on Launchpad and is reachable
  • Online resources including IRC channel, wiki page, website, e-mail list, Forum/Discourse section, and LoCo Team Portal entry are set up
  • Your team conforms to naming standards

Some things that are useful to mention in a write-up to the LoCo Council include but are not limited to:

  • Links to your social media presences
  • Do you have members of your community who are part of the Ubuntu Members set?
  • What is your roadmap for the future?
  • What brought you to this point?

This doesn't have to be a magnum opus. An application does not even need copious pictures. What the Council needs are just the facts, so that members of the Council can see at a glance where your community stands. From there, we end up asking what your community's needs are and how the Council might assist you. If you've taken over three hours to put together the application, you have probably put too much effort into it. It is meant to be a quick process, not a major high-stakes presentation.

We have only a fraction of community teams checked out to show that they in fact have the baseline items set up. We could improve on that considerably this cycle. There is a page on the wiki with links to a template for building your team's own application. If your team isn't currently verified, you can write to the Council at to set up a time and date when the Council can consider it.

on November 25, 2014 12:00 AM

November 24, 2014

Sorry, this is long, but hang in there.

A little while back I wrote a blog post that seemed to inspire some people and ruffle the feathers of some others. It was designed as a conversation-starter for how we can re-energize leadership in Ubuntu.

When I kicked off the blog post, Elizabeth quite rightly gave me a bit of a kick in the spuds about not providing a place to have a discussion, so I amended the blog post to a link to this thread where I encourage your feedback and participation.

Rather unsurprisingly, there was some good feedback, before much of it started wandering off the point a little bit.

I was delighted to see that Laura posted that a Community Council meeting on the 4th Dec at 5pm UTC has been set up to further discuss the topic. Thanks, CC, for taking the time to evaluate and discuss the topic in-hand.

I plan on joining the meeting, but I wanted to post five proposed recommendations that we can think about. Again, please feel free to share feedback about these ideas on the mailing list.

1. Create our Governance Mission/Charter

I spent a bit of time trying to find the charter or mission statements for the Community Council and Technical Board and I couldn’t find anything. I suspect they are not formally documented as they were put together back in the early days, but other sub-councils have crisp charters (mostly based off the first sub-council, the Forum Council).

I think it could be interesting to define a crisp mission statement for Ubuntu governance. What is our governance here to do? What are the primary areas of opportunity? What are the priorities? What are the risks we want to avoid? Do we need both a CC and TB?

We already have the answers to some of these questions, but are the answers we have the right ones? Is there an opportunity to adjust our goals with our leadership and governance in the project?

Like many of the best mission statements, this should be a collaborative process. Not a mission defined by a single person or group, but an opportunity for multiple people to feed into so it feels like a shared mission. I would recommend that this be a process that all Ubuntu members can play a role in. Ubuntu members have earned their seat at the table via their contributions, and would be a wonderfully diverse group to pull ideas from.

This would give us a mission that feels shared, and feels representative of our community and culture. It would feel current and relevant, and help guide our governance and wider project forward.

2. Create an ‘Impact Constitution’

OK, I just made that term up, and yes, it sounds a bit buzzwordy, but let me explain.

The guiding principles in Ubuntu are the Ubuntu Promise. It puts in place a set of commitments that ensure Ubuntu always remains a collaborative Open Source project.

What we are missing though is a document that outlines the impact that Ubuntu gives you, others, and the wider world…the ways in which Ubuntu empowers us all to succeed, to create opportunity in our own lives and the life of others.

As an example:

Ubuntu is a Free Software platform and community. Our project is designed to create open technology that empowers individuals, groups, businesses, charities, and others. Ubuntu breaks down the digital divide, and brings together our collective energy into a system that is useful, practical, simple, and accessible.

Ubuntu empowers you to:

  1. Deploy an entirely free Operating System and archive of software to one or multiple computers in homes, offices, classrooms, government institutions, charities, and elsewhere.
  2. Learn a variety of programming and development languages and have the tools to design, create, test, and deploy software across desktops, phones, tablets, the cloud, the web, embedded devices and more.
  3. Have the tools for artistic creativity and expression in music, video, graphics, writing, and more.
  4. . . .

Imagine if we had a document with 20 or so of these impact statements that crisply show the power of our collective work. I think this will regularly remind us of the value of Ubuntu and provide a set of benefits that we as a wider community will seek to protect and improve.

I would then suggest that part of the governance charter of Ubuntu is that our leadership are there to inspire, empower, and protect the ‘impact constitution'; this then directly connects our governance and leadership to what we consider to be the primary practical impact of Ubuntu in making the world a better place.

3. Cross-Governance Strategic Meetings

Today we have CC meetings, TB meetings, FC meetings etc. I think it would be useful to have a monthly, or even quarterly meeting that brings together key representatives from each of the governance boards with a single specific goal – how do the different boards help further each other’s mission. As an example, how does the CC empower the TB for success? How does the TB empower the FC for success?

We don’t want governance that is either independent or dependent at the individual board level. We want governance that is interdependent; this creates a more connected network of leadership.

4. Annual In-Person Governance Summit

We have a community donations fund. I believe we should utilize it to get together key representatives across Ubuntu governance into the same room for two or three days to discuss (a) how to refine and optimize process, but also (b) how to further the impact of our ‘impact constitution’ and inspire wider opportunity in Ubuntu.

If Canonical could chip in, and perhaps a few sponsors too, we could bring all governance representatives together.

Now, it could be tempting to suggest we do this online. I think this would be a mistake. We want to get our leaders together to work together, socialize together, and bond together. The benefits of doing this in person significantly outweigh doing it online.

5. Optimize our community brand around “innovation”

Ubuntu has a good reputation for innovation. Desktop, Mobile, Tablet, Cloud…it is all systems go. Much of this innovation though is seen in the community as something that Canonical fosters and drives. There was a sentiment in the discussion after my last blog post that some folks feel that Canonical is in the driving seat of Ubuntu these days and there isn’t much the community can do to inspire and innovate. There was at times a jaded feeling that Canonical is standing in the way of our community doing great things.

I think this is a bit of an excuse. Yes, Canonical are primarily driving some key pieces…Unity, Mir, Juju for example…but there is nothing stopping anyone innovating in Ubuntu. Our archives are open, we have a multitude of toolsets people can use, we have extensive collaborative infrastructure, and an awesome community. Our flavors are a wonderful example of much of this innovation that is going on. There is significantly more in Ubuntu that is open than restricted.

As such, I think it could be useful to focus on this in our outgoing Ubuntu messaging and advocacy. As our ‘impact constitution’ could show, Ubuntu is a hotbed of innovation, and we could create some materials, messaging, taglines, imagery, videos, and more that inspires people to join a community that is doing cool new stuff.

This could be a great opportunity for designers and artists to participate, and I am sure the Canonical design team would be happy to provide some input too.

Imagine a world in which we see a constant stream of social media, blog posts, videos and more all thematically orientated around how Ubuntu is where the innovators innovate.

Bonus: Network of Ubucons

OK, this is a small extra one I would like to throw in for good measure. :-)

The in-person Ubuntu Developer Summits were a phenomenal experience for so many people, myself included. While the Ubuntu Online Summit is an excellent, well-organized online event, there is something to be said about in-person events.

I think there is a great opportunity for us to define two UbuCons that become the primary in-person events where people meet other Ubuntu folks. One would be focused on the US and one on Europe, and if we could get more (such as an Asian event), that would be awesome.

These would be driven by the community for the community. Again, I am sure the donations fund could help with the running costs.

In fact, before I left Canonical, this is something I started working on with the always-excellent Richard Gaskin who puts on the UbuCon before SCALE in LA each year.

This would be more than a LoCo Team meeting. It would be a formal Ubuntu event before another conference that brings together speakers, panel sessions, and more. It would be where Ubuntu people come to meet, share, learn, and socialize.

I think these events could be a tremendous boon for the community.

Well, that’s it. I hope this provided some food for thought for further discussion. I am keen to hear your thoughts on the mailing list!

on November 24, 2014 10:35 PM
Recently I was playing around with CPU loading and was trying to estimate the number of compute operations being executed on my machine.  In particular, I was interested to see how many instructions per cycle and stall cycles I was hitting on the more demanding instructions.   Fortunately, perf stat allows one to get detailed processor statistics to measure this.

In my first test, I wanted to see how the Intel rdrand instruction performed with 2 CPUs loaded (each with a hyper-thread):

$ perf stat stress-ng --rdrand 4 -t 60 --times
stress-ng: info: [7762] dispatching hogs: 4 rdrand
stress-ng: info: [7762] successful run completed in 60.00s
stress-ng: info: [7762] for a 60.00s run time:
stress-ng: info: [7762] 240.01s available CPU time
stress-ng: info: [7762] 231.05s user time ( 96.27%)
stress-ng: info: [7762] 0.11s system time ( 0.05%)
stress-ng: info: [7762] 231.16s total time ( 96.31%)

Performance counter stats for 'stress-ng --rdrand 4 -t 60 --times':

231161.945062 task-clock (msec) # 3.852 CPUs utilized
18,450 context-switches # 0.080 K/sec
92 cpu-migrations # 0.000 K/sec
821 page-faults # 0.004 K/sec
667,745,260,420 cycles # 2.889 GHz
646,960,295,083 stalled-cycles-frontend # 96.89% frontend cycles idle
13,702,533,103 instructions # 0.02 insns per cycle
# 47.21 stalled cycles per insn
6,549,840,185 branches # 28.334 M/sec
2,352,175 branch-misses # 0.04% of all branches

60.006455711 seconds time elapsed

stress-ng's rdrand test just performs a 64-bit rdrand read, looping until the data is ready, and performs this 32 times in an unrolled loop.  Perf stat shows that each rdrand + loop sequence on average consumes about 47 stall cycles, showing that rdrand is probably just waiting for the PRNG block to produce random data.
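The derived figures perf prints are just ratios of the raw counters; as a quick sanity check, we can recompute them from the numbers in the run above:

```python
# Recompute perf stat's derived metrics from the raw counters
# reported in the rdrand run above.
cycles = 667_745_260_420
stalled_frontend = 646_960_295_083
instructions = 13_702_533_103

ipc = instructions / cycles                        # instructions per cycle
stalls_per_insn = stalled_frontend / instructions  # stall cycles per instruction
frontend_idle = stalled_frontend / cycles          # fraction of cycles stalled

print(f"{ipc:.2f} insns per cycle")                      # 0.02
print(f"{stalls_per_insn:.2f} stalled cycles per insn")  # 47.21
print(f"{frontend_idle:.2%} frontend cycles idle")       # 96.89%
```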

My next experiment was to run the stress-ng ackermann stressor; this performs a lot of recursion, hence one should see a predominantly large amount of branching.

$ perf stat stress-ng --cpu 4 --cpu-method ackermann -t 60 --times
stress-ng: info: [7796] dispatching hogs: 4 cpu
stress-ng: info: [7796] successful run completed in 60.03s
stress-ng: info: [7796] for a 60.03s run time:
stress-ng: info: [7796] 240.12s available CPU time
stress-ng: info: [7796] 226.69s user time ( 94.41%)
stress-ng: info: [7796] 0.26s system time ( 0.11%)
stress-ng: info: [7796] 226.95s total time ( 94.52%)

Performance counter stats for 'stress-ng --cpu 4 --cpu-method ackermann -t 60 --times':

226928.278602 task-clock (msec) # 3.780 CPUs utilized
21,752 context-switches # 0.096 K/sec
127 cpu-migrations # 0.001 K/sec
927 page-faults # 0.004 K/sec
594,117,596,619 cycles # 2.618 GHz
298,809,437,018 stalled-cycles-frontend # 50.29% frontend cycles idle
845,746,011,976 instructions # 1.42 insns per cycle
# 0.35 stalled cycles per insn
298,414,546,095 branches # 1315.017 M/sec
95,739,331 branch-misses # 0.03% of all branches

60.032115099 seconds time elapsed

About 35% of the instructions executed are branches, and we're getting about 1.42 instructions per cycle with not many stall cycles, so the code is most probably executing inside the instruction cache, which isn't surprising because the test is rather small.
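The ~35% branching figure falls straight out of the counters too (a quick check using the numbers reported above):

```python
# Derive the branch fraction and IPC from the ackermann run's counters.
cycles = 594_117_596_619
instructions = 845_746_011_976
branches = 298_414_546_095

print(f"{instructions / cycles:.2f} insns per cycle")                 # 1.42
print(f"{branches / instructions:.1%} of instructions are branches")  # 35.3%
```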

My final experiment was to measure the stall cycles when performing complex long double floating point math operations, again with stress-ng.

$ perf stat stress-ng --cpu 4 --cpu-method clongdouble -t 60 --times
stress-ng: info: [7854] dispatching hogs: 4 cpu
stress-ng: info: [7854] successful run completed in 60.00s
stress-ng: info: [7854] for a 60.00s run time:
stress-ng: info: [7854] 240.00s available CPU time
stress-ng: info: [7854] 225.15s user time ( 93.81%)
stress-ng: info: [7854] 0.44s system time ( 0.18%)
stress-ng: info: [7854] 225.59s total time ( 93.99%)

Performance counter stats for 'stress-ng --cpu 4 --cpu-method clongdouble -t 60 --times':

225578.329426 task-clock (msec) # 3.757 CPUs utilized
38,443 context-switches # 0.170 K/sec
96 cpu-migrations # 0.000 K/sec
845 page-faults # 0.004 K/sec
651,620,307,394 cycles # 2.889 GHz
521,346,311,902 stalled-cycles-frontend # 80.01% frontend cycles idle
17,079,721,567 instructions # 0.03 insns per cycle
# 30.52 stalled cycles per insn
2,903,757,437 branches # 12.873 M/sec
52,844,177 branch-misses # 1.82% of all branches

60.048819970 seconds time elapsed

The complex math operations take some time to complete, stalling on average over 30 cycles per op.  Instead of using 4 concurrent processes, I re-ran this using just the two CPUs, eliminating 2 of the hyperthreads.  This resulted in 25.4 stall cycles per instruction, showing that the hyperthreaded processes were stalling because of contention on the floating point units.
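Again the stall figure is just the ratio of two counters; comparing it against the 25.4 stall cycles per instruction quoted for the 2-CPU re-run gives a rough measure of the hyperthreading penalty (the 25.4 figure is taken from the text, not recomputed here):

```python
# Stall cycles per instruction for the 4-worker clongdouble run,
# compared with the 2-CPU (no hyperthread contention) figure of 25.4.
stalled_frontend = 521_346_311_902
instructions = 17_079_721_567

stalls_ht = stalled_frontend / instructions
stalls_no_ht = 25.4  # from the 2-CPU re-run described above

print(f"{stalls_ht:.2f} stalled cycles per insn with hyperthreads")  # 30.52
print(f"{stalls_ht / stalls_no_ht:.0%} of the 2-CPU stall rate")     # 120%
```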

Perf stat is an incredibly useful tool for examining performance issues at a very low level.   It is simple to use and yet provides excellent stats to allow one to identify issues and fine tune performance critical code.  Well worth using.
on November 24, 2014 07:43 PM

I had the great pleasure to deliver a 90 minute talk at the USENIX LISA14 conference, in Seattle, Washington.

During the course of the talk, we managed to:

  • Deploy OpenStack Juno across 6 physical nodes, on an Orange Box on stage
  • Explain all of the major components of OpenStack (Nova, Neutron, Swift, Cinder, Horizon, Keystone, Glance, Ceilometer, Heat, Trove, Sahara)
  • Explore the deployed OpenStack cloud's Horizon interface in depth
  • Configure Neutron networking with internal and external networks, as well as a gateway and a router
  • Set up our security groups to open ICMP and SSH ports
  • Upload an SSH keypair
  • Modify the flavor parameters
  • Update a bunch of quotas
  • Add multiple images to Glance
  • Launch some instances until we max out our hypervisor limits
  • Scale up the Nova Compute nodes from 3 units to 6 units
  • Deploy a real workload (Hadoop + Hive + Kibana + Elastic Search)
  • Then, we deleted the entire environment, and ran it all over again from scratch, non-stop
Slides and a full video are below.  Enjoy!

on November 24, 2014 05:01 PM

We’ve been talking about the Ubuntu Developer Tools Center for a few months now. We’ve seen a lot of people testing it out & contributing and we had a good session at the Ubuntu Online Summit about what the near future holds for UDTC.

Also during that session, emerging from feedback we received, we talked about how “UDTC” and “Ubuntu Developer Tools Centre” are a bit of a mouthful, and the acronym is quite easy to muddle. We agreed that we needed a new name, and that’s where we need your help.

We’re looking for a name which succinctly describes what the Developer Tools Center is all about, its values and philosophy. Specifically, that we are about developing ON Ubuntu, not just FOR Ubuntu. That we strive to ensure that the tools made available via the tools center are always in line with the latest versions delivered by the upstream developers. That we automate the testing and validation of this, so developers can rely on us. And that we use LTS releases as our environment of choice, so developers have a solid foundation on which to build. In a nutshell, a name that conveys that we love developers!

If you have a great idea for a new name please let us know by commenting on the Google+ post or by commenting on this blog post.

The final winner will be chosen by a group of Ubuntu contributors but please +1 your favorite to help us come up with a shortlist. The winner will receive the great honor of an Ubuntu T Shirt and knowing that they have changed history! We’ll close this contest by Monday 8th of December.

Now, it’s all up to you! If you want to also contribute to other parts of this ubuntu loves developers effort, you’re more than welcome!

on November 24, 2014 03:31 PM

I read the “We Are Not Loco” post a few days ago. I could understand that Randall wanted to further liberate his team in terms of creativity and everything else, but to me it feels like the wrong approach.

The post makes a simple promise: do away with bureaucracy, rename the team to use a less ambiguous name, JFDI! and things are going to be a lot better. This sounds compelling. We all like simplicity; in a faster and more complicated world we all would like things to be simpler again.

What I can also agree with is the general sense of empowerment. If you’re member of a team somewhere or want to become part of one: go ahead and do awesome things – your team will appreciate your hard work and your ideas.

So what was it in the post that made me sad? It took me a while to find out what specifically it was. The feeling set in when I realised somebody turned their back on a world-wide community and said “all right, we’re doing our own thing – what we used to do together to us is just old baggage”.

Sure, it’s always easier not having to discuss things in a big team. Especially if you want to agree on something like a name or any other small detail this might take ages. On the other hand: the world-wide LoCo community has achieved a lot of fantastic things together: there are lots of coordinated events around the world, there’s the LoCo team portal, and most importantly, there’s a common understanding of what teams can do and we all draw inspiration from each other’s teams. By making this a global initiative we created numerous avenues where new contributors find like-minded individuals (who all live in different places on the globe, but share the same love for Ubuntu and organising local events and activities). Here we can learn from each other, experiment and find out together what the best practices for local community awesomeness are.

Going away and equating the global LoCo community with bureaucracy to me is desolidarisation – it’s quite the opposite of “I Am Who I Am Because Of Who We All Are”.

Personally I would have preferred a set of targeted discussions which try to fix processes, improve communication channels and inspire a new round of leaders of Ubuntu LoCo teams. Not everything you do in a LoCo team has to be approved by the entire set of other teams; actual reality in the LoCo world is quite different from that.

If you have ideas to discuss or suggestions, feel free to join our loco-contacts mailing list and bring it up there! It’s your chance to hang out with a lot of fun people from around the globe. :-)

on November 24, 2014 03:07 PM

November 23, 2014

Analyzing public OpenPGP keys

Dimitri John Ledkov

OpenPGP Message Format (RFC 4880) well defines key structure and wire formats (openpgp packets). Thus when I looked for public key network (SKS) server setup, I quickly found pointers to dump files in said format for bootstrapping a key server.

I did not feel like experimenting with Python and instead opted for Go, and found a library that has comprehensive support for parsing openpgp low level structures. I've downloaded the SKS dump, verified its MD5SUM hashes (lolz), and went ahead to process them in Go.

With help from that library and database/sql, I've written a small program to churn through all the dump files, filter for primary RSA keys (not subkeys) and inject them into a database table. The things that I have chosen to inject are fingerprint, N, E. N is the modulus of the RSA key pair and E is the public exponent. Together they form the public part of an RSA keypair. So far, nothing fancy.

Next I've run an SQL query to see how unique things are... and found 92 unique N & E pairs that have from two up to fifteen duplicates each. In total there are 231 unique fingerprints which use key material with a known duplicate in the public key network. That didn't sound good. And also odd - given that over 940 000 other RSA keys managed to get enough entropy to pull a unique key out of the keyspace haystack (which is humongously huge, by the way).
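The duplicate check itself is just a group-by on the public key material. A minimal Python sketch of the idea (the author's actual tool is written in Go, and these fingerprints and moduli are made up for illustration):

```python
from collections import defaultdict

# Group keys by their public material (N, e); any (N, e) pair shared
# by two or more fingerprints indicates duplicated key material.
keys = [
    ("fp-aaaa", 0xC0FFEE, 65537),
    ("fp-bbbb", 0xC0FFEE, 65537),    # same modulus and exponent: suspicious
    ("fp-cccc", 0xDECAFBAD, 65537),
]

by_material = defaultdict(list)
for fingerprint, n, e in keys:
    by_material[(n, e)].append(fingerprint)

duplicates = {pair: fps for pair, fps in by_material.items() if len(fps) > 1}
print(duplicates)  # {(12648430, 65537): ['fp-aaaa', 'fp-bbbb']}
```

In SQL terms this is simply `SELECT n, e, COUNT(*) FROM keys GROUP BY n, e HAVING COUNT(*) > 1`.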

Having the list of the keys, I've fetched them and they do not look like regular keys - their UIDs do not have names & emails; instead they look like something from the monkeysphere. The keys look like they were originally used for TLS and/or SSH authentication, but were converted into OpenPGP format and uploaded to the public key server. This reminded me of Debian's SSL key generation vulnerability CVE-2008-0166. So these keys might have been generated with bad entropy due to tools affected by that CVE and later converted to OpenPGP.

Looking at the openssl-blacklist package, it should be relatively easy for me to generate all possible RSA key-pairs, and I believe all the other material that is hashed to generate the fingerprint is also available (RFC 4880#12.2). Thus it should be reasonably possible to generate matching private keys, generate revocation certificates and publish the revocation certificates with pointers to CVE-2008-0166. (Or email them to the people who have signed the given monkeysphered keys). When I have a minute I will work on generating openpgp-blacklist type of scripts to address this.

If anyone is interested in the Go source code I've written to process openpgp packets, please drop me a line and I'll publish it on github or something.
on November 23, 2014 09:15 PM

Awesome BSP in München

Ovidiu-Florin Bogdan

An awesome BSP just took place in München where teams from Kubuntu, Kolab, KDE PIM, Debian and LibreOffice came and planned the future and fixed bugs. This is my second year participating at this BSP and I must say it was an awesome experience. I got to see again my colleagues from Kubuntu and got to […]
on November 23, 2014 06:10 PM

A significant part of cooking is chemical science, though few people think of it that way. But when you combine cooking with what people consider stereotypical chemistry –using & mixing things with long technical names– you can have even more fun.

Cheese as a Condiment

A typical method of adding cheese to things is simply to place grated cheese over a pile of food and melting it (in an oven usually). Now one of the problems (as I see it) with this, is that when you heat cheese it tends to split apart into milk solids and liquid milk fat, so you end up with unnecessary grease.

In my mind, the ideal cheese-as-a-condiment is smooth & creamy, such as that of a fondue, but your average "out-of-the-package" cheese does not melt this way. You can purchase one of several (disgusting) cheese products that give you this effect, but it's more fun to make one yourself, and you have the added benefit of knowing what goes into it.

It can then be used on nachos, for example:


One way to do this is to use a chemical emulsifier to make the liquid fats –cheese– soluble in something they are normally not soluble in –such as water. Essentially, this is something that's done frequently in a factory setting to make many processed cheese products, spreads, dips, etc.

Now there are a tonne of food-safe chemical emulsifiers each with slightly different properties that you could use, but the one that I have a stock of, and that works particularly well with milk fats, like those in cheese, is sodium citrate –the salt of citric acid– which you can get from your friendly online distributor of science-y cooking products.

Many of these are also flavourless, or, given the usually relatively small amounts used in food, the flavour that might be imparted is insignificant. They're essentially used for textural changes.


  • 250 mL water*
  • 10 grams sodium citrate
  • 3-4 cups grated cheese –such as, cheddar**

*if you're feeling more experimentative, you can use a different (water-based) liquid for additional flavour, such as wine or an infusion

**you can use whichever cheeses you fancy, but I'd avoid processed cheeses as they may have additives that could mess up the chemistry


  1. In a pot, make a 0.04 g/mL solution of sodium citrate in water (the 10 g in 250 mL above) –boil the water and dissolve the salt.
  2. Reduce the heat and begin to melt the cheese into the water a handful at a time, whisking constantly.
  3. When all the cheese has melted keep stirring while the mixture thickens.
  4. Serve or use hot –keep it warm.

At the end of this what you'll essentially have is a "cheese gel" which will stiffen as it cools, but it can easily be reheated to regain its smooth consistency.

When you've completed the emulsion you can add other ingredients to jazz it up a bit –some dried spices or chopped jalapenos, for example– before pouring it over things or using it as a dip. Do note, if you're pouring it over nachos, it's best to have heated the chips first.

Another great use for your cheese gel is to pour it out, while hot, onto a baking sheet and let cool. Then you can cut it into squares for that perfect melt needed for the perfect cheeseburger.

on November 23, 2014 06:00 PM

S07E34 – The One with Unagi

Ubuntu Podcast from the UK LoCo

We’re back with Season Seven, Episode Thirty-four of the Ubuntu Podcast! Just Laura Cowen and Mark Johnson here again.

In this week’s show:

  • We discuss the crowdsourcing campaign.

  • We also discuss:

  • We share some Command Line Lurve (from ionagogo) which finds live streams on a page. It’s great for watching online feeds without Flash. Just point it at a web page and it finds all the streams. Run with “best” (or a specific stream type) and it launches your video player such as VLC:
  • And we read your feedback. Thanks for sending it in!

We’ll be back next week, so please send your comments and suggestions to:
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on November 23, 2014 02:30 PM

Here’s a nice project if you’re bored and wanting to help make a very visual difference to KDE, port the Breeze icon theme to LibreOffice.

Wiki page up at

All help welcome

Open, Save and PDF icons are breeze, all the rest still to go


on November 23, 2014 02:23 PM

KDE Promo Idea

Jonathan Riddell


New seasonal KDE marketing campaign.  Use Kubuntu to get off the naughty list.


on November 23, 2014 12:11 PM

Custom Wallpaper

Charles Profitt

I recently upgraded to Ubuntu 14.10 and wanted to adorn my desktop with some new wallpapers. Usually, I find several suitable wallpapers on the web, but this time I did not. I then decided to make my own and wanted to share the results. All the following wallpapers were put together using GIMP.

plain hex template

Plain Hex Template

Hex Template Two

Hex Template Two

hex with dwarf

Hex With Dwarf

Hex Dragon

Hex Dragon

on November 23, 2014 04:34 AM

November 22, 2014

UPDATE - I’ve removed the silly US restriction.  I know there are more options in Europe, China, India, etc, but why shouldn’t you get access to the “open to the core” laptop!
This would definitely come with at least 3 USB ports (and at least one USB 3.0 port).

Since Jolla had success with crowdfunding a tablet, it’s a good time to see if we can get some mid-range Ubuntu laptops for sale to consumers in as many places as possible.  I’d like to get some ideas about whether there is enough demand for a very open $500 Ubuntu laptop.

Would you crowdfund this? (Core Goals)

  • 15″ 1080p Matte Screen
  • 720p Webcam with microphone
  • Spill-resistant and nice to type on keyboard
  • Intel i3+ or AMD A6+
  • Built-in Intel or AMD graphics with no proprietary firmware
  • 4 GB RAM
  • 128 GB SSD (this would be the one component that might have to be proprietary as I’m not aware of another option)
  • Ethernet 10/100/1000
  • Wireless up to N
  • HDMI
  • SD card reader
  • CoreBoot (No proprietary BIOS)
  • Ubuntu 14.04 preloaded of course
  • Agreement with manufacturer to continue selling this laptop (or similar one) with Ubuntu preloaded to consumers for at least 3 years.

Stretch Goals? Or should they be core goals?

Will only be added if they don’t push the cost up significantly (or if everyone really wants them) and can be done with 100% open source software/firmware.

  • Touchscreen
  • Convertible to Tablet
  • GPS
  • FM Tuner (and built-in antenna)
  • Digital TV Tuner (and built-in antenna)
  • Ruggedized
  • Direct sunlight readable screen
  • “Frontlight” tech.  (think Amazon PaperWhite)
  • Bluetooth
  • Backlit keyboard
  • USB Power Adapter

Take my quick survey if you want to see this happen.  If at least 1000 people say “Yes,” I’ll approach manufacturers.   The first version might just end up being a Chromebook modified with better specs, but I think that would be fine.

Link to survey –

on November 22, 2014 09:37 PM

Blog Moved

Jonathan Riddell

KDE Project:

I've moved my developer blog to my vanity domain, which has hosted my personal blog since 1999 (before the word existed). Tags used are Planet KDE and Planet Ubuntu for the developer feeds.

Sorry no DCOP news on

on November 22, 2014 03:21 PM

Release party in Barcelona

Rafael Carreras


Once again –and there have been 16 of them– ubuntaires celebrated the release party of the latest Ubuntu version, in this case 14.10 Utopic Unicorn.

This time, we went to Barcelona's Raval, at the very centre of the city, thanks to our friends at the TEB.

As always, we started by explaining what Ubuntu is and how our Catalan LoCo Team works, and later Núria Alonso from the TEB explained the Ubuntu migration done at the Xarxa Òmnia.


The installations room was full from the very first moment.


There was also a very productive self-learning workshop on how to build an Ubuntu metadistribution.



And in another room, there were two Arduino workshops.



And, of course, ubuntaires love to eat well.




Pictures by Martina Mayrhofer and Walter García, all rights reserved.

on November 22, 2014 02:32 PM
Hi folks,

Our Community Working Group has dwindled a bit, and some of our members have work that keeps them away from doing CWG work. So it is time to put out another call for volunteers.

The KDE community is growing, which is wonderful. In spite of that growth, we have less "police" type work to do these days. This leaves us more time to make positive efforts to keep the community healthy, and foster dialog and creativity within our teams.

One thing I've noticed is that listowners, IRC channel operators and forum moderators are doing an excellent job of keeping our communication channels friendly, welcoming and all-around helpful. Each of these leadership roles is crucial to keeping the community healthy.

Also, the effort to create the KDE Manifesto has adjusted KDE infrastructure to be directly and consciously supporting community values. The commitments section is particularly helpful.

Please write us at if you would like to become a part of our community gardening work.

on November 22, 2014 05:35 AM


“U can’t touch this”[4] Source

“Touch-a touch-a touch-a touch me. I wanna be dirty.”[1] — Love, Your Dumb Phone

It’s not a problem with a dirty touch screen; that would be a stretch for an entire post. It’s a problem with the dirty power[2]: perhaps an even farther stretch. But, “I’m cold on a mission, so pull on back,”[4] and stretch yourself for a moment because your phone won’t stretch for you.

We’re constantly trying to stretch the battery life of our phones, but the phones keep demanding to be touched, which drains the battery. Phones have this “dirty power” over us, but maybe there are also some “spikes” in the power management of these dumb devices. The greatest feature of the device is also its greatest flaw: the fact that it has to be touched in order to react. Does it even react in the most effective way? What indication is there to let you know how the phone has been touched? Does the phone reduce the number of touches in order to save battery power? If it is not smart enough to do so, then maybe it shouldn’t have a touch screen at all!

Auto-brightness. “Can’t touch this.”[4]
Lock screen. “Can’t touch this.”[4]
Phone clock. “Can’t touch this.”[4]

Yes, your phone has these things, but they never seem to work at the right time. Never mind that I have to turn on the screen to check the time. These things currently seem to follow one set of rules instead of knowing when to activate. So when you “move slide your rump,”[4] you still end up with the infamous butt dial, and the “Dammit, Janet![1] My battery is about to die” situation.

There are already developments in these areas, which indicate that the dumb phone is truly on its last legs. “So wave your hands in the air.”[4] But, seriously, let’s reduce the number of touches, “get your face off the screen”[3] and live your life.

“Stop. Hammer time!”[4]


[1] Song by Richard O’Brien
[2] Fartbarf is fun.
[3] Randall RossCommunity Leadership Summit 2014
[4] Excessively touched on “U Can’t Touch This” by MC Hammer

on November 22, 2014 03:36 AM

My Vivid Vervet has crazy hair

Elizabeth K. Joseph

Keeping with my Ubuntu toy tradition, I placed an order for a vervet stuffed toy, available in the US via: Miguel the Vervet Monkey.

He arrived today!

He’ll be coming along to his first Ubuntu event on December 10th, a San Francisco Ubuntu Hour.

on November 22, 2014 02:57 AM

The phrase "The Year of the Linux Desktop" is one we see being used by hopefuls, to describe a future in which desktop Linux has reached the masses.

But I'm more pragmatic, and would like to describe the past and tweak this phrase to (I believe) accurately summarize 2011 as "The Year of the Linux Desktop Schism".

So let me tell you a little story about this schism.

A Long Time Ago in 2011

The Linux desktop user base was happily enjoying the status quo. We had (arguably) two major desktops: GNOME and KDE, with a few smaller, less widely used desktops as well (mostly named with initialisms).

It was the heyday of GNOME 2 on the desktop, which was the default desktop in many of the major distributions. But bubbling out of the ether of the GNOME Project was this idea for a new shell and an overhaul of GNOME, so GNOME 2 was brought to a close and GNOME Shell was born as the future of GNOME.

The Age of Dissent, Madness & Innovation

GNOME 3 and its new Shell did not sit well with everyone and many in the great blogosphere saw it as disastrous for GNOME and for users.

Much criticism was spouted and controversy raised, and many started searching for alternatives. But there were those who stood by their faithful project, seeing the new version for what it was: a new beginning for GNOME. They knew that beginnings are not perfect.

Nevertheless, with this massive split in the desktop market we saw much change. There came a rapid flurry of several new projects and a moonshot from one for human beings.

Ubuntu upgraded its fledgling "netbook interface" and promoted it to the desktop, calling it Unity, and it took off down a path to unite the desktop with other emerging platforms yet to come.

There was also much dissatisfaction with the abandonment of GNOME 2, so part of the community decided to lower their figurative pitchforks and use them to do some literal forking. They took up the remnants of this legacy desktop and used it to forge a new project. This project was to be named MATE and was to continue in the original spirit of GNOME 2.

The Linux Mint team, unsure of their future with GNOME under the Shell, created the "Mint GNOME Shell Linux Mint Extension Pack of Extensions for GNOME Shell". This addon to the new GNOME experience would eventually lead to the creation of Cinnamon, which itself was a fork of GNOME 3.

Despite being a relatively new arrival, the ambitious elementary team was developing the Pantheon desktop in relative secrecy for use in future versions of their OS, having previously relied on a slimmed-down GNOME 2. They were to become one of the most polished of them all.

And they have all lived happily ever since.

The end.

The Moral of the Story

All of these projects have been thriving in the three years since. Why? Because of their communities.

All that has occurred is what the Linux community is about, and it is exemplary of the freedom that it and the whole of open source represent. We have the freedom in open source to enact our own change or act upon what we may not agree with. We are not confined to a set of strictures; we are able to do what we feel is right and to find other people who feel the same.

To deride and belittle others for acting in their freedom, or because they may not agree with you, is just wrong and not in keeping with the ethos of our community.

on November 22, 2014 12:00 AM

November 21, 2014

To ensure quality of the Juju charm store there are automatic processes that test charms on multiple cloud environments. These automated tests help identify the charms that need to be fixed. This has become so useful that charm tests are a requirement to become a recommended charm in the charm store for the trusty release.

What are the goals of charm testing?

For Juju to be magic, the charms must always deploy, scale and relate as they were designed. The Juju charm store contains over 200 charms and those charms can be deployed to more than 10 different cloud environments. That is a lot of environments to ensure charms work, which is why tests are now required!


The Juju ecosystem team has created different tools to make writing tests easier. The charm-tools package has code that generates tests for charms. Amulet is a python 3 library that makes it easier to programmatically work with units and whole deployments. To get started writing tests you will need to install the charm-tools and amulet packages:

sudo add-apt-repository -y ppa:juju/stable
sudo apt-get update
sudo apt-get install -y charm-tools amulet

Now that the tools are installed, change directory to the charm directory and run the following command:

juju charm add tests

This command generates two executable files, 00-setup and 99-autogen, in the tests directory. The tests are prefixed with a number so they are run in the correct lexicographical order.
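That ordering rule is easy to see with plain Python (nothing charm-specific here; the file names are just the ones generated above plus a hypothetical extra test):

```python
# Test runners execute the files in lexicographical (string) order,
# so "00-setup" reliably sorts before "99-autogen".
tests = ["99-autogen", "00-setup"]
print(sorted(tests))  # ['00-setup', '99-autogen']

# Caveat: string order is not numeric order, so keep the prefixes
# zero-padded to the same width ("100-..." sorts before "99-...").
print(sorted(["99-autogen", "100-extra"]))  # ['100-extra', '99-autogen']
```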


The first file, 00-setup, is a bash script that adds the Juju PPA repository, updates the package list, and installs the amulet package so the subsequent tests can use the Amulet library.


The second file, 99-autogen, contains python 3 code that uses the Amulet library. The class extends a unittest class, the standard unit testing framework for python. The charm-tools test generator creates a skeleton test that deploys related charms and adds relations, so most of the work is done already.

This automated test is almost never a good enough test on its own. Ideal tests do a number of things:

  1. Deploy the charm and make sure it deploys successfully (no hook errors)
  2. Verify the service is running as expected on the remote unit (sudo service apache2 status).
  3. Change configuration values to verify users can set different values and the changes are reflected in the resulting unit.
  4. Scale up. If the charm handles the peer relation make sure it works with multiple units.
  5. Validate the relationships to make sure the charm works with other related charms.

Most charms will need additional lines of code in the 99-autogen file to verify the service is running as expected. For example if your charm implements the http interface you can use the python 3 requests package to verify a valid webpage (or API) is responding.

def test_website(self):
    # Note: this assumes "import requests" at the top of the test file.
    unit = self.deployment.sentry.unit['<charm-name>/0']
    url = 'http://%s' % unit['public-address']
    response = requests.get(url)
    # Raise an exception if the url was not a valid web page.
    response.raise_for_status()

What if I don't know python?

Charm tests can be written in languages other than python. The automated test program called bundletester will run the test target in a Makefile if one exists. Including a 'test' target would allow a charm author to build and run tests from the Makefile.

Bundletester will run any executable files in the tests directory of a charm. There are example tests written in bash in the Juju documentation. A test fails if the executable returns a value other than zero.
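That zero-means-pass convention is the same one the shell uses. A quick stand-alone illustration (the two inline snippets are stand-ins, not real charm tests):

```python
import subprocess
import sys

# Run two tiny stand-in "tests": one exits 0 (pass), one exits 1 (fail).
passing = subprocess.run([sys.executable, "-c", "raise SystemExit(0)"])
failing = subprocess.run([sys.executable, "-c", "raise SystemExit(1)"])

print(passing.returncode)  # 0 -> the runner counts this as a pass
print(failing.returncode)  # 1 -> any non-zero value counts as a failure
```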

Where can I get more information about writing charm tests?

There are several videos about charm testing:
Charm testing video
Documentation on charm testing can be found here:
Documentation on Amulet:
Check out the lamp charm as an example of multiple amulet tests:

on November 21, 2014 11:15 PM
This is a short little blog post I've been wanting to get out there ever since I ran across the erlport project a few years ago. Erlang was built for fault-tolerance. It had a goal of unprecedented uptimes, and these have been achieved. It powers 40% of our world's telecommunications traffic. It's capable of supporting amazing levels of concurrency (remember the 2007 announcement about the performance of YAWS vs. Apache?).

With this knowledge in mind, a common mistake by folks new to Erlang is to think these performance characteristics will be applicable to their own particular domain. This has often resulted in failure, disappointment, and the unjust blaming of Erlang. If you want to process huge files, do lots of string manipulation, or crunch tons of numbers, Erlang's not your bag, baby. Try Python or Julia.

But then, you may be thinking: I like supervision trees. I have long-running processes that I want to be managed per the rules I establish. I want to run lots of jobs in parallel on my 64-core box. I want to run jobs in parallel over the network on 64 of my 64-core boxes. Python's the right tool for the job, but I wish I could manage them with Erlang.

(There are sooo many other options for the use cases above, many of them really excellent. But this post is about Erlang/LFE :-)).

Traditionally, if you want to run other languages with Erlang in a reliable way that doesn't bring your Erlang nodes down with badly behaved code, you use Ports. (more info is available in the Interoperability Guide). This is what JInterface builds upon (and, incidentally, allows for some pretty cool integration with Clojure). However, this still leaves a pretty significant burden for the Python or Ruby developer for any serious application needs (quick one-offs that only use one or two data types are not that big a deal).

erlport was created by Dmitry Vasiliev in 2009 in an effort to solve just this problem, making it easier to use and integrate Erlang with more common languages like Python and Ruby. The project is maintained, and in fact has just received a few updates. Below, we'll demonstrate some usage in LFE with Python 3.

If you want to follow along, there's a demo repo you can check out:
Change into the repo directory and set up your Python environment:
Next, switch over to the LFE directory, and fire up a REPL:
Note that this will first download the necessary dependencies and compile them (that's what the [snip] is eliding).

Now we're ready to take erlport for a quick trip down to the local:
And that's all there is to it :-)

Perhaps in a future post we can dive into the internals, showing you more of the glory that is erlport. Even better, we could look at more compelling example usage, approaching some of the functionality offered by such projects as Disco or Anaconda.

on November 21, 2014 11:08 PM
Because I was asleep at the wheel (err, keyboard) yesterday I failed to express my appreciation for some folks. It's a day for hugging! And I missed it!

I gave everyone a shoutout on social media, but since planet looks best overrun with thank you posts, I shall blog it as well!

Thank you to:

David Planella for being the rock that has anchored the team.
Leo Arias for being super awesome and making testing what it is today on all the core apps.
Carla Sella for working tirelessly on many many different things in the years I've known her. She never gives up (even when I've tried to!), and has many successes to her name for that reason.
Nekhelesh Ramananthan for always being willing to let clock app be the guinea pig
Elfy, for rocking the manual tests project. Seriously awesome work. Every time you use the tracker, just know elfy has been a part of making that testcase happen.
Jean-Baptiste Lallement and Martin Pitt for making some of my many wishes come true over the years with quality community efforts. Autopkgtest is but one of these.

And many more. Plus some I've forgotten. I can't give hugs to everyone, but I'm willing to try!

To everyone in the ubuntu community, thanks for making ubuntu the wonderful community it is!
on November 21, 2014 10:09 PM

The Secret History of Lambda

Duncan McGreggor

Being a bit of an origins nut (I always want to know how something came to be or why it is a certain way), one of the things that always bothered me with regard to Lisp was that no one seemed to be talking about the origin of lambda in the lambda calculus. I suppose if I wasn't lazy, I'd have gone to a library and spent some time looking it up. But since I was lazy, I used Wikipedia. Sadly, I never got what I wanted: no history of lambda. [1] Well, certainly some information about the history of the lambda calculus, but not the actual character or term in that context.

Why lambda? Why not gamma or delta? Or Siddham ṇḍha?

To my great relief, this question was finally answered when I was reading one of the best Lisp books I've ever read: Peter Norvig's Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. I'll save my discussion of that book for later; right now I'm going to focus on the paragraph at location 821 of my Kindle edition of the book. [2]

The story goes something like this:
  • Between 1910 and 1913, Alfred Whitehead and Bertrand Russell published three volumes of their Principia Mathematica, a work whose purpose was to derive all of mathematics from basic principles in logic. In these tomes, they cover two types of functions: the familiar descriptive functions (defined using relations), and then propositional functions. [3]
  • Within the context of propositional functions, the authors make a typographical distinction between free variables and bound variables or functions that have an actual name: bound variables use circumflex notation, e.g. x̂(x+x). [4]
  • Around 1928, Church (and then later, with his grad students Stephen Kleene and J. B. Rosser) started attempting to improve upon Russell and Whitehead regarding a foundation for logic. [5]
  • Reportedly, Church stated that the use of x̂ in the Principia was for class abstractions, and he needed to distinguish that from function abstractions, so he used ∧x [6] or ^x [7] for the latter.
  • However, these proved to be awkward for different reasons, and an uppercase lambda was used: Λx. [8].
  • More awkwardness followed, as this was too easily confused with other symbols (perhaps uppercase delta? logical and?). Therefore, he substituted the lowercase λ. [9]
  • John McCarthy was a student of Alonzo Church and, as such, had inherited Church's notation for functions. When McCarthy invented Lisp in the late 1950s, he used the lambda notation for creating functions, though unlike Church, he spelled it out. [10] 
It seems that our beloved lambda [11], then, is an accident in typography more than anything else.
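As a small aside, the end of that typographic chain is still visible in modern languages. Here is Church's λx(x + x) written with Python's spelled-out lambda:

```python
# x̂(x + x)  ->  λx(x + x)  ->  (lambda (x) (+ x x))  ->  Python:
double = lambda x: x + x
print(double(21))  # 42
```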

Somehow, this endears lambda to me even more ;-)

[1] As you can see from the rest of the footnotes, I've done some research since then and have found other references to this history of the lambda notation.

[2] Norvig, Peter (1991-10-15). Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp (Kindle Locations 821-829). Elsevier Science - A. Kindle Edition. The paragraph in question is quoted here:
The name lambda comes from the mathematician Alonzo Church’s notation for functions (Church 1941). Lisp usually prefers expressive names over terse Greek letters, but lambda is an exception. A better name would be make-function. Lambda derives from the notation in Russell and Whitehead’s Principia Mathematica, which used a caret over bound variables: x̂(x + x). Church wanted a one-dimensional string, so he moved the caret in front: ^x(x + x). The caret looked funny with nothing below it, so Church switched to the closest thing, an uppercase lambda, Λx(x + x). The Λ was easily confused with other symbols, so eventually the lowercase lambda was substituted: λx(x + x). John McCarthy was a student of Church’s at Princeton, so when McCarthy invented Lisp in 1958, he adopted the lambda notation. There were no Greek letters on the keypunches of that era, so McCarthy used (lambda (x) (+ x x)), and it has survived to this day.

[4] Norvig, 1991, Location 821.

[5] History of Lambda-calculus and Combinatory Logic, page 7.

[6] Ibid.

[7] Norvig, 1991, Location 821.

[8] Ibid.

[9] Looking at Church's works online, he uses lambda notation in his 1932 paper A Set of Postulates for the Foundation of Logic. The preceding papers upon which the seminal 1932 paper is based, On the Law of Excluded Middle (1928) and Alternatives to Zermelo's Assumption (1927), make no reference to lambda notation. As such, A Set of Postulates for the Foundation of Logic seems to be his first paper that references lambda.

[10] Norvig indicates that this is simply due to the limitations of the keypunches in the 1950s that did not have keys for Greek letters.

[11] Alex Martelli is not a fan of lambda in the context of Python, and though a good friend of Peter Norvig, I've heard Alex refer to lambda as an abomination :-) So, perhaps not beloved for everyone. In fact, Peter Norvig himself wrote (see above) that a better name would have been make-function.

on November 21, 2014 09:12 PM

The quotes below are real(ish).

"Hi honey, did you just call me? I got a weird message that sounded like you were in some kind of trouble. All I could hear was traffic noise and sirens..."

"I'm sorry. I must have dialed your number by mistake. I'm not in the habit of dialing my ex-boyfriends, but since you asked, would you like to go out with me again? One more try?"

"Once a friend called me and I heard him fighting with his wife. It sounded pretty bad."

"I got a voicemail one time and it was this guy yelling at me in Hindi for almost 5 minutes. The strange thing is, I don't speak Hindi."

"I remember once my friend dialed me. I called back and left a message asking whether it was actually the owner or...

...the butt."

It's called "butt dialing" in my part of the world, or "purse dialing" (if one carries a purse), or sometimes just pocket dialing: that accidental event where something presses the phone and it dials a number in memory without the knowledge of its owner.

After hearing these phone stories, I'm reminded that humanity isn't perfect. Among other things, we have worries, regrets, ex's, outbursts, frustrations, and maybe even laziness. One might be inclined to write these occurrences off as natural or inevitable. But, let's reflect a little. Were the people that this happened to any happier for it? Did it improve their lives? I tend to think it created unnecessary stress. Were they to blame? Was this preventable?

"Smart" phones. I'm inclined to call you what you are: The butt of technology.

We're not living in the 90's anymore. Sure, there was a time when phones had real keys and possibly weren't lockable and maybe were even prone to the occasional purse dial. Those days are long gone. "Smart" phones, you know when you're in a pocket or a purse. Deal with it. You are as dumb as my first feature phone. Actually, you are dumber. At least my first feature phone had a keyboard cover.

Folks, I hope that in my lifetime we'll actually see a phone that is truly smart. Perhaps the Ubuntu Phone will make that hope a reality.

I can see the billboards now. "Ubuntu Phone. It Will Save Your Butt." (Insert your imagined inappropriate billboard photo alongside the caption. ;)

Do you have a great butt dialing story? Please share it in the comments.


No people were harmed in the making of this article. And not one person who shared their story is or was a "user". They are real people that were simply excluded from the decisions that made their phones dumb.

Image: Gwyneth Anne Bronwynne Jones (The Daring Librarian), CC BY-SA 2.0

on November 21, 2014 07:00 PM

git your blog

Walter Lapchynski

So I deleted my whole website by accident.

Yep, it wasn't very fun. Luckily, Linode's Backup Service saved the day. Though they back up the whole machine, it was easy to restore the backup to the Linode, change the configuration to use the required partition as a block device, reboot, and then manually mount the block device. At that point, restoration was a cp away.

The reason why this all happened is because I was working on the final piece to my ideal blogging workflow: putting everything under version control.

The problem came when I tried to initialize my current web folder. I mean, it worked, and I could clone the repo on my computer, but I couldn't push. Worse yet, I got something scary back:

remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: error: is denied, because it will make the index and work tree inconsistent
remote: error: with what you pushed, and will require 'git reset --hard' to match
remote: error: the work tree to HEAD.
remote: error:
remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
remote: error: its current branch; however, this is not recommended unless you
remote: error: arranged to update its work tree to match what you pushed in some
remote: error: other way.
remote: error:
remote: error: To squelch this message and still keep the default behaviour, set
remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.

So in the process of struggling with this back and forth between local and remote, I killed my files. Don't you usually panic when you get some long error message that doesn't make a darn bit of sense?

Yeah, well, I guess I kind of got the idea, but it wasn't entirely clear. The key point is that we're trying to push to a non-bare folder (i.e. one that includes all the tracked files), and into its currently checked-out branch.

Why is this bad? Well, what if you had uncommitted changes in the remote repo and you pushed changes from the local repo? Data loss. So Git now won't let you do it unless you specifically allow it via the receive.denyCurrentBranch variable in the remote's config.

So let's move to the solution: don't do this. You could push to a different branch and then manually merge on the remote, but merges aren't guaranteed to always be clean. Why not do something entirely different? Something more proper.

First, start with the remote:

# important: make a new folder!
git init --bare ~/path/to/some/new/folder

Then local:

git clone user@server:path/to/the/aforementioned/folder
cd folder
# make some changes
git add -A
git commit -am "initial commit or whatever you want to say"
git push

If you check out what's in that folder on the remote, you'll find it has no tracked files. A bare repo is basically just an index: a place to pull from and push to. You're not going to go there and start changing files, getting Git all confused.

Now here's the magic part: in the hooks subfolder of your remote folder, create a new executable file (don't forget chmod +x) called post-receive containing the following:

#!/usr/bin/env sh
export GIT_WORK_TREE=/path/to/your/final/live/folder
git checkout -f master
# add any other commands that need to happen to rebuild your site, e.g.:
# blogofile build

Assuming you've already committed some changes, go ahead and run it and check your website.

Pretty cool, huh? Well, it gets even better. The next push you do will automatically update your website for you. So now for me, an update to the website is just a local push away. No need to even log in to the server anymore.

There are other solutions to this problem but this one seems to be the most consistent and easy.

on November 21, 2014 03:42 PM

This is a technical post about PulseAudio internals and the protocol improvements coming in the PulseAudio 6.0 release.

PulseAudio memory copies and buffering

PulseAudio is said to have a “zero-copy” architecture. So let’s look at what copies and buffers are involved in a typical playback scenario.

Client side

When the PulseAudio server and client run as the same user, PulseAudio enables shared memory (SHM) for audio data. (In other cases, SHM is disabled for security reasons.) Applications can use pa_stream_begin_write to get a pointer directly into the SHM buffer. When using pa_stream_write or the ALSA plugin, there will be one memory copy into the SHM.

Server resampling and remapping

On the server side, the server might need to convert the stream into a format that fits the hardware (and potential other streams that might be running simultaneously). This step is skipped if deemed unnecessary.

First, the samples are converted to either signed 16 bit or float 32 bit (mainly depending on resampler requirements).
In case resampling is necessary, we make use of external resampler libraries for this, the default being speex.
Second, if remapping is necessary, e g if the input is mono and the output is stereo, that is performed as well. Finally, the samples are converted to a format that the hardware supports.

So, in worst case, there might be up to four different buffers involved here (first: after converting to “work format”, second: after resampling, third: after remapping, fourth: after converting to hardware supported format), and in best case, this step is entirely skipped.
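To make those steps concrete, here is a pure-Python sketch of the conversion pipeline (s16 in, float work format, mono-to-stereo remap, s16 out). This only illustrates the buffers described above; it is not PulseAudio's actual code (which operates on raw sample buffers in C), and the resampling step is skipped:

```python
def s16_to_float(samples):
    # first buffer: convert to the float "work format"
    return [s / 32768.0 for s in samples]

def upmix_mono_to_stereo(samples):
    # remap buffer: duplicate each mono sample into L and R (interleaved)
    out = []
    for s in samples:
        out.extend([s, s])
    return out

def float_to_s16(samples):
    # final buffer: convert to a format the hardware supports
    return [max(-32768, min(32767, round(s * 32768.0))) for s in samples]

mono_s16 = [0, 16384, -16384]
hw = float_to_s16(upmix_mono_to_stereo(s16_to_float(mono_s16)))
print(hw)  # [0, 0, 16384, 16384, -16384, -16384]
```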

Mixing and hardware output

PulseAudio’s built in mixer multiplies each channel of each stream with a volume factor and writes the result to the hardware. In case the hardware supports mmap (memory mapping), we write the mix result directly into the DMA buffers.
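That mixing step amounts to a multiply-accumulate with clamping; a toy sketch in Python (names and sample values are illustrative only, not PulseAudio's implementation):

```python
def mix(streams, volumes):
    # multiply each stream by its per-stream volume factor and sum
    mixed = []
    for frame in zip(*streams):
        acc = sum(s * v for s, v in zip(frame, volumes))
        # clamp to the signed 16-bit range the hardware expects
        mixed.append(max(-32768, min(32767, int(acc))))
    return mixed

music = [1000, 2000, 3000]
voip = [500, -500, 30000]
print(mix([music, voip], [0.5, 1.0]))  # [1000, 500, 31500]
```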


The best we can do is one copy in total, from the SHM buffer directly into the DMA hardware buffer. I hope this clears up any confusion about what PulseAudio’s advertised “zero copy” capabilities means in practice.

However, memory copies is not the only thing you want to avoid to get good performance, which brings us to the next point:

Protocol improvements in 6.0

PulseAudio does pretty well CPU-wise for high latency loads (e g music playback), but a bit worse for low latency loads (e g VOIP, gaming). Or to put it another way, PulseAudio has a low per-sample cost, but there is still some optimisation that can be done per packet.

For every playback packet, there are three messages sent: from server to client saying “I need more data”, from client to server saying “here’s some data, I put it in SHM, at this address”, and then a third from server to client saying “thanks, I have no more use for this SHM data, please reclaim the memory”. The third message is not sent until the audio has actually been played back.
For every message, it means syscalls to write, read, and poll a unix socket. This overhead turned out to be significant enough to try to improve.

So instead of putting just the audio data into SHM, as of 6.0 we also put the messages into two SHM ringbuffers, one in each direction. For signalling we use eventfds. (There is also an optimisation layer on top of the eventfd that tries to avoid writing to the eventfd in case no one is currently waiting.) This is not so much for saving memory copies but to save syscalls.
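The idea can be sketched as a pair of single-producer, single-consumer ring buffers, one per direction. The class and message tuples below are purely illustrative, not PulseAudio's actual structures (which live in shared memory and signal via eventfds):

```python
class RingBuffer:
    """Toy single-producer, single-consumer ring buffer."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.write_idx = 0  # only the producer advances this
        self.read_idx = 0   # only the consumer advances this

    def push(self, msg):
        if self.write_idx - self.read_idx == self.capacity:
            return False  # full: the real code would signal and wait instead
        self.buf[self.write_idx % self.capacity] = msg
        self.write_idx += 1  # real code: then poke the eventfd if needed
        return True

    def pop(self):
        if self.read_idx == self.write_idx:
            return None  # empty
        msg = self.buf[self.read_idx % self.capacity]
        self.read_idx += 1
        return msg

to_server = RingBuffer(4)  # client -> server messages
to_client = RingBuffer(4)  # server -> client messages
to_server.push(("POK", 0, 4096))  # "here's data, in SHM at offset 0"
to_client.push(("RELEASE", 0))    # "thanks, please reclaim offset 0"
print(to_server.pop())  # ('POK', 0, 4096)
```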

From my own unscientific benchmarks (i e, running “top”), this saves us ~10% – 25% of CPU power in low latency use cases, half of that being on the client side.

on November 21, 2014 03:36 PM

I recently updated the PostBooks packages in Debian and Ubuntu to version 4.7. This is the version that was released in Ubuntu 14.10 (Utopic Unicorn) and is part of the upcoming Debian 8 (jessie) release.

Better prospects for Fedora and RHEL/CentOS/EPEL packages

As well as getting the packages ready, I've been in contact with xTuple helping them generalize their build system to make packaging easier. This has eliminated the need to patch the makefiles during the build. As well as making it easier to support the Debian/Ubuntu packages, this should make it far easier for somebody to create a spec file for RPM packaging too.

Debian wins a prize

While visiting xTupleCon 2014 in Norfolk, I was delighted to receive the Community Member of the Year award which I happily accepted not just for my own efforts but for the Debian Project as a whole.

Steve Hackbarth, Director of Product Development at xTuple, myself and the impressive Community Member of the Year trophy

This is a great example of the productive relationships that exist between Debian, upstream developers and the wider free software community and it is great to be part of a team that can synthesize the work from so many other developers into ready-to-run solutions on a 100% free software platform.

Receiving this award really made me think about all the effort that has gone into making it possible to apt-get install postbooks and all the people who have collectively done far more work than myself to make this possible:

Here is a screenshot of the xTuple web / JSCommunicator integration, which was one of the highlights of xTupleCon:

and gives a preview of the wide range of commercial opportunities that WebRTC is creating for software vendors to displace traditional telecommunications providers.

xTupleCon also gave me a great opportunity to see new features (like the xTuple / Drupal web shop integration) and hear about the success of consultants and their clients deploying xTuple/PostBooks in various scenarios. The product is extremely strong in meeting the needs of manufacturing and distribution and has gained a lot of traction in these industries in the US. Many of these features are equally applicable in other markets with a strong manufacturing industry such as Germany or the UK. However, it is also flexible enough to simply disable many of the specialized features and use it as a general purpose accounting solution for consulting and services businesses. This makes it a good option for many IT freelancers and support providers looking for a way to keep their business accounts in a genuinely open source solution with a strong SQL backend and a native Linux desktop interface.

on November 21, 2014 02:12 PM