November 24, 2014

Our Tools: MORE POWER

Stephen Michael Kellat

Once the blog post by Jono Bacon about seeking a reboot in community governance hit, multiple threads bloomed in several directions. Things have wandered away from the original topic of governance structures toward vaguer, more general issues. To an extent I metaphorically keep biting my tongue about saying much more in the thread.

I do know that I have put forward the notion that we attempt an export of the xubuntu-docs package documentation to EPUB format. This is, in part, to help lower a potential threshold to access. With the e-reader devices that exist today, you could read the documentation on a separate device while you sit at the computer. This is only meant as an exploratory, experimental notion rather than a commitment to ship.
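
As a rough sketch of how such an export could be tried (assuming the dbtoepub script shipped with the docbook-xsl tools is installed; 'index.xml' is a placeholder, as the actual top-level file in xubuntu-docs may be named differently):

# Hedged sketch: drive dbtoepub (shipped with docbook-xsl) from Python.
# 'index.xml' is a placeholder entry point, not the actual xubuntu-docs file.
import subprocess

subprocess.check_call(["dbtoepub", "index.xml"])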

In light of the feedback complaining about how difficult DocBook can be to work with, sometimes it is appropriate to test some of its power and show it off. DocBook has quite a lot of power if you have the ability to leverage it. With the variety of ways it can be exported to formats other than the HTML files we already see shipped in Xubuntu, we can test new ways of shipping.

To the outsider, many of the processes used in creating the various flavors of Ubuntu may seem like they could be simplified, as they appear unnecessarily complicated. In some cases, we have excess power and flexibility built in for future expansion. In the times between Long Term Support releases we may need to take the time to show those who wish to join the community the power of our toolsets and what we can do with them.

on November 24, 2014 12:00 AM

November 23, 2014

Analyzing public OpenPGP keys

Dimitri John Ledkov

The OpenPGP Message Format (RFC 4880) defines key structures and wire formats (OpenPGP packets) well. Thus when I looked into setting up a public key network (SKS) server, I quickly found pointers to dump files in said format for bootstrapping a key server.

I did not feel like experimenting with Python and instead opted for Go, where I found the http://code.google.com/p/go.crypto/openpgp/packet library, which has comprehensive support for parsing low-level OpenPGP structures. I've downloaded the SKS dump, verified its MD5SUM hashes (lolz), and went ahead to process them in Go.

With help from http://github.com/lib/pq and database/sql, I've written a small program to churn through all the dump files, filter for primary RSA keys (not subkeys), and inject them into a database table. The fields I have chosen to inject are the fingerprint, N, and E. N and E are the modulus and the public exponent of the RSA key pair; together they form the public part of an RSA keypair. So far, nothing fancy.

Next I've run an SQL query to see how unique things are... and found 92 unique N & E pairs that have from two up to fifteen duplicates. In total, 231 unique fingerprints use key material with a known duplicate on the public key network. That didn't sound good. It is also odd, given that over 940 000 other RSA keys managed to get enough entropy to pull a unique key out of the keyspace haystack (which is humongously huge, by the way).
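
The post does not show the query itself. Purely as an illustration of the idea (the author's pipeline uses Go and PostgreSQL; in this sketch sqlite3 stands in, and the rsa_keys(fingerprint, n, e) table is hypothetical), finding duplicated key material boils down to a GROUP BY/HAVING query:

# Illustrative sketch only: count (N, E) pairs that appear more than once.
# 'rsa_keys' is a hypothetical table name; sqlite3 stands in for PostgreSQL.
import sqlite3

conn = sqlite3.connect("keys.db")
rows = conn.execute(
    "SELECT n, e, COUNT(*) AS dups"
    " FROM rsa_keys"
    " GROUP BY n, e"
    " HAVING COUNT(*) > 1"
).fetchall()
print("%d duplicated (N, E) pairs" % len(rows))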

Having the list of the keys, I've fetched them, and they do not look like regular keys - their UIDs do not have names & emails; instead they look like something from the monkeysphere. The keys look like they were originally used for TLS and/or SSH authentication, but were converted into OpenPGP format and uploaded to the public key server. This reminded me of Debian's SSL key generation vulnerability, CVE-2008-0166. So these keys might have been generated with bad entropy by tools affected by that CVE and later converted to OpenPGP.

Looking at the openssl-blacklist package, it should be relatively easy for me to generate all possible affected RSA key pairs, and I believe all the other material that is hashed to generate the fingerprint is also available (RFC 4880#12.2). Thus it should be reasonably possible to generate the matching private keys, generate revocation certificates, and publish the revocation certificates with pointers to CVE-2008-0166 (or email them to the people who have signed the given monkeysphered keys). When I have a minute I will work on generating openpgp-blacklist type scripts to address this.

If anyone is interested in the Go source code I've written to process openpgp packets, please drop me a line and I'll publish it on github or something.
on November 23, 2014 09:15 PM

Awesome BSP in München

Ovidiu-Florin Bogdan

An awesome BSP just took place in München where teams from Kubuntu, Kolab, KDE PIM, Debian and LibreOffice came and planned the future and fixed bugs. This is my second year participating at this BSP and I must say it was an awesome experience. I got to see again my colleagues from Kubuntu and got to […]
on November 23, 2014 06:10 PM

A significant part of cooking is chemical science, though few people think of it in this way. But when you combine cooking with what people consider stereotypical chemistry –using & mixing things with long technical names– you can have even more fun.

Cheese as a Condiment

A typical method of adding cheese to things is simply to place grated cheese over a pile of food and melt it (usually in an oven). Now, one of the problems with this (as I see it) is that when you heat cheese it tends to split apart into milk solids and liquid milk fat, so you end up with unnecessary grease.

In my mind, the ideal cheese-as-a-condiment is smooth & creamy, such as that of a fondue, but your average "out-of-the-package" cheese does not melt this way. You can purchase one of several (disgusting) cheese products that give you this effect, but it's more fun to make one yourself, and you have the added benefit of knowing what goes into it.

It can then be used on nachos, for example.

Emulsification

One way to do this is to use a chemical emulsifier to make the liquid fats –the cheese– soluble in something they are normally not soluble in –such as water. Essentially, this is something that's done frequently in a factory setting to help make many processed cheese products, spreads, dips, etc.

Now there are a tonne of food-safe chemical emulsifiers, each with slightly different properties, that you could use, but the one that I have a stock of, and that works particularly well with milk fats like those in cheese, is sodium citrate –the salt of citric acid– which you can get from your friendly online distributor of science-y cooking products.

Many of these are also flavourless, or, given the relatively small amounts used in food, any flavour imparted is insignificant. They're essentially used for textural changes.

    Ingredients

  • 250 mL water*
  • 10 grams sodium citrate
  • 3-4 cups grated cheese –such as cheddar**

*if you're feeling experimental, you can use a different (water-based) liquid for additional flavour, such as wine or an infusion

**you can use whichever cheeses you fancy, but I'd avoid processed cheeses as they may have additives that could mess up the chemistry

    Directions

  1. In a pot, make a solution of 0.04 g/mL sodium citrate in water (the 10 grams in 250 mL above) –boil the water and dissolve the salt.
  2. Reduce the heat and begin to melt the cheese into the water a handful at a time while whisking constantly.
  3. When all the cheese has melted, keep stirring while the mixture thickens.
  4. Serve or use hot –keep it warm.

At the end of this, what you'll essentially have is a "cheese gel" which will stiffen as it cools but can easily be reheated to regain its smooth consistency.

When you've completed the emulsion you can add other ingredients to jazz it up a bit –some dried spices or chopped jalapenos, for example– before pouring it over things or using it as a dip. Do note: if you're pouring it over nachos, it's best to heat the chips first.

Another great use for your cheese gel is to pour it out, while hot, onto a baking sheet and let cool. Then you can cut it into squares for that perfect melt needed for the perfect cheeseburger.

on November 23, 2014 06:00 PM

S07E34 – The One with Unagi

Ubuntu Podcast from the UK LoCo

We’re back with Season Seven, Episode Thirty-four of the Ubuntu Podcast! Just Laura Cowen and Mark Johnson here again.

In this week’s show:

  • We discuss the Ind.ie crowdsourcing campaign.

  • We also discuss:

  • We share some Command Line Lurve (from ionagogo): livestreamer, which finds live streams on a page. It’s great for watching online feeds without Flash. Just point it at a web page and it finds all the streams. Run with “best” (or a specific stream type) and it launches your video player, such as VLC:
    livestreamer <page-url> best
    
  • And we read your feedback. Thanks for sending it in!

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on November 23, 2014 02:30 PM

Here’s a nice project if you’re bored and want to help make a very visual difference to KDE: port the Breeze icon theme to LibreOffice.

Wiki page up at https://community.kde.org/KDE_Visual_Design_Group/LibreOffice_Breeze

All help welcome

Open, Save and PDF icons are Breeze; all the rest are still to go.

 

on November 23, 2014 02:23 PM

KDE Promo Idea

Jonathan Riddell

Image: "We strongly suggest using KDE this Christmas"

New seasonal KDE marketing campaign.  Use Kubuntu to get off the naughty list.

 

on November 23, 2014 12:11 PM

Custom Wallpaper

Charles Profitt

I recently upgraded to Ubuntu 14.10 and wanted to adorn my desktop with some new wallpapers. Usually, I find several suitable wallpapers on the web, but this time I did not. I then decided to make my own and wanted to share the results. All the following wallpapers were put together using GIMP.

Plain Hex Template

Hex Template Two

Hex With Dwarf

Hex Dragon


on November 23, 2014 04:34 AM

Checking Links Post-Snow

Stephen Michael Kellat

There may have been a ton of snowfall over the past week in the part of the Lake Erie shore region susceptible to "Lake Effect" snow. We have had some warming up since.

Thankfully we haven't had infrastructure failures. There had been some fears of that. A week has come to a close and a new one is about to begin.

on November 23, 2014 12:00 AM

November 22, 2014

UPDATE - I’ve removed the silly US restriction.  I know there are more options in Europe, China, India, etc, but why shouldn’t you get access to the “open to the core” laptop!
This would definitely come with at least 3 USB ports (and at least one USB 3.0 port).

Since Jolla had success with crowdfunding a tablet, it’s a good time to see if we can get some mid-range Ubuntu laptops for sale to consumers in as many places as possible.  I’d like to get some ideas about whether there is enough demand for a very open $500 Ubuntu laptop.

Would you crowdfund this? (Core Goals)

  • 15″ 1080p Matte Screen
  • 720p Webcam with microphone
  • Spill-resistant keyboard that is nice to type on
  • Intel i3+ or AMD A6+
  • Built-in Intel or AMD graphics with no proprietary firmware
  • 4 GB Ram
  • 128 GB SSD (this would be the one component that might have to be proprietary as I’m not aware of another option)
  • Ethernet 10/100/1000
  • Wireless up to N
  • HDMI
  • SD card reader
  • CoreBoot (No proprietary BIOS)
  • Ubuntu 14.04 preloaded of course
  • Agreement with manufacturer to continue selling this laptop (or similar one) with Ubuntu preloaded to consumers for at least 3 years.

Stretch Goals? Or should they be core goals?

Will only be added if they don’t push the cost up significantly (or if everyone really wants them) and can be done with 100% open source software/firmware.

  • Touchscreen
  • Convertible to Tablet
  • GPS
  • FM Tuner (and built-in antenna)
  • Digital TV Tuner (and built-in antenna)
  • Ruggedized
  • Direct sunlight readable screen
  • “Frontlight” tech.  (think Amazon PaperWhite)
  • Bluetooth
  • Backlit keyboard
  • USB Power Adapter

Take my quick survey if you want to see this happen.  If at least 1000 people say “Yes,” I’ll approach manufacturers.   The first version might just end up being a Chromebook modified with better specs, but I think that would be fine.

Link to survey – http://goo.gl/forms/bwmBf92O1d

on November 22, 2014 09:37 PM

Blog Moved

Jonathan Riddell

KDE Project:

I've moved my developer blog to my vanity domain jriddell.org, which has hosted my personal blog since 1999 (before the word existed). Tags used are Planet KDE and Planet Ubuntu for the developer feeds.

Sorry no DCOP news on jriddell.org.

on November 22, 2014 03:21 PM

Release party in Barcelona

Rafael Carreras


Once more, and there have been 16 so far, ubuntaires celebrated the release party of the newest Ubuntu version, in this case, 14.10 Utopic Unicorn.

This time we went to Barcelona, to the Raval, right in the city centre, thanks to our friends at the TEB.

As always, we started by explaining what Ubuntu is and how our Catalan LoCo Team works, and later Núria Alonso from the TEB explained the Ubuntu migration done at the Xarxa Òmnia.


The installation room was full from the very first moment.


There was also a very fruitful self-learning workshop on how to make an Ubuntu metadistribution.


 

And in another room, there were two Arduino workshops.


 

And, of course, ubuntaires love to eat well.

 


 

Pictures by Martina Mayrhofer and Walter García, all rights reserved.

 
 
on November 22, 2014 02:32 PM
Hi folks,

Our Community Working Group has dwindled a bit, and some of our members have work that keeps them away from doing CWG work. So it is time to put out another call for volunteers.

The KDE community is growing, which is wonderful. In spite of that growth, we have less "police" type work to do these days. This leaves us more time to make positive efforts to keep the community healthy, and foster dialog and creativity within our teams.

One thing I've noticed is that listowners, IRC channel operators and forum moderators are doing an excellent job of keeping our communication channels friendly, welcoming and all-around helpful. Each of these leadership roles is crucial to keeping the community healthy.

Also, the effort to create the KDE Manifesto has adjusted KDE infrastructure to be directly and consciously supporting community values. The commitments section is particularly helpful.

Please write us at Community-wg@kde.org if you would like to become a part of our community gardening work.




on November 22, 2014 05:35 AM

 

"U can't touch this" Source

“U can’t touch this”[4] Source

“Touch-a touch-a touch-a touch me. I wanna be dirty.”[1] — Love, Your Dumb Phone

It’s not a problem with a dirty touch screen; that would be a stretch for an entire post. It’s a problem with the dirty power[2]: perhaps an even farther stretch. But, “I’m cold on a mission, so pull on back,”[4] and stretch yourself for a moment because your phone won’t stretch for you.

We’re constantly trying to stretch the battery life of our phones, but the phones keep demanding to be touched, which drains the battery. Phones have this “dirty power” over us, but maybe there are also some “spikes” in the power management of these dumb devices. The greatest feature is also the greatest flaw of the device: the fact that it has to be touched in order to react. Does it even react in the most effective way? What indication is there to let you know how the phone has been touched? Does the phone reduce the number of touches in order to save battery power? If it is not smart enough to do so, then maybe it shouldn’t have a touch screen at all!

Auto-brightness. “Can’t touch this.”[4]
Lock screen. “Can’t touch this.”[4]
Phone clock. “Can’t touch this.”[4]

Yes, your phone has these things, but they never seem to work at the right time. Never mind that I have to turn on the screen to check the time. These things currently seem to follow one set of rules instead of knowing when to activate. So when you “move slide your rump,”[4] you still end up with the infamous butt dial, and the “Dammit, Janet![1] My battery is about to die” situation.

There are already developments in these areas, which indicate that the dumb phone is truly on its last legs. “So wave your hands in the air.”[4] But, seriously, let’s reduce the number of touches, “get your face off the screen”[3] and live your life.

“Stop. Hammer time!”[4]

sop

[1] Song by Richard O’Brien
[2] Fartbarf is fun.
[3] Randall Ross, Community Leadership Summit 2014
[4] Excessively touched on “U Can’t Touch This” by MC Hammer

on November 22, 2014 03:36 AM

My Vivid Vervet has crazy hair

Elizabeth K. Joseph

Keeping with my Ubuntu toy tradition, I placed an order for a vervet stuffed toy, available in the US via: Miguel the Vervet Monkey.

He arrived today!

He’ll be coming along to his first Ubuntu event on December 10th, a San Francisco Ubuntu Hour.

on November 22, 2014 02:57 AM

The phrase "The Year of the Linux Desktop" is one we see being used by hopefuls, to describe a future in which desktop Linux has reached the masses.

But I'm more pragmatic, and would like to describe the past, tweaking this phrase to (I believe) accurately characterize 2011 as "The Year of the Linux Desktop Schism".

So let me tell you a little story about this schism.

A Long Time Ago in 2011

The Linux desktop user base was happily enjoying the status quo. We had (arguably) two major desktops, GNOME and KDE, with a few smaller, less-used desktops as well (mostly named with initialisms).

It was the heyday of GNOME 2 on the desktop, being the default desktop in many of the major distributions. But bubbling out of the ether of the GNOME Project was an idea for a new shell and an overhaul of GNOME, so GNOME 2 was brought to a close and GNOME Shell was born as the future of GNOME.

The Age of Dissent, Madness & Innovation

GNOME 3 and its new Shell did not sit well with everyone and many in the great blogosphere saw it as disastrous for GNOME and for users.

Much criticism was spouted and controversy raised, and many started searching for alternatives. But there were those who stood by their faithful project, seeing the new version for what it was: a new beginning for GNOME. They knew that beginnings are not perfect.

Nevertheless, with this massive split in the desktop market we saw much change. There came a rapid flurry of several new projects and a moonshot from one for human beings.

Ubuntu upgraded its fledgling "netbook interface" and promoted it to the desktop, calling it Unity, and it took off down a path to unite the desktop with other emerging platforms yet to come.

There was also much dissatisfaction with the abandonment of GNOME 2, so part of the community decided to lower their figurative pitchforks and use them to do some literal forking. They took up the remnants of this legacy desktop and used them to forge a new project. This project was to be named MATE and was to continue in the original spirit of GNOME 2.

The Linux Mint team, unsure of their future with GNOME under the Shell, created the "Mint GNOME Shell Extensions" pack for GNOME Shell. This addon to the new GNOME experience would eventually lead to the creation of Cinnamon, which itself was a fork of GNOME 3.

Despite being a relatively new arrival, the ambitious elementary team was developing the Pantheon desktop in relative secrecy for use in future versions of their OS, having previously relied on a slimmed-down GNOME 2. They were to become one of the most polished of them all.

And they have all lived happily ever since.

The end.

The Moral of the Story

All of these projects have been thriving in the three years since, and why? Because of their communities.

All that has occurred is what the Linux community is about, and it is exemplary of the freedom that it and the whole of open source represent. We have the freedom in open source to exact our own change or act upon what we may not agree with. We are not confined to a set of strictures; we are able to do what we feel is right and find other people who feel the same.

To deride and belittle others for acting in their freedom, or because they may not agree with you, is just wrong and not in keeping with the ethos of our community.

on November 22, 2014 12:00 AM

November 21, 2014

To ensure the quality of the Juju charm store, there are automatic processes that test charms on multiple cloud environments. These automated tests help identify the charms that need to be fixed. This has become so useful that charm tests are a requirement for a charm to become recommended in the charm store for the trusty release.

What are the goals of charm testing?

For Juju to be magic, the charms must always deploy, scale and relate as they were designed. The Juju charm store contains over 200 charms, and those charms can be deployed to more than 10 different cloud environments. That is a lot of environments in which to ensure charms work, which is why tests are now required!

Prerequisites

The Juju ecosystem team has created different tools to make writing tests easier. The charm-tools package has code that generates tests for charms. Amulet is a Python 3 library that makes it easier to work programmatically with units and whole deployments. To get started writing tests, you will need to install the charm-tools and amulet packages:

sudo add-apt-repository -y ppa:juju/stable
sudo apt-get update
sudo apt-get install -y charm-tools amulet

Now that the tools are installed, change directory to the charm directory and run the following command:

juju charm add tests

This command generates two executable files, 00-setup and 99-autogen, in the tests directory. The tests are prefixed with a number so they are run in the correct lexicographical order.

00-setup

The first file is a bash script that adds the Juju stable PPA repository, updates the package list, and installs the amulet package so the subsequent tests can use the Amulet library.

99-autogen

This file contains Python 3 code that uses the Amulet library. The class extends a unittest class, the standard unit testing framework for Python. The charm-tools test generator creates a skeleton test that deploys related charms and adds relations, so most of the work is done already.

This automated test is almost never a good enough test on its own. Ideal tests do a number of things:

  1. Deploy the charm and make sure it deploys successfully (no hook errors)
  2. Verify the service is running as expected on the remote unit (sudo service apache2 status).
  3. Change configuration values to verify users can set different values and the changes are reflected in the resulting unit (see the sketch below).
  4. Scale up. If the charm handles the peer relation make sure it works with multiple units.
  5. Validate the relationships to make sure the charm works with other related charms.

Most charms will need additional lines of code in the 99-autogen file to verify the service is running as expected. For example, if your charm implements the http interface, you can use the Python 3 requests package to verify that a valid webpage (or API) is responding:

# 'requests' is imported at the top of the test file
import requests

def test_website(self):
    unit = self.deployment.sentry.unit['<charm-name>/0']
    url = 'http://%s' % unit['public-address']
    response = requests.get(url)
    # Raise an exception if the url was not a valid web page.
    response.raise_for_status()
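
Points 3 and 4 in the list above can be exercised through the same deployment object. The following is a hedged sketch rather than a drop-in test: the charm name, config option and service command are illustrative, and it assumes Amulet's configure/wait/run helpers:

def test_config_change(self):
    # Set a config option, wait for hooks to settle, then verify the
    # service still runs on the unit (all names here are illustrative).
    self.deployment.configure('<charm-name>', {'port': 8080})
    self.deployment.sentry.wait()
    unit = self.deployment.sentry.unit['<charm-name>/0']
    output, code = unit.run('sudo service apache2 status')
    self.assertEqual(code, 0)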

What if I don't know python?

Charm tests can be written in languages other than Python. The automated test program, called bundletester, will run the test target in a Makefile if one exists. Including a 'test' target allows a charm author to build and run tests from the Makefile.

Bundletester will run any executable files in the tests directory of a charm. There are example tests written in bash in the Juju documentation. A test fails if the executable returns a value other than zero.

Where can I get more information about writing charm tests?

There are several videos on youtube.com about charm testing:
* Charm testing video
Documentation on charm testing can be found here:
* https://juju.ubuntu.com/docs/authors-testing.html
Documentation on Amulet:
* https://juju.ubuntu.com/docs/tools-amulet.html
Check out the lamp charm as an example of multiple Amulet tests:
* http://bazaar.launchpad.net/~charmers/charms/precise/lamp/trunk/files

on November 21, 2014 11:15 PM
This is a short little blog post I've been wanting to get out there ever since I ran across the erlport project a few years ago. Erlang was built for fault-tolerance. It had a goal of unprecedented uptimes, and these have been achieved. It powers 40% of our world's telecommunications traffic. It's capable of supporting amazing levels of concurrency (remember the 2007 announcement about the performance of YAWS vs. Apache?).

With this knowledge in mind, a common mistake by folks new to Erlang is to think these performance characteristics will be applicable to their own particular domain. This has often resulted in failure, disappointment, and the unjust blaming of Erlang. If you want to process huge files, do lots of string manipulation, or crunch tons of numbers, Erlang's not your bag, baby. Try Python or Julia.

But then, you may be thinking: I like supervision trees. I have long-running processes that I want to be managed per the rules I establish. I want to run lots of jobs in parallel on my 64-core box. I want to run jobs in parallel over the network on 64 of my 64-core boxes. Python's the right tool for the jobs, but I wish I could manage them with Erlang.

(There are sooo many other options for the use cases above, many of them really excellent. But this post is about Erlang/LFE :-)).

Traditionally, if you want to run other languages with Erlang in a reliable way that doesn't bring your Erlang nodes down with badly behaved code, you use Ports (more info is available in the Interoperability Guide). This is what JInterface builds upon (and, incidentally, allows for some pretty cool integration with Clojure). However, this still leaves a pretty significant burden on the Python or Ruby developer for any serious application needs (quick one-offs that only use one or two data types are not that big a deal).

erlport was created by Dmitry Vasiliev in 2009 in an effort to solve just this problem, making it easier to use and integrate Erlang with more common languages like Python and Ruby. The project is maintained, and in fact has just received a few updates. Below, we'll demonstrate some usage in LFE with Python 3.
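
To give a flavour of the Python side before we begin (a sketch of my own, not code from the demo repo; the module and function names are purely illustrative), the code erlport calls into is just an ordinary Python module:

# fib.py - an ordinary Python module. On the Erlang/LFE side, erlport
# starts a Python interpreter and calls functions like this one (e.g.
# via python:call/4); nothing erlport-specific is needed in the module.
def fib(n):
    """Return the n-th Fibonacci number."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a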

If you want to follow along, there's a demo repo you can check out:
Change into the repo directory and set up your Python environment:
Next, switch over to the LFE directory, and fire up a REPL:
Note that this will first download the necessary dependencies and compile them (that's what the [snip] is eliding).

Now we're ready to take erlport for a quick trip down to the local:
And that's all there is to it :-)

Perhaps in a future post we can dive into the internals, showing you more of the glory that is erlport. Even better, we could look at more compelling example usage, approaching some of the functionality offered by such projects as Disco or Anaconda.


on November 21, 2014 11:08 PM
Because I was asleep at the wheel (err, keyboard) yesterday I failed to express my appreciation for some folks. It's a day for hugging! And I missed it!

I gave everyone a shoutout on social media, but since planet looks best overrun with thank you posts, I shall blog it as well!

Thank you to:

David Planella for being the rock that has anchored the team.
Leo Arias for being super awesome and making testing what it is today on all the core apps.
Carla Sella for working tirelessly on many many different things in the years I've known her. She never gives up (even when I've tried too!), and has many successes to her name for that reason.
Nekhelesh Ramananthan for always being willing to let clock app be the guinea pig
Elfy, for rocking the manual tests project. Seriously awesome work. Every time you use the tracker, just know elfy has been a part of making that testcase happen.
Jean-Baptiste Lallement and Martin Pitt for making some of my many wishes come true over the years with quality community efforts. Autopkgtest is but one of these.

And many more. Plus some I've forgotten. I can't give hugs to everyone, but I'm willing to try!

To everyone in the ubuntu community, thanks for making ubuntu the wonderful community it is!
on November 21, 2014 10:09 PM

The Secret History of Lambda

Duncan McGreggor

Being a bit of an origins nut (I always want to know how something came to be or why it is a certain way), one of the things that always bothered me with regard to Lisp was that no one seemed to be talking about the origin of lambda in the lambda calculus. I suppose if I wasn't lazy, I'd have gone to a library and spent some time looking it up. But since I was lazy, I used Wikipedia. Sadly, I never got what I wanted: no history of lambda. [1] Well, certainly some information about the history of the lambda calculus, but not the actual character or term in that context.

Why lambda? Why not gamma or delta? Or Siddham ṇḍha?

To my great relief, this question was finally answered when I was reading one of the best Lisp books I've ever read: Peter Norvig's Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. I'll save my discussion of that book for later; right now I'm going to focus on the paragraph at location 821 of my Kindle edition of the book. [2]

The story goes something like this:
  • Between 1910 and 1913, Alfred Whitehead and Bertrand Russell published three volumes of their Principia Mathematica, a work whose purpose was to derive all of mathematics from basic principles in logic. In these tomes, they cover two types of functions: the familiar descriptive functions (defined using relations), and then propositional functions. [3]
  • Within the context of propositional functions, the authors make a typographical distinction between free variables and bound variables (or functions that have an actual name): bound variables use circumflex notation, e.g. x̂(x+x). [4]
  • Around 1928, Church (and then later, with his grad students Stephen Kleene and J. B. Rosser) started attempting to improve upon Russell and Whitehead regarding a foundation for logic. [5]
  • Reportedly, Church stated that the use of x̂ in the Principia was for class abstractions, and he needed to distinguish that from function abstractions, so he used ∧x [6] or ^x [7] for the latter.
  • However, these proved to be awkward for different reasons, and an uppercase lambda was used: Λx. [8].
  • More awkwardness followed, as this was too easily confused with other symbols (perhaps uppercase delta? logical and?). Therefore, he substituted the lowercase λ. [9]
  • John McCarthy was a student of Alonzo Church and, as such, had inherited Church's notation for functions. When McCarthy invented Lisp in the late 1950s, he used the lambda notation for creating functions, though unlike Church, he spelled it out. [10] 
It seems that our beloved lambda [11], then, is an accident in typography more than anything else.

Somehow, this endears lambda to me even more ;-)



[1] As you can see from the rest of the footnotes, I've done some research since then and have found other references to this history of the lambda notation.

[2] Norvig, Peter (1991-10-15). Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp (Kindle Locations 821-829). Elsevier Science - A. Kindle Edition. The paragraph in question is quoted here:
The name lambda comes from the mathematician Alonzo Church’s notation for functions (Church 1941). Lisp usually prefers expressive names over terse Greek letters, but lambda is an exception. A better name would be make-function. Lambda derives from the notation in Russell and Whitehead’s Principia Mathematica, which used a caret over bound variables: x̂(x + x). Church wanted a one-dimensional string, so he moved the caret in front: ^x(x + x). The caret looked funny with nothing below it, so Church switched to the closest thing, an uppercase lambda, Λx(x + x). The Λ was easily confused with other symbols, so eventually the lowercase lambda was substituted: λx(x + x). John McCarthy was a student of Church’s at Princeton, so when McCarthy invented Lisp in 1958, he adopted the lambda notation. There were no Greek letters on the keypunches of that era, so McCarthy used (lambda (x) (+ x x)), and it has survived to this day.
[3] http://plato.stanford.edu/entries/pm-notation/#4

[4] Norvig, 1991, Location 821.

[5] History of Lambda-calculus and Combinatory Logic, page 7.

[6] Ibid.

[7] Norvig, 1991, Location 821.

[8] Ibid.

[9] Looking at Church's works online, he uses lambda notation in his 1932 paper A Set of Postulates for the Foundation of Logic. His preceding papers upon which the seminal 1932 is based On the Law of Excluded Middle (1928) and Alternatives to Zermelo's Assumption (1927), make no reference to lambda notation. As such, A Set of Postulates for the Foundation of Logic seems to be his first paper that references lambda.

[10] Norvig indicates that this is simply due to the limitations of the keypunches in the 1950s that did not have keys for Greek letters.

[11] Alex Martelli is not a fan of lambda in the context of Python, and though a good friend of Peter Norvig, I've heard Alex refer to lambda as an abomination :-) So, perhaps not beloved for everyone. In fact, Peter Norvig himself wrote (see above) that a better name would have been make-function.


on November 21, 2014 09:12 PM

The quotes below are real(ish).

"Hi honey, did you just call me? I got a weird message that sounded like you were in some kind of trouble. All I could hear was traffic noise and sirens..."

"I'm sorry. I must have dialed your number by mistake. I'm not in the habit of dialing my ex-boyfriends, but since you asked, would you like to go out with me again? One more try?"

"Once a friend called me and I heard him fighting with his wife. It sounded pretty bad."

"I got a voicemail one time and it was this guy yelling at me in Hindi for almost 5 minutes. The strange thing is, I don't speak Hindi."

"I remember once my friend dialed me. I called back and left a message asking whether it was actually the owner or...

...the butt."


It's called "butt dialing" in my parts of the world, or "purse dialing" (if one carries a purse), or sometimes just called pocket dialing: That accidental event where something presses the phone and it dials a number in memory without the knowlege of its owner.

After hearing these phone stories, I'm reminded that humanity isn't perfect. Among other things, we have worries, regrets, ex's, outbursts, frustrations, and maybe even laziness. One might be inclined to write these occurrences off as natural or inevitable. But, let's reflect a little. Were the people that this happened to any happier for it? Did it improve their lives? I tend to think it created unnecessary stress. Were they to blame? Was this preventable?

"Smart" phones. I'm inclined to call you what you are: The butt of technology.

We're not living in the 90's anymore. Sure, there was a time when phones had real keys and possibly weren't lockable and maybe were even prone to the occasional purse dial. Those days are long gone. "Smart" phones, you know when you're in a pocket or a purse. Deal with it. You are as dumb as my first feature phone. Actually, you are dumber. At least my first feature phone had a keyboard cover.

Folks, I hope that in my lifetime we'll actually see a phone that is truly smart. Perhaps the Ubuntu Phone will make that hope a reality.

I can see the billboards now. "Ubuntu Phone. It Will Save Your Butt." (Insert your imagined inappropriate billboard photo alongside the caption. ;)

Do you have a great butt dialing story? Please share it in the comments.

--


No people were harmed in the making of this article. And not one person who shared their story is or was a "user". They are real people that were simply excluded from the decisions that made their phones dumb.

Image: Gwyneth Anne Bronwynne Jones (The Daring Librarian), CC BY-SA 2.0
https://www.flickr.com/photos/info_grrl/

on November 21, 2014 07:00 PM

git your blog

Walter Lapchynski

So I deleted my whole website by accident.

Yep, it wasn't very fun. Luckily, Linode's Backup Service saved the day. Though they back up the whole machine, it was easy to restore to the linode, change the configuration to use the required partition as a block device, reboot, and then manually mount the block device. At that point, restoration was a cp away.

The reason why this all happened is because I was working on the final piece to my ideal blogging workflow: putting everything under version control.

The problem came when I tried to initialize a repository in my current web folder. I mean, it worked, and I could clone the repo on my computer, but I couldn't push. Worse yet, I got something scary back:

remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: error: is denied, because it will make the index and work tree inconsistent
remote: error: with what you pushed, and will require 'git reset --hard' to match
remote: error: the work tree to HEAD.
remote: error:
remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
remote: error: its current branch; however, this is not recommended unless you
remote: error: arranged to update its work tree to match what you pushed in some
remote: error: other way.
remote: error:
remote: error: To squelch this message and still keep the default behaviour, set
remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.

So in the process of struggling with this back and forth between local and remote, I killed my files. Don't you usually panic when you get some long error message that doesn't make a darn bit of sense?

Yeah, well, I guess I kind of got the idea, but it wasn't entirely clear. The key point is that we're trying to push to a non-bare folder— i.e. one that includes all the tracked files— on its currently checked-out branch.

So let's move to the solution: don't do this. You could push to a different branch and then manually merge on the remote, but merges aren't always guaranteed to succeed. Why not do something entirely different? Something more proper.

First, start with the remote:

# important: make a new folder!
git init --bare ~/path/to/some/new/folder

Then local:

git clone user@server:path/to/the/aforementioned/folder
cd folder
# make some changes
git add -A
git commit -am "initial commit or whatever you want to say"
git push

If you check out what's in that folder on the remote, you'll find it has no tracked files. A bare repo is basically just an index: a place to pull from and push to. You're not going to go there and start changing files and getting git all confused.

Now here's the magic part: in the hooks subfolder of your folder, create a new executable file called post-receive containing the following:

#!/usr/bin/env sh
export GIT_WORK_TREE=/path/to/your/final/live/folder
git checkout -f master
# add any other commands that need to happen to rebuild your site, e.g.:
# blogofile build

Assuming you've already committed some changes, go ahead and run it and check your website.

Pretty cool, huh? Well, it gets even better. The next push you do will automatically update your website for you. So now, for me, an update to the website is just a local push away. No need to even log in to the server anymore.

There are other solutions to this problem but this one seems to be the most consistent and easy.

on November 21, 2014 03:42 PM

This is a technical post about PulseAudio internals and the protocol improvements in the upcoming PulseAudio 6.0 release.

PulseAudio memory copies and buffering

PulseAudio is said to have a “zero-copy” architecture. So let’s look at what copies and buffers are involved in a typical playback scenario.

Client side

When the PulseAudio server and client run as the same user, PulseAudio enables shared memory (SHM) for audio data. (In other cases, SHM is disabled for security reasons.) Applications can use pa_stream_begin_write to get a pointer directly into the SHM buffer. When using pa_stream_write or the ALSA plugin, there will be one memory copy into the SHM.

Server resampling and remapping

On the server side, the server might need to convert the stream into a format that fits the hardware (and potential other streams that might be running simultaneously). This step is skipped if deemed unnecessary.

First, the samples are converted to either signed 16-bit or float 32-bit (mainly depending on resampler requirements).
Second, in case resampling is necessary, we make use of external resampler libraries for this, the default being speex.
Third, if remapping is necessary, e.g. if the input is mono and the output is stereo, that is performed as well. Finally, the samples are converted to a format that the hardware supports.

So, in the worst case, there might be up to four different buffers involved here (first: after converting to the “work format”; second: after resampling; third: after remapping; fourth: after converting to a hardware-supported format), and in the best case, this step is entirely skipped.

Mixing and hardware output

PulseAudio’s built-in mixer multiplies each channel of each stream with a volume factor and writes the result to the hardware. In case the hardware supports mmap (memory mapping), we write the mix result directly into the DMA buffers.

Summary

The best we can do is one copy in total, from the SHM buffer directly into the DMA hardware buffer. I hope this clears up any confusion about what PulseAudio’s advertised “zero copy” capability means in practice.

However, memory copies are not the only thing you want to avoid to get good performance, which brings us to the next point:

Protocol improvements in 6.0

PulseAudio does pretty well CPU-wise for high-latency loads (e.g. music playback), but a bit worse for low-latency loads (e.g. VOIP, gaming). Or to put it another way, PulseAudio has a low per-sample cost, but there is still some optimisation that can be done per packet.

For every playback packet, three messages are sent: one from server to client saying “I need more data”, one from client to server saying “here’s some data, I put it in SHM, at this address”, and a third from server to client saying “thanks, I have no more use for this SHM data, please reclaim the memory”. The third message is not sent until the audio has actually been played back.
Every message means syscalls to write, read, and poll a unix socket. This overhead turned out to be significant enough to be worth improving.

So instead of putting just the audio data into SHM, as of 6.0 we also put the messages into two SHM ringbuffers, one in each direction. For signalling we use eventfds. (There is also an optimisation layer on top of the eventfd that tries to avoid writing to it in case no one is currently waiting.) This is not so much to save memory copies as to save syscalls.
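
To make the signalling concrete, here is a conceptual sketch only, not PulseAudio source (Linux-only, and it uses Python's os.eventfd wrappers, available since Python 3.10):

# Conceptual sketch: a consumer sleeps on an eventfd until the producer
# signals that new messages are waiting in a (hypothetical) SHM ringbuffer.
import os
import threading

efd = os.eventfd(0)

def consumer():
    os.eventfd_read(efd)  # blocks until signalled, then resets the counter
    print("woken up: drain messages from the ringbuffer")

t = threading.Thread(target=consumer)
t.start()
# Producer: write messages into the ringbuffer, then signal the consumer.
os.eventfd_write(efd, 1)
t.join()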

From my own unscientific benchmarks (i.e., running “top”), this saves us ~10%-25% of CPU power in low-latency use cases, half of that being on the client side.

on November 21, 2014 03:36 PM

I recently updated the PostBooks packages in Debian and Ubuntu to version 4.7. This is the version that was released in Ubuntu 14.10 (Utopic Unicorn) and is part of the upcoming Debian 8 (jessie) release.

Better prospects for Fedora and RHEL/CentOS/EPEL packages

As well as getting the packages ready, I've been in contact with xTuple, helping them generalize their build system to make packaging easier. This has eliminated the need to patch the makefiles during the build. As well as making it easier to support the Debian/Ubuntu packages, this should make it far easier for somebody to create a spec file for RPM packaging too.

Debian wins a prize

While visiting xTupleCon 2014 in Norfolk, I was delighted to receive the Community Member of the Year award, which I happily accepted not just for my own efforts but for the Debian Project as a whole.

Steve Hackbarth, Director of Product Development at xTuple, myself and the impressive Community Member of the Year trophy

This is a great example of the productive relationships that exist between Debian, upstream developers and the wider free software community and it is great to be part of a team that can synthesize the work from so many other developers into ready-to-run solutions on a 100% free software platform.

Receiving this award really made me think about all the effort that has gone into making it possible to apt-get install postbooks, and about all the people who have collectively done far more work than myself to make this possible.

Here is a screenshot of the xTuple web / JSCommunicator integration. It was one of the highlights of xTupleCon and gives a preview of the wide range of commercial opportunities that WebRTC is creating for software vendors to displace traditional telecommunications providers.

xTupleCon also gave me a great opportunity to see new features (like the xTuple / Drupal web shop integration) and hear about the success of consultants and their clients deploying xTuple/PostBooks in various scenarios. The product is extremely strong in meeting the needs of manufacturing and distribution and has gained a lot of traction in these industries in the US. Many of these features are equally applicable in other markets with a strong manufacturing industry such as Germany or the UK. However, it is also flexible enough to simply disable many of the specialized features and use it as a general purpose accounting solution for consulting and services businesses. This makes it a good option for many IT freelancers and support providers looking for a way to keep their business accounts in a genuinely open source solution with a strong SQL backend and a native Linux desktop interface.

on November 21, 2014 02:12 PM

I swear, I find out about some new event Ubuntu does every day. How is it that I've been around Ubuntu for as long as I have and I've only now heard about this?

Well, in any case, today is Ubuntu Community Appreciation Day, where we give thanks to the humans (remember, ubuntu means humanity!) that have so graciously donated their time to make Ubuntu a reality.

I have a lot of people to thank in the community. We have some really exceptional people about. I really feel like I could make the world's longest blog post just trying to list them all. Several folks already have!

Instead, I'll point out a major player in the community who is pretty unseen these days.

Phill Westside was a major contributor to Lubuntu. He was there when I first came to #lubuntu so many moons ago. His friendly, inviting demeanour was one of the things that kept me sticking around after my support request was met. Phill took it upon himself to encourage me just as he had with others and slowly I came to contribute more and more.

Sadly, some people in high rankings in the community failed to see Phill's value for whatever reason. I'm not sure I totally understand but I think the barrage of opinions that came from Jono Bacon's call for reform in Ubuntu governance may offer some hint. Phill's no longer an Ubuntu member and is rarely seen in the typical places in the community.

Yet he still helps out on #lubuntu, still helps with Lubuntu ISO testing, still reposts Lubuntu news on Facebook, still contributes to the Lubuntu mailing lists, still tries to help herd the cats as it were, though he's handed off titles to others (that's how I'm the Release Manager and Head of QA!). tl;dr, Phill is still a major contributor to Ubuntu.

Did I mention he's a great guy to hang out with, too? I've never met him face to face, but I'm sure if I did, I'd give him one heck of a big ole hug.

Thanks, Phill!

on November 21, 2014 07:55 AM

Today is Ubuntu Community Appreciation Day and I wanted to recognize several people who have helped me along my journey within the Ubuntu Community.

Elizabeth Krumbach Joseph
Lyz has been a friend for years. We met when I was just transitioning from using Windows to using Linux. The Ubuntu New York LoCo was holding its bi-annual release party at the Holiday Inn in Waterloo, NY on November 8th, 2009. Lyz gave a presentation that day, “Who Uses and Contributes to Open Source Projects (And how you can too!)”, and helped serve as a guide for the New York LoCo team as it sought to become an approved LoCo team. Lyz is an amazing person who has given me advice over the last five years. She contributes her energies to the Ubuntu project with a commitment and passion that have both my respect and admiration.

Thank you for all you have done Lyz!

Jorge Castro at FOSSCON in Rochester, NY

Jorge Castro
Jorge is the first ‘Ubuntu celebrity’ I interacted with. When I was helping to organize FOSSCON at RIT in Rochester, NY, I contacted Jorge to ask if he would attend and present at the conference. I think Jorge’s participation helped us attract attendees the first year, and I was grateful that he was willing to attend. FOSSCON has since become a successful conference under the guidance of my friend Jonathan Simpson. Jorge also encouraged me to apply for sponsorship to an Ubuntu Developer Summit, which culminated in my being sponsored and attending my first UDS. Jorge is a person who is always willing to help others, with great energy and a smile. He is an awesome contributor to the Ubuntu Community and I am thankful to have met him in person.

Jorge you inspire us with your advice to Just Do It!

Jono Bacon
At my first UDS I was in awe of the people around me. They were brilliant high energy people committed to Ubuntu and open source. There was a fantastic energy and passion in every session I attended. While I had offered what thoughts I had and signed up to undertake work items in many sessions I felt like a small fish in a sea of very big fish. It was Jono who took the time to let me know that he was impressed with my willingness to speak up, volunteer to undertake work and get things done. He made me feel as though my contributions were appreciated. It is an awesome feeling I will remember for the rest of my life. He inspired me that day to continue to contribute and to help others do the same.

Jono, you have my utmost respect for your ability to inspire people to take on important work and make the world a better place.

Mark with a student from Poland

Mark Shuttleworth
While many would thank Mark for his unique vision for Ubuntu or his massive contribution of money to fund the project, I would like to thank him for the personal touch he shows members of the community. Mark took the time to autograph a picture for my young son, who was impressed that I knew a person who had been in space. To this day my son tells his peers at school about the picture and keeps it on his nightstand. I also remember a young man at his first UDS who had a great idea and wanted to present it to Mark. I mentioned this to Mark and he immediately made time to meet the young man and listened intently to his idea. The young man felt he had a limited ability to impact the project as a college student from Poland, but after speaking with Mark he was inspired and felt that he could make a difference in his local community and in the Ubuntu Project. To this day I am amazed at the passion to do good that I have seen Mark exhibit.

Thanks for creating the project Mark; you are truly amazing.

Laura Czajkowski
I have worked with Laura on the LoCo Council and on the Community Council, and she is a fantastically dedicated, hard-working person who is very passionate about Ubuntu LoCo Teams. She is an advocate for women in technology and open source. Laura has helped move many projects along and is one of the hardest-working people I have ever met. It is amazing how much work she does behind the scenes without ever seeking recognition or thanks.

Thank you, Laura, for all your hard work and dedication to the Ubuntu Community.

Brian Neil
Brian is one of the first New York Ubuntu LoCo members I met. We met at Wegman’s in Rochester, NY on November 6th, 2008 with the intention of reviving the NY LoCo team. Over the next several years Brian played a key role in helping me expand the activities of the team. He helped organize the launch parties, presentations, IRC meetings and other activities. Brian helped man many booths at local technology events and was instrumental in getting the team copies of CDs before we were eligible to receive them from Canonical.

Thank you Brian!

Daniel Holbach
What a truly amazing person! Daniel is very thoughtful and understanding when dealing with important issues in the Ubuntu community. He takes on multiple tasks with ease and is always cheerful and energetic. He helps to keep the Community Council organized and on task. When Daniel contributes his thoughts they are always well thought out and of high value.

Daniel you are awesome my friend!

The Ubuntu Community is filled with unique, intelligent and amazing people. There is not enough space to mention everyone, but I truly feel enriched for having met many of you either in-person or online. Each and every one of you help make the Ubuntu Community amazing!


on November 21, 2014 03:26 AM

Community Appreciation Day

José Antonio Rey

And again, I don’t know how to start a blog post. I believe that one of my weak points is that I don’t know how to start writing things down. But meh, we’re here because it’s the Ubuntu Community Appreciation Day. And here I am, part of this huge community for more than three years. It’s been an awesome experience ever since I joined, and I am grateful to a whole bunch of people.

I know it may sound like a cliché, but seriously, listing all the people who I have met and contributed with in the community would be basically impossible for me. It would be a never-ending list! All I can say right now is that I am so, so thankful for crossing paths with so many of them. Developers, translators, designers and more: the Ubuntu community is a diverse one, with people united by one thing: Ubuntu.

When I joined the community I was a kind-of disoriented 14-year-old guy. As time passed, the community has helped me develop skills, from improving my English (Spanish is my native language, for those who didn’t know) to getting me started in programming (a thing that I didn’t know about a couple of years ago!). And I’ve formed great friendships along the way.

Again, all I can say is I am forever grateful to all those people who I have worked with, and to those who I haven’t, too. We are working on what’s the future of open computing, and all of this wouldn’t be possible without you. Whether you have contributed in the past or are still contributing to the community, rest assured that you have helped build this huge community.

Thank you. Sincerely, thank you.


on November 21, 2014 02:48 AM
See https://wiki.ubuntu.com/UCADay for more about this lovely initiative.

Thank you maco/Mackenzie Morgan for getting me involved in Ubuntu Women and onto freenode.

Thank you akk/Akkana Peck, Pleia2/Lyz Joseph, Pendulum/Penelope Stow, belkinsa/Svetlana Belkin and so many more of the Ubuntu Women for being calm and competent, and energizing the effort to keep Ubuntu welcoming to all.

Thank you to my Kubuntu team, Riddell/Jonathan Riddell, apachelogger/Harald Sitter, shadeslayer/Rohan Garg, yofel/Philip Muscovak, ScottK/Scott Kitterman and sgclark/Scarlett Clark for your energy, intelligence and wonderful work. Your packaging and tooling makes it all happen. The great people who help users on the ML and in IRC and on the forums keep us going as well. And the folks who test, who are willing to break their systems so the rest of us don't have to: thank you!

There are so many people (some of the same ones!) to thank in KDE, but that's a separate blogpost. Your software keeps me working and online.

What a great community working together for the betterment of humanity.

Ubuntu: human kindness.
on November 21, 2014 12:41 AM

Ubuntu Community Appreciation Day

Today is Ubuntu Community Appreciation Day and I wanted to quickly recognize the following people, but before doing so, I want to thank all the contributors that make the Ubuntu Community what it is.

Elizabeth Krumbach Joseph

Elizabeth is a stellar community contributor who has provided solid leadership and mentorship to thousands of Ubuntu Contributors over the years. She is always available to lend an ear to a Community Contributor and provide advice. Her leadership through the Community Council has been amazing and she has always done what is in the best interest of the Community.

Charles Profitt

Charles is a friend of the Community and a long-time contributor who always provides excellent and sensible feedback as we have discussions in the community. He is among the few who will always call it as he sees it and always has the community’s best interest in mind. He was very helpful when I first started building communities in Ubuntu, sharing his own experiences and showing me how to get through the bureaucracy and still do awesome things.

Michael Hall

Michael is a Canonical employee who started as a Community Contributor, and of all the Canonical employees I have met, he has always seemed best able to balance his role at Canonical with his community contributions. He is always fair when dealing with contributors and has an uncanny ability to see things through the Community lens, which I think many at Canonical cannot. I appreciate his leadership on the Community Council.

Thanks again to all those who make Ubuntu one of the best Linux distros available for Desktop, Server and Cloud! You all rock!

on November 21, 2014 12:16 AM

November 20, 2014

In light of Community Appreciation Day, I would like to thank everyone in the Ubuntu Community for doing a great job contributing to Ubuntu, from promoting it to fixing bugs, and from leading events to teaching others. There are two people and one group that I would like to really, really thank.

The first one is Elizabeth Krumbach Joseph of Ubuntu Women. She was the first person to interact with me when I started last year. From that point on, she mentored me (not formally, but in an organic way) on how to do things within the Community, like how to reply on mailing lists in a way that is readable. She also supported me with the various ideas that I came up with.

Ubuntu Ohio Team’s very fine leader, Stephen Michael Kellat, is the next one.  He mentored me on how to deal with the state of our LoCo and how to think in a different way on certain topics.

The group of people who I want to thank is Phil Whiteside and the Lubuntu Community (mainly the folks of the Lubuntu Admins team).

P.S. I would like to thank Michael Hall for his blog post.


on November 20, 2014 09:46 PM

When things are moving fast and there’s still a lot of work to do, it’s sometimes easy to forget to stop and take the time to say “thank you” to the people that are helping you and the rest of the community. So every November 20th we in Ubuntu have a Community Appreciation Day, to remind us all of the importance of those two little words. We should of course all be saying it every day, but having a reminder like this helps when things get busy.

Like so many who have already posted their appreciation have said, it would be impossible for me to thank everybody I want to thank. Even if I spent all day on this post, I wouldn’t be able to mention even half of them.  So instead I’m going to highlight two people specifically.

First I want to thank Scarlett Clark from the Kubuntu community. In the lead-up to this last Ubuntu Online Summit we didn’t have enough track leads on the Users track, which is one that I really wanted to see more active this time around. The track leads from the previous UOS couldn’t do it because of personal or work schedules, and as time was getting scarce I was really in a bind to find someone. I put out a general call for help in one of the Kubuntu IRC channels, and Scarlett was quick to volunteer. I really appreciated her enthusiasm then, and even more the work that she put in as a first-time track lead to help make the Users track a success. So thank you Scarlett.

Next, I really really want to say thank you to Svetlana Belkin, who seems to be contributing in almost every part of Ubuntu these days (including ones I barely know about, like Ubuntu Scientists). She was also a repeat track lead last UOS for the Community track, and has been contributing a lot of great feedback and ideas on ways to make our amazing community even better. Most importantly, in my opinion, she’s trying to re-start the Ubuntu Leadership team, which I think is needed now more than ever, and which I really want to become more active in once I get through some deadline-bound work. I would encourage anybody else who is a leader in the community, or who wants to be one, to join her in that. And thank you, Svetlana, for everything that you do.

It is both a joy and a privilege to be able to work with people like Scarlett and Svetlana, and everybody else in the Ubuntu community. Today more than ever I am reminded about how lucky I am to be a part of it.

on November 20, 2014 08:44 PM

For this year’s Ubuntu Community Appreciation day I’d like to thank Jose Antonio Rey for his tireless contribution to Juju Charms and for running Ubuntu on Air.

on November 20, 2014 08:29 PM

Appreciation for Riccardo Padovani

Nekhelesh Ramananthan

This is my first time participating in the Ubuntu Community Appreciation Day. I think it is a great idea to publicly acknowledge the work of others and thank them for their work to improve Ubuntu. After all, Ubuntu is a community where people come together to collaborate, have fun and bring technology to the masses in a humane fashion.

Anyway, the person I’d like to thank is Riccardo Padovani, whose contributions span several apps like Reminders, the Ubuntu Browser, Clock and Calculator, as well as various personal projects. His work shows how you can get involved with the applications you use daily and improve them. Riccardo is a beacon of inspiration for others, myself included.

It is definitely a challenge to juggle University and open-source work, and by the looks of it, he seems to have achieved a perfect equilibrium.

Thanks Riccardo for everything and keep up the good work!

on November 20, 2014 02:37 PM

Amnesty International is getting a lot of attention with the launch of a new tool to detect government and corporate spying on your computer.

I thought I would try it myself. I went to a computer running Microsoft Windows, an operating system that does not publish its source code for public scrutiny. I used the Chrome browser; users often express concern about Chrome sending data back to the vendor about the web sites they look for.

Without even installing the app, I would expect the Amnesty web site to recognise that I was accessing the site from a combination of proprietary software. Instead, I found a different type of warning.

Beware of Amnesty?

The only warning I received was about Amnesty's own cookies:

Even before I install the app to find out if the government is monitoring me, Amnesty is keen to monitor my behaviour themselves.

While cookies are used widely, their presence on a site like Amnesty's only further desensitizes Internet users to the downside risks of tracking technologies. By using cookies, Amnesty is effectively saying a little bit of tracking is justified for the greater good. Doesn't that sound eerily like the justification we often hear from governments too?

Is Amnesty part of the solution or part of the problem?

Amnesty is a well known and widely respected name when human rights are mentioned.

However, their advice that you can install an app onto a Windows computer or iPhone to detect spyware is like telling people that putting a seatbelt on a motorbike will eliminate the risk of death. It would be much more credible for Amnesty to tell people to start by avoiding cloud services altogether, browse the web with Tor and only use operating systems and software that come with fully published source code under a free license. Only when 100% of the software on your device is genuinely free and open source can independent experts exercise the freedom to study the code and detect and remove backdoors, spyware and security bugs.

It reminds me of the advice Kim Kardashian gave after the Fappening, telling people they can continue trusting companies like Facebook and Apple with their private data just as long as they check the privacy settings (reality check: privacy settings in cloud services are about as effective as a band-aid on a broken leg).

Write to Amnesty

Amnesty became famous for their letter writing campaigns.

Maybe now is the time for people to write to Amnesty themselves, thank them for their efforts and encourage them to take more comprehensive action.

Feel free to cut and paste some of the following potential ideas into an email to Amnesty:


I understand you may not be able to respond to every email personally but I would like to ask you to make a statement about these matters on your public web site or blog.

I understand it is Amnesty's core objective to end grave abuses of human rights. Electronic surveillance, due to its scale and pervasiveness, has become a grave abuse in itself and in a disturbing number of jurisdictions it is an enabler for other types of grave violations of human rights.

I'm concerned that your new app Detekt gives people a false sense of security and that your campaign needs to be more comprehensive to truly help people and humanity in the long term.

If Amnesty is serious about solving the problems of electronic surveillance by government, corporations and other bad actors, please consider some of the following:

  • Instead of displaying a cookie warning on Amnesty.org, display a warning to users who access the site from a computer running closed-source software and give them a link to download an open source web browser like Firefox.
  • Redirect all visitors to your web site to the HTTPS encrypted version of the site.
  • Use spyware-free open source software, such as the GNU/Linux operating system (Debian, Fedora and Ubuntu are some of the more common ways to achieve this) and LibreOffice, for all of Amnesty's own operations, make a public statement about your use of free and open source software, and mention this in the closing paragraph of all press releases relating to surveillance topics.
  • Encourage Amnesty donors, members and supporters to choose similar software, especially when engaging in any political activities.
  • Make a public statement that Amnesty will not use cloud services such as Salesforce or Facebook to store, manage or interact with data relating to members, donors or other supporters.
  • Encourage the public to move away from centralized cloud services such as those provided by their smartphone or social networks and use decentralized or federated services such as XMPP chat.

Given the immense threat posed by electronic surveillance, I'd also like to call on Amnesty to allocate at least 10% of annual revenue towards software projects releasing free and open source software that offers the public an alternative to the centralized cloud.


While publicity for electronic privacy is great, I hope Amnesty can go a step further and help people use trustworthy software from the ground up.

on November 20, 2014 12:48 PM

Today marks another Ubuntu Community Appreciation Day, one of Ubuntu’s beautiful traditions, where you publicly thank people for their work. It’s always hard to pick just one person or a group of people, but you know what – better appreciate somebody’s work than nobody’s work at all.

One person I’d like to thank for their work is Michael Hall. He is always around, always working on a number of projects, always involved in discussions on social media and never shy to add yet another work item to his TODO list. Even with big projects on his plate, he is still writing apps, blog entries, charms and hacks on a number of websites and is still on top of things like mailing list discussions.

I don’t know how he does it, but I’m astounded how he gets things done and still stays friendly. I’m glad he’s part of our team and tirelessly working on making Ubuntu a better place.

I also like this picture of him.


Mike: keep up the good work! :-)

on November 20, 2014 12:13 PM

KDE Project:

Kubuntu CI
[thanks to jens for the lovely logo]

Many years ago Ubuntu had a plan for Grumpy Groundhog, a version of Ubuntu which was made from daily packages of free software development versions. This never happened, but Kubuntu has long provided Project Neon (and later Project Neon 5), which used Launchpad to build all of the KDE Software Compilation and make weekly installable images. This is great for developers who want to check their software works in a final distribution or want to develop against the latest libraries without having to compile them, but it didn't help us packagers much because the packaging was monolithic and unrelated to the packages we use in Kubuntu proper.

Recently Harald has been working on a replacement, Kubuntu Continuous Integration (Kubuntu CI), which makes packages fresh each day from KDE Git for Frameworks and Plasma and, crucially, uses the Kubuntu packaging branches. There are three PPAs: unstable (unchecked), unstable daily (with some automated checking to ensure it builds) and unstable weekly (with some manual checking).
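
If you want to try the packages, adding one of the PPAs is the usual route. A minimal sketch, assuming the PPAs are published under a kubuntu-ci team on Launchpad (the exact PPA names may differ, so check the Kubuntu CI pages):

sudo add-apt-repository ppa:kubuntu-ci/unstable   # hypothetical name; check the CI pages for the real one
sudo apt-get update
sudo apt-get dist-upgrade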

At the same time he's been hard at work making weekly Kubuntu CI images which can be run as a live image or installed. They include the latest KDE Frameworks, Plasma 5 and a few other apps.

We've moved our packaging into Debian Git because it'll make merges ever so much easier and mean we can share fixes faster.

The Kubuntu CI Jenkins setup has the reports on what has built and what needs fixing.

Now is the hour.

on November 20, 2014 10:46 AM

November 19, 2014

I recently blogged about my Ubuntu Scopes Contest Wishlist after we kicked off the Scopes Development Competition where Ubuntu Phone Scope developers can be in with a chance of winning cool devices and swag. See the above links for more details.

As a judge on that contest I’ve been keeping an eye out for interesting scopes that are under development for the competition. As we’re at the halfway point in the contest I thought I’d mention a few. Of course me mentioning them here doesn’t mean they’re favourites or winners, I’m just raising awareness of the competition and hopefully helping to inspire more people to get involved.

Developers have until 3rd December to complete their entry to be in with a chance of winning a laptop, tablet and other cool stuff. We’ll accept new scopes in the Ubuntu Click Store at any time though :)

Robert Schroll is working on a GMail scope giving fast access to email.


Bogdan Cuza is developing a Mixcloud scope making it easy to search for cool songs and remixes.


Sam Segers has a Google Places scope making it easy to find local businesses.


Michael Weimann has been working on a Nearby Scope and has been blogging about his progress.


Dan has also been blogging about the Cinema Scope.


Finally Riccardo Padovani has been posting screenshots of his Duck Duck Go Scope which is already in the click store.


I’m sure there are other scopes I’ve missed. Feel free to link to them in the comments. It’s incredibly exciting for me to see early adopter developers embracing our fast-moving platform to realise their ideas.

Good luck to everyone entering the contest.

on November 19, 2014 12:03 PM

The Pogues

Rhonda D'Vine

Actually I was already working on a different music blog entry, but I want to get this one out first. I was invited to join the Organic Dancefloor last Thursday. And it was a really great experience. A lot of nice people enjoying a dance evening of sort-of improvisational traditional folk dancing with influences from different parts of Europe. Three bands played throughout the evening. I definitely plan to go there again. :)

Which brings me to the band I want to present to you now. They also play sort-of traditional songs, or at least with traditional instruments, and are also quite good to dance to. This is about The Pogues. And these are the songs that I enjoy listening to every now and then:

  • Medley: Don't meddle with the Medley. Rather dance to it.
  • Fairytale of New York: Well, we're almost in the season for it. :)
  • Streams of Whiskey: Also quite the style of song that they are known for and party with at concerts.

Like always, enjoy!


on November 19, 2014 11:10 AM

Multiply the speed of compute-intensive Lambda functions without (much) increase in cost

Given:

  • AWS Lambda duration charges are proportional to the requested memory.

  • The CPU power, network, and disk are proportional to the requested memory.

One could conclude that the charges are proportional to the CPU power available to the Lambda function. If the function completion time is inversely proportional to the CPU power allocated (not entirely true), then the cost remains roughly fixed as you dial up power to make it faster.

If your Lambda function is primarily CPU bound and takes at least several hundred ms to execute, then you may find that you can simply allocate more CPU by allocating more memory, and get the same functionality completed in a shorter time period for about the same cost.

For example, if you allocate 128 MB of memory and your Lambda function takes 10 seconds to run, then you might be able to allocate 640 MB and see it complete in about 2 seconds.

At current AWS Lambda pricing, both of these would cost about $0.02 per thousand invocations, but the second one completes five times faster.
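
To make that arithmetic concrete, here is a quick back-of-the-envelope sketch: a small shell helper I put together for illustration, assuming the $0.00001667 per GB-second duration rate and ignoring the small per-request fee. The last two calls anticipate the 100 ms rounding caveat below.

# Duration-only cost per thousand invocations; assumes the
# $0.00001667 per GB-second rate, per-request fees excluded.
cost_per_1000() {
  awk -v mb="$1" -v ms="$2" 'BEGIN {
    billed_ms = int((ms + 99) / 100) * 100   # duration is billed in 100 ms chunks, rounded up
    printf "$%.6f\n", (mb / 1024) * (billed_ms / 1000) * 0.00001667 * 1000
  }'
}

cost_per_1000 128 10000   # 128 MB for 10 s -> ~$0.0208
cost_per_1000 640  2000   # 640 MB for 2 s  -> ~$0.0208 (same cost, 5x faster)
cost_per_1000 128    80   # 80 ms bills as 100 ms -> ~$0.000208
cost_per_1000 640    16   # 16 ms also bills as 100 ms -> ~$0.001042 (5x the cost)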

Things that would cause the higher memory/CPU option to cost more in total include:

  • Time chunks are rounded up to the nearest 100 ms. If your Lambda function already runs near or under 100 ms at a lower memory setting, then increasing the allocated CPU will make it return faster, but the rounding up will make the resulting cost higher.

  • Doubling the CPU allocated to a Lambda function does not necessarily cut the run time in half. The code might be accessing external resources (e.g., calling S3 APIs) or interacting with disk. If you double the requested CPU, then those fixed-time actions will end up costing twice as much.

If you have a slow Lambda function, and it seems that most of its time is probably spent in CPU activities, then it might be worth testing an increase in requested memory to see if you can get it to complete much faster without increasing the cost by much.
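
If you want to run that test, one low-tech approach (a sketch, not a canonical method) is to re-upload the same code at several memory sizes with the preview-era CLI commands and compare the durations each run reports in the function's CloudWatch logs. The function name, ZIP file, args file and role variable here are hypothetical placeholders:

# Hypothetical names; assumes my-func.zip, my-func-args.json and an
# execution role ARN in $execution_role_arn already exist.
for mem in 128 256 512 1024; do
  aws lambda upload-function \
    --function-name "my-func" \
    --function-zip "my-func.zip" \
    --runtime nodejs \
    --mode event \
    --handler "my-func.handler" \
    --role "$execution_role_arn" \
    --timeout 60 \
    --memory-size $mem
  aws lambda invoke-async \
    --function-name "my-func" \
    --invoke-args "my-func-args.json"
  # then compare this run's reported duration in the CloudWatch logs
done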

I’d love to hear what practical test results people find when comparing different memory/CPU allocation values for the same Lambda function.

Original article: http://alestic.com/2014/11/aws-lambda-speed

on November 19, 2014 12:01 AM

November 18, 2014

I spent the weekend learning just enough JavaScript and nodejs to hack together a Lambda function that runs arbitrary shell commands in the AWS Lambda environment.

This hack allows you to explore the current file system, learn what versions of Perl and Python are available, and discover what packages might be installed.

Setup

Define the basic parameters.

# Replace with your bucket name
bucket_name=lambdash.alestic.com

function=lambdash
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
log_group_name=/aws/lambda/$function

Create the IAM role that will be used by the Lambda function when it runs.

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
      }]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

Define what the Lambda function is allowed to do and access: write logs to CloudWatch and upload files to a specific S3 bucket/location.

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": [ "logs:*" ],
          "Resource": "arn:aws:logs:*:*:*"
      }, {
          "Effect": "Allow",
          "Action": [ "s3:PutObject" ],
          "Resource": "arn:aws:s3:::'$bucket_name'/'$function'/*"
      }]
  }'

Grab the current Lambda function JavaScript from the Alestic lambdash GitHub repository, create the ZIP file, and upload the new Lambda function.

wget -q -O$function.js \
  https://raw.githubusercontent.com/alestic/lambdash/master/lambdash.js
npm install async fs tmp
zip -r $function.zip $function.js node_modules
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --runtime nodejs \
  --mode event \
  --handler "$function.handler" \
  --role "$lambda_execution_role_arn" \
  --timeout 60 \
  --memory-size 256

Invoke the Lambda function with the desired command and S3 output locations. Adjust the command and repeat as desired.

cat > $function-args.json <<EOM
{
    "command": "ls -laiR /",
    "bucket":  "$bucket_name",
    "stdout":  "$function/stdout.txt",
    "stderr":  "$function/stderr.txt"
}
EOM

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-args.json"

Look at the Lambda function log output in CloudWatch.

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Get the command output.

aws s3 cp s3://$bucket_name/$function/stdout.txt .
aws s3 cp s3://$bucket_name/$function/stderr.txt .
less stdout.txt stderr.txt
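
To probe the environment further, for example to check the interpreter versions mentioned at the start, swap a different command into the args file and invoke again. A variation on the same pattern (the exact command string is just one reasonable choice):

cat > $function-args.json <<EOM
{
    "command": "python --version 2>&1; perl -v | head -2; node --version",
    "bucket":  "$bucket_name",
    "stdout":  "$function/stdout.txt",
    "stderr":  "$function/stderr.txt"
}
EOM

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-args.json"

Note that python --version writes its output to stderr on Python 2, hence the 2>&1 so the version string lands in stdout.txt.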

Clean up

If you are done with this example, you can delete the created resources. Or, you can leave the Lambda function in place ready for future use. After all, you aren’t charged unless you use it.

aws s3 rm s3://$bucket_name/$function/stdout.txt
aws s3 rm s3://$bucket_name/$function/stderr.txt
aws lambda delete-function \
  --function-name "$function"
aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"
aws iam delete-role \
  --role-name "$lambda_execution_role_name"
aws logs delete-log-group \
  --log-group-name "$log_group_name"

Requests

What command output would you like to see in the Lambda environment?

Original article: http://alestic.com/2014/11/aws-lambda-shell

on November 18, 2014 09:21 PM