January 26, 2015

Gratuitous picture of my pets, the day after we rescued them

In March 2014, when I first started looking after MAAS as a product manager, I raised a minor feature request in Bug #1287224, noting that the random, 5-character hostnames that MAAS generates are not ideal.  You can't read them or pronounce them or remember them easily.  I'm talking about hostnames like: sldna, xwknd, hwrdz or wkrpb.  From that perspective, they're not very friendly.  Certainly not very Ubuntu.

So I proposed a few different ways of automatically generating those names, modeled mostly after Ubuntu's own beloved code-naming scheme -- Adjective Animal.  To get the number of combinations high enough to cover any reasonable MAAS deployment, though, we used Adjective Noun instead of Adjective Animal.

I collected an Adjective list and a Noun list from a blog run by moms, in the interest of having a nice, soft, friendly, non-offensive source of words.

For the most part, the feature served its purpose.  We now get memorable, pronounceable names.  However, we get a few oddballs in there from time to time.  Most are humorous.  But some combinations prove, in fact, to be inappropriate, or perhaps even offensive.

Accepting that, I started thinking about other solutions.

In the meantime, I realized that Docker had recently launched something similar, their NamesGenerator, which pairs an Adjective with a Famous Scientist's Last Name (except they have explicitly blacklisted boring_wozniak, because "Steve Wozniak is not boring", of course!).

Similarly, Github itself now also "suggests" random repo names.

I liked one part of the Docker approach better -- the use of proper names, rather than random nouns.

On the other hand, their approach is hard-coded into the Docker Golang source itself, and not easily usable or portable elsewhere.

Moreover, there are only a few dozen Adjectives (57) and Names (76), yielding only a few thousand combos (4,332) -- i.e., not nearly enough for MAAS's purposes, where we're shooting for 1M+, with minimal collisions.

MAAS was already pretty good on Adjectives (about 1,300).  But I decided to scrap the Nouns list, and instead build a Names list.  I started with Last Names (like Docker), but instead focused on First Names, and built a list of about 2,000 names from public census data.

The combo actually works pretty well!  While smelly-susan isn't particularly polite, it's certainly not an ad hominem attack targeted at any particular Susan!  That 1,300 x 2,000 gives us well over 2 million unique combinations.

Moreover, I also thought about how I could make it infinitely extensible...  The simple rules of English allow Adjectives to modify Nouns, while Adverbs can recursively modify other Adverbs or Adjectives.

So I built a word list of Adverbs (3,600) as well, and added support for specifying the "number" of words in a PetName.
  1. If you want 1, you get a random Name
  2. If you want 2, you get a random Adjective plus a Name
  3. If you want 3 or more, you get N-2 Adverbs, an Adjective and a Name
Oh, and the separator is now optional, and can be any character or string, with a default of a hyphen, "-".
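Those rules are easy to express in code.  Here is a minimal Python sketch of the word-count logic described above, using tiny stand-in word lists (the real petname packages ship lists of thousands of words; everything here is illustrative, not the actual implementation):

```python
import random

# Tiny stand-in word lists; the packaged petname lists are far larger.
ADVERBS = ["quickly", "boldly", "calmly", "gently"]
ADJECTIVES = ["smelly", "happy", "brave", "quiet"]
NAMES = ["susan", "alice", "walter", "oliver"]

def petname(words=2, separator="-"):
    """Return a pet name: N-2 Adverbs, then an Adjective, then a Name."""
    if words < 1:
        raise ValueError("need at least one word")
    parts = [random.choice(ADVERBS) for _ in range(words - 2)]
    if words >= 2:
        parts.append(random.choice(ADJECTIVES))
    parts.append(random.choice(NAMES))
    return separator.join(parts)
```

With words=1 you get just a Name, words=2 adds an Adjective, and anything beyond that prepends Adverbs, with the separator free to be any string.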

3 words will generate nearly 10 billion unique combos -- more than the 32-bit space (2^32).  And 5 words will generate over 10^17 combos, well on the way toward the 64-bit space (2^64).
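As a quick back-of-the-envelope check of those numbers (word-list sizes taken from the text above):

```python
# Word-list sizes quoted in the post
adjectives, names, adverbs = 1300, 2000, 3600

two_words = adjectives * names              # 2,600,000 -- "well over 2 million"
three_words = adverbs * adjectives * names  # 9,360,000,000 -- "nearly 10 billion"

# Three words already exceed the 32-bit space (~4.3 billion)
assert three_words > 2**32
```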

So once the algorithm was spec'd out, I built and packaged a simple shell utility and text word lists, called petname, which are published at launchpad.net/petname and github.com/dustinkirkland/petname, with packages for Ubuntu also published at pad.lv/u/petname. The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:petname/ppa
$ sudo apt-get update

$ sudo apt-get install petname
$ petname
$ petname -w 3
$ petname -s ":" -w 5

That's only really useful from the command line, though.  In MAAS, we'd want this in a native Python library.  So it was really easy to create python-petname, with source now published at launchpad.net/python-petname and github.com/dustinkirkland/python-petname, Ubuntu packages at pad.lv/u/python-petname and PyPI at pypi.python.org/pypi/petname. The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:python-petname/ppa
$ sudo apt-get update

$ sudo apt-get install python-petname
$ python-petname
$ python-petname -w 4
$ python-petname -s "" -w 2

Using it in your own Python code looks as simple as this:

$ python
>>> import petname
>>> foo = petname.PetName(3, "_")
>>> print(foo)

In the way that NamesGenerator is useful to Docker, I thought a Golang library might be useful for us in LXD (and perhaps even usable by Docker or others too), so I created github.com/dustinkirkland/golang-petname. The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:golang-petname/ppa
$ sudo apt-get update

$ sudo apt-get install golang-petname
$ golang-petname
$ golang-petname -words=1
$ golang-petname -separator="|" -words=10

Using it in your own Golang code looks as simple as this:

package main

import (
	"fmt"

	"github.com/dustinkirkland/golang-petname"
)

func main() {
	fmt.Println(petname.PetName(2, ""))
}
Gratuitous picture of my pets, 7 years later.
on January 26, 2015 02:00 PM

Spammers and trolls

Lubuntu Blog

Hello. This blog, like many others, is continuously receiving disgusting spam messages and troll comments. Of course all of them go directly to the trash can, marked as unwelcome, so they can no longer annoy us. It's ok, I can deal with it with the help of Google.

But what I'm not going to tolerate is allowing, under the appearance of nice readers and users, offensive comments or trolling about things unrelated to this blog or its articles. So please, you non-bots, you humans who intentionally work for those companies collecting data from blogs, public profiles and other stuff to feed the beast: you know your work sucks, and I doubt your human quality and integrity. You're banned.

Thanks everyone for your patience.
on January 26, 2015 11:04 AM

January 25, 2015

In 2012, I wrote an article about default configuration for an operating system and the challenges involved with it. On a related but slightly different topic, I thought it would be useful to share some of my experiences in setting up environments for more or less technically limited people.

Please note that this is just a pointer and a suggestion and the needs and wants of real people may and will vary.

Relevant visual elements


The first thing I would suggest to do is to remove any unnecessary panel applets (and panels, where appropriate). Automatically hidden panels can be really hard to use, especially for users who have limited experience with mice or who have problems that affect hand-cursor coordination. These can vary anywhere from bad eyesight to difficulties with accurate movement, or simply having a hard time understanding the concept of a cursor.

What is relevant in the panels for a simple desktop experience? If you are striving for the simplest possible configuration, I would say that you only need launchers for applications, window list of open applications, a clock and a shutdown button with no choice of logout, suspend or other actions. With this setup, I recommend using the confirmation dialog to prevent unwanted shutdown cycles.

When deciding which launchers to show, please remember that you can enable access to the full application menu on right-clicking the desktop. Because of that option, it’s not always worth the trouble to try to add a launcher for every application, especially if they are used only rarely. Consider picking ones that users need daily, weekly or monthly, depending on how much you want to avoid right-clicking.

I believe people who want or need a simple desktop don’t want to see anything that they think is irrelevant. This is especially true of indicators, because they use symbols that are more or less hard to understand for a technically limited person.

There are a few exceptions: If you’re setting up a laptop that’s actually unplugged now and then, you might want to show the battery indicator. If you have a laptop that needs to be used in various locations, you’ll want to show the network manager as well. If controlling volume is necessary, you might want to consider whether the sound indicator or shortcut keys (Fn+Fx in laptops) are the better choice.


In addition to the panel launchers, it’s wise to add launchers to the desktop as well, along with a shutdown button. Make sure the launchers use generic names instead of application names, e.g. Email instead of Mozilla Thunderbird. It’s usually wise to bump up the icon and label size as well. If the users will not run several applications at a time, you can simply drop the panel and only use the desktop icons. If you want to show the clock without a panel, you can use a simple Conky setup. Conky is available in the Ubuntu repositories.
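For instance, a clock-only Conky setup can be as small as the sketch below (pre-1.10 configuration syntax; the placement, interval and font values are just example choices, not a recommendation):

```
# Minimal ~/.conkyrc that draws only a clock on the desktop
alignment top_right
update_interval 30
own_window yes
own_window_type desktop
use_xft yes
xftfont Sans:size=24

TEXT
${time %H:%M}
${time %A %d %B}
```

Bumping the font size in xftfont up further is an easy way to make the clock readable from across the room.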

Other accessibility considerations

If the users have problems with their eyesight, there are a few things that can help make the system more usable for them.

The first one is adjusting the font and DPI settings. Bumping up the font size by just one step and increasing the DPI value makes the text more easily readable. Xubuntu has a very legible default font, even at smallish sizes, but it’s good to remember you can change the font as well.

The other thing you can do is change the window border theme. The default Xubuntu theme is designed to be elegant and keep out of the way, but sometimes this is not ideal. If the user has a hard time seeing where a window ends and the other starts, it might be a good idea to try another window border theme. On the other hand, if too many buttons is the problem – or you simply don’t need or want to enable some features – you can remove some of the window buttons as well.

There are also many accessibility configuration options under the Accessibility tab in Window Manager Tweaks, found in the Settings Manager. The one I tend to turn off is rolling up windows with the mouse wheel. This prevents windows from accidentally “disappearing”.

Accessibility version of Greybird?

Currently, Greybird, the Xubuntu default theme, ships two window border themes: a regular and a compact one. It has been brought up to discussion by me and others that we should ship an accessibility version as well. This accessibility version would sport bigger window buttons as well as a bigger border to grab for resizing the window.

So far, the accessibility version is in the drawing-board phase and not much has been done yet, as it’s currently one of the lowest-priority items for the development teams of Xubuntu and Shimmer. That being said, all constructive feedback is welcome. Furthermore, if we see a lot of people asking for the accessibility version, it’s likely that its priority will be bumped up at least a little.

Smoother user experience

Since we are talking about a simple desktop experience, I can assume at least part of our target group is people who either don’t understand or don’t want to understand why updating is important or how to install updates. For this reason, I’d simply turn on automatic security updates but turn off all manual updates.

Depending on the situation, I would make sure apport will not pop up and ask to send new bug reports. It’s self-evident that bug reports are important, but if the user doesn’t understand or want to understand their importance, it’s better to turn off any reporting that needs user input. The possibility that these users with the simplest possible desktops would run into bugs that haven’t already been found is really small. Moreover, the chances of developers getting further information from these users are really slim.

While I don’t use autologin myself and can’t suggest using it for security reasons, setting it up might save a lot of frustration. But please, only use autologin after a good assessment of the situation and understanding the security considerations related to that.

Manual maintenance needs

Even though a system can run smoothly without daily maintenance, manual maintenance is sometimes required. I’ve been maintaining a few computers for family remotely during the years, and the two tools I’ve needed the most are an SSH server and remote desktop viewing ability – for which I’m currently using an X11vnc setup.

While SSH is usually fine for most of the regular maintenance, being able to view (and use) the desktop remotely has been an invaluable help in situations where the user can’t describe the issue accurately enough via text or voice based communication. This is even more useful if the computer is far from you and you have limited possibilities to access it physically.

Naturally, you need to take security considerations into account when accessing a computer remotely. Making servers listen on unusual ports and securing them with firewalls is highly encouraged.
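As a sketch of that advice, the relevant pieces might look like the fragment below (the port number and network range are arbitrary example values, not recommendations):

```
# /etc/ssh/sshd_config -- listen on an unusual port (example value)
Port 2222

# Matching ufw rule, allowing that port only from the local network:
#   sudo ufw allow from 192.168.1.0/24 to any port 2222 proto tcp
```

Remember to update your own SSH client configuration (or use ssh -p) after moving the port, or you may lock yourself out of remote maintenance.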


There are numerous opinions on the best desktop configuration, both in look and feel. However, if you are setting a system up for somebody else, you will need to consider how they usually use the computer and how you can support their workflow to make the experience smoother.

Xfce allows a great deal of customizability by default. On top of that, the Xubuntu team has worked to bring users even more tools that can help them configure their system. The options brought by these alone give you a vast number of different things you can control. This article is just scratching the surface of even those options. If you want to go deeper, there is always more software in the Ubuntu repositories that can help you set up the system the way you like it.

If you have other ideas and suggestions for simple and/or accessible desktops, feel free to drop them in the comments. If you write (or have written) a blog article about customizing Xubuntu, especially one that covers accessibility issues, I’d like to hear about those as well.

Happy configuring!

on January 25, 2015 11:38 PM

With the appearance of Snappy, Ubuntu steps into the world of embedded systems. Ubuntu Snappy is designed to be safe to run in critical environments, from drones and medical equipment to robotics, home automation and machine control. The automatic rollback feature protects you from outages when an upgrade fails, application confinement prevents apps, servers and tools from doing any evil to your system, and the image-based design makes upgrades happen in minutes instead of the potential hours you are used to from package-based upgrade systems.

By its design of strictly separating device, rootfs and application packages, Snappy provides a true rolling release: you just upgrade each of the bits separately, independent from each other. Your home automation server software can stay on the latest upstream version all the time, no matter what version or release the other bits of your system are on. There is no more “I’m running Ubuntu XX.04 or XX.10, where do I find a PPA with a backport of the latest LibreOffice”; “snappy install” and “snappy upgrade” will simply always get you the latest stable upstream version of your software, regardless of the base system.

Thanks to the separation of the device-related bits, porting to as-yet-unsupported hardware is a breeze too. However, since features like automated rollback on upgrades, as well as the security guarding of snap packages, depend on capabilities of the bootloader and kernel, your port might operate slightly degraded until you are able to add these bits.

Let’s take a look at what it takes to do such a port to a NinjaSphere developer board in detail.

The Snappy boot process and finding out about your Bootloader capabilities

This section requires some basic u-boot knowledge, you should also have read https://developer.ubuntu.com/en/snappy/porting/

By default the whole u-boot logic in a snappy system gets read and executed from a file called snappy-system.txt living in the /boot partition of your install. This file is put in place by the image build software we will use later. So first of all your bootloader setup needs to be able to load files from disk and read their content into the bootloader environment. Most u-boot installs provide the “fatload” and “env import” commands for this.

It is also very likely that the commands in your snappy-system.txt are too new for your installed u-boot (or are simply not enabled in its build configuration), so we might need to override them with equivalent functions your bootloader actually supports (i.e. fatload vs. load, or bootm vs. bootz).

To get started, we grab a default Linux SD card image from the board vendor, write it to an SD card and wire up the board for serial console using an FTDI USB serial cable. We stop the boot process by hitting enter right after the first u-boot messages appear during boot, which should get us to the bootloader prompt, where we simply type “help”. This will show us all the commands the installed bootloader knows. Next we want to know what the bootloader does by default, so we call the “printenv” command, which will show us all pre-set variables (copy and paste them from your terminal application to a txt file so you can look them up later without having to boot your board each time you need to know anything).

Inspecting the “printenv” output of the NinjaSphere u-boot, you will notice that it uses a file called uEnv-NS.txt to read its environment from. This is the file we will have to work with to put overrides and hardware-specific bits in place. It is also the file from which we will load snappy-system.txt into our environment.

Now lets take a look at the snappy-system.txt file, an example can be found at:

It contains four variables we cannot change that tell our snappy how to boot; these are snappy_cmdline, snappy_ab, snappy_stamp and snappy_mode. It also puts the logic for booting a snappy system into the snappy_boot variable.
Additionally there are the different load commands for kernel, initrd and devicetree files, and as you can see when comparing these with your u-boot “help” output, they use commands our installed u-boot does not know, so the first bits we will put into our uEnv-NS.txt file are adjusted versions of these commands. The default instructions for building the NinjaSphere kernel show that it uses a devicetree attached to a uImage and cannot boot raw vmlinuz and initrd.img files via the bootz command. It also does not use an initrd at all by default, but luckily the “printenv” output already contains a load address for a ramdisk, so we will make use of this. Based on these findings, our first lines in uEnv-NS.txt look like the following:

loadfiles_ninja=run loadkernel_ninja; run loadinitrd_ninja
loadkernel_ninja=fatload mmc ${mmcdev} ${kloadaddr} ${snappy_ab}/${kernel_file_ninja}
loadinitrd_ninja=fatload mmc ${mmcdev} ${rdaddr} ${snappy_ab}/${initrd_file_ninja}

We will now simply be able to run “loadfiles_ninja” instead of “loadfiles” from our snappy_boot override command.

Snappy uses ext4 filesystems all over the place. Looking at “printenv” we see the NinjaSphere defaults to ext3 by setting the mmcrootfstype variable, so our next line in uEnv-NS.txt switches this to ext4:

mmcrootfstype=ext4

Now let’s take a closer look at snappy_boot in snappy-system.txt, the command that contains all the magic.
The section “Bootloader requirements for Snappy (u-boot + system-AB)” on https://developer.ubuntu.com/en/snappy/porting/ describes the if-then logic used there in detail. Comparing the snappy_boot command from snappy-system.txt with the list of available commands shows that we need some adjustments: the “load” command is not supported, so we need to use “fatload” instead. The original snappy_boot command also uses “fatwrite” to touch snappy-stamp.txt. While you can see from the “help” output that this command is supported by our preinstalled u-boot, there is a bug in older u-boot versions where using fatwrite results in a corrupted /boot partition if that partition is formatted as fat32 (which snappy uses). So our new snappy_boot command will need this part of the logic ripped out (which sadly breaks the auto-rollback function, but has no other drawbacks for us: “snappy upgrade” will still work fine, as will a manual “snappy rollback”).

After making all the changes our “snappy_boot_ninja” will look like the following in the uEnv-NS.txt file:

snappy_boot_ninja=if test "${snappy_mode}" = "try"; then if fatload mmc ${mmcdev} ${snappy_stamp} 0; then if test "${snappy_ab}" = "a"; then setenv snappy_ab "b"; else setenv snappy_ab "a"; fi; fi; fi; run loadfiles_ninja; setenv mmcroot /dev/disk/by-label/system-${snappy_ab} ${snappy_cmdline}; run mmcargs; bootm ${kloadaddr} ${rdaddr}

As the final step we now just need to set “uenvcmd” to import the variables from snappy-system.txt and then make it run our modified snappy_boot_ninja command:

uenvcmd=fatload mmc ${mmcdev} ${loadaddr} snappy-system.txt; env import -t $loadaddr $filesize; run snappy_boot_ninja

This is it! Our bootloader setup is now ready; the final uEnv-NS.txt that we will put into our /boot partition now looks like this:

# hardware specific overrides for the ninjasphere developer board
loadfiles_ninja=run loadkernel_ninja; run loadinitrd_ninja
loadkernel_ninja=fatload mmc ${mmcdev} ${kloadaddr} ${snappy_ab}/${kernel_file_ninja}
loadinitrd_ninja=fatload mmc ${mmcdev} ${rdaddr} ${snappy_ab}/${initrd_file_ninja}


snappy_boot_ninja=if test "${snappy_mode}" = "try"; then if fatload mmc ${mmcdev} ${snappy_stamp} 0; then if test "${snappy_ab}" = "a"; then setenv snappy_ab "b"; else setenv snappy_ab "a"; fi; fi; fi; run loadfiles_ninja; setenv mmcroot /dev/disk/by-label/system-${snappy_ab} ${snappy_cmdline}; run mmcargs; bootm ${kloadaddr} ${rdaddr}

uenvcmd=fatload mmc ${mmcdev} ${loadaddr} snappy-system.txt; env import -t $loadaddr $filesize; run snappy_boot_ninja

Building kernel and initrd files to boot Snappy on the NinjaSphere

Snappy makes heavy use of the AppArmor security extension of the Linux kernel to provide a safe execution environment for the snap packages of applications and services. So while we could now clone the NinjaSphere kernel source and apply the latest AppArmor patches from Linus’ mainline tree, the kind Paolo Pisati from the Ubuntu kernel team was luckily interested in getting the NinjaSphere running snappy and did all this work for us already. So instead of cloning the BSP kernel from the NinjaSphere team on github, we can pull the already patched tree from kernel.ubuntu.com (see the clone command below).

First of all, let us install a cross toolchain. Assuming you use an Ubuntu or Debian install for your work you can just do this by:

sudo apt-get install gcc-arm-linux-gnueabihf

Now we clone the patched tree and move into the cloned directory:

git clone -b snappy_ti_ninjasphere git://kernel.ubuntu.com/ppisati/ubuntu-vivid.git
cd ubuntu-vivid

Build the uImage with attached devicetree, build the modules and install them, all based on Paolo's adjusted snappy defconfig:

export CROSS_COMPILE=arm-linux-gnueabihf-; export ARCH=arm
make snappy_ninjasphere_defconfig
make -j8 uImage.var-som-am33-ninja
make -j8 modules
mkdir ../ninjasphere-modules
make modules_install INSTALL_MOD_PATH=../ninjasphere-modules
cp arch/arm/boot/uImage.var-som-am33-ninja ../uImage
cd -

So we now have a modules/ directory containing the binary modules, and we have a uImage file to boot our snappy. What we are still missing is an initrd file to make our snappy boot. We can just use the initrd from an existing snappy device tarball, which we can find at cdimage.ubuntu.com.

mkdir tmp
cd tmp
wget http://cdimage.ubuntu.com/ubuntu-core/daily-preinstalled/current/vivid-preinstalled-core-armhf.device.tar.gz
tar xzvf vivid-preinstalled-core-armhf.device.tar.gz

Do you remember? Our board requires a uInitrd, but the above tarball only ships a raw initrd.img, so we need to convert it. In Ubuntu, the u-boot-tools package ships the mkimage tool to convert files for u-boot consumption; let's install this package and create a proper uInitrd:

sudo apt-get install u-boot-tools
mkimage -A arm -T ramdisk -C none -n "Snappy Initrd" -d system/boot/initrd.img-* ../uInitrd
cd ..
rm -rf tmp/

If you do not want to keep the modules from the -generic kernel in your initrd.img you can easily unpack and re-pack the initrd.img file as described in “Initrd requirements for Snappy” on https://developer.ubuntu.com/en/snappy/porting/ and simply rm -rf lib/modules/* before re-packing to get a clean and lean initrd.img before converting to uInitrd.
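That unpack/re-pack cycle can be sketched as below, assuming the initrd.img is a gzip-compressed cpio archive, as Ubuntu's are. (To keep the sketch self-contained, it first fabricates a tiny dummy initrd.img; the module path and file names are made up for illustration.)

```shell
# Fabricate a tiny dummy initrd.img so this sketch is self-contained
mkdir -p demo/lib/modules/3.19-generic
touch demo/lib/modules/3.19-generic/dummy.ko
( cd demo && find . | cpio -o -H newc | gzip -9 > ../initrd.img )

# The actual clean-up cycle: unpack, drop the modules, re-pack
mkdir unpacked
( cd unpacked && zcat ../initrd.img | cpio -id && \
  rm -rf lib/modules/* && \
  find . | cpio -o -H newc | gzip -9 > ../initrd.img.new )
```

The resulting initrd.img.new no longer carries the -generic modules and can then be fed to mkimage to produce the uInitrd as shown above.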

Now that we have a bootloader configuration file, uImage, uInitrd and a dir with the matching binary modules, we can create our snappy device tarball.

Creating the Snappy device tarball

We are ready to create the device tarball filesystem structure and roll a proper snappy tarball from it. Let's create a build/ dir in which we build this structure:

mkdir build
cd build

As described on https://developer.ubuntu.com/en/snappy/porting/ our uInitrd and uImage files need to go into the assets subdir:

mkdir assets
cp ../uImage assets/
cp ../uInitrd assets/

The modules we built above will have to live underneath the system/ dir inside the tarball:

mkdir system
cp -a ../modules/* system/

Our bootloader configuration goes into the boot/ dir. For proper operation snappy looks for a plain uEnv.txt file; since our actual bootloader config lives in uEnv-NS.txt, we just create the other file as an empty doc (it would be great if we could use a symlink here, but remember, the /boot partition that will be created from this uses a vfat filesystem, and vfat does not support symlinks, so we just touch an empty file instead).

mkdir boot
cp ../uEnv-NS.txt boot/
touch boot/uEnv.txt

Snappy will also expect a flashtool-assets dir, even though we do not use this for our port:

mkdir flashtool-assets

As the last step, we now need to create the hardware.yaml file as described on https://developer.ubuntu.com/en/snappy/porting/:

echo "kernel: assets/uImage" >hardware.yaml
echo "initrd: assets/uInitrd" >>hardware.yaml
echo "dtbs: assets/dtbs" >>hardware.yaml
echo "partition-layout: system-AB" >>hardware.yaml
echo "bootloader: u-boot" >>hardware.yaml
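Those echo commands leave a hardware.yaml with the following content:

```
kernel: assets/uImage
initrd: assets/uInitrd
dtbs: assets/dtbs
partition-layout: system-AB
bootloader: u-boot
```

The kernel and initrd entries point at the files we placed in assets/ above, and partition-layout: system-AB requests the dual-rootfs layout that the snappy_boot logic switches between.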

This is it! Now we can tar up the contents of the build/ dir into a tar.xz file that we can use with ubuntu-device-flash to build a bootable snappy image.

tar cJvf ../device_part_ninjasphere.tar.xz *
cd ..

Since I personally like to re-build my tarballs regularly whenever anything changes or improves, I wrote a little tool I call snappy-device-builder, which takes over some of the repetitive tasks you have to do when rolling the tarball. You can branch it with bzr from launchpad if you are interested (patches and improvements are indeed very welcome):

bzr branch lp:~ogra/+junk/snappy-device-builder

Building the actual SD card image

Install the latest ubuntu-device-flash from the snappy-dev beta PPA:

sudo add-apt-repository ppa:snappy-dev/beta
sudo apt-get update
sudo apt-get install ubuntu-device-flash

Now we build a 3 GB image called mysnappy.img using ubuntu-device-flash and our newly created device_part_ninjasphere.tar.xz with the command below:

sudo ubuntu-device-flash core --size 3 -o mysnappy.img --channel ubuntu-core/devel-proposed --device generic_armhf --device-part device_part_ninjasphere.tar.xz --developer-mode

... and write the created mysnappy.img to an SD card sitting in the SD card reader at /dev/sdc:

sudo dd if=mysnappy.img of=/dev/sdc bs=4k

This is it! Your NinjaSphere board should now boot you to a snappy login on the serial port. Log in as “ubuntu” with the password “ubuntu”, and if your board is attached to the network, I recommend doing a “sudo snappy install webdm”; then you can reach your snappy via http://webdm.local:4200/ in a browser and install/remove/configure snap packages on it.

If you have any problems with this guide, want to make suggestions or have questions, you can reach me as “ogra” via IRC in the #snappy channel on irc.freenode.net, or just mail the snappy-devel@lists.ubuntu.com mailing list with your question.

on January 25, 2015 05:22 PM

Self Development

Ali Jawad


From 2010 until this very moment, I have volunteered and invested my time to help tons of people from all over the world. The key word was ‘helping’. In most cases, I had no idea what their real names were, what they looked like, their nationality, colour, religion, personal life, etc. That was the best part of all. You help others without knowing anything about them except that they are:

  • Humans.
  • Need Help.

What could be better than this?

Words fail me here. I can’t summarize all these years in one post, and I guess I never will. That requires a book, which I’m planning to write one day, hopefully. Why a book? Simply because I was doing that on a daily basis, 24/7/365, and it was like a full-time job. I’ve been the most active contributor ever. That definitely needs a book if I want to document it and share that unique experience.


On 21 January 2015, I was talking to someone so close, so precious, someone who has been and still is my real-life mentor. We were discussing my real life: what I have done, what I’m doing, what I’m supposed to do, what I’m not supposed to do, etc. It was a long discussion.

Suddenly, we came to a huge disagreement about the path I have decided to walk. Due to that disagreement, the discussion, which had turned into an argument, reached a dead end.

The call ended, and a moment of silence from both sides followed.

I then started to think to myself about what happened. It took me nearly a day until I started to figure out what I must do and what should happen. At first, I was so confused and feeling so bad. Once I started to take some actions, things started to get clearer and I had a better vision.


In one single day, I have decided to step down from 5 different projects at the same time.


Out of 5 projects I stepped down from, 3 of them are mine (I founded them).

It was the toughest and hardest decision I have taken in the last 5 years, since I started all that in 2010 (being a volunteer). Of course, I have taken much harder decisions in my real life, but I’m talking about my life as a volunteer. That was the toughest of all.

Such an action requires a lot of energy, maybe courage too. Above all, it requires thinking out of the box (differently).

Once I took that one, I decided to go for the next step and the next decision:

Self Development.


A totally new chapter in my life. New page, new path, new beginning. Above all, new way of thinking. Thinking differently.


Actions Speak Louder Than Words.

I will not write about my next move yet. I will start some paper planning and start following what I have planned for myself.

I’ve been the brain behind so many projects. I have helped lots of people and many projects. It is time to work on myself. If I can’t be useful to my own self, how can I be useful to others? I don’t know how I was helpful to so many people; maybe I wasn’t after all? I don’t know how it happened, and I don’t want to think about that at the moment. All I truly want is to start working really hard on myself, to develop and improve myself. It is to be or not to be. It is super serious and super high priority.


I must admit and confess that I was chasing the wrong dream. That is why I have contacted my real-life mentor and apologized for the misunderstanding that happened between us. I hope he can forgive me and, above all, I hope he understands.

I’m forever thankful, as I got the wake-up call I was in bad need of.

Enough talk, time for actions.

Oh, and I must also admit that after stepping down from my roles in 5 different projects, I felt much better. At first, I was feeling so bad and down. But the next day, when I breathed, I did so with LESS burden and stress. I then realized what I had done to myself all that time. Needless to say, it was so stupid to burn myself out that way. While I have done good things for others, I have done bad things to myself. It's never too late, and better late than never. I don’t think I could have done that without stepping down; that action made me think more clearly. Less burden means more focus and more energy on fewer things.


FWIW, I don’t think I will step down from ToriOS and Ubuntu GNOME. And I don’t think I will put Kibo on hold. I will however limit my activities and keep low profile.


To the one who is always there for me, spending time, energy and efforts to offer the honest and the best advice ever, THANK YOU SO MUCH wherever you are!


Thank you for reading this.

Yes, I shall share the progress of my self-development in case it helps anyone out there, wherever that is :)

on January 25, 2015 11:30 AM

January 24, 2015

I've just whipped up a Python script that renders Github issue lists from your favourite projects as an iCalendar feed.

The project is called github-icalendar. It uses Python Flask to expose the iCalendar feed over HTTP.

It is really easy to get up and running. All the dependencies are available on a modern Linux distribution, for example:

$ sudo apt-get install python-yaml python-icalendar python-flask python-pygithub

Just create an API token in Github and put it into a configuration file with a list of your repositories like this:

api_token: 6b36b3d7579d06c9f8e88bc6fb33864e4765e5fac4a3c2fd1bc33aad
bind_address: ::0
bind_port: 5000
repositories:
- repository: your-user-name/your-project
- repository: your-user-name/another-project

Run it from the shell:

$ ./github_icalendar/main.py github-ics.cfg

and connect to it with your favourite iCalendar client.

Consolidating issue lists from Bugzilla, Github, Debian BTS and other sources

A single iCalendar client can usually support multiple sources and thereby consolidate lists of issues from multiple bug trackers.

This can be much more powerful than combining RSS bug feeds because iCalendar has built-in support for concepts such as priority and deadline. The client can use these to help you identify the most critical issues across all your projects, no matter which bug tracker they use.
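To make that mapping concrete, here is a minimal sketch of how a single GitHub issue might be rendered as an iCalendar VTODO entry. It is hand-rolled with plain strings purely for illustration (github-icalendar itself uses the python-icalendar package); the `issue` fields and the helper function are hypothetical:

```python
# Sketch only: render one GitHub issue as an iCalendar VTODO.
# The real project uses the python-icalendar library; the field
# names in `issue` below are hypothetical.

def issue_to_vtodo(issue):
    """Return one VTODO block (CRLF-joined, per RFC 5545) for an issue dict."""
    lines = [
        "BEGIN:VTODO",
        "UID:%s@github.com" % issue["id"],
        "SUMMARY:[%s] %s" % (issue["repo"], issue["title"]),
        "URL:%s" % issue["url"],
        # iCalendar priority: 1 = highest, 9 = lowest
        "PRIORITY:%d" % issue.get("priority", 5),
        "STATUS:NEEDS-ACTION",
        "END:VTODO",
    ]
    return "\r\n".join(lines)

# Wrap the tasks in a VCALENDAR, as a feed would.
feed = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//github-icalendar//EN",
    issue_to_vtodo({"id": 42,
                    "repo": "your-user-name/your-project",
                    "title": "Fix the frobnicator",
                    "url": "https://github.com/your-user-name/your-project/issues/42",
                    "priority": 1}),
    "END:VCALENDAR",
])
print(feed)
```

The client then treats each VTODO as a task and can sort by PRIORITY or DUE across every feed it subscribes to, whatever bug tracker the task came from.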

Bugzilla bugtrackers already expose iCalendar feeds directly, just look for the iCalendar link at the bottom of any search results page. Here is an example URL from the Mozilla instance of Bugzilla.

The Ultimate Debian Database consolidates information from the Debian and Ubuntu universe and can already export it as an RSS feed; there is discussion about extending that to an iCalendar feed too.

Further possibilities

  • Prioritizing the issues in Github and mapping these priorities to iCalendar priorities
  • Creating tags in Github that allow issues to be ignored/excluded from the feed (e.g. excluding wishlist items)
  • Creating summary entries instead of listing all the issues, e.g. a single task entry with the title Fix 2 critical bugs for project foo


The screenshots below are based on the issue list of the Lumicall secure SIP phone for Android.

Screenshot - Mozilla Thunderbird/Lightning (Icedove/Iceowl-extension on Debian)

on January 24, 2015 11:07 PM

Remembering Eric P. Scott (eps)

Elizabeth K. Joseph

Last night I learned the worst kind of news: my friend and valuable member of the Linux community here in San Francisco, Eric P. Scott (eps), recently passed away.

In an excerpt from a post by Chaz Boston Baden, he cites the news from Ron Hipschman:

I hate to be the bearer of bad news, but it is my sad duty to inform you that Eric passed away sometime in the last week or so. After a period of not hearing from Eric by phone or by email, Karil Daniels (another friend) and I became concerned that something might be more serious than a lost phone or a trip to a convention, so I called his property manager and we met at Eric’s place Friday night. Unfortunately, the worst possible reason for his lack of communication was what we found. According to the medical examiner, he apparently died in his sleep peacefully (he was in bed). Eric had been battling a heart condition. We may learn more next week when they do an examination.

He was a good friend, the kind who was hugely supportive of any local events I had concocted for the Ubuntu California community, but he was also the thoughtful kind of man who would spontaneously give me gifts. Sometimes they were related to an idea he had for promoting Ubuntu, like a new kind of candy we could use for our candy dishes at the Southern California Linux Expo, a toy penguin we could use at booths, or a foldable origami-like street car he thought could inspire a similar giveaway to promote the latest animal associated with an Ubuntu LTS release.

He also went beyond having ideas: we spent time together several times scouring local shops for giveaway booth candy, and once met at Costco to buy cookies and chips in bulk for an Ubuntu release party last spring, which he then helped me cart home on a bus! Sometimes after the monthly Ubuntu Hours, which he almost always attended, we’d go out to explore options for candy to include at booth events, with an amusing idea he also came up with: candy dishes that came together to form the Ubuntu logo.

In 2012 we filled the dishes with M&Ms:

The next year we became more germ-conscious and he suggested we go with individually wrapped candies, searching the city for ones that would taste good and not be too expensive. Plus, he found a California-shaped bowl which fit our Ubuntu California theme astonishingly well!

He also helped with Partimus, often coming out to hardware triage and installfests we’d have at the schools.

At a Partimus-supported school, back row, middle

As a friend, he was also always willing to share his knowledge with others. Upon learning that I don’t cook, he gave me advice on some quick and easy things I could do at home, which culminated in the gift of a plastic container built for cooking pasta in the microwave. Though I am skeptical of all things microwave, it’s actually something I now use routinely when I’m eating alone; I even happened to use it last night before learning of his passing.

He was a rail fan and advocate for public transportation, so I could always count on him for the latest transit news, or just a pure geek out about trains in general, which often happened with other rail fans at our regular Bay Area Debian dinners. He had also racked up the miles on his favorite airline alliance, so there were plenty of air geek conversations around ticket prices, destinations and loyalty programs. And though I haven’t really connected with the local science fiction community here in San Francisco (so many hobbies, so little time!), we definitely shared a passion for scifi too.

This is a hard and shocking loss for me. I will deeply miss his friendship and support.

on January 24, 2015 08:10 PM
Day U - 8395
I started hearing the strange word "Linux" back in the early '90s, through PC Actual. At the time I only played (I didn't know how to do anything else) with my Spectrum...

Day U - 7300
Years went by, and my studies served to convince my parents that a powerful 486DX was necessary. MS-DOS behaved well on that hardware, and I'm sure I adore the shell today because of reminiscences of those old times...

Day U - 6935
Despite my passion for computing, I never got to know Windows 95. Its requirement of 4MB of RAM (= 25,000 pesetas at the time) made upgrading the computer excessively expensive, no matter how much Microsoft promised the moon....
After the death of the 486, a brand-new 800MHz AMD entered the home, and in '96 I installed my first Linux, a Mandrake that came on a PC World CD... ignorant as I was, during the graphical installation I wiped out my Windows 98. And Mandrake barely lasted a few hours... because I had a 56Kb modem (cough, cough, winmodem) and of course... winmodems got along with Linux not just badly, but terribly. So for the following years I chose between a PC with Windows 98/ME/XP and Internet, or a mighty Linux without Internet... The choice was clear... :$

Day U - 6205
And voilà, ADSL spread across Spain, bringing with it the switch to a proper router... So I tried again, this time with a SUSE CD. KDE's improvement over those years was very noticeable. SUSE was cool, but... I was thoroughly confused by so many distributions and very eager to try new things. Mandriva took over; wow, Mandriva... what fine work! And little by little I entered a grand, almost infinite world, and I needed to explore it to know where I belonged ;)

Never a truer word ;)
From Mandriva I moved to Ubuntu (5.04/5.10?). The first time I tried GNOME, its applications seemed too simple to me. I read more and more about Linux, which led me to try Linux Mint, and nostalgia made me try the new openSUSE...

And in the end, I went back to where I had felt most comfortable: MS-DOS... hahahaha nooo, just kidding ;P I went back to XP :S It was where I felt most at home.

Day U - 2920
And finally, in 2007, everything changed. Ubuntu 6.06 won me over, not for its freedom (I was utterly confused about what freedom even meant!), but because it was something new and attractive; it had a genuine halo of cool. I wanted/needed to try it. And it delivered. It surprised me with its stability and ease of use, and I ended up hooked on the simplicity of GNOME. Since then I haven't found another OS where I feel so at ease. And that's what matters in the end :)

Day U - 2190
And deeper and deeper I went into the jungle, where at every step you discover a new world. You learn what freedom is and you learn to value it; you see that the impossible is possible, that there is no limit...

Day U - 14
And years and more years went by; years in which I've seen attack ships on fire beyond Ori... oops no... that's another story... In all these years, I did see true empires fall, like Nokia or RIM; I saw other empires stolen, like Steve's apple. I saw a limping OS called Android be born and then swallow mobile computing whole. I also saw Windows 10 boast of a convergence that Mark had presented years before. In this little world, whose prehistory goes back barely a few decades, nothing is set in stone.

On February 6, 2015, everything ends and everything begins
Remember, remember the 6th of February... with an Ubuntu that has somewhat neglected its desktop (to my regret) while doing very well in the server/cloud market, about to enter the mobile market with a firm step... This February 6th, Ubuntu will present the first Ubuntu Phone model. An Ubuntu Phone with not only money and time invested in it, but dreams and hope as well; indeed, with as much as its very future invested in it.
Will it succeed? "Ask the mist, only she will tell you," as Llan de Cubel sing to us. But at the very least, you have to show up, and whoever wants fish must get wet. Ubuntu has shown it has clear ideas and is already on the path to achieving a dream.
Ubuntu, nobody beats you for courage. Chapeau!
on January 24, 2015 11:40 AM

U2F, Yubikey, Universal 2nd Factor

Passwords are always going to be vulnerable to being cracked. Fortunately, there are solutions out there that are making it safer for users to interact with services on the web. The new standard in protecting users is Universal 2nd Factor (U2F) authentication, which is already available in browsers like Google Chrome.

Mozilla currently has a bug open to start the work necessary to deliver U2F support to people around the globe and bring Firefox to parity with Chrome by offering this excellent new feature to users.

I recently reached out to the folks at Yubico who are very eager to see Universal 2nd Factor (U2F) support in Firefox. So much so that they have offered me the ability to give out up to two hundred Yubikeys with U2F support to testers and will ship them directly to Mozillians regardless of what country you live in so you can follow along with the bug we have open and begin testing U2F in Firefox the minute it becomes available in Firefox Nightly.

If you are a Firefox Nightly user and are interested in testing U2F, please use this form (offer now closed) and apply for a code to receive one of these Yubikeys for testing. (This is only available to Mozillians who use Nightly and are willing to help report bugs and test the patch when it lands)

Thanks again to the folks at Yubico for supporting U2F in Firefox!

Update: This offer is now closed. Check your email for a code or a request to verify you are a vouched Mozillian! We also got more requests than we had available, so only the first two hundred will be fulfilled!

on January 24, 2015 12:20 AM

January 23, 2015

Ubuntu Global Jam, Vivid edition is a few short weeks away. It's time to make your event happen. I can help! Here's my officially unofficial guide to global jam success.


  1. Get your jam pack! Get the request in right away so it gets to you on time. 
  2. Pick a cool location to jam
  3. Tell everyone! (be sure to mention free swag, who can resist!?)
But wait, what are you going to do while jamming? I've got that covered too! Hold a testing jam! All you need to know can be found on the ubuntu global jam wiki. The wiki even has more information for you as a jam host in case you have questions or just like details.

Oh, and just in case you don't like testing (seems crazy, I know), there are other jam ideas available to you. The important thing is that you get together with other Ubuntu aficionados and celebrate Ubuntu!

P.S. Don't forget to share pictures afterwards. No one will know you had the coolest jam in the world unless you tell them :-)

P.P.S. If I'm invited, bring cupcakes! Yum!

on January 23, 2015 05:08 PM
We’re preparing Lubuntu 15.04, Vivid Vervet, for distribution in April 2015. With this early Alpha pre-release, you can see what we are trying out in preparation for our next version. We have some interesting things happening, so read on for highlights and information.

NOTE: This is an alpha pre-release. Lubuntu Pre-releases are NOT recommended for:
  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Lubuntu Pre-releases ARE recommended for:
  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Lubuntu developers

You can get more information here.
on January 23, 2015 04:36 PM

Softpedia writes about our new Alpha 2 release

Kubuntu 15.04 Alpha 2 Is the Most Exciting Release in a Long Time

The first thing people will likely notice is the new look of the distribution and they won’t be disappointed.

An even more impressive review, this time of Plasma 5 on Utopic, is from Ken Vermette.

Plasma 5.2 – The Quintessential Breakdown


on January 23, 2015 12:30 PM

Find-a-Task, quickly!

Pasi Lallinaho

The Ubuntu community team has recently set up Find-a-Task, which is a small tool that helps new contributors find a task to start working on. If you click around (long enough), you will also find Xubuntu tasks in there. How nice of us!

There’s one problem though… The content is unfortunately maintained in a private environment, and there’s no easy way to see all the tasks. What if you wanted to check if a very similar task, or exactly the same task, already exists?

Stop the clicking already! I wrote a simple Greasemonkey script that outputs the full index of Find-a-Task instead of showing you the different categories and tasks one by one. The easiest way to install the script is to click on the Raw button on the Gist.

The team has only started gathering these tasks from around the community. Some have already come in, but there’s room for a lot more expansion.

If your team wants to add some tasks, join the #ubuntu-community-team IRC channel on Freenode and shout your tasks out loud. The community team will update the tool with your tasks as long as you provide them the name, description, target URL and the desired category for each task.

Go add a task… quickly!

on January 23, 2015 01:39 AM

reddit spritesheet CSS magic

Walter Lapchynski

So you've got yourself a subreddit. Now you need to configure it. Boy, what a hard time /r/lubuntu had trying to get flair figured out. Unfortunately, the reddit group is our newest channel to connect with the larger community, so I didn't have a lot of help.

See the issue was that we already had this spritesheet created by our Artwork God, Rafael Laguna. He actually had made another one to make this fantastic stylesheet, using Naut (look at that winking Snoo!), so I was confident this would be easy.

However, I had quite a few problems trying to get background-position to behave. It looked like it was going all over the place. Looking at tutorials, none of them, even the most helpful, covered what to do if you have a spritesheet. Instead, they all assume you just have individual images.

There were several problems, including extra padding and incorrectly sized images on the spritesheet. The spritesheet was remade with more Lubuntu-specific images (well, at least for half of them).

Lubuntu spritesheet

Making it a single row of equal-sized (16x16) images made things easier. The ah ha moment came when realizing that background-position moves the background relative to the canvas. It does NOT move the background on the canvas. So we needed to use negative numbers and all was well.

And yet, there was another problem: there's optional text that can go with the flair. Unfortunately, it was included in the span styled with our background spritesheet, so it was appearing right on top of the flair, and looking at the HTML there was no way around it.

<span class="flair flair-lxqt" title="Release Mgr. | QA Head">Release Mgr. | QA Head</span>
"Release Mgr. | QA Head"

Enter the ::before pseudo-element. Note I didn't say selector. The following code creates an element that sits before the element it acts on, which is, in this case, the flair class. So before we see any flair content, we have a blank element with 4 pixels of margin on each side, separating the image from the text that follows. The rule also pulls in our spritesheet and makes sure we're selecting 16x16 bits of it.

.flair::before {
content: ""; /* a ::before element only renders if it has a content property */
margin: 0 4px 0 4px;
display: inline-block;
background: url(%%reddit-sprite%%) no-repeat scroll top left;
min-width: 16px;
min-height: 16px;
}
And lastly, we need the specific positioning. Remember that the element that has the margin and everything is the pseudo element, so we need to use ::before on these as well. We'll also make sure to use the specific classes we created in the "edit flair" option in reddit Moderation Tools, so that we can act specifically on these.

.flair-lxde::before { background-position: 0px 0px !important; }
.flair-lxqt::before { background-position: -16px 0px !important; }
.flair-lenny::before { background-position: -32px 0px !important; }
.flair-user::before { background-position: -48px 0px !important; }
.flair-invader::before { background-position: -64px 0px !important; }
.flair-orb::before { background-position: -80px 0px !important; }
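Since the offsets follow a simple pattern (sprite n in the single row sits at -16*n pixels), the rules above could even be generated rather than typed. A throwaway Python sketch, assuming the same class names and a 16x16 single-row sheet (the script itself is hypothetical, not part of the actual subreddit setup):

```python
# Generate the flair background-position rules for a single-row,
# 16x16-per-sprite sheet. Class names match the ones used above;
# the generator itself is just an illustration.
classes = ["lxde", "lxqt", "lenny", "user", "invader", "orb"]

rules = []
for i, name in enumerate(classes):
    # Negative offset: shift the sheet left so sprite i lands on the canvas.
    rules.append(
        ".flair-%s::before { background-position: %dpx 0px !important; }"
        % (name, -16 * i)
    )
print("\n".join(rules))
```

Adding a new sprite to the end of the row then means appending one class name instead of computing another offset by hand.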

I basically hacked this together through inspecting elements at r/PixelDungeon (which is currently my favourite roguelike game and one that I hope to rewrite using Python and Kivy, if anyone's interested in helping) since they have all sorts of fantastic functionality going on. Kudos to them.

I do hope this helps save someone the sanity that I lost yesterday. ;)

on January 23, 2015 12:41 AM

January 22, 2015

Hey all, a new version of Ubuntu Calculator App Reboot is on the store, ready for you to try! As usual, please report any bugs you find on Launchpad, so we can fix them!

translations stats

Don’t worry about the version number. I know it jumped from 0.1.4 to 2.0.73; it’s a bit strange, but it now makes more sense: the major release is 2 because it’s the reboot, but we still don’t have a stable version, hence the 0. The last number is the bzr commit.

Let me show you some of the changes Bartosz and Giulio made in the last week. I was busy with an important exam at uni, so I did nothing, but I’m sure I’ll have more time next week ;-)


Bartosz enabled translations in the project, so from the next version you should see the app in your language, if someone has translated it. So, if you have some spare time, take a look at our translation page and make calculator available in your language!

Copy feature

Bartosz has also implemented the ability to copy a calculation from multiselection mode (you just have to long-press on a calculation).


In the previous version of the reboot app, not all of the keyboard was visible when you started the app. Now, thanks to Giulio, this has been fixed: the app opens in the right position.

Full changelog

Here is the full changelog:

  • #64 Fix scrolling position on app startup on devices. (Giulio Collura)
  • #65 Adding autopilot tests for calculation after gathering result. (Bartosz Kosiorek)
  • #66 Add feature to copy selected calculation from the history. (Bartosz Kosiorek)
  • #67 Add keyboard support for Calculation keyboard. (Bartosz Kosiorek)
  • #68 Fix ScrollableView behavior with few items visible. (Giulio Collura)
  • #69 Improve manifest.json generation. (Giulio Collura)
  • #70 Launchpad automatic translations update.
  • #71 Fix translation generation. (Bartosz Kosiorek)
  • #72 Launchpad automatic translations update.
  • #73 Updated math.js to 1.2.0. (Riccardo Padovani)


on January 22, 2015 09:00 PM

The second alpha of the Vivid Vervet (to become 15.04) has now been released!

Pre-releases of the Vivid Vervet are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this release ready.

Alpha 2 includes a number of software updates that are ready for wider testing. This is quite an early set of images, so you should expect some bugs.

While these Alpha 2 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Vivid Vervet. In particular, once newer daily images are available, system installation bugs identified in the Alpha 2 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 15.04 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.

This alpha features images for Kubuntu, Lubuntu, Ubuntu GNOME, UbuntuKylin and the Ubuntu Cloud images.


Kubuntu uses KDE software and now features the new Plasma 5 desktop. The Alpha-2 images can be downloaded at:


More information on Kubuntu Alpha-2 can be found here:



Lubuntu is a flavour of Ubuntu based on LXDE and focused on providing a very lightweight distribution. The Alpha-2 images can be downloaded at:


More information on Lubuntu Alpha-2 can be found here:


Ubuntu GNOME

Ubuntu GNOME is a flavour of Ubuntu featuring the GNOME desktop environment. The Alpha-2 images can be downloaded at:


More information on Ubuntu GNOME Alpha-2 can be found here:



UbuntuKylin is a flavour of Ubuntu that is more suitable for Chinese users.

The Alpha-2 images can be downloaded at:


More information on UbuntuKylin Alpha-2 can be found here:


Ubuntu Cloud

Ubuntu Cloud images can be run on Amazon EC2, Openstack, SmartOS and many other clouds. The Alpha-2 images can be downloaded at:


Regular daily images for Ubuntu can be found at:


If you’re interested in following the changes as we further develop Vivid, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a week) carrying announcements of approved specifications, policy changes, alpha releases and other interesting events.


A big thank you to the developers and testers for their efforts to pull together this Alpha release!

Originally posted to the ubuntu-devel-announce mailing list on Thu Jan 22 15:04:50 UTC 2015 by Walter Lapchynski

on January 22, 2015 04:52 PM
The second Alpha of Vivid (to become 15.04) has now been released!

The Alpha-2 images can be downloaded at: http://cdimage.ubuntu.com/kubuntu/releases/vivid/alpha-2/

More information on Kubuntu Alpha-2 can be found here: https://wiki.ubuntu.com/VividVervet/Alpha2/Kubuntu
on January 22, 2015 03:52 PM


Ubuntu GNOME Team is glad to announce the availability of the second milestone (Alpha 2) for Ubuntu GNOME Vivid Vervet (15.04).

Kindly do take the time and read the release notes:

We would like to thank our great, helpful and very supportive testers. They responded to our urgent call for help in no time. Having high-quality testers on the team makes us more confident this cycle will be extraordinary and, needless to say, that is endless motivation for us to do more and give more. Thank you so much again to all those who helped test the Alpha 1 images.

As always, if you need more information about testing, please see this page.

And don’t hesitate to contact us if you have any questions, feedback, notes, suggestions, etc. Please see this page.

Thank you for choosing, testing and using Ubuntu GNOME!

on January 22, 2015 02:57 PM
Find-a-Task is the Ubuntu community's job board for volunteers.


Is your team listed?

It's the place for volunteers to find new, interesting, fulfilling ways to contribute to Ubuntu.
It's the place for them to discover your team or project.

Get listed today!

We have made it super easy to get your team onto Find-a-Task: No login, no editing. Just jump into #ubuntu-community-team with a volunteer role in mind:

  • Category: Programming
  • Role: QML De-frobber
  • Very short description: Get rid of QML Frob with the Ubuntu Frobbing Team
  • Landing page: http://wiki.ubuntu.com/FrobbingTeam/QMLDefrobber

That's right...you can list technical roles, too.

Nah, I don't want volunteers

If the old way is working for you, and your team has lots of spare capacity, then more power to you! Please share your secret sauce.

But if you could use a few more hands to grab a work item or two, a Find-a-Task listing is fast and simple.

You really should, you know.

on January 22, 2015 02:43 PM

With the recent introduction of Snappy Ubuntu, there are now several different ways to extend and update (apt-get vs. snappy) multiple flavors of Ubuntu (Core, Desktop, and Server).

We've put together this matrix with a few examples of where we think Traditional Ubuntu (apt-get) and Transactional Ubuntu (snappy) might make sense in your environment.  Note that this is, of course, not a comprehensive list.

Traditional Ubuntu (apt-get):
  • Ubuntu Core: Minimal Docker and LXC images
  • Ubuntu Desktop: Desktop, Laptop, Personal Workstations
  • Ubuntu Server: Baremetal, MAAS, OpenStack, General Purpose Cloud Images

Transactional Ubuntu (snappy):
  • Ubuntu Core: Minimal IoT Devices and Micro-Services Architecture Cloud Images
  • Ubuntu Desktop: Touch, Phones, Tablets
  • Ubuntu Server: Comfy, Human Developer Interaction (over SSH) in an atomically updated environment

I've presupposed a few of the questions you might ask, while you're digesting this new landscape...

Q: I'm looking for the smallest possible Ubuntu image that still supports apt-get...
A: You want our Traditional Ubuntu Core. This is often useful in building Docker and LXC containers.

Q: I'm building the next wearable IoT device/drone/robot, and perhaps deploying a fleet of atomically updated micro-services to the cloud...
A: You want Snappy Ubuntu Core.

Q: I want to install the best damn Linux on my laptop, desktop, or personal workstation, with industry best security practices, 30K+ freely available open source packages, and extensive support for hardware devices and proprietary add-ons...
A: You want the same Ubuntu Desktop that we've been shipping for 10+ years, on time, every time ;-)

Q: I want that same converged, tasteful Ubuntu experience on my personal smart devices, like my Phones and Tablets...
A: You want Ubuntu Touch, which is a graphical, human-interface-focused expression of Snappy Ubuntu.

Q: I'm deploying Linux onto bare metal servers at scale in the data center, perhaps building IaaS clouds using OpenStack or PaaS cloud using CloudFoundry? And I'm launching general purpose Linux server instances in public clouds (like AWS, Azure, or GCE) and private clouds...
A: You want the traditional apt-get Ubuntu Server.

Q: I'm developing and debugging applications, services, or frameworks for Snappy Ubuntu devices or cloud instances?
A: You want Comfy Ubuntu Server, which is a command-line human interface extension of Snappy Ubuntu, with a number of conveniences and amenities (ssh, byobu, manpages, editors, etc.) that won't typically be included in the minimal Snappy Ubuntu Core build. [*Note that the Comfy images will be available very soon]

on January 22, 2015 01:19 PM



Due to unavoidable reasons, I have decided to step down from the StartUbuntu Project. Someone needs to take over this project. Trust me, giving away or stepping down from a project that I started is a really hard and tough decision, but when life forces you and pushes you to the edge, either you fall deep or you save yourself before it is too late. Real life has been super kind to me lately. Yes, I’m thankful that I’m alive, but very tough decisions must be made in order to carry on.

I’m willing to explain everything about StartUbuntu to whoever wishes to take over. I would prefer someone from the StartUbuntu community, but that is not a must. Anyone can step in, and I’m around in case of any questions.

I’m sure someone better than me will take over. I don’t see myself as a good leader for StartUbuntu any more, or at least I can’t be at the moment.

Maybe in the future I can come back and take over the project I founded, but for this period of time, I have no choice but to step down.

I appreciate your full help and support!

Thank you :)

on January 22, 2015 12:17 PM

Today we released Ubuntu Make 0.4.1, which validates the application menu support for some Java applications using Swing (like IntelliJ, Android Studio…) and fixes IntelliJ IDEA support.

Vertical screen real estate is particularly valuable for developers, to maximize the space where you can visualize your code without bothering too much about the shell itself. Also, in complex menu structures, it can be hard to find the relevant items. Unity introduced the excellent application menu a while back (2010!) and then grew HUD support to search through those menus. We even recently got new options for menu integration without renouncing those principles. However, until now, some Java-based IDEs didn't get default appmenu and HUD support. It was time to get that fixed with our Ubuntu Loves Developers focus!

Appmenu support in intellij IDEA

The application menu support is installed by default on Ubuntu Vivid thanks to our work with jayatana's excellent contributor Dan Jared! We did some cleaning and worked with him to get jayatana into Ubuntu Vivid, and then promoted it onto the Ubuntu Desktop image[1]. On older releases, we pushed jayatana into the Ubuntu Make PPA, and every new install through that tool will also install this support as needed.

We also saw JetBrains change their download page structure, making IntelliJ IDEA no longer installable. Less than 24 hours after a bug report was opened, we got this new 0.4.1 release including a fix from Tin Tvrtković, the IntelliJ IDEA support contributor to Ubuntu Make. Big kudos to him for the prompt reaction! The tests have also been updated to reflect the new page structure.

These nice goodies and fixes are available on Ubuntu Vivid (ensure you install Ubuntu Make 0.4.1), as well as through its PPA for the 14.04 LTS and 14.10 Ubuntu releases. Keep in mind that you need to restart your session once jayatana has been installed to get the menus exported to Unity.

Another release demonstrating how well the Ubuntu and Ubuntu Make communities work together, get things quickly under control, and shine! If you want to help define and contribute to making Ubuntu the best platform for developers, please head here!


[1] which won't install java on the image by default, don't be scared ;)

on January 22, 2015 08:58 AM

The clash of the titans over Java may end up being heard by the Supreme Court, possibly hinging on what the solicitor general has to say about it. SCOTUS has asked for advice on whether the case merits its attention. “This is going to be a true 2015 nail-biter for the industry,” said tech analyst Al Hilwa. “This is a judgment on what might constitute fair use in the context of software.”

The Supreme Court of the United States on Monday invited the Obama administration to weigh in on whether it should hear arguments in the ongoing dispute between Google and Oracle over Java copyrights.

The move is a response to Google’s October petition for a writ of certiorari following a May 2014 federal circuit court decision in favor of Oracle.

Google argued that the code was not copyrightable under section 102(b) of the Copyright Act, which withholds copyright protection from any idea, procedure, process, system, or method of operation. It also argued that the copied elements were a key part of allowing interoperability between Java and Android.

Numerous large technology companies, including HP, Red Hat and Yahoo, have filed amicus briefs supporting Google’s position.


Source: http://www.ecommercetimes.com/story/81573.html

Submitted by: Katherine Noyes

on January 22, 2015 07:05 AM

Secure Erase in Linux

Charles Profitt

Recently I was tasked with wiping a computer hard drive. The drive was a 128GB Samsung SSD. My normal tool of choice is DBAN (Darik's Boot and Nuke), but to my surprise it did not support erasing SSDs. As always, Google came to my rescue and I found an easy way to wipe the drive, called 'secure erase'.

Not Frozen
The first thing you have to do is ensure that the drive is not ‘frozen’.

sudo hdparm -I /dev/sdb

Master password revision code = 65534
not    enabled
not    locked
not    frozen
not    expired: security count
supported: enhanced erase


sudo hdparm -I /dev/sda

Master password revision code = 65534
not    enabled
not    locked
frozen
not    expired: security count
supported: enhanced erase


If the drive is frozen, it might be possible to 'un-freeze' it by suspending the computer and then waking it up. This works in cases where the BIOS issues a lock command on boot-up; a power cycle of the drive clears that state.
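Checking the frozen flag by eye gets tedious; here is a tiny helper to parse it out of the security section (a sketch, assuming bash and GNU grep — the `is_frozen` name is mine, not from hdparm):

```shell
#!/bin/bash
# Inspect `hdparm -I` output and report whether the drive is frozen.
# hdparm prints either "frozen" or "not    frozen" in its Security section.
is_frozen() {
  if grep -qE '^\s*not\s+frozen' <<<"$1"; then
    return 1          # not frozen: safe to proceed with secure erase
  elif grep -qE '^\s*frozen' <<<"$1"; then
    return 0          # frozen: try a suspend/resume cycle first
  fi
  return 2            # no security section found in the input
}

# Example usage (sketch):
#   out=$(sudo hdparm -I /dev/sdb)
#   if is_frozen "$out"; then echo "drive is frozen"; fi
```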

Set The Password
Once the drive is not in a 'frozen' state, you can move on to the next step. In order to issue the erase command, the drive needs to have a password set.

sudo hdparm --user-master u --security-set-pass password /dev/sdb

Issuing SECURITY_SET_PASS command, password="password", user=user, mode=high

Checking the drive again should indicate that the password is now enabled.

sudo hdparm -I /dev/sdb

Master password revision code = 65534
enabled
not    locked
not    frozen
not    expired: security count
supported: enhanced erase

Erase The Disk
Now, you can execute the secure erase command:

sudo hdparm --user-master u --security-erase password /dev/sdb

Issuing SECURITY_ERASE command, password="password", user=user

Check The Results
After the command executes the password should automatically be cleared.

sudo hdparm -I /dev/sdb

Master password revision code = 65534
not    enabled
not    locked
not    frozen
not    expired: security count
supported: enhanced erase

Your drive should be securely erased now. I found the process to be easy and quick.
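The whole procedure can be collected into one script (a sketch; the `DRY_RUN` guard is my addition so that nothing destructive runs by accident — `--security-erase` destroys all data on the target drive):

```shell
#!/bin/bash
# Consolidated sketch of the secure-erase procedure described above.
# DRY_RUN=1 (the default here) only prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

secure_erase() {
  local dev=$1 pass=$2
  run sudo hdparm -I "$dev"                                          # verify the drive is not frozen
  run sudo hdparm --user-master u --security-set-pass "$pass" "$dev" # set a temporary password
  run sudo hdparm --user-master u --security-erase "$pass" "$dev"    # issue the erase
  run sudo hdparm -I "$dev"                                          # confirm the password was cleared
}

secure_erase /dev/sdb password
```

Set DRY_RUN=0 only once you are certain the device path is correct.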

on January 22, 2015 04:10 AM

January 21, 2015

Gustavo is one of the members of Linux Padawan who is both a padawan and a master. We all can learn new things, right? Linux Padawan is not a hierarchy, but a community. Meet Gustavo Silva How did you first get started using Linux? What distros, software or resources did you use while learning? I started using […]
on January 21, 2015 11:47 PM

After the previous post which describes how to send screen video from the Ubuntu phone to a file on your desktop via netcat, it occurred to me that it ought to be possible to just display your Ubuntu phone directly on your desktop’s screen in a window… and with basically the same trick, it is.

Connect to your Ubuntu phone with ssh: if the phone is plugged in with a cable, opening the Ubuntu SDK IDE's Devices pane will let you do this, or the phablet-shell command does the same thing. Once you've done that once, you can ssh to the phone over the network with ssh phablet@phone-ip-address.

In that ssh session, run: mirscreencast -m /var/run/mir_socket --stdout --cap-interval 4 -s 360 640 | nc desktop-machine-ip-address 1234

In a separate terminal on your desktop machine, run: nc -l -p 1234 | mplayer -demuxer rawvideo -rawvideo w=360:h=640:format=rgba - (note the hyphen on the end of that command; it is important!)

and: you get your phone’s screen, live, in a window on your desktop. Handy for screencasting at conferences or for hangout video demonstrations or similar.

on January 21, 2015 03:46 PM

Our very own James Page blogged about Kilo-1 availability for Vivid and Trusty (via the Ubuntu Cloud Archive). If you are interested in checking out the current OpenStack in development on Ubuntu, this is for you. Enjoy!

on January 21, 2015 02:40 PM

Just prior to the Paris OpenStack Summit in November, the Ubuntu Server team had the opportunity to repeat and expand on the scale testing of OpenStack Icehouse that we did in the first quarter of last year with AMD and SeaMicro. HP were kind enough to grant us access to a few hundred servers in their Discovery Lab; specifically, three chassis of HP ProLiant Moonshot m350 cartridges (540 in total).

The m350 is an 8-core Intel Atom based server with 16GB of RAM and 64GB of SSD-based direct-attached storage. They are designed for scale-out workloads, so not an immediately obvious choice for an OpenStack Cloud, but for the purposes of stretching OpenStack to the limit, having lots of servers is great: it puts load on central components in Neutron and Nova by giving them a large number of hypervisor edges to manage.

We had a few additional objectives for this round of scale testing, over and above re-validating the previous Icehouse scale test on the new Juno release of OpenStack:

  • Messaging: The default messaging solution for OpenStack on Ubuntu is RabbitMQ; alternative messaging solutions have been supported for some time – we wanted to specifically look at how ZeroMQ, a broker-less messaging option, scales in a large OpenStack deployment.
  • Hypervisor: The testing done previously was based on the libvirt/kvm stack with Nova; the LXC driver was available in an early alpha release, so poking at this looked like it might be fun.

As you would expect, we used the majority of the same tooling that we used in the previous scale test:

  • MAAS (Metal-as-a-Service) for deployment of physical server resources
  • Juju: installation and configuration of OpenStack on Ubuntu

In addition, we also decided to switch over to OpenStack Rally to complete the actual testing and benchmarking activities. During our previous scale test this project was still in its infancy, but it's grown a lot of features in the last 9 months, including better support for configuring Neutron network resources as part of test context set-up.

Messaging Scale

The first comparison we wanted to test was between RabbitMQ and ZeroMQ; RabbitMQ has been the messaging workhorse for Ubuntu OpenStack deployments since our first release, but larger clouds do make high demands on a single message broker both in terms of connection concurrency and message throughput. ZeroMQ removes the central broker from the messaging topology, switching to a more directly connected edge topology.

The ZeroMQ driver in Oslo Messaging has been a little unloved over the last year or so; however, some general stability improvements have been made, so it felt like a good time to take a look and see how it scales. For this part of the test we deployed a cloud of:

  • 8 Nova Controller units, configured as a cluster
  • 4 Neutron Controller units, configured as a cluster
  • Single MySQL, Keystone and Glance units
  • 300 Nova Compute units
  • Ganglia for monitoring

In order to push the physical servers as hard as possible, we also increased the default workers (cores x 4 vs cores x 2) and the cpu and ram allocation ratios for the Nova scheduler. We then completed an initial 5000 instance boot/delete benchmark with a single RabbitMQ broker with a concurrency level of 150.  Rally takes this as configuration options for the test runner – in this test Rally executed 150 boot-delete tests in parallel, with 5000 iterations:

action              min (sec)  avg (sec)  max (sec)  90 percentile  95 percentile  success  count
total                  28.197     75.399    220.669        105.064        117.203   100.0%   5000
nova.boot_server       17.607     58.252    208.410         86.347         97.423   100.0%   5000
nova.delete_server      4.826     17.146    134.800         27.391         32.916   100.0%   5000
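The runner settings described above translate into a Rally task definition along these lines (a sketch: the scenario name is Rally's stock NovaServers.boot_and_delete_server benchmark, and the flavor/image arguments are placeholders, not taken from our test):

```json
{
    "NovaServers.boot_and_delete_server": [{
        "args": {
            "flavor": {"name": "m1.small"},
            "image": {"name": "cirros"}
        },
        "runner": {
            "type": "constant",
            "times": 5000,
            "concurrency": 150
        }
    }]
}
```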

Having established a baseline for RabbitMQ, we then redeployed and repeated the same test for ZeroMQ; we immediately hit issues with concurrent instance creation. After some investigation and re-testing, the cause was found to be Neutron's use of fanout messages for communicating with hypervisor edges: the ZeroMQ driver in Oslo Messaging has an inefficiency in that it creates a new TCP connection for every message it sends. When Neutron attempted to send fanout messages to all hypervisor edges with a concurrency level of anything over 10, the overhead of creating so many TCP connections caused the workers on the Neutron control nodes to back up, and Nova started to time out instance creation on network setup.

So the verdict on ZeroMQ scalability with OpenStack? Lots of promise but not there yet….

We introduced a new feature to the OpenStack Charms for Juju in the last charm release to allow use of different RabbitMQ brokers for Nova and Neutron, so we completed one last messaging test to look at this:

action              min (sec)  avg (sec)  max (sec)  90 percentile  95 percentile  success  count
total                  26.073    114.469    309.616        194.727        227.067    98.2%   5000
nova.boot_server       19.900    107.974    303.074        188.491        220.769    98.2%   5000
nova.delete_server      3.726      6.495     11.798          7.851          8.355    98.2%   5000

Unfortunately we had some networking problems in the lab which caused some slowdown and errors for instance creation, so this specific test proved a little inconclusive. However, by running split brokers, we were able to determine that:

  • Neutron peaked at ~10,000 messages/sec
  • Nova peaked at ~600 messages/sec

It’s also worth noting that the SSDs that the m350 cartridges use do make a huge difference, as the servers don’t suffer from the normal iowait times associated with spinning disks.

So in summary, RabbitMQ still remains the de facto choice for messaging in an Ubuntu OpenStack Cloud; it scales vertically very well – add more CPU and memory to your server and you can deal with a larger cloud – and benefits from fast storage.

ZeroMQ has a promising architecture but needs more work in the Oslo Messaging driver layer before it can be considered useful across all OpenStack components.

In my next post we’ll look at how hypervisor choice stacks up…

on January 21, 2015 02:24 PM
Meet Svetlana Belkin 1) How did you first get started using Linux? What distros, software or resources did you use while learning? I started out with Ubuntu Linux in 2009 and most of the learning I done was on my own just clicking around and changing settings to the way I want it. If I […]
on January 21, 2015 12:00 PM

If you’ve ever wanted to make an animated film, the learning curve for such software often is really steep. Thankfully, the Pencil program was released and although basic, it provided a fairly simple way to create animations on your computer (Windows, Mac or Linux) with open-source tools. Unfortunately, the Pencil program was abandoned.

And really, that’s the coolest part of open-source software. Building on the incredible Pencil program, a new project was born. Pencil2D is under active development, and it’s a cross-platform application allowing for a frame-by-frame animation sequence to be drawn and exported.


Source: http://www.linuxjournal.com/content/non-linux-foss-animation-made-easy

Submitted by: Shawn Powers

on January 21, 2015 07:04 AM

ROS what?

Robot Operating System (ROS) is a set of libraries, services, protocols, conventions, and tools for writing robot software. It's about seven years old now, is free software, and has a growing community, bringing Linux into the interesting field of robotics. They primarily target and support running on Ubuntu (the current Indigo ROS release runs on 14.04 LTS on x86), but they also have some other experimental platforms like Ubuntu ARM and OS X.

ROS, meet Snappy

It appears that their use cases match Ubuntu Snappy’s vision really well: ROS apps usually target single-function devices which require absolutely robust deployments and upgrades, and while they of course require a solid operating system core they mostly implement their own build system and libraries, so they don’t make too many assumptions about the underlying OS layer.

So I went ahead and created a snapp package for the Turtle ROS tutorial, which automates all the setup and building. As this is a relatively complex and big project, it helped to uncover quite a number of bugs, of which the most important ones got fixed now. So while the building of the snap still has quite a number of workarounds, installing and running the snap is now reasonably clean.

Enough talk, how can I get it?

If you are interested in ROS, you can look at bzr branch lp:~snappy-dev/snappy-hub/ros-tutorials. This contains documentation and a script build.sh which builds the snapp package in a clean Ubuntu Vivid environment. I recommend a schroot for this so that you can simply do e. g.

  $ schroot -c vivid ./build.sh

This will produce a /tmp/ros/ros-tutorial_0.2_<arch>.snap package. You can download a built amd64 snapp if you don’t want to build it yourself.

Installing and running

Then you can install this on your Snappy QEMU image or other installation and run the tutorial (again, see README.md for details):

  yourhost$ ssh -o UserKnownHostsFile=/dev/null -p 8022 -R 6010:/tmp/.X11-unix/X0 ubuntu@localhost
  snappy$ scp <yourhostuser>@
  snappy$ sudo snappy install ros-tutorial_0.2_amd64.snap

You need to adjust <yourhostuser> accordingly; if you didn’t build yourself but downloaded the image, you might also need to adjust the host path where you put the .snap.

Finally, run it:

  snappy$ ros-tutorial.rossnap roscore &
  snappy$ DISPLAY=localhost:10.0 ros-tutorial.rossnap rosrun turtlesim turtlesim_node &
  snappy$ ros-tutorial.rossnap rosrun turtlesim turtle_teleop_key

You might prefer ssh'ing in three times and running the commands in separate shells. Only turtlesim_node needs $DISPLAY (and is quite an exception — a usual robotics app of course wouldn't!). Also, note that this requires ssh from at least Ubuntu 14.10; if you are on 14.04 LTS, see README.md.


on January 21, 2015 05:45 AM

The Ubuntu Community Council is the primary community (i.e., non-technical) governance body for the Ubuntu project. In this series of 7 interviews, we go behind the scenes with the community members who were elected in 2013 to serve on this council with Mark Shuttleworth.

In this, our first interview, we talk with Charles about his experience with Ubuntu and beyond.


Tell us a little about yourself

I am an IT professional for a K-12 school district, responsible for running the server infrastructure, disaster recovery, information security, and virtualization. I introduced Linux and Open Source to the district. I started playing around with Linux in 1993, but did not start using it regularly until 2006. At first I was the typical distro hopper, but I soon found the Ubuntu Community and realized that I had found a home. The Ubuntu Community was full of knowledgeable, friendly, and helpful people.

How long have you been involved with Ubuntu? And how long on the Ubuntu Community Council?

I have been active with Ubuntu since 2008 when I got involved with the New York State Ubuntu LoCo Community. I have been involved with the Ubuntu Forums, Ubuntu Beginners Team, IRC OPs, Ubuntu Bug Squad, Ubuntu Documentation, Ubuntu New York, Ubuntu Education, and Ubuntu News. I served on the Ubuntu Beginners team Council, The Ubuntu LoCo Council and am currently on The Ubuntu Community Council.

What are some of the projects you’ve worked on in Ubuntu over the years?

Laptop Testing Team, Ubuntu IRC operators, Ubuntu Educators, Ubuntu Leadership (development of leadership), Ubuntu Screencast, Ubuntu New York, Ubuntu Power users, Ubuntu Bug Control, Ubuntu Bugsquad, Ubuntu Accomplishments, Ubuntu Documentation Team, Ubuntu Documentation Team Wiki Administrators, and Ubuntu Accessibility

What is your focus in Ubuntu today?

My focus in Ubuntu today is on the community, both local and global.

Do you contribute to other free/open source projects? Which ones?

I want to use the word contribute carefully as I do not have any code contributions outside of the Ubuntu community. I have contributed in terms of support, testing and community with openVAS, Cacti, Racktables, Security Onion, Kali and nmap.

If you were to give a newcomer some advice about getting involved with Ubuntu, what would it be?

Enjoy using Ubuntu and share your success with others. When you want to contribute to Ubuntu find an area you are passionate about and seek out any assistance you need to grow in that area.

on January 21, 2015 01:27 AM

The latest version of Mono has been released (actually, it happened a week ago, but it took me a while to get all sorts of exciting new features bug-checked and shipshape).

Stable packages

This release covers Mono 3.12, and MonoDevelop 5.7. These are built for all the same targets as last time, with a few caveats (MonoDevelop does not include F# or ASP.NET MVC 4 support). ARM packages will be added in a few weeks’ time, when I get the new ARM build farm working at Xamarin’s Boston office.

Ahead-of-time support

This probably seems silly since upstream Mono has included it for years, but Mono on Debian has never shipped with AOT’d mscorlib.dll or mcs.exe, for awkward package-management reasons. Mono 3.12 fixes this, and will AOT these assemblies – optimized for your computer – on installation. If you can suggest any other assemblies to add to the list, we now support a simple manifest structure so any assembly can be arbitrarily AOT’d on installation.

Goodbye Mozroots!

I am very pleased to announce that as of this release, Mono users on Linux no longer need to run "mozroots" to get SSL working. A new command, "cert-sync", has been added to this release, which synchronizes the Mono SSL certificate store against your OS certificate store – and this tool has been integrated into the packaging system for all mono-project.com packages, so it is used automatically. Just make sure the ca-certificates-mono package is installed on Debian/Ubuntu (it's always bundled on RPM-based distributions) to take advantage! It should be installed by default on fresh installs. If you want to invoke the tool manually (e.g. if you installed via make install, not packages), use:

cert-sync /path/to/ca-bundle.crt

On Debian systems, that’s

cert-sync /etc/ssl/certs/ca-certificates.crt

and on Red Hat derivatives it’s

cert-sync /etc/pki/tls/certs/ca-bundle.crt

Your distribution might use a different path, if it’s not derived from one of those.
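Picking the right bundle per distro family can be scripted; a minimal sketch (the `ca_bundle_for` helper and the distro keys are my naming — only the two paths come from above):

```shell
#!/bin/bash
# Map a distro family name to its default CA bundle path, then hand it to cert-sync.
ca_bundle_for() {
  case "$1" in
    debian|ubuntu)       echo /etc/ssl/certs/ca-certificates.crt ;;
    rhel|fedora|centos)  echo /etc/pki/tls/certs/ca-bundle.crt ;;
    *)                   return 1 ;;  # unknown family: look up your distro's own path
  esac
}

# Example usage (sketch):
#   cert-sync "$(ca_bundle_for ubuntu)"
```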

Windows installer back from the dead

Thanks to help from Alex Koeplinger, I’ve brought the Windows installer back from the dead. The last release on the website was for 3.2.3 (it’s actually not this version at all – it’s complicated…), so now the Windows installer has parity with the Linux and OSX versions. The Windows installer (should!) bundles everything the Mac version does – F#, PCL facades, IronWhatever, etc, along with Boehm and SGen builds of the Mono runtime done with Visual Studio 2013.

An EXPERIMENTAL OH MY GOD DON’T USE THIS IN PRODUCTION 64-bit installer is in the works, for when I have the time to try and make a 64-bit build of Gtk#.

on January 21, 2015 01:26 AM
I'm happy to announce that Python 3 ports of launchpadlib & ubuntu-dev-tools (library) are available for consumption.

These are 1.10.3 & 0.155 respectively.

This means that everyone should start porting their reports, tools, and scriptage to python3.

ubuntu-dev-tools has the library portion ported to python3, as I did not dare to switch individual scripts to python3 without thorough interactive testing. Please help out porting those and/or file bug reports against the python3 port. Feel free to subscribe me to the bug reports on launchpad.

For the time being, I believe some things will not be easy to port to python3 because of the elephant in the room – bzrlib. For some things, like lp-shell, it should be easy to move away from bzrlib, as only non-VCS things are used there. For other things, the current suggestion is probably to fork out to the bzr binary or a python2 process. I ponder if a minimal usable python3-bzrlib wrapper around python2 bzrlib is possible, to satisfy the needs of basic and common scripts.

On a side note, launchpadlib & lazr.restfulclient have out of the box proxy support enabled. This makes things like add-apt-repository work behind networks with such setup. I think a few people will be happy about that.

All of these goodies are available in Ubuntu 15.04 (Vivid Vervet) or Debian Experimental (and/or NEW queue).
on January 21, 2015 12:06 AM

January 20, 2015


  • Review ACTION points from previous meeting
    • gaughen to establish new qa-team point of contact for server team
  • V Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (psivaa)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair


Meeting Actions
  • gaughen to establish new qa-team point of contact for server team — gaughen and beisner discussing – keeping as ACTION point

  • jamespage to answer question in bug 1410363 in response to smb

V Development

Today is Jan 20th.  Jan 22nd is alpha 2 (For opt-in flavors).  Feb 19th is feature freeze and debian import freeze.

Server & Cloud Bugs
caribou is busy on a CUPS bug & apport upstream issues
Ubuntu Server Team Events

FOSDEM is soon (Saturday 31 January and Sunday 1 February 2015) and hallyn is presenting lxd at FOSDEM on Sunday.

Open Discussion

teward says nginx merge coming as soon as the next debian updates come for it (assuming before featurefreeze). last merge introduced out of the box POODLE mitigations in the default confs.

Agree on next meeting date and time

Next meeting will be on Tuesday, Jan 27th at 16:00 UTC in #ubuntu-meeting. jamespage will chair.

on January 20, 2015 08:36 PM

In my earlier blog about choosing a storage controller, I mentioned that the Microserver's on-board AMD SB820M SATA controller doesn't quite let the SSDs perform at their best.

Just how bad is it?

I did run some tests with the fio benchmarking utility.
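A job section along these lines could drive such a test (a sketch; only the job name, size and random-write pattern follow from the output below — the block size, I/O engine and queue depth are my assumptions):

```ini
; fio job-file fragment for the rand-write test (assumed parameters)
[rand-write]
rw=randwrite
size=1024m
bs=4k
ioengine=libaio
iodepth=32
direct=1
```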

Let's have a look at those random writes; they simulate the workload of synchronous NFS write operations:

rand-write: (groupid=3, jobs=1): err= 0: pid=1979
  write: io=1024.0MB, bw=22621KB/s, iops=5655 , runt= 46355msec

Now compare it to the HP Z800 on my desk, it has the Crucial CT512MX100SSD1 on a built-in LSI SAS 1068E controller:

rand-write: (groupid=3, jobs=1): err= 0: pid=21103
  write: io=1024.0MB, bw=81002KB/s, iops=20250 , runt= 12945msec

and then there is the Thinkpad with OCZ-NOCTI mSATA SSD:

rand-write: (groupid=3, jobs=1): err= 0: pid=30185
  write: io=1024.0MB, bw=106088KB/s, iops=26522 , runt=  9884msec

That's right, the HP workstation is four times faster than the Microserver, but the Thinkpad whips both of them.

I don't know how much I can expect of the PCI bus in the Microserver but I suspect that any storage controller will help me get some gain here.

on January 20, 2015 07:53 PM

Next February 6th there will be a Canonical event in London, where the first Ubuntu Phones will be presented to the public. I’m one of the lucky guys who will join the event and will have the opportunity to get one of these little treasures - so next month I’ll do a lot of posts about it :-)

Meanwhile, the Ubuntu Phone Team is writing to the participants of the event with some details about the phone and the system. It isn’t anything secret, but these mails give some good information on the phone. I think information wants to be free, so I’m sharing them with the world.

Aggregated scopes

In our initial Phone Glimpse mail we’d introduced scopes for the Ubuntu phone - an integrated approach to delivering content and services. We touched on aggregated scopes that are default scopes valuable to end users. In this mailer we’ll be showcasing the default scopes available that provide a full spectrum of rich content categories.

The Today scope lets you see your most important interactions on one screen. Personalise it to see what’s most important to you, right at your fingertips.

To see local information, events and services from wherever you are located, check out the NearBy scope. Imagine you’re in Barcelona yet don’t know where to eat, the NearBy scope will provide you with hidden gems from various sources. A few app partners include: TimeOut, Yelp and The Weather Channel.

The all important News scope aggregates news feeds from your chosen providers that includes the BBC, EuroNews and Engadget.

Bringing music to you. The Music scope allows you to see music on your phone and the web within one place, be it your music library, streaming content from Soundcloud or Grooveshark, and maybe tracks purchased from 7Digital.

There’s also the Video scope with app partners that include YouTube, TED and Vimeo, as well as the Photo scope that brings your Facebook, Instagram and Flickr feed into one place.

Every source in the aggregated scopes can expand to give you an app-like experience for that source - and by starring it, they can even become another default screen. That’s a great way to personalize your Ubuntu phone so it truly revolves around the services you use most.

There’s also a scope dedicated to traditional apps where you can see your downloaded apps in one place.

Voila! A content-rich experience brought to you, which we can’t wait to showcase for real at the Insider event. As mentioned in earlier mails, this information is yours to use in any way!

Looking forward to sharing more insights with you.

Best, The Ubuntu Phone Team

on January 20, 2015 07:15 PM

uBrick – a Lego Scope

Victor Tuson Palau

Just a quick note to tell you that I have published a new scope called uBrick that brings you the awesomeness of Lego, as a catalogue powered by brickset.com, directly to your Ubuntu phone home screen.

I wrote the scope in Go because I find it easier to work with for a quick scope (it took about 8 hours, with interruptions over 2 days, to write this scope). The scope is now available in the store; just search for uBrick.

Here are some pics:

[four screenshots of the uBrick scope]

Also I have to congratulate the folks at Brickset for a very nice API, even if it is using SOAP :)

on January 20, 2015 05:49 PM

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20150120 Meeting Agenda

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt

Status: Vivid Development Kernel

Our Vivid kernel remains based on the v3.18.2 upstream stable kernel,
but we’ll be rebasing to v3.18.3 shortly. We’ll also be rebasing our
unstable branch to v3.19-rc5 and get that uploaded to our team PPA soon.
Important upcoming dates:
Thurs Jan 22 – Vivid Alpha 2 (~2 days! away)
Thurs Feb 5 – 14.04.2 Point Release (~2 weeks away)
Thurs Feb 26 – Beta 1 Freeze (~5 weeks away)

Status: CVE’s

The current CVE status can be reviewed at the following link:


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, until today:

  • Lucid – Verification & Testing
  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • Utopic – Verification & Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html


    cycle: 09-Jan through 31-Jan
    09-Jan Last day for kernel commits for this cycle
    11-Jan – 17-Jan Kernel prep week.
    18-Jan – 31-Jan Bug verification; Regression testing; Release

Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

on January 20, 2015 05:11 PM

The Ubuntu Server Team is pleased to announce the general availability of the first development milestone of the OpenStack Kilo release in Ubuntu 15.04 development and for Ubuntu 14.04 LTS via the Ubuntu Cloud Archive.

Ubuntu 14.04 LTS

For now, you can enable the Ubuntu Cloud Archive for OpenStack Kilo on Ubuntu 14.04 installations by running the following commands:

echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/kilo main" \
    | sudo tee /etc/apt/sources.list.d/cloud-archive.list
sudo apt-get -y install ubuntu-cloud-keyring
sudo apt-get update

The Ubuntu Cloud Archive for Kilo includes updates for Nova, Glance, Keystone, Neutron, Cinder, Horizon, Swift, Ceilometer and Heat; Ceph (0.87), RabbitMQ (3.4.2), QEMU (2.1), libvirt (1.2.8) and Open vSwitch (2.3.1) back-ports from 15.04 development have also been provided.

Ubuntu 15.04 development

No extra steps required; just start installing OpenStack!  Keystone is still pending update due to review of new dependencies by the Ubuntu MIR team, but that should happen in the next few weeks.

New OpenStack components

This cycle we’ve also provided packages for Trove, Sahara and Ironic – these projects are part of the integrated OpenStack release but remain in Ubuntu universe for this development cycle, which means they won’t get point release updates or security updates as part of ongoing stable release maintenance once Ubuntu 15.04 and the Kilo Cloud Archive for Ubuntu 14.04 LTS release.

NOTE: if you use the Neutron FWaaS driver, you will need to install the ‘python-neutron-fwaas’ package to continue using this functionality; we will improve this situation in the packaging prior to final release.

Reporting bugs

Let’s face it: as the first development milestone, there are bound to be a few bugs, so please use the ‘ubuntu-bug’ tool to report any that you find – for example:

sudo ubuntu-bug nova-conductor

This will ensure that bugs get logged in the right place in Launchpad.

Thanks and have fun!

on January 20, 2015 04:33 PM