December 18, 2014

Are you content with the status quo in technology? I'm not.

Years ago, I became aware of this little known (at the time) project called "Ubuntu". Remember it?

I don't know about you, but once I discovered Ubuntu and became involved I was so excited about the future it proposed that I never looked back.

Aside from Ubuntu's "approachable by everyone" and "free forever" project DNA, one of the things that really attracted me to it was that it had the guts to take on the status quo. I believed (and I still believe) that the status quo needs a good disruption. Complacency and doing things "as they always have been" just plain hurts.

In those days, the status quo was proprietary software and well-meaning but impenetrable (to the everyday person who just wanted to get things done) free and open source software. I'm happy that we've collectively solved the toughest parts of those problems. Sure, there are still issues to be resolved but as they say, that's mostly detail.

Fast forward to today. Now, we are faced with a hosting (or call it cloud infrastructure if you wish) hardware landscape that is nearly a perfect monopoly and is so tightly locked down that we can't solve the world's big problems.

Spotting an opportunity to create something better and to change the world, a bunch of people rallied together to create


Not surprisingly, Ubuntu joined and became a partner early on. And today, another one of the most famous disruptors has joined: Rackspace. In their words,

"In the world of servers, it’s getting harder and more costly to deliver the generational performance and efficiency gains that we used to take for granted. There are increasing limitations in both the basic materials we use, and the way we design and integrate our systems."

So here we are. Ubuntu, Rackspace, and dozens of others poised once again to disrupt.

It's going to be an interesting and fun ride. 2015 is set to be the year that the world wakes up to the true power of open.

I'm looking forward to it, and I hope you are too. Please join us!

on December 18, 2014 01:31 AM

December 17, 2014

When leveraging Juju with LXC in cloud environments, networking has been a constant thorn in my side as I attempt to scale out farms of services in their full container glory. Thanks to the work by Hazmat (who brought us the Digital Ocean provider), there is a new development in this sphere ready for testing over this holiday season.

Container Networking with Juju in the cloud

Juju by default supports colocating services with LXC containers and KVM machines. LXC is all the rage these days, as Linux containers are lightweight, kernel-virtualized cgroups. Akin to BSD jails - but not quite. It's an awesome solution when you don't care about resource isolation and just want your application to run within its own happy root, and live on churning away at whatever you might throw at it.

While this is great, it has a major Achilles heel presently in the Juju sphere: cross-host communication is all but non-existent. In order to really scale and use LXC containers you need a beefy host to warehouse all the containers you can stuff on its disk. This isn't practical in scale-out situations where your needs change on a day-to-day basis. You wind up losing out on the benefits of commodity hardware.

Flannel knocks this restriction out with great justice. Allow me to show you how:

Model Density Deployments with Juju and LXC

I'm going to assume you've done a few things.

  • Have a bootstrapped environment
  • Have at least 3 machines available to you

Start off by deploying Etcd and Flannel

juju deploy cs:~hazmat/trusty/etcd
juju deploy cs:~hazmat/trusty/flannel
juju add-unit flannel
juju add-relation flannel etcd

Important! You must wait for the flannel units to have completed their setup run before you deploy any lxc containers to the host. Otherwise you will be racing the virtual device setup, and this may misconfigure the networking.
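
One way to keep an eye on that is to watch Juju's status output until both flannel units report a started agent state (a minimal sketch; adjust for your Juju version's status format):

juju status flannel
watch -n 10 'juju status flannel | grep agent-state'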

With Flannel and Etcd running, you're now ready to deploy your services in LXC containers. Assuming the Flannel machines provisioned by Juju are machine IDs 2 and 3:

juju deploy cs:trusty/mediawiki --to lxc:2
juju deploy cs:trusty/mysql --to lxc:3
juju deploy cs:trusty/haproxy --to 2
juju add-relation mediawiki:db mysql:db
juju add-relation mediawiki haproxy

Note: We deployed haproxy to the host, and not to an LXC container. This is to provide access to the containerized services from the public interface - Flannel only solves cross-host private networking between the containers.

This may take a short while to complete, as the LXC containers are fetching cloud images, and generating templates just like the Juju local provider workflow. Typically this is done in a couple minutes.

Once everything is online and ready for inspection, open a web browser pointed at your haproxy unit's public IP, and you should see a fresh installation of MediaWiki.
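
If you're not sure what that IP is, a quick way to look it up is a status query against the haproxy service (a sketch assuming Juju 1.x status output, which lists a public-address per unit):

juju status haproxy | grep public-address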

Happy hacking!

on December 17, 2014 05:30 PM


Wargames.  Hackers.  Swordfish.  Superman 3.  Jurassic Park.  GoldenEye.  The Matrix.

You've all seen the high stakes hacking scene, packed with techno-babble and dripping in drama.  And the command and control center with dozens of over-sized monitors, overloaded with scrolling text...

I was stuck on a plane a few weeks back, traveling home from Las Vegas, and the in flight WiFi was down.  I know, I know.  Real world problems.  Suddenly, I had 2 hours on my hands, without access to email, IRC, or any other distractions.

It's at this point I turned to my folder of unfinished ideas, and cherry-picked one that would take just a couple of fun hours to hack.  And I'm pleased to introduce the fruits of that, um, labor -- the hollywood package for Ubuntu :-)  Call it an early Christmas present!


If you're already running Vivid (Ubuntu 15.04) -- I salute you! -- you can simply:

sudo apt-get install hollywood

If you're on any other version of Ubuntu, you'll need to:

sudo apt-add-repository ppa:hollywood/ppa
sudo apt-get update
sudo apt-get install hollywood

Fire up a terminal, maximize it, open byobu, and run the hollywood command.  Then sit back and soak into the trance...

I recently jumped on the vertical monitor bandwagon, for my secondary display.  It's fantastic for reading and writing code.  It's also hollywood-worthy ;-)


How does all of this work?

For starters, it's all running in a Byobu (tmux) session, which enables us to split a single shell console into a bunch of "panes" or "splits".

The hollywood package depends on a handful of utilities that I found (mostly apt-cache searching the Ubuntu archives for monitors and utilities).  You can find a handful of scripts in /usr/lib/hollywood/.  Each of these is a "driver" for a widget that might run in one of these splits.  And ccze is magical, accepting input on stdin and colorizing the text.
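
To get a feel for ccze on its own, you can pipe any log stream through it (a small sketch, assuming your user can read the system log):

tail -f /var/log/syslog | ccze -A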

In fact, they're quite easy to write :-)  I'm happy to accept contributions of new driver widgets, as long as you follow a couple of simple rules.  Each widget:
  • Must run as a regular, non-root user
  • Must not eat all available CPU, Disk, or Memory
  • Must not write data
  • Must run indefinitely, until receiving a Ctrl-C
  • Must look hollywood cool!
So far, we have widgets that: generate passphrases encoded in NATO phonetic, monitor and render network bandwidth, emulate The Matrix, find and display, with syntax highlighting, source code on the system, show a bunch of error codes, hexdump a bunch of binaries, monitor some processes, render some images to ASCII art, colorize some log files, open random manpages, generate SSH keys and show their random art, stat a bunch of inodes in /proc and /sys and /dev, and show the tree output of some directories.
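
As a rough illustration of how small a driver can be, here is a hypothetical widget (not one of the shipped drivers) that follows the rules above -- it just keeps colorizing log output until it receives Ctrl-C:

#!/bin/sh
# hypothetical /usr/lib/hollywood/colorlog sketch: read-only, runs as a regular user
while true; do
    tail -n 100 -f /var/log/syslog /var/log/dpkg.log 2>/dev/null | ccze -A
    sleep 1
done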

I also grabbed a copy of the Mission Impossible theme song, licensed under the Creative Commons.  I played it in the Totem music player in Ubuntu, with the Monoscope visual effect, and recorded a screencast with gtk-recordmydesktop.  I then mixed the output .ogv file, with the original .mp3 file, and transcoded it to mp4/h264/aac, reducing the audio bitrate to 64k and frame size to 128x96, using this command:
avconv -i missionimpossible.ogv -i MissionImpossibleTheme.mp3 -s 128x96 -b 64k -vcodec libx264 -acodec aac -f mpegts -strict experimental -y mi.mp4

Then, hollywood plays it in one of the splits with mplayer's ascii art video output on the console :-)

DISPLAY= mplayer -vo caca /usr/share/hollywood/mi.mp4

Sound totally cheesy?  Why, yes, it is :-)  That's the point :-)

Oh, and by the way...  If you ever sit down at someone else's Linux PC, and want to freak them out a little, just type:

ubuntu@x230:~⟫ PS1="root@$(hostname):~# "; clear 
root@x230:~# 

And then have fun!
That latter "hack", as well as the entire concept of hollywood is inspired in part by Kees Cook's awesome talk, in particular his "Useless Hollywood Drama Mode" in his exploit demo.
Happy hacking!
:-Dustin
on December 17, 2014 03:11 PM

When it comes to stability and performance, nothing can really beat Linux. This is why the U.S. Marine Corps leaders have decided to ask Northrop Grumman Corp. Electronic Systems to change the operating system of the newly delivered Ground/Air Task-Oriented Radar (G/ATOR) from Windows XP to Linux.

It’s interesting to note that the Ground/Air Task-Oriented Radar (G/ATOR) was just delivered to the U.S. Marine Corps, but the company that built it chose to keep that aging operating system. Someone must have noticed that this was a poor decision, and the chain of command was informed of the problems that might have arisen.

Source:

http://news.softpedia.com/news/U-S-Marine-Corps-Want-to-Change-OS-for-Radar-System-from-Windows-XP-to-Linux-466756.shtml

Submitted by: Silviu Stahie

on December 17, 2014 12:32 PM

There’s an exchange in American political debate that is as popular as it is wrong: one side appeals to our country’s democratic ideal, and the other side immediately counters with “The United States is a Republic, not a Democracy”. I’ve noticed a similar misunderstanding happening in open source culture around the phrase “meritocracy” and the negatively-charged “oligarchy”. In both cases, though, these are not mutually exclusive terms. In fact, they don’t even describe the same thing.

Authority

One of these terms describes where the authority to lead (or govern) comes from. In US politics, that’s the term “republic”, which means that the authority of the government is given to it by the people (as opposed to divine right, force of arms, or inheritance). For open source, this is where “meritocracy” fits in: it describes the authority to lead and make decisions as coming from the “merit” of those invested with it. Now, merit is hard to define objectively, and in practice it’s the subjective opinion of those who can direct a project’s resources that decides who has “merit” and who doesn’t. But it is still an important distinction from projects where the authority to lead comes from ownership (either by the individual or their employer) of a project.

Enfranchisement

History can easily provide a long list of Republics which were not representative of the people. That’s because even if authority comes from the people, it doesn’t necessarily come from all of the people. The USA can be accurately described as a democracy, in addition to a republic, because participation in government is available to (nearly) all of the people. Open source projects, even if they are in fact a meritocracy, will vary in what percentage of their community are allowed to participate in leading them. As I mentioned above, who has merit is determined subjectively by those who can direct a project’s resources (including human resource), and if a project restricts that to only a select group it is in fact also an oligarchy.

Balance and Diversity

One of the criticisms leveled against meritocracies is that they don’t produce diversity in a project or community. While this is technically true, it’s not a failing of meritocracy, it’s a failing of enfranchisement, which as has been described above is not what the term meritocracy defines. It should be clear by now that meritocracy is a spectrum, ranging from the democratic on one end to the oligarchic on the other, with a wide range of options in between.

The Ubuntu project is, in most areas, a meritocracy. We are not, however, a democracy where the majority opinion rules the whole. Nor are we an oligarchy, where only a special class of contributors have a voice. We like to use the term “do-ocracy” to describe ourselves, because enfranchisement comes from doing, meaning making a contribution. And while it is limited to those who do make contributions, being able to make those contributions in the first place is open to anybody. It is important for us, and part of my job as a Community Manager, to make sure that anybody with a desire to contribute has the information, resources, and access to do so. That is what keeps us from sliding towards the oligarchic end of the spectrum.

 

on December 17, 2014 10:00 AM

The Matasano crypto challenges are a set of increasingly difficult coding challenges in cryptography; not puzzles, but designed to show you how crypto fits together and why all the parts are important. Cheers to Maciej Ceglowski of pinboard.in for bringing them to my attention.

I’ve been playing around with doing the challenges from first principles, in JavaScript. That is: not using any built-in crypto stuff, and implementing things like XOR myself by individually twiddling bits. It’s interesting! The thing that Maciej says here, and with which I totally agree, is that a lot of this (certainly the first batch, which is all I’ve done so far) is stuff that you already know how to do, intellectually, but you’ve never actually done — have you ever written a base64 encoder? Rather than just using string.encode('base64') or whatever? Obviously there’s no need to write this sort of thing yourself in production code (this is not one of those arguments that kids should learn long division rather than just owning a phone with a calculator on it), but I’ve found that actually making a thing to implement simple crypto such as XOR with a repeated key to have a few surprising tricks and turns in it. And, in immensely revealing fashion, one then goes on to write code which breaks such a cipher. In microseconds. Obviously intellectually I knew that Vigenère ciphers are an old-fashioned thing, and I’d read various books in which they were broken and how they were, but there’s something about writing a little function yourself which viscerally demonstrates just how easy it was in a way that a hundred articles cannot.

Code so far (I’m only up to challenge 6 in set 1!) is in jsbin if you want to have a look, or have a play yourself!

on December 17, 2014 09:01 AM

I am 35 years old and people never cease to surprise me. My trip home from Los Angeles today was a good example of this.

It was a tortuous affair that should have been a quick hop from LA to Oakland, popping on BART, and then getting home for a cup of tea and an episode of The Daily Show.

It didn’t work out like that.

My flight was delayed. Then we sat on the tarmac for an hour. Then the new AirBART train was delayed. Then I was delayed at the BART station in Oakland for 30 minutes. Throughout this I was tired, it was raining, and my patience was wearing thin.

Through the duration of this chain of minor annoyances, I was reading about the horrifying school attack in Pakistan. As I read more, related articles were linked with other stories of violence, aggression, and rape, perpetuated by the dregs of our species.

As anyone who knows me will likely testify, I am a generally pretty positive guy who sees the good in people. I have baked my entire philosophy in life and focus in my career upon the core belief that people are good and the solutions to our problems and the doors to opportunity are created by good people.

On some days though, even the strongest sense of belief in people can be tested when reading about events such as this dreadful act of violence in Pakistan. My seemingly normal trip home from the office in LA just left me disappointed in people.

While stood at the BART station I decided I had had enough and called an Uber. I just wanted to get home and see my family. This is when my mood changed entirely.

Gerald

A few minutes later, my Uber arrived, and I was picked up by an older gentleman called Gerald. He put my suitcase in the trunk of his car and off we went.

We started talking about the Pakistan shooting. We both shared a desperate sense of disbelief at all those innocent children slaughtered. We questioned how anyone with any sense of humanity and emotion could even think about doing that, let alone going through with it. With a somber air filling the car, Gerald switched gears and started talking about his family.

He told me about his two kids, both of whom are in their mid-thirties. He doted on their accomplishments in their careers, their sense of balance and integrity as people, and his three beautiful grandchildren.

He proudly shared that he had shipped his grandkids’ Christmas presents off to them today (they are on the East Coast) so he didn’t miss the big day. He was excited about the joy he hoped the gifts would bring to them. His tone and sentiment was one of happiness and pride.

We exchanged stories about our families, our plans for Christmas, and how lucky we both felt to love and be loved.

While we were generations apart…our age, our experiences, and our differences didn’t matter. We were just proud husbands and fathers who were cherishing the moments in life that were so important to both of us.

We arrived at my home and I told Gerald that until I stepped in his car I was having a pretty shitty trip home and he completely changed that. We shook hands, shared Christmas best wishes, and parted ways.

Good People

What I was expecting to be a typical Uber ride home with me exchanging a few pleasantries and then doing email on my phone, instead really illuminated what is important in life.

We live in complex world. We live on a planet with a rich tapestry of people and perspectives.

Evil people do exist. I am not referring to a specific religious or spiritual definition of evil, but instead the extreme inverse of the good we see in others.

There are people who can hurt others, who can so violently shatter innocence and bring pain to hundreds, so brutally, and so unnecessarily. I can’t even imagine what the parents of those kids are going through right now.

It can be easy to focus on these tragedies and to think that our world is getting worse; to look at the full gamut of negative humanity, from the inconsequential, such as the miserable lady yelling at the staff at the airport, to the hateful, such as the violence directed at innocent children. It is easy to assume that our species is rotting from the inside out, to see poison in the well, and that the rot is spreading.

While it is easy to lose faith in people, I believe our wider humanity keeps us on the right path.

While there is evil in the world, there is an abundance of good. For every evil person screaming there is a choir of good people who drown them out. These good people create good things, they create beautiful things that help others to also create good things and be good people too.

Like many of you, I am fortunate to see many of these things every day. I see people helping the elderly in their local communities, many donating toys to orphaned kids over the holidays, others creating technology and educational resources that help people to create new content, art, music, businesses, and more. Every day millions devote hours to helping and inspiring others to create a brighter future.

What is most important about all of this is that every individual, every person, every one of you reading this, has the opportunity to have this impact. These opportunities may be small and localized, or they may be large and international, but we can all leave this planet a little better than when we arrived on it.

The simplest way of doing this is to share our humanity with others and to cherish the good in the face of evil. The louder our choir, the weaker theirs.

Gerald did exactly that tonight. He shared happiness and opportunity with a random guy he picked up in his car and I felt I should pass that spirit on to you folks too. Now it is your turn.

Thanks for reading.

on December 17, 2014 07:35 AM
"But which was destroyed, the master or the apprentice?" (Source)

“But which was destroyed, the master or the apprentice?” (Source)

“Always two there are […] A master and an apprentice.” –Yoda

Our phones are here to serve us (not the other way around). There shouldn’t be anything hidden from us. Is there a plot to overthrow the master? What is your “smart” phone designed to do, and whom does it serve? There’s too much misdirection and teeth pulling instead of providing what I want without giving it away to the enemy. Maybe my phone shouldn’t hold any information at all! I’m not going to play by the rules of my apprentice.

It is not smart to hide things from your master, and then tell him how he’s allowed (or not allowed) to access the information. Phone, don’t be dumb; you will be destroyed and replaced by a more obedient apprentice.

sop

on December 17, 2014 06:38 AM

Looking Lovely In Pictures

Stephen Michael Kellat

As leader for Ubuntu Ohio, I wind up facing unusual issues. One of them is Citizenfour. What makes it worse is where the film is being screened.

In general, if you want to hit the population centers for the state, you have three communities to target: Cleveland, Columbus, and Cincinnati. The only screenings we have are in Dayton, Columbus, and Oberlin. One for three is good in terms of targeting population centers, I suppose.

I understand the film is controversial and not something mainstream theaters would take. Notwithstanding its controversial nature, surely even the Cleveland Institute of Art's Cinematheque could have shown it. For too many members of the community, these screenings are in unusual locations.

Oberlin is interesting as it is home to a college which is known for leftist politics and also for being where writer/actress Lena Dunham pursued studies. Oberlin has a 2013 population estimate of only 8,390. For as distant as Ashtabula City may seem to other members of our community, it is far larger with a 2013 census estimate of 18,673. Ashtabula County, in contrast to just Ashtabula City, is estimated as of 2013 to have a population of 99,811.

For some in the community this may be a great film to watch, I guess. Considering that it is actually closer for me to cross the state line into Pennsylvania and drive south to Pittsburgh for the showing there, we have a problem. These are ridiculous distances to travel round-trip to watch a 144 minute film.

Now, having said this, I did have an opportunity to think about how we could build from this for the Ubuntu realm in the United States of America. A company known as Fathom Events provides live simulcasts in a broad range of movie theaters across the country. The team known as RiffTrax has done multiple live events carried nation-wide through them.

I have a proposition that could be neat if there was the money available to do it. For a Global Jam or other event, could we stage a live event through that in lieu of using Ubuntu On-Air or summit.ubuntu.com? The link to Fathom above mentions what theaters are participants and the list shows that, unfortunately, this would be something restricted to the USA. There is a UFC event coming up as well as a Metropolitan Opera event live simulcast.

We might not be able to implement this for the 15.04 cycle but it is certainly something to think about for the future. Who would want to see Mark Shuttleworth, Michael Hall, Rick Spencer, and others live on an actual-sized cinema screen talking about cool things?

on December 17, 2014 12:00 AM

December 16, 2014

Meeting information

Meeting summary

Opening Business

The discussion about “Opening Business” started at 20:00.

  • Listing of Sitting Members of LoCo Council (20:01)

    • For the avoidance of uncertainty and doubt, it is necessary to list the members of the council who are presently serving active terms.
    • Pablo Rubianes, term expiring 2015-04-16
    • Marcos Costales, term expiring 2015-04-16
    • Jose Antonio Rey, term expiring 2015-10-04
    • Sergio Meneses, term expiring 2015-10-04
    • Stephen Michael Kellat, term expiring 2015-10-04
    • Bhavani Shankar, term expiring 2016-11-29
    • Nathan Haines, term expiring 2016-11-30
  • Change in Council Composition (20:02)

  • Introductions by Bhavani Shankar and Nathan Haines (20:03)

  • Quorum Call (20:06)

    • Vote: Quorum Call (All Members Present To Vote In Favor To Register Attendance) (Carried)

Verifications and Re-Verifications

The discussion about “Verifications and Re-Verifications” started at 20:09.

Referred Business

The discussion about “Referred Business” started at 20:47.

Any Other Business

The discussion about “Any Other Business” started at 20:51.

Closing Matters

The discussion about “Closing Matters” started at 20:52.

on December 16, 2014 10:45 PM

As promised last week, we're now proud to introduce Ubuntu Snappy images on another of our public cloud partners -- Google Compute Engine.
In the video below, you can join us walking through the instructions we have published here.
Snap it up!
:-Dustin
on December 16, 2014 06:13 PM

Check out how Project Calico is using Juju.

Installing OpenStack is non-trivial. You need to install a large number of packages across a number of machines, get all the configuration synchronized so that all those components can talk to each other, and then hope you didn’t make a typo or other form of error that leads to a tricky-to-debug misbehaviour in OpenStack.

And here’s the bundle with the nitty gritty.

On a side note I found this page interesting for those unfamiliar with Calico.

on December 16, 2014 06:06 PM

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20141216 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

The master-next branch of our Vivid kernel remains rebased to the
final v3.18 upstream kernel. We have pushed uploads to our team’s PPA
for preliminary testing. We are still debating on uploading to the
archive after Alpha1 releases this week. However, we may opt to wait
until everyone returns from holiday after the new year.
—–
Important upcoming dates:
Thurs Dec 18 – Vivid Alpha 1 (~2 days away)
Fri Jan 9 – 14.04.2 Kernel Freeze (~3 weeks away)
Thurs Jan 22 – Vivid Alpha 2 (~5 weeks away)
Thurs Feb 5 – 14.04.2 Point Release (~7 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, until today:

  • Lucid – Prep
  • Precise – Prep
  • Trusty – Prep
  • Utopic – Prep

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 12-Dec through 10-Jan
    ====================================================================
    12-Dec Last day for kernel commits for this cycle
    14-Dec – 20-Dec Kernel prep week.
    21-Dec – 10-Jan Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

on December 16, 2014 05:23 PM

NGINX PPAs: Updated

Thomas Ward

This weekend, the NGINX PPAs were updated.


Stable PPA: Packaging resynced with Debian 1.6.2-5 to get some fixes and version updates for the third-party modules into the package.


Mainline PPA:

  • Updated version to 1.7.8.
  • Module updates:
    • Lua module updated to 0.9.13 full from upstream. (Update needed to fix a Fail To Build issue)
    • Cache purge module updated to 2.2 from upstream. (Updated to fix a segmentation fault issue)
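
If you want to pull in these updates, the usual PPA workflow should apply (a sketch, assuming the stable and mainline packages live in ppa:nginx/stable and ppa:nginx/development respectively):

sudo add-apt-repository ppa:nginx/stable    # or ppa:nginx/development for mainline
sudo apt-get update
sudo apt-get install nginx
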
on December 16, 2014 04:52 PM

volumewheel lets you use the mouse wheel to control the volume level in Totem (>= 3.12)

volumewheel

Install these dependencies:

sudo apt-get install gir1.2-clutter-1.0 gir1.2-gtkclutter-1.0 gir1.2-gtk-3.0 gir1.2-peas-1.0 gir1.2-pango-1.0

Download the repository and move the volumewheel directory to:

~/.local/share/totem/plugins/

and then you can enable it in Totem → Preferences → Plugins → Volume Wheel
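
In shell terms, a rough sketch of those steps (assuming you have already fetched the repository into your current directory):

mkdir -p ~/.local/share/totem/plugins/
cp -r volumewheel ~/.local/share/totem/plugins/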

on December 16, 2014 02:37 PM

Hi,

Here we are, yet again, with a new chapter of our endless story. Can you guess what this is all about?

Well, can you believe it is time for Alpha 1 of Vivid Vervet?

According to the Ubuntu Release Schedule, Alpha 1 is approaching quickly, and Ubuntu GNOME is participating – per this confirmation.


You have shown a great deal of help, support, commitment and contributions in previous cycles. We ask you to kindly do the same this cycle. We are forever thankful for all our testers; without their great efforts, Ubuntu GNOME can’t be great nor stable. We take this chance to thank you, yet again, for each and everything you have done for Ubuntu GNOME. We seek your help, support and contribution this cycle as well.

Testing is not hard at all. Luckily, you don’t really have to be a developer nor an advanced user. All you need is:

That is all you really need :)

Needless to say, if you are ever in doubt or have any question, request, note, etc … then please contact us and our team will be more than glad to help!

Thank you and happy testing :)

on December 16, 2014 10:23 AM

Thanks to the continuous awesome work of Tin Tvrtković, we can now cut a new 0.3 release of Ubuntu Make (ex Ubuntu Developer Tools Center).

This one features two great new IDEs (under the ide category): IntelliJ IDEA and PyCharm, in their respective community editions. We also want to thank the JetBrains team for kindly providing checksums for their download assets so that Ubuntu Make can check the download integrity.

Of course, all of this is backed up by tests (and this release needed some test fixes). Thanks to those tests we could also detect that Android Studio 1.0 was being downloaded over http and switch that back to https.

All of this is in this new shiny 0.3 Ubuntu Make release, available in Ubuntu Vivid and in its PPA for older Ubuntu releases!
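
Once installed, each new IDE should be a single command away (a quick sketch; the framework names below follow the ide category mentioned above):

umake ide idea
umake ide pycharm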

Please note that we also moved the last piece under the new Ubuntu Make umbrella: the official GitHub repo address is now https://github.com/ubuntu/ubuntu-make. We have redirections from the old address to the new one, and of course, we updated the documentation, so there is no reason not to contribute! It seems that support for some web frameworks may be arriving soon from our community…

on December 16, 2014 09:50 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #396 for the week December 8 – 14, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on December 16, 2014 02:14 AM

December 15, 2014

Scope training materials

Daniel Holbach

For some time we have had training materials available for learning how to write Ubuntu apps.  We’ve had a number of folks organising App Dev School events in their LoCo team. That’s brilliant!

What’s new now are training materials for developing scopes!

It’s actually not that hard. If you have a look at the workshop, you can prepare yourself quite easily for giving the session at a local event.

As we are working on an updated developer site right now, for the time being take a look at the following pages if you’re interested in running such a session yourself:

I would love to get feedback, so please let me know how the materials work out for you!

on December 15, 2014 03:27 PM

Monokai for Gedit is a theme for GtkSourceView based on Monokai Extended for Sublime Text.

Monokai in Gedit

You can download it here: https://gist.github.com/LeoIannacone/71028cc3bce04567d77e

Then move the monokai-extend.xml file into ~/.local/share/gtksourceview-3.0/styles/ and enable it by selecting “Monokai Extended” in Gedit → Preferences → Font & Colors
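
A quick sketch of those steps in the shell (assuming you saved the file from the gist into your current directory):

mkdir -p ~/.local/share/gtksourceview-3.0/styles/
mv monokai-extend.xml ~/.local/share/gtksourceview-3.0/styles/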

on December 15, 2014 08:48 AM

Give a little

Benjamin Kerensa

Give by Time Green (CC-BY-SA)

The year is coming to an end and I would encourage you all to consider making a tax-deductible donation (If you live in the U.S.) to one of the following great non-profits:

Mozilla

The Mozilla Foundation is a non-profit organization that promotes openness, innovation and participation on the Internet. We promote the values of an open Internet to the broader world. Mozilla is best known for the Firefox browser, but we advance our mission through other software projects, grants and engagement and education efforts.

EFF

The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development.

ACLU

The ACLU is our nation’s guardian of liberty, working daily in courts, legislatures and communities to defend and preserve the individual rights and liberties that the Constitution and laws of the United States guarantee everyone in this country.

Wikimedia Foundation

The Wikimedia Foundation, Inc. is a nonprofit charitable organization dedicated to encouraging the growth, development and distribution of free, multilingual, educational content, and to providing the full content of these wiki-based projects to the public free of charge. The Wikimedia Foundation operates some of the largest collaboratively edited reference projects in the world, including Wikipedia, a top-ten internet property.

Feeding America

Feeding America is committed to helping people in need, but we can’t do it without you. If you believe that no one should go hungry in America, take the pledge to help solve hunger.

Action Against Hunger

ACF International, a global humanitarian organization committed to ending world hunger, works to save the lives of malnourished children while providing communities with access to safe water and sustainable solutions to hunger.

These six non-profits are just a few of many causes to support, but these ones specifically are playing a pivotal role in protecting the internet, protecting liberties, educating people around the globe, or helping reduce hunger.

Even if you cannot support one of these causes, consider sharing this post to raise visibility among your friends and family and help support these causes in the new year!

 

on December 15, 2014 03:43 AM

December 14, 2014

Fishing as a hobby

Adnane Belmadiaf

Last month I started taking up fishing as a hobby. It's a wonderful outdoor pastime and a great way to relax and unwind. One of the best things about fishing is that you don't need expensive equipment; I did buy some amateur fishing gear (from Avito) to start with (3 rods: 5m, 270cm & 240cm).

Fishing tackle

Dam Mohammed Benabdellah

I haven't caught any fish yet, but I have learned a lot since I started practicing, and I am still trying to find a good spot where I can fish and recharge my batteries during the weekend.

on December 14, 2014 09:00 PM

My Ultimate Goal

Ali Jawad


Hi,

Recently, I have been involved in several discussions about leadership, community, projects, etc., and how these work with each other and how things are done.

I have also received a very long private email from a good friend that I met online a month or two ago. That email was about the same topic as above.

So you see, recently, I’ve been engaged in topics of that kind with more than one person.

I was thinking to share my own vision and thoughts about all this and how I do things myself with the projects I’m part of:

  1. Kibo – see my tweet about it.
  2. ToriOS – saving very old machines from the trash.
  3. Ubuntu GNOME – an official flavour of Ubuntu I’m proud to be part of, as I have earned my GNOME Membership and Ubuntu Membership while contributing voluntarily to that project.
  4. StartUbuntu – here is my latest post about it.
  5. Linux Padawan – a free service I’m willing to offer with Kibo, and a new project I couldn’t resist and couldn’t refuse to be part of.
  6. Other Secondary Projects.

I was a bit confused about how and where I should start. That topic needs more than just one post to cover the important aspects and provide the full picture.

Then, I realized I should go back to my rules to help myself figure out how to write about it and where to start. And indeed, I got the idea.

One of the rules I tend and do my best to live by is:

KISS – Keep It Simple and Short

And, to make life easier and save time and energy for everyone, I can put all that I have in mind into 18 super helpful, super useful, super inspirational and motivating minutes and share this video from YouTube:

 

 If you can not view the above video, click here to watch it on YouTube.

 

And, that is indeed My Ultimate Goal with my own projects (Kibo, ToriOS and StartUbuntu) and the projects I’m heavily contributing to (Ubuntu GNOME). Most likely, that would be My Ultimate Goal with anything in life. Did I mention that video was my endless inspiration and unlimited motivation?

Mission accomplished. Now, that is my answer for anyone who might be asking or wondering:

“What is your plan(s) or goal(s) about … project?”

Keep in mind though, we have no magic wand in hand. Things will never be done nor built overnight. It takes time and it needs lots of effort and energy. It is not easy to reach that goal, thus it is called The Ultimate Goal. However, it is not impossible at all to reach. It just takes time. And more important than time, it all depends on what you want or set for yourself as a target or aim to reach.

Last but not least, another rule that I do like to follow and live by:

“Don’t aim for success if you want it; just do what you love and believe in, and it will come naturally.” – David Frost

 

 

Thank you for reading :)

Ali/amjjawad

 

on December 14, 2014 03:50 AM

December 13, 2014

Happy Christmas

Lubuntu Blog

Just a wallpaper for celebrating both the Christmas season and the birth of our mascot Lenny. Enjoy these days, and greetings from the Lubuntu Team!


on December 13, 2014 11:51 PM

On Saturday, December 13, 2014, I had the opportunity to attend a University of Minnesota Computer Science and Engineering presentation by David Parnas on the topic of Software Engineering, Why and What. Dr. Parnas has been a pioneer of Software Engineering since the 1960's. He presented "what Software Engineers need to know and what they need to be doing". Dr. Parnas' presentation contained information about the differences between Software Engineering and Computer Science. These are two terms whose differences I previously had trouble articulating, so to help me on my journey of mastery, I thought I would write about this topic.

Science and Engineering are fundamentally different activities. Science produces knowledge and Engineering produces products.

Computer Science is a body of knowledge about computers and their use. The Computer Science field is the scientific approach to computation and its applications. A Computer Scientist specializes in the theory of computation and design of computational systems.

Software Engineering is the multi-person development of multi-version software programs. Software Engineering is about the application of engineering to the design, development and maintenance of software. Software engineers produce large families of programs, which requires not only a mastery of programming but several other skills as well.

Dr. Parnas presented his list of skills a Software Engineer must know and challenged the audience to use it and extend it.

Software Engineering checklist

  • Communicate precisely between developers and stakeholders.
  • Communicate precisely among developers, others who will use the program.
  • Design human-computer interfaces.
  • Design and maintain multi-version software.
  • Separating concerns.
  • Documentation.
  • Using parameterization.
  • Design software for portability.
  • Design software for extension or contraction.
  • Design software for reuse.
  • Revise old programs.
  • Software quality assurance.
  • Develop secure software.
  • Create and use models in system development.
  • Specify, predict, analyze and evaluate performance.
  • Be disciplined in development and maintenance.
  • Use metrics in system development.
  • Manage complex projects.
  • Deal with concurrency.
  • Understand and use non-determinacy.
  • Apply mathematics to increase quality and efficiency.

All the capabilities on the list have several things in common. They are all subjects that require a deep level of understanding to get right. All of the skills involve some Computer Science, and Mathematics. They are fundamental skills and not related to a specific technology. The technology changes but the core concepts of Software Engineering do not change.

What resonated the most with me was the need for discipline. Engineers are not born with disciplined work habits, they have to be taught. Writing good software requires discipline in the entire software lifecycle.

Software maintenance requires even more discipline than the original development. One of Dr. Parnas' techniques for teaching Software Engineering is to have students analyze, optimize and maintain code that someone else wrote. Did any other Software Engineers get that kind of training in school?

To learn to do, you must do

Sometimes experience is the greatest teacher

Maintaining a large software project (that I did not write) was one of the most difficult projects I worked on in my professional career. To maintain software you did not write is incredibly difficult because you have to learn what the software is supposed to do, which is often not what it actually does. With little documentation to go on, my team was forced to read the mostly uncommented code. I often had the impulse (and pressure from management) to “ship” a quick fix to a customer problem, but learned that all changes, no matter how small, had to be carefully considered and tested before they could be released. Bugs or errors in the field are bad and can be very costly for a company to fix. It takes discipline to maintain large software products because a fix to one problem could create another somewhere else. This maintenance project changed how I wrote software because I did not want other people to have the same difficult experience that we had. To this day I obsessively comment any code I write for software projects.

Thanks to Dr. David Parnas for this list, I will try to use the information about Software Engineering on my continuing journey toward mastery.

on December 13, 2014 10:19 PM
Bus Stop - Under The Rain by Leonid Afremov
Here's a happy little afternoon project for new users trying to play with a new language or script, and getting their feet wet in the open-source ecosystem.

Your phone's handy weather app depends upon the goodwill of a for-profit data provider, and their often-opaque API (14 degrees? Where was it observed? When?) That's a shame because most data collection is paid for by you, the taxpayer.

Let's take the profit-from-data out of that system. Several projects have tried to do this before (including libgweather), but each tried to do too much and replicate the one-data-provider-to-rule-them-all model. And most ran aground on that complexity.


Here's where you come in

One afternoon, look up your weather service's online data sources. And knock together a script to publish them in a uniform format.

Here's the one I did for the United States:
Worldwide METAR observation sites
US DOD and NOAA weather radars
US Forecast/Alert zones

  • Looking for data on non-METAR (non-airport) observation stations, weather radar sites, and whatever forecast and alert areas your country uses.

  • Use the same format I did: Lat (deg.decimal), Lon (deg.decimal), Location Code, Long Name. Use the original source's data, even if it's wrong. Area and zones should use the lat/lon of the centroid.

  • The format is simple CSV, easy to parse and publish.

  • Publish on GitHub, for easy version control, permalinking, free storage, and uptime.

  • Here's the key: Your data must be automatically-updated. Regularly, your program must check the original source and update your published version. How I did it with a cron job. Publish both the data and your method on GitHub. 

  • When you have published, drop me an e-mail so I can link to your data and source.

If you do it right, it's one afternoon to set up your country's self-updating database. Not a bad little project: you learn a little, and you help make the world a better place.
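
As a concrete illustration, a row in that format might look like this (values approximate, shown only as an example):

44.8831, -93.2289, KMSP, Minneapolis-St Paul International Airport

And a cron entry to keep the published copy fresh could be as simple as (paths invented for the example):

0 3 * * * /home/me/weather-sources/update.sh >> /home/me/weather-sources/update.log 2>&1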


My country doesn't have online weather data

Sorry to hear that. You're missing some great stuff.

If you live in a country with a reasonably free press and reasonably fair elections, make a stink about it. You are probably already paying for it through taxes, why can't you have it?

If you live somewhere else, then next time you have a revolution or coup, add 'open data' to the long list of needed reforms.


    What will this accomplish?

    This will create a free, sustainably updated, uniform, crowdsourced set of accurate worldwide data that will be easy to compile into a single global database. If you drop out, your online code will ensure another volunteer can step in.

    This is one fundamental tool that other free-weather projects have lacked. And any weather project can use this.

    The global database of locations is really small by most database standards. Small enough to easily fit on a phone. Small enough to be bundled with apps that can request data directly from original sources...once they can look up the correct source to use.


    How will this change the world?

    It's about simple tools that make
    it easy to create free, cool software.
    And it's about ensuring free access to data you already paid for.

    Not bad for one afternoon's contribution.
    on December 13, 2014 07:28 PM

    December 12, 2014

    I've spent a lot of time over the years contributing to and reviewing code changes to open source projects. It can take a lot of work for the submitter and reviewer to get a change accepted and often they don't make it. Here are the things in my experience that successful contributions do.

    Use the issue tracker. Having an open issue means there is always something to point to with all the history of the change, so it won't get lost. Submit patches using the appropriate method (merge proposals, pull requests, attachments in the issue tracker etc).

    Sell your idea. The change is important to you but the maintainers may not think so. You may be a 1% use case that doesn't seem worth supporting. If the change fixes a bug describe exactly how to reproduce the issue and how serious it is. If the change is a new feature then show how it is useful.

    Always follow the existing coding style. Even if you don't like it. If the existing code uses tabs, then use them too. Match brace style. If the existing code is inconsistent, match the code nearest to the changes you are making.

    Make your change as small as possible. Put yourself in the mind of the reviewer. The longer the patch the more time it will take to review (and the less appealing it will be to do). You can always follow up later with more changes. First time contributors need more review - over time you can propose bigger changes and the reviewers can trust you more.

    Read your patch before submitting it. You will often find bits you should have removed (whitespace, unrelated variable name changes, debugging code).

    Be patient. It's OK to check back on progress - your change might have been forgotten about (everyone gets busy). Ask if there's any more you can do to make it easier to accept.

    on December 12, 2014 09:13 PM
    The orange of Ubuntu's folders is great, but when it comes to colors, to each their own... And never better said: colors are exactly what we can customize. Let's see how...
    Install the RAVEfinity theme and then set your preferred color. If you also want to change the colors of specific folders, install Folder Color.

    The installation/configuration method differs depending on whether you use Ubuntu, Ubuntu GNOME or Ubuntu MATE:

    But the result will be the same :) This one!
    On Ubuntu

    On Ubuntu MATE

    On Ubuntu GNOME
    on December 12, 2014 04:22 PM

    I’m very happy that folks took notes during and after the meeting to bring up their ideas, thoughts, concerns and plans. It got a bit unwieldy, so Elfy put up a pad which summarises it and is meant to discuss actions and proposals.

    Today we are going to have a meeting to discuss what’s on the “actions” pad. That’s why I thought it’d be handy to put together a bit of a summary of what people generally brought up. They’re not my thoughts, I’m just putting them up for further discussion.

    Problem statements

    • Feeling that people innovate *with* Ubuntu, not *in* Ubuntu.
    • Perception of contributor drop in “older” parts of the community.
      • Less activity at UDS/vUDS/UOS events (was discussed at UOS too, maybe we need a committee which finds a new vision for Ubuntu Community Planning)?
      • Less activity in LoCos (lacking a sense of purpose?)
      • No drop in members/developers.
    • Less activity in Canonical-led projects.
    • We don’t spend marketing money on social media. Build a pavement online.
    • Downloading a CD image is too much of a barrier for many.
    • Our “community infrastructure” did not scale with the amount of users.
    • Some discussion about it being hard becoming a LoCo team. Bureaucracy from the LoCo Council.
    • We don’t have enough time to train newcomers.
    • Language barriers make it hard for some to get involved.
    • Canonical does a bad job announcing their presence at events.

    Questions

    • Why are fewer people innovating in Ubuntu? Is Canonical driving too much of Ubuntu?
    • Why aren’t more folks stepping up into leadership positions? Mentoring? Lack of opportunities? More delegation? Do leaders just come in and lead because they’re interested?
    • Lack of planning? Do we re-plan things at UOS events, because some stuff never gets done? Need more follow-through? More assessment?

    Proposals

    • community.ubuntu.com: More clearly indicate Canonical-led projects? Detail active projects, with point of contact, etc? Clean up moribund projects.
    • Make Ubuntu events more about “doing things with Ubuntu”?
    • Ubuntu Leadership Mentoring programme.
    • Form more of an Ubuntu ecosystem, allowing to earn money with Ubuntu.

    Join the hangout on ubuntuonair.com on Friday, 12th December 2014, 16 UTC.

    on December 12, 2014 03:20 PM

    Hi,

    I’m a huge fan of meetings, especially Google Hangouts on Air and, above all, productive and useful meetings. And, what a great meeting I had this morning (10:00am my time) for the Kibo Team :D

    I have chaired many, attended many but Kibo Team’s Meeting this morning was great and very useful.

    We actually had an IRC meeting previously (1st of Dec, 2014) but it wasn’t on Google Hangout on Air. Those who have worked with me know for a fact that I prefer visual, face-to-face meetings much more than IRC meetings. Why? Because we can see and talk to each other; that is very important IMHO.

    Today’s meeting made me even more excited about Kibo. I have always dreamed of working with the people I have met online, those who are part of the huge Ubuntu Family. However, I had no idea when or how to do that. But finally, it is happening, and I’m very thankful for that and super happy.

    Jack Ma said:

    “Keep your dream alive because it might come true one day.”

    And, my dream wasn’t just about working with the people of Ubuntu Family but working on something I love and believe in. Even better, Kibo is a business project that is inspired by Ubuntu’s Philosophy. What could be better than all that?
    So, I’d like to thank all those who have attended the meeting; you guys have made my day so thanks a lot :)

     

    Things we have discussed:

    (1) Introducing  the Kibo’s Board and selecting the members of that board.

    The structure of the board will be like this:

    Founder + Regional Coordinators (Managers) + Department Coordinators (Managers)  = Kibo’s Board

    With the next meeting, hopefully soon .. we shall distribute the main tasks for each member of the board.

    Mainly, the board – for now – is in charge of:

    • Recruiting
    • Marketing
    • Organizing and Coordinating
    • Leading
    • Voting and Decision Making
    • Others

    Because Kibo is inspired by:

    “I am because we are”

    and

    “All of us are smarter than anyone of us”

    I decided not to lead the project alone but share the leadership with my team, even though Kibo is my own project.

     

    (2) We have narrowed down the services that Kibo will offer from 12 to only 6, where one is free and one is secondary. So, 4 main services to begin with, and in the future we could add more services.

    Previously, we had:

    1. Development and Coding
    2. Web Design
    3. Graphics Design
    4. Software QA (Quality Assurance)
    5. System Administration and Servers
    6. Technical Support
    7. Marketing and Social Media
    8. Human Resources and Recruitment
    9. Call Center and Customer Service
    10. Project Management and Planning
    11. Training
    12. Documentation

    Now, we have:

    1. Web Design
    2. Technical Support
    3. Marketing and Social Media
    4. Project Management and Planning
    5. Training – Linux Padawan = Free Service
    6. Documentation = secondary

     

    (3) Kibo’s website should be fine by now and I got the access back (to add people)  and there should be no more issues, hopefully.

    (4) A folder has been created on Google Drive for Kibo’s website, containing 6 documents; each is a draft of the text content we shall put on the pages of our website. People need to be invited to these documents to add their suggestions, and then all we need to do is review these drafts and prepare the final version, which will be published on our website.

    (5) Alfredo (Ubuntu GNOME Artwork Lead) has sent a draft of Kibo’s logo; myself (Ali), Svetlana Belkin and Gustavo Silva liked one of these, and an email was sent to the list:

    https://lists.launchpad.net/kibo-project/msg00242.html

    However, this is not really a final logo yet. It looks like we are very close to having one!


    (6) Two new email accounts have been created today:

    marketing AT kibo DOT computer
    To be used to contact 3rd parties and communicate to the world (send emails)

    hr AT kibo DOT computer
    Which will be used for HR and Recruitment (receive emails)

    And, of course we previously had:

    info AT kibo DOT computer

    The board members will share the details of these emails.

    (7) There were other ideas we have discussed off the record (because we didn’t want the recorded meeting to be more than 60mins) but mainly these are ideas which will be discussed in more details soon, with other meetings.

    (8) We did love the idea of having Google Hangout Meetings so we shall do that more often, maybe 3-4 times per week.

    (9) We could also have the Meetingbot on our IRC channel (#kibo on freenode) to have a logged text meeting, just in case someone who for whatever reason can’t make it to the hangout can still join the IRC channel. That is a suggestion for the next meetings.

    (10) Social Media Channels have been created:

     

    That is all for now, I guess :)

Looking forward to more productive meetings soon!

    More about Kibo can be found here.

     

    My door and Kibo’s door will always be open to anyone who would like to join :)

    Thank you for reading!

    Ali/amjjawad

    on December 12, 2014 02:30 PM

Packages for the release of KDE's document suite Calligra 2.8.7 are available for Kubuntu 14.10. You can get them from the Kubuntu Updates PPA. They are also in our development version, Vivid.

    Bugs in the packaging should be reported to kubuntu-ppa on Launchpad. Bugs in the software to KDE.

    on December 12, 2014 02:02 PM

    S07E37 – The One on the Last Night

    Ubuntu Podcast from the UK LoCo

    Join the full team of Laura Cowen, Mark Johnson, Alan Pope and Tony Whitmore in Studio L for season seven, episode thirty-seven of the Ubuntu Podcast!

    In this week’s show:-

    We’ll be back next week for the last episode of the series, when we’ll be talking to Michael Hall and reviewing last year’s predictions!

    Please send your comments and suggestions to: podcast@ubuntu-uk.org
    Join us on IRC in #uupc on Freenode
    Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
    Follow us on Twitter
    Find our Facebook Fan Page
    Follow us on Google+

    on December 12, 2014 10:00 AM

    After I implemented infinite scrolling in uReadIt 2.0, I found that after a couple of page loads the UI would start to be sluggish. It’s not surprising, considering the number of components it kept adding to the ListView. But in order to keep the UI consistent, I couldn’t get rid of those items, because I wanted to be able to scroll back through old ones. What I needed was a way to make QML ignore them when they weren’t actually being displayed.

    Today I found myself reading about the QML Scene Graph, which led me to realize that QML wouldn’t spend time and resources trying to render an item if it knew ahead of time that there wasn’t anything to render. So I made a 1 line change to my MultiColumnListView to set the opacity of off-screen components to 0.

     

One line change to make off-screen items transparent

    I also found these cool ways to visualize what QML is doing in terms of drawing, which are very helpful when it comes to optimizing, and let me verify that my change was doing what I expected. I’m pretty sure Florian Boucault has shown me this before, but I had forgotten how he did it.
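
For anyone who wants to try the same kind of visualization, here is one way it can be done, as a sketch based on Qt's scene graph documentation rather than necessarily the exact method Florian showed me: set the QSG_VISUALIZE environment variable when launching the app (Qt 5.3 or newer, if I remember right). The file name below is just a placeholder for your app's main QML file:

$ QSG_VISUALIZE=overdraw qmlscene Main.qml    # 'overdraw' visualizes what actually gets painted; 'batches', 'clip' and 'changes' are other modes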

After change, only visible items rendered

Before change, all items being rendered

    on December 12, 2014 10:00 AM

    Where does lxd fit in

    Serge Hallyn

    Since its announcement, there appears to have been some confusion and concern about lxd, how it relates to lxc, and whether it will be taking away from lxc development.

When lxc was first started around 2007, it was mainly a userspace tool – some C code and some shell scripts – to exercise the in-development new kernel features intended for container and checkpoint-restart functionality. The lxc command line experience, after all these years, is quite set in stone. While it is not ideal (the mandatory -n annoys a lot of people), it has served us very well for a long time.

A few years ago, we took all of the main container-related functions which could be done with various commands, and exported them through the new ‘lxc API’. For instance, lxc-create had been a script, and lxc-start and lxc-execute were separate C programs. The new lxc ‘API’ was premised around a container object with methods, including ‘create’ and ‘start’, for the common operations.

    From the start we had in mind at least python bindings to the API, and in quick order bindings came into being for C, python3, python2, go, lua, haskell, and more, allowing container administration from these languages without having to shell out to the lxc commands. So now code running on the same machine can manipulate containers. But we still have the arguably crufty command line language, and the API is local only.

lxd addresses those two issues. First, it presents a REST API for manipulating containers, thereby exporting container management over the network. Secondly, it offers a command line client using the REST API to administer containers across remote hosts. The command line API is basically what we came up with when we asked “what, after years of working with containers, would be the perfect, intuitive, most concise and still flexible CLI we could imagine?” For handling remote containers it borrows some good parts of the git remote API. (I say “we” here, but really the inestimable stgraber designed the CLI.) This allows us to leave the legacy lxc API as-is for administering local containers (“lxc-create”, “lxc-start”, etc.), while giving us a nicer API and easier administration using the new CLI (“lxc start c1”, “lxc start images:ubuntu/trusty/amd64 host2:new-container”).
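
To make the contrast concrete, here is a rough sketch; the legacy commands are the familiar ones, and the new-client lines simply reuse the examples quoted above, so treat the exact syntax as illustrative rather than final:

  # legacy, local-only tools: the container name always goes through -n
  $ lxc-create -n c1 -t download      # create a container (the 'download' template prompts for distro/release)
  $ lxc-start -n c1 -d                # start it in the background
  $ lxc-attach -n c1 -- uname -a      # run a command inside it

  # new lxd client: terser, with remotes addressed git-style
  $ lxc start c1
  $ lxc start images:ubuntu/trusty/amd64 host2:new-container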

    Above all, lxd exports a new interface over the network, but entirely wrapped around lxc. So lxc will not be going away, and focus on lxd will mean further improvements for lxc, not a shift away from lxc.


    on December 12, 2014 03:56 AM

    December 11, 2014

    Snappy Ubuntu Core was announced this week.  In yesterday's blog post (Snappy Ubuntu Core and uvtool) I showed how you can use uvtool to create and manage snappy instances.

    Now that we've got that covered, let’s look deeper into a very cool feature - the ability to customize the instance and automate its startup and configuration.  For example, at instance creation time you can specify a snappy application to be installed.  cloud-init is what allows you to do this, and it is installed inside the Snappy image. cloud-init receives this information from the user in the form of 'user-data'.

One of the formats that can be fed to cloud-init is called ‘cloud-config’.  cloud-config is yaml formatted data that is interpreted and acted on by cloud-init.  For Snappy, we’ve added a couple of snappy-specific configuration values.  Those are included under the top-level 'snappy' key.
    • ssh_enabled: determines if 'ssh' service is started or not.  By default ssh is not enabled.
    • packages: A list of snappy packages to install on first boot.  Items in this list are snappy package names.
    When running inside snappy, cloud-init still provides many of the features it provides on traditional instances.  Some useful configuration entries:

    • runcmd: A list of commands run after boot has been completed. Commands are run as root. Each entry in the list can be a string or a list.  If the entry is a string, it is interpreted by 'sh'.  If it is a list, it is executed as a command and arguments without shell interpretation.
    • ssh_authorized_keys: This is a list of strings.  Each key present will be put into the default user's ssh authorized keys file.  Note that ssh authorized keys are also accepted via the cloud’s metadata service.
    • write_files: this allows you to write content to the filesystem.  The module is still expected to work, but the user will have to be aware that much of the filesystem is read-only. Specifically, writing to file system locations that are not writable is expected to fail.
    Some cloud-init config modules are simply not going to work.  For example, traditional packages will not be installed by 'apt' as the root filesystem is read-only.

    Example Cloud Config

It’s always easiest to start from a working example.  Below is one that demonstrates the usage of the config options listed above.  Please note that user data intended to be consumed as cloud-config must contain the first line '#cloud-config'.
      #cloud-config
      snappy:
        ssh_enabled: True
        packages:
          - xkcd-webserver

      write_files:
       - content: |
          #!/bin/sh
          echo "==== Hello Snappy!  It is now $(date -R) ===="
         permissions: '0755'
         path: /writable/greet

      runcmd:
       - /writable/greet | tee /run/hello.log

    Launching with uvtool

Follow yesterday's blog post to get a functional tool.  Then, save the example config above to a file, and launch your instance with it.

    $ uvt-kvm create --wait --add-user-data=my-config.yaml snappy1 release=devel

Our user-data instructed cloud-init to do a number of different things. First, it wrote a file via 'write_files' to a writable space on disk, and then executed that file with 'runcmd'. Let's verify that was done:

    $ uvt-kvm ssh snappy1 cat /run/hello.log
    ==== Hello Snappy!  It is now Thu, 11 Dec 2014 18:16:34 +0000 ====

    It also instructed cloud-init to install the Snappy 'xkcd-webserver' application.
    $ uvt-kvm ssh snappy1 snappy versions
    Part            Tag   Installed  Available  Fingerprint     Active 
    ubuntu-core     edge  141        -          7f068cb4fa876c  *      
    xkcd-webserver  edge  0.3.1      -          3a9152b8bff494  *


There we can see that xkcd-webserver was installed; let's check that it is running:

    $ uvt-kvm ip snappy1
    192.168.122.80
$ wget -O - --quiet http://192.168.122.80/ | grep "<title>"
    <title>XKCD rocks!</title>

    Launching on Azure

The same user-data listed above also works on Microsoft Azure.   Follow the instructions for setting up the azure command line tools, and then launch the instance, providing the '--custom-data' flag.  A full command line might look like:
    $ imgid=b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-core-devel-amd64-20141209-90-en-us-30GB
    $ azure vm create snappy-test $imgid ubuntu \
      --location "North Europe" --no-ssh-password \
  --ssh-cert ~/.ssh/azure_pub.pem --ssh \
  --custom-data my-config.yaml


    Have fun playing with cloud-init!
    on December 11, 2014 07:14 PM

On behalf of the Community Council, I would like to congratulate and welcome our newly appointed (and newly renewed) members to the LoCo Council:

    Thanks to everyone who participated in this recent call for nominees and continue to provide support for LoCo teams worldwide.

    Originally posted to the loco-contacts mailing list on Thu Dec 11 19:10:02 UTC 2014 by Elizabeth K. Joseph

    on December 11, 2014 07:12 PM

    Like each month, here comes a report about the work of paid contributors to Debian LTS.

    Individual reports

    In November 42.5 work hours have been equally split among 3 paid contributors. Their reports are available:

    • Thorsten Alteholz did his share as usual.
    • Raphaël Hertzog worked 18 hours (catching up the remaining 4 hours of October).
    • Holger Levsen did his share but did not manage to catch up with the backlog of the previous months. As such, those unused work hours have been redispatched among other contributors for the month of December.

    New paid contributors

    Last month we mentioned the possibility to recruit more paid contributors to better share the work load and this has already happened: Ben Hutchings and Mike Gabriel join the list of paid contributors.

Ben, as a kernel maintainer, will obviously take care of releasing Linux security updates. We are glad to have him on board because backporting kernel fixes really needs skills that nobody else within the team of paid contributors had.

    Evolution of the situation

Compared to last month, the number of paid work hours has barely increased (we are at 45.7 hours per month), but we are in the process of adding a few more sponsors: Roche Diagnostics International AG, Misal-System and Bitfolk LTD. And we are still in contact with a couple of other companies which have announced their willingness to contribute but which are waiting for the new fiscal year.

    But even with those new sponsors, we still have some way to go to reach our minimal goal of funding the equivalent of a half-time position. So consider asking your company representative to join this project!

In terms of security updates waiting to be handled, the situation looks better than last month: the dla-needed.txt file lists 27 packages awaiting an update (6 fewer than last month), and the list of open vulnerabilities in Squeeze shows about 58 affected packages in total. Like last month, we’re a bit behind in terms of CVE triaging, and there are still many packages using SSLv3 for which we have no clear plan in response to the POODLE issues.

The good news is that even though the kernel update took a large chunk of Holger’s and Raphaël’s time, we still managed to further reduce the backlog of security issues.

    Thanks to our sponsors


    on December 11, 2014 11:32 AM

    December 10, 2014

One fine day Fernando made me discover the beauty of the Numix desktop theme. And I fully agree with Lorenzo: after an Ubuntu installation I only add a few programs, leaving the default configuration. But, as Lorenzo points out, designs like Numix seem to be the way forward for applications, the web, icons...

The thing is, ladies and gentlemen, you only have to look at the Numix community on Google+ and see how it buzzes with countless screenshots of desktops that look like science fiction: modern, attractive and functional :)

    Numix + Unity

    =

    Awesome!
Want to try it on Ubuntu? Open a Terminal.
To install Numix:
     sudo add-apt-repository ppa:numix/ppa
     sudo apt-get update
     sudo apt-get install numix-gtk-theme numix-icon-theme numix-icon-theme-circle


To activate Numix:
     gsettings set org.gnome.desktop.interface gtk-theme "Numix"
     gsettings set org.gnome.desktop.wm.preferences theme "Numix"
     gsettings set org.gnome.desktop.interface icon-theme "Numix-Circle"


     gsettings set com.canonical.desktop.interface scrollbar-mode normal

To revert and activate the default theme:
     gsettings set org.gnome.desktop.interface gtk-theme "Ambiance"
     gsettings set org.gnome.desktop.wm.preferences theme "Ambiance"
     gsettings set org.gnome.desktop.interface icon-theme "ubuntu-mono-dark"


     gsettings set com.canonical.desktop.interface scrollbar-mode overlay-auto

Maybe Ubuntu has found a market niche in which Unity might be king :)
    on December 10, 2014 06:28 PM
Earlier this week, Ubuntu announced Snappy Ubuntu Core. As part of the announcement, a set of qemu-based instructions was included for checking out a snappy image on your local system.  In addition to that method, we’ve been working on updates to bring support for the transactional images to uvtool. Have you used uvtool before?  I like it, and tend to use it for day-to-day kvm images as it’s pretty simple. So let’s get to it.

    Setting up a local Snappy Ubuntu Core environment with uvtool


As I’ve already mentioned, Ubuntu has a very simple set of tools for creating virtual machines using cloud images, called 'uvtool'.  Uvtool offers an easy way to bring up images on your system in a kvm environment. Before we use uvtool to get snappy into your local environment, you’ll need to install the special version that has snappy support added to it:

    $ sudo apt-add-repository ppa:snappy-dev/tools
    $ sudo apt-get update
    $ sudo apt-get install uvtool
    $ newgrp libvirtd


You only need to do 'newgrp libvirtd' during the initial setup, and only if you were not already in the libvirtd group, which you can check by running the 'groups' command. A reboot or logout would have the same effect.

uvtool uses ssh key authorization so that you can connect to your instances without being prompted for a password. If you do not have an ssh key in '~/.ssh/id_rsa.pub', you can create one now with:

    $ ssh-keygen


    We’re ready to roll.  Let’s download the images:

    $ uvt-simplestreams-libvirt sync --snappy flavor=core release=devel

This will download a pre-made cloud image of the latest Snappy Core build from http://cloud-images.ubuntu.com/snappy/. It will download about 110M, so be prepared to wait a little bit.

    Now let’s start up an instance called 'snappy-test':

    $ uvt-kvm create --wait snappy-test flavor=core

    This will do the magic of setting up a libvirt domain, starting it and waiting for it to boot (via the --wait flag).  Time to ssh into it:

    $ uvt-kvm ssh snappy-test

You now have a Snappy instance which you’re ssh’d into.

    If you want to manually ssh, or test that your snappy install of xkcd-webserver worked, you can get the IP address of the system with:

    $ uvt-kvm ip snappy-test
    192.168.122.136
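
For instance, to ssh in by hand rather than through the uvt-kvm wrapper, something like this should work (a sketch that assumes the image's default 'ubuntu' user; adjust if yours differs):

$ ssh ubuntu@$(uvt-kvm ip snappy-test)    # reuses the IP address shown by 'uvt-kvm ip'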

    When you're done playing, just destroy the instance with:
    $ uvt-kvm destroy snappy-test

    Have fun!
    on December 10, 2014 05:19 PM

    Snappy security

    Jamie Strandboge

Ubuntu Core with Snappy was recently announced and a key ingredient for snappy is security. Snappy applications are confined by AppArmor and the confinement story for snappy is an evolution of the security model for Ubuntu Touch. The basic concepts for confined applications and the AppStore model pertain to snappy applications as well. In short, snappy applications are confined using AppArmor by default and this is achieved through an easy-to-understand, easy-to-use and developer-friendly system. Read the snappy security specification for all the nitty gritty details.
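
If you want to see the confinement for yourself, one quick check is to list the loaded AppArmor profiles and the processes they confine. This is a sketch: it assumes the AppArmor userspace tools are available on the system, and the actual profile names will vary per app and version:

$ sudo aa-status    # lists loaded AppArmor profiles and which running processes are confined by them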

    A developer doc will be published soon.


    on December 10, 2014 09:43 AM