July 30, 2014

OSM GPS dump

We’re very excited to announce an agreement with Nokia HERE to provide A-GPS support on Ubuntu. The new platform service will enable developers to obtain accurate positioning data for their location-based apps in under two minutes, a significantly shorter Time To First Fix (TTFF) than the average for raw GPS technologies.

Faster positioning

Ubuntu already features GPS-based location, but it has always been a key requirement for the OS to provide application developers with rapid and efficient positioning capabilities.

The new positioning service will be a hybrid solution integrating A-GPS and WiFi positioning, a powerful combination that helps obtain a very fast and accurate TTFF. The system is to be functional by the Release To Manufacturer (RTM) milestone, and available on the regular Ubuntu builds and on retail phones shipping Ubuntu.

Privacy and security

With the user’s explicit consent, anonymous data related to signal strength of local WiFi signals and radio cells can be contributed to crowd-sourcing location services, with the purpose of improving the overall quality of the positioning service for all users.

In line with Ubuntu’s privacy policy, no personal data of any nature is to be collected or released. Users will also be able to opt out of this service if they do not wish their mobile handset to collect this type of data.

The positioning system will also be run under strict confinement, so that the service and its data cannot be accessed without the user explicitly granting access. With Ubuntu’s trust model, a confined application has to be granted trust by the user to gain access to security- or privacy-relevant system components.

Mapping capabilities

As the new service is to be focused on positioning, it will be decoupled from any mapping solution. Ubuntu Developers, as before, will have a choice of mapping services to use for their applications, including Nokia HERE, OpenStreetMap and others.

Header image based on “openstreetmap gps coverage” by Steven Kay, CC-BY-SA 2.0.

on July 30, 2014 09:09 AM
Andrew Sorensen live-coding at OSCON 2014
Keynote

Shortly after Andrew Sorensen began the performance segment of his keynote at OSCON 2014, the #oscon Twitter topic began erupting with posts about the live coding session. Comments, retweets, and additional links persisted for that day and the next. In short, Andrew was a hit :-)

My first encounter with Andrew's work was a few years ago when I was getting back into Lisp. I was playing with generative music with Overtone (and then, a bit later, experimenting with SuperCollider, Hy, and Twisted) and came across his piece A Study in Keith. You might want to take a break from reading this post and watch that now ...

When Andrew started up his presentation, I didn't immediately recognize him. In fact, when the code was displayed on the big screens, I assumed it was Clojure until I looked closely and saw he was using (define ...) and not (defun ...).  This seemed very familiar, and then I remembered Impromptu, which ultimately led to my discovery of Extempore (see more links below) and the realization that this is what Andrew was using to live code.

At the end of the performance a bunch of us jumped up and gave a standing ovation. (In fact, you can hear me yell out "YEAH" at the end of his presentation when he says "And there we go.") It was quite a show. It seemed OSCON 2014 had been given a theme song. The next step was getting the source code ...


Andrew's gist (Dark Github Theme)
Sharing the Code

Andrew gave a presentation on Extempore in the ballroom right after the keynote. This too was fantastic and resulted in much tweeting.

Afterwards a bunch of us went up front and chatted with him, enthusing about his work, the recent presentation, the keynote, and his previously published pieces.

I had Andrew's ear for a moment, and asked him if he was interested in sharing his keynote source -- there had been several requests for it on Twitter (that also got retweeted and/or favourited). Without hesitation, he gave an enthusiastic "yes" and we were off and running for the lounge where we could sit down to create a gist (and grab a cappuccino!). The availability of the source was announced immediately, to the delight of many.


Setting Up Extempore

Sublime Text 3 connected to Extempore
Later that night in my hotel room, I had time to download and run Extempore ... and discovered that I couldn't actually play the keynote code, since there was some implicit setup I was missing. However, after some digging around on the docs site and the mail list, music was pouring forth from my laptop -- to my great joy :-D

To ensure anyone else who is not familiar with Extempore can also have this pleasure, I've put together all the prerequisites and setup necessary in a forked gist, in multiple parts. I will go through those in this blog post. Also: all of my testing and live coding was done using Ben Swift's Extempore Sublime Text plugin.

The first step is getting all the dependencies. You'll want to start the downloads right away, since they are large (the sample files are compressed .wavs). While that's going on, you can install Extempore using Homebrew (this worked for me on Mac OS X with no additional tweaking/configuration necessary):
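Something along these lines should do it (the tap name here is an assumption on my part; check the Extempore docs if it has moved):

brew tap benswift/extempore   # Ben Swift's tap (assumed name)
brew install extempore        # build and install the extempore binary
extempore                     # start it, leave it running, and connect from your editor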

With Extempore running, let's do some setup. We're going to need to:

  • load some libraries (this takes a while for them to compile),
  • define some samples, and then
  • define some musical note aliases for convenience (and visual clarity).
The easiest way to use the files below is to clone the gist repo and load them up in Sublime Text, executing blocks of text by highlighting them and then pressing ^x^x.

Here is the file for the first two bullets mentioned above:


You will need to edit this file to point to the locations where your samples were downloaded. Also, at the very end there are some lines of code you can execute to make sure that your samples are working.

Now let's define the note aliases. You can just highlight the entire contents of this file in Sublime Text and then ^x^x:
At this point, we're ready to play!


Playing the Music

To get started on the music, open up the fourth file from the clone of the gist and ^x^x the root, scale, and left-hand-notes-* constants.

Here is the evolution of the left hand part:
Go ahead and start that first one playing (^x^x the definition as well as the call). Wait for a bit, and then execute the next one, etc. Once you've started playing the final left hand form, you can switch to the wider range of notes defined/updated at the bottom.

Next, you'll want to bring in the right hand ... then bassline ... then the higher fmsynth sparkles for the right hand:

Then you'll increase the energy with the drum section:

Finally, you'll bring it to the climax, and then start the gentle fade out:

A slightly modified code listing for the final keynote form is here:


Variation on a Theme

I have recorded a variation of Andrew's keynote based on the code above, for your listening pleasure :-) You can listen to it in your browser or download it.

This version plays part of the left hand piano an octave lower. There's a tiny bit of clipping in places, and I accidentally jazzed it up (and for too long!) with a hi-hat change in the middle. There are also some awkward transitions and volume oddities. However, these should be inspiration for you to make your own variation of the OSCON 2014 Theme Song :-)

The "script" used for the recording can be found here.


Links of Note

Some of these were mentioned above, some haven't been. All relate to Extempore :-)


on July 30, 2014 06:14 AM

July 29, 2014

I previously announced the availability of rsyslog+MongoDB+LogAnalyzer in Debian wheezy-backports. This latest rsyslog with MongoDB storage support is also available for Ubuntu and Fedora users in one way or another.

Just one thing was missing: a flexible way to prune the database. LogAnalyzer provides a very basic pruning script that simply purges all records over a certain age. The script hasn't been adapted to work within the package layout. It is written in PHP, which may not be ideal for people who don't actually want LogAnalyzer on their Syslog/MongoDB host.

Now there is a convenient solution: I've just contributed a very trivial Python script for selectively pruning the records.

Thanks to Python syntax and the PyMongo client, it is extremely concise: in fact, here is the full script:

#!/usr/bin/python

import syslog
import datetime
from pymongo import Connection

# It assumes we use the default database name 'logs' and collection 'syslog'
# in the rsyslog configuration.

with Connection() as client:
    db = client.logs
    table = db.syslog
    #print "Initial count: %d" % table.count()
    today = datetime.datetime.today()

    # remove ANY record older than 5 weeks except mail.info
    t = today - datetime.timedelta(weeks=5)
    table.remove({"time":{ "$lt": t }, "syslog_fac": { "$ne" : syslog.LOG_MAIL }})

    # remove any debug record older than 7 days
    t = today - datetime.timedelta(days=7)
    table.remove({"time":{ "$lt": t }, "syslog_sever": syslog.LOG_DEBUG})

    #print "Final count: %d" % table.count()

Just put it in /usr/local/bin and run it daily from cron.
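For example, a minimal cron hook-up could look like this (the script and cron job names below are made up; pick your own):

sudo install -m 755 prune-mongodb-logs /usr/local/bin/prune-mongodb-logs
# /etc/cron.d entries need a user field, and the filename must not contain dots
echo '30 4 * * * root /usr/local/bin/prune-mongodb-logs' | sudo tee /etc/cron.d/prune-mongodb-logs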

Customization

Just adapt the table.remove statements as required. See the PyMongo tutorial for a very basic introduction to the query syntax, and the MongoDB query operator reference for the full details needed to create more elaborate pruning rules.

Potential improvements

  • Indexing the columns used in the queries (see the sketch after this list)
  • Logging progress and stats to Syslog
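For the indexing idea, a one-off command along these lines should work (a sketch, assuming the mongo shell client is installed on the Syslog host and the default 'logs'/'syslog' names used by the script above); the logging idea would simply be a few syslog.syslog() calls around the existing count() lines in the script:

# create simple indexes on the fields the pruning queries filter on
mongo logs --eval 'db.syslog.ensureIndex({time: 1}); db.syslog.ensureIndex({syslog_fac: 1}); db.syslog.ensureIndex({syslog_sever: 1})'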


LogAnalyzer using a database backend such as MongoDB is very easy to set up and much faster than working with text-based log files.

on July 29, 2014 06:27 PM

Meeting Actions

None

U Development

The discussion about “U Development” started at 16:00.

  • Feature freeze is August 21. Note that Debian Import Freeze is coming up as well.
  • The mysql /var/lib/mysql discussion is proceeding, but it now seems unlikely that this will happen by feature freeze. Nevertheless, we expect to land 5.6 in main, in the same manner as 5.5 currently is, on schedule.
  • http://status.ubuntu.com/ubuntu-u/group/topic-u-server.html – please remember to keep your blueprints updated with work item progress and re-plan milestones if things slip.

Server & Cloud Bugs (caribou)

The discussion about “Server & Cloud Bugs (caribou)” started at 16:03.

  • No updates

Weekly Updates & Questions for the QA Team (psivaa)

The discussion about “Weekly Updates & Questions for the QA Team (psivaa)” started at 16:05.

  • No updates

Weekly Updates & Questions for the Kernel Team (smb, sforshee)

The discussion about “Weekly Updates & Questions for the Kernel Team (smb, sforshee)” started at 16:05.

  • James Page reports that iscsitarget 12.04 DKMS updates for HWE kernels are ready and uploaded to trusty-proposed, awaiting SRU team review (bug 1262712).
  • The KSM on NUMA + KVM bug (1346917) is making great progress, driven by Chris Arges. Brad Figg reports that an upload to trusty-proposed is imminent, and it should land on August 8th (the day after 12.04.5). 12.04.5 (for the HWE kernel) won’t include the update, but one will be available for it the next day.
  • For kernel SRU cadence updates, see

Ubuntu Server Team Events

The discussion about “Ubuntu Server Team Events” started at 16:17.

  • rbasak noted that the Canonical Server Team have been sprinting in #ubuntu-server on Fridays to complete merges, including mentoring and sponsoring, and that all are welcome to join them.

Open Discussion

The discussion about “Open Discussion” started at 16:18.

  • James Page reported that there are plans to SRU docker 1.0.x to 14.04 in bug 1338768. The proposed upload is in a PPA and awaiting review from the SRU team. Testers are encouraged to try it out.

Agree on next meeting date and time

Next meeting will be on Tuesday, August 5th at 16:00 UTC in #ubuntu-meeting. Note that this was stated incorrectly in the meeting itself. The chair will be Liam Young.

on July 29, 2014 05:46 PM

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140729 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to v3.16-rc7 and uploaded to the
archive, i.e. linux-3.16.0-6.11. Please test and let us know your
results. I also want to mention that 14.04.1 was released last Thursday,
July 24, and that 12.04.5 is scheduled for release next Thursday, August 7.
—–
Important upcoming dates:
Thurs Aug 07 – 12.04.5 (~1 week away)
Thurs Aug 21 – Utopic Feature Freeze (~3 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Jul. 22):

  • Lucid – Released
  • Precise – Released
  • Saucy – Released
  • Trusty – Released

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    14.04.1 cycle: 29-Jun through 07-Aug
    ====================================================================
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]

    cycle: 08-Aug through 29-Aug
    ====================================================================
    08-Aug Last day for kernel commits for this cycle
    10-Aug – 16-Aug Kernel prep week.
    17-Aug – 23-Aug Bug verification & Regression testing.
    24-Aug – 29-Aug Regression testing & Release to -updates.

    [1] This will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.

    [2] This will be the lts-backport-trusty kernel as the default in the precise point
    release iso.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

on July 29, 2014 05:18 PM

Kubuntu Ninja Rohan was on today’s ubuntuonair talking about Plasma 5 and what is happening in Kubuntu.  Watch it now to hear the news.

 

on July 29, 2014 04:07 PM

On behalf of the Ubuntu Leadership team, I’m putting out a call for the team leaders of the various teams that make Ubuntu and its flavours possible.  The reason for this call is simple: to find out, as a team, what problems exist in each team’s leadership and what works.  In turn, these problems can be talked about in order for solutions to be found, and those solutions can then be shared via the Ubuntu Leadership wiki for other folks to read.

What teams are needed:

  • The Ubuntu and flavour developer, translation, marketing, and other teams that are key to their success
  • LoCo Leaders/Point of Contacts
  • Ubuntu Women Elected Leaders
  • And other team leaders can of course join in!

How to join:

It’s your choice whether you want to join the Ubuntu Leadership team on Launchpad; it’s the mailing-list that is important!  What you need to put in your message is: who you are, which team you lead, what problems in leadership you are facing or what is working for you, and what questions/comments you have about the problems you are facing.  For the subject, use your standard new member introduction or maybe say “Leader from [insert team name here]”.

Extra Links:

Ubuntu Leadership mailing-list idea on this


on July 29, 2014 03:39 PM

Welcome to the Ubuntu Weekly Newsletter. This is issue #376 for the week July 21 – 27, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on July 29, 2014 04:45 AM
Series Links

Survivors' Breakfast

The previous post covered some thoughts on the future-looking programming themes present at OSCON 2014.

Following that wonderful conference, long-time Open Source advocate, Pythonista, and instructor Steve Holden, was kind enough to host his third annual "OSCON Survivors' Breakfast" with tens of esteemed attendees, speakers, and organizers enjoying great company and conversation, relaxing together after the flurry of conference activity, planning a leisurely day in Portland, and -- most immediately -- having some much-needed breakfast.

The view from the 23rd floor was quite an eyeful, and the conversation ranged across equally panoramic topics. Sitting with Alex Martelli, Anna Ravenscroft, and Katie Miller, the conversation inevitably turned to thoughts programmatical. One thread of the discussion was so compelling that it helped crystallize this series of blog posts. That was kicked off with Katie's question:

Why [have some large companies] not embraced functional programming to the extent that other large ones have?

Multiple points of discussion spawned from this, some of which still continue. The rest of this post explores these. 


Large Companies?

What constitutes a large company? We settled on discussing Fortune 500 companies, which, by definition, are:
  • U.S. Companies
  • Ranked by gross revenue (after adjustments for excise taxes).

Afterwards, I looked up the 2013 top 25 tech companies in the Fortune 500. I've listed them below; in parentheses is the Fortune 500 ranking. After the dash are the functional programming languages used on various company projects -- these are listed only if I have talked to someone who has worked on a project (or interviewed for a job that used the language), or if I have read an article by an employee who has stated that they use the listed language(s) [1].
  1. Apple (6) - Swift, Clojure, Scala
  2. AT&T (11) - Haskell
  3. HP (15) - F#, Scala
  4. Verizon Communications (16) - Scala
  5. IBM (20) - Scala
  6. Microsoft (35) - F#, F*
  7. Comcast (46) - Scala
  8. Amazon (49) - Haskell, Scala, Erlang
  9. Dell (51) - Erlang, Scala
  10. Intel (54) - Haskell, SML, PLT Scheme
  11. Google (55) - Haskell [2]
  12. Cisco (60) - Scala
  13. Ingram Micro (76) - ?
  14. Oracle (80) - Scala
  15. Avnet (117) - ?
  16. Tech Data (119) - ?
  17. Emerson Electric (123) - ?
  18. Xerox (131) - Scala
  19. EMC (133) - Scala
  20. Arrow Electronics (141) - ?
  21. Century Link (150) - ?
  22. Computer Sciences Corp. (176) - ?
  23. eBay (196) - Scala 
  24. TI (218) - ?
  25. Western Digital (222) - ?

The companies that have committed to projects written in FP languages and guessed to be of significant business value include: Apple, HP, and eBay. Possibly also Oracle and Intel. So, a rough estimate is that between 3 and 5 of the top 25 U.S. tech companies have made a significant investment in FP.

Why not Google?

The next two sections offer summaries of some views on this.


Ideal Use Case?

Is an FP language suitable for large organisations? Are smaller companies better served by them? During breakfast, it was postulated that dealing with such things as immutable data, handling I/O in pure FP languages, and creating/using higher order functions is easier for small startups due to the shorter amount of time required to hire or train a critical mass of skilled programmers.

It is certainly true that it will take larger organisations longer to train their personnel, simply due to sheer numbers and, even with enough trainers, logistics. But this argument can be made for any corporate level of instruction; in my book, this cancels out on both sides and is not an argument unique to hard topics, much less one specifically pertinent to FP adoption.


Brain Fit?

I've heard this one a bit: "Some people just don't think in FP terms." They need loops and iteration, not higher order functions and recursion. Joel Spolsky makes reference to this in his article The Guerrilla Guide to Interviewing. In particular, he says that "For some reason most people seem to be born without the part of the brain that understands pointers." This has been applied to topics in FP as well as C.

To be fair, Joel's comment was probably made with a bit of lightness and not meant to be a statement on the nature of mind or a theory of cognition. The context of the article is a very practical one: hiring. When trying to identify whether a programmer would be an asset for your team, you're not living in the space of cognitive theory, rather you inhabit the realm of quick approximations, gut instincts, and fast, economical decisions.

Regardless, I find this perspective -- Type Physicalism [3] -- fairly objectionable. This is because I see it as a kind of intellectual "racism." Early social sciences utilized this form of reasoning to justify all sorts of discriminatory thinking in the name of "science", reinforcing a rigid mentality of "us" vs. "them." In my own experience, I've seen this sort of approach used to shut down exploration, to enforce elitism, and to dismiss ideas that threaten the authority of the status quo.

Rather than seeing the problem of comprehending FP as a physical limitation of the individual, I see instructional failure as the obstacle to overcome. If we start with the proposition that certain brains are deficient, we are essentially abandoning education. It is the responsibility of the instructor to engage creatively with each student's learning style. When adhering to the idea that certain brains are limited, one discards creative engagement; one doesn't even consider working with the students and their learning styles. This is a view that, however implicitly, can be used to shun diversity and dismiss potential.

I believe the essence of what Joel was shooting for can be approached in a much kinder fashion (adapted for an FP discussion):

None of us was born knowing GOTO statements, global state, mutable data, or for loops. There are many programmers alive, though, whose first contact with programming involved one or more of these. That's their "home town", as it were; their programmatic birth place. Having utilized -- as well as taught -- imperative, OOP, and functional styles of programming, I do not feel that one is intrinsically any harder than another. However, they are sometimes so vastly different from each other in style or syntax or semantics that once a student has solidified around the concepts of a particular paradigm, it can be a challenge retraining to work easily in another.


Why the Objections?

If both "ideal use case" and "brain fit" are given as arguments against adopting FP (or any other new paradigm) in large organisations, and neither are considered logically or philosophically valid, what's at the root of the resistance?

It is not uncommon for changes in an industry or field of study to be met with resistance. The amount of resistance is very often proportional to how big or how different the change is from the status quo. I suspect that this is really what we're seeing when companies take a stance against FP. There are very often valid business concerns: "we've made an investment in OOP" or "it will cost too much to train/hire/migrate to FP."

I would remind those company leaders, though, that new sources of revenue, product innovation, and changes in market adoption do not often come from maintaining or enforcing the current state. Instead, that is an identifying characteristic of companies whose relevance is fading.

Even if your company has market dominance or is a monopoly, there is still a good incentive for exploring alternative paradigms. At the very least, one can uncover inefficiencies and apply new knowledge to remove duplication of efforts, increase margins, etc.


Careers

As a manager, I have found that about half of the senior engineers up for promotion have very little to no interest in taking on different (new to them) programmatic paradigms. They consider current burdens sufficient (or too much) and would rather spend what little free time they have available to them in improving existing systems.

Senior engineers who have a more academic or research bent (or are easily bored) are much more likely to embrace this sort of change. Interestingly, senior engineers who have little to no competitive drive will more readily pick up something new if the need arises. This may be due to such things as not perceiving accumulated knowledge as territory to defend, for example.

Younger engineers with less experience (and less of an investment made in a particular school of thought) are much more willing to take on new challenges. I believe there are many reasons for this, one of which may include an interest in becoming more professionally competitive with their peers.

Junior or senior, I have found that programmers who are currently looking to find new employment are nearly invariably not only willing to take on the challenge of learning different paradigms, but are usually going about that proactively and engaging in self-study.

I want to work with programmers who can take on any problem space in any paradigm and find creative solutions, contributing as valued members of a team. This is certainly an ideal set of characteristics, but one that I have seen in the wilds of the workplace on multiple occasions. It has nothing to do with FP or OOP paradigms, but rather with the people themselves.

Even if a company is locked into well-established processes and views on programming, they may find it in their best interests to provide a more open-minded approach with their employees who would enjoy that. Their retention rates could very well increase dramatically.


Do We Need To?

Philosophy and hiring strategies aside, do we -- as programmers, software projects, or organizations that support programming -- need to take on the burden of learning or adopting functional programming? Quite possibly not.

If Google's plans around Go involve building a new operating system (in the spirit of 1970s C and UNIX), the systems programmers may find pure functions too cumbersome to work with. FP may be too burdensome a fit for that type of work.

If one is not tied to a historical analogy with UNIX, as Mozilla is not with Rust, doing something like creating a new browser engine (or running a remote services company) may be a good fit for FP, especially if one has data showing reduced error counts when using type systems.

As we shall see illustrated in the next post, the usual advice continues to apply: the decision of which paradigm to employ for any given project should be dictated by the best fit and not ideological inflexibility. The bearing this has on programming is innovation: it is the early adopters who have the best chance of leading us into the future.

Up next: Retrospective on Programming Paradigms
Previously: Themes at OSCON 2014


Footnotes

[1] If anyone has additional information as to which FP languages are used by these top 25 companies, please let me know, and I will include that information. Bonus points for knowing of business-critical applications.

[2] Google Switzerland are using Haskell.

[3] Type Physicality is a form of reductive materialism, also known as the Mind-Brain Identity Theory, which does not allow for mental states to be realized in organisms or computational systems that do not have a brain. See "Criticisms of Type Physicality" at http://en.wikipedia.org/wiki/Identity_theory_of_mind#Multiple_realizability.

on July 29, 2014 03:53 AM

July 28, 2014

Lubuntu 14.04.1 LTS

Lubuntu Blog

Ubuntu 14.04.1 LTS, the first update to Trusty Tahr and now the recommended default setup for all flavours including, of course, Lubuntu, is already available. It comes with extended support (until 2019). Various bugs that required updates are rolled into this update:

  • A bug for btrfs has been found; this is scheduled to be fixed in 14.04.2
  • A bug affecting Alt-F2 is still present
  • For PPC the bug
on July 28, 2014 08:24 PM

Until Next Year CLS!

Benjamin Kerensa


Community Leadership Summit 2014 Group Photo

This past week marked my second year helping out as a co-organizer of the Community Leadership Summit. This Community Leadership Summit was especially important because not only did we introduce a new Community Leadership Forum, we also introduced CLSx events and continued to make some new changes to our overall event format.

Like previous years, the attendance was a great mix of community managers and leaders. I was really excited to have an entire group of Mozillians who attended this year. As usual, my most enjoyable conversations took place at the pre-CLS social and in the hallway track. I was excited to briefly chat with the Community Team from Lego and also some folks from Adobe and learn about how they are building community in their respective settings.

I’m always a big advocate for community building, so for me, CLS is an event I try to make it to each and every year, because I think it is great to have an event for community managers and builders that isn’t limited to any specific industry. It is really a great opportunity to share best practices and learn from one another so that everyone mutually improves their own toolkits and techniques.

It was apparent to me that this year there were even more women than in previous years, which was really awesome to see considering CLS is oftentimes heavily attended by men in the tech industry.

I really look forward to seeing the CLS community continue to grow, and I look forward to participating in and co-organizing next year’s event and possibly even kicking off a CLSxPortland.

A big thanks to the rest of the CLS Team for helping make this free event a wonderful experience for all, and to this year’s sponsors O’Reilly, Citrix, Oracle, Linux Fund, Mozilla and Ubuntu!

on July 28, 2014 12:00 PM

When you contribute something as a member of a community, who are you actually giving it to? The simple answer of course is “the community” or “the project”, but those aren’t very specific.  On the one hand you have a nebulous group of people, most of which you probably don’t even know about, and on the other you’ve got some cold, lifeless code repository or collection of web pages. When you contribute, who is that you really care about, who do you really want to see and use what you’ve made?

In my last post I talked about the importance of recognition, how it’s what contributors get in exchange for their contribution, and how human recognition is the kind that matters most. But which humans do our contributors want to be recognized by? Are you one of them and, if so, are you giving it effectively?

Owners

The owner of a project has a distinct privilege in a community: they are ultimately the source of all recognition in that community.  Early contributions made to a project get recognized directly by the founder. Later contributions may only get recognized by one of those first contributors, but the value of their recognition comes from the recognition they received as the first contributors.  As the project grows, more generations of contributors come in, with recognition coming from the previous generations, though the relative value of it diminishes as you get further from the owner.

Leaders

After the project owner, the next most important source of recognition is a project’s leaders. Leaders are people who gain authority and responsibility in a project; they can affect the direction of a project through decisions in addition to direct contributions. Many of those early contributors naturally become leaders in the project but many will not, and many others who come later will rise to this position as well. In both cases, it’s their ability to affect the direction of a project that gives their recognition added value, not their distance from the owner. Before a community can grow beyond a very small size it must produce leaders, either through a formal or informal process, otherwise the availability of recognition will suffer.

Legends

Leadership isn’t for everybody, and many of the early contributors who don’t become leaders still remain with the project, and end up making very significant contributions to it and the community over time.  Whenever you make contributions, and get recognition for them, you start to build up a reputation for yourself.  The more and better contributions you make, the more your reputation grows.  Some people have accumulated such a large reputation that even though they are not leaders, their recognition is still sought after more than most. Not all communities will have one of these contributors, and they are more likely in communities where heads-down work is valued more than very public work.

Mentors

When any of us gets started with a community for the first time, we usually end up finding one or two people who help us learn the ropes.  These people help us find the resources we need, teach us what those resources don’t, and are instrumental in helping us make the leap from user to contributor. Very often these people aren’t the project owners or leaders.  Very often they have very little reputation themselves in the overall project. But because they take the time to help the new contributor, and because theirs is very likely to be the first, the recognition they give is disproportionately more valuable to that contributor than it otherwise would be.

Every member of a community can provide recognition, and every one should, but if you find yourself in one of the roles above it is even more important for you to be doing so. These roles are responsible both for setting the example and for keeping a proper flow of recognition in a community. And without that flow of recognition, you will find that your flow of contributions will also dry up.

on July 28, 2014 12:00 PM
Kubuntu Plasma 5 ISOs have started being built. These are early development builds of what should be a Tech Preview with our 14.10 release in October. Plasma 5 should be the default desktop in a future release.

Bugs in the packaging should be reported to kubuntu-ppa on Launchpad. Bugs in the software to KDE.

on July 28, 2014 10:33 AM

KDE Project:

Your friendly Kubuntu team is hard at work packaging up Plasma 5 and making sure it's ready to take over your desktop sometime in the future. Scarlett has spent many hours packaging it and now Rohan has spent more hours putting it onto some ISO images which you can download to try as a live session or install.

This is the first build of a flavour we hope to call a technical preview at 14.10. Plasma 4 will remain the default in 14.10 proper. As I said earlier it will eat your babies. It has obvious bugs like kdelibs4 theme not working and mouse themes only sometimes working. But also be excited and if you want to make it beautiful we're sitting in #kubuntu-devel having a party for you to join.

I recommend downloading by torrent or, failing that, zsync; the server it’s on has small pipes.
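If you already have an older image lying around, zsync can reuse its matching blocks, along these lines (the URL is a placeholder; use the .zsync link published alongside the image, if there is one):

zsync -i old-plasma5.iso http://example.com/path/to/plasma5-amd64.iso.zsync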

Default login is blank password, just press return to login.

on July 28, 2014 10:20 AM

Kubuntu Next

The Kubuntu team is proud to announce the immediate availability of the Plasma 5 flavor of the Kubuntu ISO, which can be found here (here’s a mirror to the torrent file in case the server is slow). Unlike its Neon 5 counterpart, this ISO contains packages made from the stock Plasma 5.0 release. The ISO is meant to be a technical preview of what is to come when Kubuntu switches to Plasma 5 by default in a future release of Kubuntu.

A special note of thanks to the Plasma team for making a rocking release. If you enjoy using KDE as much as we do, please consider donating to Kubuntu and KDE :)

NB: When booting the live ISO up, at the login screen, just hit the login button and you’ll be logged into a Plasma 5 session.


on July 28, 2014 09:39 AM

Mozilla OSCON 2014 Team

This past week marked my fourth year of attending O’Reilly Open Source Convention (OSCON). It was also my second year speaking at the convention. One new thing that happened this year was that I co-led Mozilla’s presence at the convention, from our booth to the social events and our social media campaign.

Like each previous year, OSCON 2014 didn’t disappoint and it was great to have Mozilla back at the convention after not having a presence for some years. This year our presence was focused on promoting Firefox OS, Firefox Developer Tools and Firefox for Android.

While the metrics are not yet finished being tracked, I think our presence was a great success. We heard from a lot of developers who are already using our Developer tools and from a lot of developers who are not, many of whom we were able to educate about new features and why they should use our tools.


Alex shows attendee Firefox Dev Tools

Attendees were very excited about Firefox OS with a majority of those stopping by asking about the different layers of the platform, where they can get a device, and how they can make an app for the platform.

In addition to our booth, we also had members of the team such as Emma Irwin who helped support OSCON’s Children’s Day by hosting a Mozilla Webmaker event which was very popular with the kids and their parents. It really was great to see the future generation tinkering with Open Web technologies.

Finally, we had a social event on Wednesday evening that was so popular the Mozilla Portland office was packed till last call. During the social event, we had a local airbrush artist doing tattoos, with several attendees opting for a Firefox tattoo.

All in all, I think our presence last week was very positive and even the early numbers look positive. I want to give a big thanks to Stormy Peters, Christian Heilmann, Robyn Chau, Shezmeen Prasad, Dave Camp, Dietrich Ayala, Chris Maglione, William Reynolds, Emma Irwin, Majken Connor, Jim Blandy, Alex Lakatos for helping this event be a success.

on July 28, 2014 01:48 AM

July 25, 2014

Xubuntu 14.04 Trusty Tahr

Xubuntu 14.04 Trusty Tahr

The Xubuntu team is pleased to announce the immediate release of Xubuntu 14.04.1. Xubuntu 14.04 is an LTS (Long-Term Support) release and will be supported for 3 years. This is the first point release of its cycle.

The final release images are available as Torrents and direct downloads at http://cdimage.ubuntu.com/xubuntu/releases/trusty/release/

As the main server will be very busy in the first days after the release, we recommend using the Torrents wherever possible.

For support with the release, navigate to Help & Support for a complete list of methods to get help.

Bug fixes for the first point release

  • Black screen after wakeup from suspending by closing the laptop lid. (1303736)
  • Light Locker blanks the screen when playing video. (1309744)
  • Include MenuLibre 2.0.4, which contains many fixes. (1323405)
  • The documentation is now attributed to the Translators.

Highlights, changes and known issues

The highlights of this release include:

  • Light Locker replaces xscreensaver for screen locking, a setting editing GUI is included
  • The panel layout is updated, and now uses Whiskermenu as the default menu
  • Mugshot is included to allow you to easily edit your personal preferences
  • MenuLibre for menu editing, with full Xfce support, replaces Alacarte
  • A community wallpapers package, which includes work from the five winners of the wallpaper contest
  • GTK Theme Config to customize your desktop theme colors
  • Updated artwork, including various enhancements to themes as well as a new default wallpaper

Some of the known issues include:

  • Window manager shortcut keys don’t work after reboot (1292290)
  • Sorting by date or name not working correctly in Ristretto (1270894)
  • Due to the switch from xscreensaver to light-locker, some users might have issues with timing of locking; removing xscreensaver from the system should fix these problems
  • IBus does not support certain keyboard layouts (1284635). Only affects upgrades with certain keyboard layouts. See release notes for a workaround.

To see the complete list of new features, improvements and known and fixed bugs, read the release notes.

on July 25, 2014 06:55 PM


This month:

* Command & Conquer
* How-To : Python, LibreOffice, and GRUB2.
* Graphics : Inkscape.
* Book Review: Puppet
* Security – TrueCrypt Alternatives
* CryptoCurrency: Dualminer and dual-cgminer
* Arduino
plus: Q&A, Linux Labs, Ubuntu Games, and Ubuntu Women.

Get it while it’s hot!
http://fullcirclemagazine.org/issue-87/

on July 25, 2014 04:47 PM

Bringing fluid motion to browsing

Canonical Design Team

In the previous Blog Post, we looked at how we use the Recency principle to redesign the experience around bookmarks, tabs and history.
In this blog post, we look at how the new Ubuntu Browser makes the UI fade to the background in favour of the content. The design focuses on physical impulse familiarity – “muscle memory” – by marrying simple gestures to the two key browser tasks, making the experience feel as fluid and simple as flipping through a magazine.

 

Creating a new tab

For all new browsers, the approach to the URI Top Bar that enables searching as well as manual address entry has made the “new tab” function more central to the experience than ever. In addition, evidence suggests that opening a new tab is the third most frequently used action in a browser. To facilitate this, we made opening a new tab effortless and even (we think) a bit fun.
By pulling down anywhere on the current page, you activate a spring-loaded “new tab” feature that appears under the address bar of the page. Keep dragging far enough, and you’ll see a new blank page coming into view. If you release at this stage, a new tab will load, ready with the address bar and keyboard open as well as an easy way to get to your bookmarks. But if you change your mind, just drag the current page back up or release early and your current page comes back.

http://youtu.be/zaJkNRvZWgw

 

Get to your open tabs and recently visited sites

Pulling the current page downward can create a new blank tab; conversely, dragging the bottom edge upward shows your already open tabs, ordered by recency, echoing the right edge “open apps” view.

If you keep on dragging upward without releasing, you can dig even further into the past with your most recently visited pages grouped by site in a “history” list. By grouping under the site domain name, it’s easier to find what you’re looking for without thumbing through hundreds of individual page URLs. However, if you want all the detail, tap an item in the list to see your complete history.

It’s not easy to improve upon such a well-worn application as the browser, it’s true. We’re hopeful that by adding new fluidity to creating, opening and switching between tabs, our users will find that this browsing experience is simpler to use, especially with one hand, and feels more seamless and fluid than ever.

 

 

on July 25, 2014 11:42 AM

The first update to our LTS release 14.04 is out now. This contains all the bugfixes added to 14.04 since its first release in April. Users of 14.04 can run the normal update procedure to get these bugfixes.

See the 14.04.1 release announcement.

Download 14.04.1 images.

on July 25, 2014 10:35 AM

Edubuntu Long-Term Support

The Edubuntu team is proud to announce Edubuntu 14.04.1 LTS, which is the first Long Term Support (LTS) update in the Edubuntu 14.04 five-year support cycle.

This point release includes all the bug fixes and improvements that have been applied via updates to Edubuntu 14.04 LTS since it was released. It also includes updated hardware support and installer fixes. If you have an Edubuntu 14.04 LTS system and have applied all the available updates, then your system will already be on 14.04.1 LTS and there is no need to re-install. For new installations, installing from the updated media is recommended, since it will be installable on more systems than before and will require fewer updates than installing from the original 14.04 LTS media.

  • Information on where to download the Edubuntu 14.04.1 LTS media is available from the Downloads page.
  • We do not ship free Edubuntu discs at this time; however, there are 3rd party distributors available who ship discs at reasonable prices, listed on the Edubuntu Marketplace.

To ensure that the Edubuntu 14.04 LTS series will continue to work on the latest hardware as well as keeping quality high right out of the box, further point releases will be made available during its lifecycle. More information will be available on the release schedule page on the Ubuntu wiki.

See also

Thanks for your support and interest in Edubuntu!

on July 25, 2014 04:46 AM

Plasma’s Road to Wayland

Sebastian Kügler

Road to Wayland

With the Plasma 5.0 release out the door, we can lift our heads a bit and look forward, instead of just looking at what’s directly ahead of us and making that work by fixing bug after bug. One of the important topics which we have (kind of) excluded from Plasma’s recent 5.0 release is support for Wayland. The reason is that much of the work that has gone into renovating our graphics stack was also needed in preparation for Wayland support in Plasma. In order to support Wayland systems properly, we needed to lift the software stack to Qt 5 and make the X11 dependencies in our underlying libraries, KDE Frameworks 5, optional. This part is pretty much done. We now need to ready support for non-X11 systems in our workspace components: the window manager and compositor, and the workspace shell.

Let’s dig a bit deeper and look at the aspects underlying, and resulting from, this transition.

Why Wayland?

The short answer to this question, from a Plasma perspective, is:

  • Xorg lacks modern interfaces and protocols, instead it carries a lot of ballast from the past. This makes it complex and hard to work with.
  • Wayland offers much better graphics support than Xorg, especially in terms of rendering correctness. X11’s asynchronous rendering makes it impossible to be sure about correctness and timeliness of the graphics that end up on screen. Instead, Wayland provides the guarantee that every frame is perfect.
  • Security considerations. It is almost impossible to shield applications properly from each other. X11 allows applications to wiretap each other’s input and output. This makes it a security nightmare.

I could go deeply into the history of Xorg, and add lots of technicalities to that story, but instead of giving you a huge swath of text, hop over to Youtube and watch Daniel Stone’s presentation “The Real Story Behind Wayland and X” from last year’s LinuxConf.au, which gives you all the information you need, in a much more entertaining way than I could present it. H-Online also has an interesting background story “Wayland — Beyond X”.

While Xorg is a huge beast that does everything, like input, printing, graphics (in many different flavours), Wayland is limited by design to the use-cases we currently need X for, without the ballast.
With all that in mind, we need to respect our elders and acknowledge Xorg for its important role in the history of graphical Linux, but we also need to look beyond it.

What is Wayland support?

KDE Frameworks 5 apps under Weston

Without communicating our goal, we might think of entirely different things when talking about Wayland support. Will Wayland retire X? I don’t think it will in the near future; the point where we can stop caring for X11-based setups is likely still a number of years away, and I would not be surprised if X11 was still a pretty common thing to find in enterprise setups ten years down the road from now. Can we stop caring about X11? Surely not, but what does this mean for Wayland? The answer to this question is that support for Wayland will be added, and that X11 will not be required anymore to run a Plasma desktop, but that it will be possible to run Plasma (and apps) under both X11 and Wayland systems. This, I believe, is the migration process that serves our users best, as the question “When can I run Plasma on Wayland?” can then be answered on an individual basis, and nobody is going to be thrown into the deep end (at least not by us; your distro might still decide to not offer support for X11 anymore — that is not in our hands). To me, while a quick migration to Wayland (once ready) is something desirable, realistically, people will be running Plasma on X11 for years to come. Wayland can be offered as an alternative at first, and then promoted to primary platform once the whole stack matures further.

Where are we now?

With the release of KDE Frameworks 5, most of the issues in our underlying libraries have been ironed out; that means X11-dependent codepaths have become optional. Today, it’s possible to run most applications built on top of Frameworks 5 under a Wayland compositor, independent from X11. This means that applications can run under both X11 and Wayland with the same binary. This is already really cool, as without applications, having a workspace (which in a way is the glue between applications) would be a pointless endeavour. This chicken-and-egg situation plays both ways, though: without a workspace environment, just having apps run under Wayland is not all that useful. This video shows some of our apps under the Weston compositor. (This is not a pure Wayland session “on bare metal”, but one running in an X11 window in my Plasma 5 session for the purpose of the screen recording.)

For a full-blown workspace, the porting situation is a bit different, as the workspace interacts much more intimately with the underlying display server than applications do at this point. These interactions are well-hidden behind the Qt platform abstraction. The workspace provides the host for rendering graphics onto the screen (the compositor) and the machinery to start and switch between applications.

We are currently missing a number of important pieces of the full puzzle: interfaces between the workspace shell, the compositor (KWin) and the display server are not yet well-defined or implemented, so some pioneering work is ahead of us. There are also a number of workspace components that need bigger adjustments, global shortcut handling being a good example. Most importantly, KWin needs to take over the role of Wayland compositor. While some support for Wayland has already been added to KWin, the work is not yet complete. Besides KWin, we also need to add support for Wayland to various bits of our workspace. Information about attached screens and their layout has to be made accessible. Global keyboard shortcuts only support X11 right now. The screen locking mechanism needs to be implemented. Information about windows for the task manager has to be shared. Dialog positioning and rendering needs to be ported. There are also a few assumptions in startkde and klauncher that currently prevent them from being able to start a session under Wayland, and more bits and pieces which need additional work to offer a full workspace experience under Wayland.

Porting Strategy

The idea is to be able to run the same binaries under both X11 and Wayland. This means that we need to decide at runtime how to interact with the windowing system. The following strategy is useful (in descending order of preference):

  • Use abstract Qt and Frameworks (KF5) APIs
  • Use XCB when there are no suitable Qt and KF5 APIs
  • Decide at runtime whether to call X11-specific functions

In case we have to resort to functions specific to a display server, X11 should be optional both at build-time and at run-time:

  • Make the build of X11-dependent code optional. This can be done through plugins, which are optionally included by the build-system, or (less desirably) by #ifdef’ing blocks of code.
  • Even with X11 support built into the binary, calls into X11-specific libraries should be guarded at runtime (QX11Info::isPlatformX11() can be used to check at runtime).

Get your Hands Dirty!

Computer graphics are an exciting thing, and many of us are longing for the day we can remove X11 from our systems. This day will eventually come, but it won’t come by itself. It’s a very exciting time to get involved and make the migration happen. As you can see, we have a multitude of tasks that need work. An excellent first step is to build the thing on your system, try running it, fix issues, and send us patches. Get in touch with us on Freenode’s #plasma IRC channel, or via our mailing list plasma-devel(at)kde.org.

on July 25, 2014 02:28 AM

The Ubuntu team is pleased to announce the release of Ubuntu 14.04.1 LTS (Long-Term Support) for its Desktop, Server, Cloud, and Core products, as well as other flavours of Ubuntu with long-term support.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 14.04 LTS.

Kubuntu 14.04.1 LTS, Edubuntu 14.04.1 LTS, Xubuntu 14.04.1 LTS, Mythbuntu 14.04.1 LTS, Ubuntu GNOME 14.04.1 LTS, Lubuntu 14.04.1 LTS, Ubuntu Kylin 14.04.1 LTS, and Ubuntu Studio 14.04.1 LTS are also now available. More details can be found in their individual release notes:

https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes#Official_flavours

Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, Ubuntu Core, Ubuntu Kylin, Edubuntu, and Kubuntu. All the remaining flavours will be supported for 3 years.

To get Ubuntu 14.04.1

In order to download Ubuntu 14.04.1, visit:

http://www.ubuntu.com/download

Users of Ubuntu 12.04 will soon be offered an automatic upgrade to 14.04.1 via Update Manager. For further information about upgrading, see:

https://help.ubuntu.com/community/TrustyUpgrades

As always, upgrades to the latest version of Ubuntu are entirely free of charge.

We recommend that all users read the 14.04.1 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

http://www.ubuntu.com/community/get-involved

About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

http://www.ubuntu.com/support

More Information

You can learn more about Ubuntu and about this release on our website listed below:

http://www.ubuntu.com/

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

http://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

Originally posted to the ubuntu-announce mailing list on Fri Jul 25 01:35:00 UTC 2014 by Adam Conrad

on July 25, 2014 02:11 AM

July 24, 2014

S07E17 – The One with the Chicken Pox

Ubuntu Podcast from the UK LoCo

Tony Whitmore and Laura Cowen are in Studio L, Alan Pope is AWOL, and Mark Johnson Skypes in from his sick bed for Season Seven, Episode Seventeen of the Ubuntu Podcast!

In this week’s show:-

We’ll be back next week, when we’ll be interviewing Graham Binns about the MAAS project, and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on July 24, 2014 07:30 PM

Currently I am enjoying my summer vacation. Vacation is when you do non-stop activities that are fun, whether relaxing or challenging, and usually in a different place. So I am taking the opportunity to visit Kazan, and while I am there I will also give an ownCloud talk (by the way, did you hear that ownCloud 7 was released?) at the local hackerspace, FOSS Labs. Due to my lack of Russian I will stick to English, though ;)

So, if you are there and interested in ownCloud please save the date:

Monday, July 28th, 18:00
FOSS Labs
Universitetskaya 22, of. 114
420111 Kazan, Russia

Thank you, FOSS Labs and especially Mansur Ziatdinov, for making this possible. I am very excited not only to share information with you, but also to learn and get to know the local (FOSS) culture!

Picture: Kazan Kremlin, derived from Skyline of Kazan city by TY-214.

on July 24, 2014 07:20 PM

You might already have Ubuntu Desktop installed and you might want to just run one application without stripping it down. This article should give you a decent idea how to convert a stock Desktop/Unity install into a single-application computer.

This follows straight on from today's other article on building a kiosk computer with Ubuntu and Chrome [from scratch]. In my mind that's the perfect setup: low fat and speedy... But we don't always get it right first time. You might have already been battling with a full Ubuntu install and not have the time to strip it down.

This tutorial assumes you're starting with an Ubuntu desktop, all installed with working network and graphics. While we're in graphical-land, you might as well go and install Chrome.

I have tested this in a clean 14.04 install but be careful. Back up any important data before you commit.

sudo apt update
sudo apt install --no-install-recommends openbox

sudo install -b -m 755 /dev/stdin /opt/kiosk.sh <<- EOF
  #!/bin/bash

  xset -dpms
  xset s off
  openbox-session &

  while true; do
    rm -rf ~/.{config,cache}/google-chrome/
    google-chrome --kiosk --no-first-run  'http://thepcspy.com'
  done
EOF

sudo install -b -m 644 /dev/stdin /etc/init/kiosk.conf <<- EOF
  start on (filesystem and stopped udevtrigger)
  stop on runlevel [06]

  emits starting-x
  respawn

  exec sudo -u $USER startx /etc/X11/Xsession /opt/kiosk.sh --
EOF

sudo dpkg-reconfigure x11-common  # select Anybody

echo manual | sudo tee /etc/init/lightdm.override  # disable desktop

sudo reboot

This should boot you into a browser looking at my home page (use sudoedit /opt/kiosk.sh to change that), but broadly speaking, we're done.

If you ever need to get back into the desktop you should be able to run sudo start lightdm. It'll probably appear on VT8 (Control+Alt+F8 to switch).

Why wouldn't I always do it this way?

I'll freely admit that I've done farts longer than it took to run the above. Starting from an Ubuntu Desktop base does do a lot of the work for us; however, it is demonstrably flabbier:

  • The Server result was 1.6GB, using 117MB RAM with 38 processes.
  • The Desktop result is 3.7GB, using 294MB RAM with 80 processes!

Yeah, the Desktop is still loading a number of udisks mount helpers, PulseAudio, GVFS, Deja Dup, Bluetooth daemons, volume controls, Ubuntu One, CUPS (the printing service) and all the various NetworkManager and ModemManager things a traditional desktop needs.

This is the reason you base your production model off Ubuntu Server (or even Ubuntu Minimal).

And remember that you aren't done yet. There's a big list of boring jobs to do before it's Martini O'Clock.

Just remember that everything I said about physical and network security last time applies doubly here. Ubuntu-proper ships a ton of software on its 1GB image and quite a lot more of that will be running, even after we've disabled the desktop. You're going to want to spend time stripping some of that out and putting in place any security you need to stop people getting in.

Just be careful and conscientious about how you deploy software.

on July 24, 2014 04:16 PM

Welcome, all, to the first of many “Who We Are” posts. These posts will introduce you to the members of our team. We will start with Svetlana Belkin, the founder and admin of the team:

I am Svetlana Belkin (a.k.a. belkinsa everywhere in the Ubuntu community and Mechafish on the Ubuntu Forums), and I am getting my BS in biology with molecular sciences as my focus at the University of Cincinnati. I have used Ubuntu since 2009, but the only “scientific” program that I have used is Ugene. Hopefully, I will get to use more in my field.


on July 24, 2014 03:48 PM

Single-purpose kiosk computing might seem scary and industrial but thanks to cheap hardware and Ubuntu, it's an increasingly popular idea. I'm going to show you how and it's only going to take a few minutes to get to something usable.

Hopefully we'll do better than the image on the right.

We're going to be running a very light stack of X, Openbox and the Google Chrome web browser to load a specified website. The website could be local files on the kiosk or remote. It could be interactive or just an advertising roll. Of course you could load any standalone application. XBMC for a media centre, Steam for a gaming machine, Xibo or Concerto for digital signage. The possibilities are endless.

The whole thing takes less than 2GB of disk space and can run on 512MB of RAM.

Update: If you've already installed, read this companion tutorial if you want to convert an existing Ubuntu Desktop install to a kiosk.

Step 1: Installing Ubuntu Server

I'm picking the Server flavour of Ubuntu for this. It's all the nuts-and-bolts of regular Ubuntu without installing a load of flabby graphical applications that we're never ever going to use.

It's free to download. I would suggest 64-bit if your hardware supports it, and I'm going with the latest LTS (14.04 at the time of writing). Sidebar: if you've never tested your kiosk's hardware in Ubuntu before, it might be worth downloading the Desktop Live USB, burning it and checking everything works.

Just follow the installation instructions. Burn it to a USB stick, boot the kiosk to it and go through. I just accepted the defaults and when asked:

  • Set my username to user and set a hard-to-guess, strong password.
  • Enabled automatic updates
  • At the end when tasksel ran, opted to install the SSH server task so I could SSH in from a client that supported copy and paste!

After you reboot, you should be looking at an Ubuntu 14.04 LTS ubuntu tty1 login prompt. You can either SSH in (assuming you're networked and you installed the SSH server task) or just log in.

The installer auto-configures an ethernet connection (if one exists) so I'm going to assume you already have a network connection. If you don't or want to change to wireless, this is the point where you'd want to use nmcli to add and enable your connection. It'll go something like this:

sudo apt install network-manager
sudo nmcli dev wifi con <SSID> password <password>

Later releases should have nmtui which will make this easier but until then you always have man nmcli :)

Step 2: Install all the things

We obviously need a bit of extra software to get up and running but we can keep this fairly compact. We need to install:

  • X (the display server) and some scripts to launch it
  • A lightweight window manager to enable Chrome to go fullscreen
  • Google Chrome

We'll start by adding the Google-maintained repository for Chrome:

sudo add-apt-repository 'deb http://dl.google.com/linux/chrome/deb/ stable main'
wget -qO- https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -

Then update our packages list and install:

sudo apt update
sudo apt install --no-install-recommends xorg openbox google-chrome-stable

If you omit --no-install-recommends you will pull in hundreds of megabytes of extra packages that would normally make life easier but, in a kiosk scenario, only serve as bloat.
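
If you'd like apt to behave this way for every future install on this box, you can also turn off Recommends globally; a minimal sketch (the file name under apt.conf.d is arbitrary, only the directive matters):

echo 'APT::Install-Recommends "false";' | sudo tee /etc/apt/apt.conf.d/99-no-recommends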

Step 3: Loading the browser on boot

I know we've only been going for about five minutes but we're almost done. We just need two little scripts.

Run sudoedit /opt/kiosk.sh first. This is going to be what loads Chrome once X has started. It also needs to wipe the Chrome profile so that between loads you aren't persisting anything. This is incredibly important for kiosk computing because you never want one user to be able to affect the next user. We want them to start with a clean environment every time. Here's where I've got to:

#!/bin/bash

xset -dpms        # disable DPMS so the monitor never powers down
xset s off        # disable X screen blanking
openbox-session &

while true; do
  # wipe the Chrome profile so every visitor starts from a clean slate
  rm -rf ~/.{config,cache}/google-chrome/
  google-chrome --kiosk --no-first-run  'http://thepcspy.com'
done

When you're done there, press Control+X to exit, then run sudo chmod +x /opt/kiosk.sh to make the script executable. Then we can move on to starting X (and loading kiosk.sh).

Run sudoedit /etc/init/kiosk.conf and this time fill it with:

start on (filesystem and stopped udevtrigger)
stop on runlevel [06]

console output
emits starting-x

respawn

exec sudo -u user startx /etc/X11/Xsession /opt/kiosk.sh --

Replace user with your username, then exit (Control+X) and save.

X still needs some root privileges to start. These are locked down by default but we can allow anybody to start an X server by running sudo dpkg-reconfigure x11-common and selecting "Anybody".
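
If you'd rather not answer the debconf prompt interactively (for example when scripting the build), the same effect can be had by editing the X wrapper config directly; a minimal sketch, assuming the 14.04 layout of the file:

sudo sed -i 's/^allowed_users=.*/allowed_users=anybody/' /etc/X11/Xwrapper.config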

After that we should be able to test. Run sudo start kiosk (or reboot) and it should all come up.

One last problem to fix is the amount of garbage it prints to the screen on boot. Ideally your users will never see it boot, but when it does, it's probably better that it doesn't look like the Matrix. A fairly simple fix: just run sudoedit /etc/default/grub and edit it so the corresponding lines look like this:

GRUB_DEFAULT=0
GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=0
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""

Save and exit that and run sudo update-grub before rebooting. Thanks to the xset calls in kiosk.sh, the monitor should also remain on indefinitely.

Final step: The boring things...

Technically speaking we're done; we have a kiosk and we're probably sipping on a Martini. I know, I know, it's not even midday, we're just that good... But there are extra things to consider before we let a grubby member of the public play with this machine:

  • Can users break it? Open keyboard access is generally a no-no. If they need a keyboard, physically disable keys so they only have what they need. I would disable all the F* keys along with Control, Alt, Super... If they have a standard mouse, right click will let them open links in new windows and tabs and OMG this is a nightmare. You need to limit user input.

  • Can it break itself? Does the website you're loading have anything that's going to try and open new windows/tabs/etc? Does it ask for any sort of input that you aren't allowing users? Perhaps a better question to ask is Can it fix itself? Consider a mechanism for rebooting that doesn't involve a phone call to you.

  • Is it physically secure? Hide and secure the computer. Lock the BIOS. Ensure no access to USB ports (fill them if you have to). Disable recovery mode. Password protect Grub and make sure it stays hidden (especially with open keyboard access).

  • Is it network secure? SSH is the major ingress vector here, so at the very least move it to another port, only allow key-based authentication, and install fail2ban so it tells you about failed logins. There's a sketch of these tweaks after this list.

  • What if Chrome is hacked directly? What if somebody exploited Chrome and had command-level access as user? Well first of all, you can try to stop that happening with AppArmor (should still apply) but you might also want to change things around so that the user running X and the browser doesn't have sudo access. I'd do that by adding a new user and changing the two scripts accordingly.

  • How are you maintaining it? Automatic updates are great but what if that breaks everything? How will you access it in the field to maintain it if (for example) the network dies or there's a hardware failure? This is aimed more at the digital signage people than simple kiosks but it's something to consider.
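
Here's the SSH sketch promised above. Treat it as illustrative rather than a complete hardening recipe; the port number is arbitrary and you should double-check the sshd_config changes before you log out:

# move SSH off port 22 and disable password logins (keys only)
sudo sed -i -e 's/^Port .*/Port 2222/' \
            -e 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' \
            /etc/ssh/sshd_config
sudo restart ssh        # Upstart job name on 14.04

# fail2ban bans IPs after repeated failed logins; defaults live in /etc/fail2ban/jail.conf
sudo apt install fail2ban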

You can mitigate a lot of the security issues by having no live network (just displaying local files) but this obviously comes at the cost of maintenance. There's no one good answer for that.

Photo credit: allegr0/Candace

on July 24, 2014 10:36 AM

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i.e. since about 9th grade. I’ve always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e.g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is simply awesome. It can convert between a million document formats, but most importantly take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look so much more readable and are easier to write! And you can always write LaTeX commands directly without any fuss, so that you can use Markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTex and the other goodies. That’s how it always should have been! ☺
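
To give an idea of what that looks like in practice, here is a minimal sketch (file name and contents made up) of a beamer deck written in Markdown, plus the pandoc call that turns it into a PDF:

cat > slides.md <<'EOF'
% An example talk
% Jane Doe
% July 2014

# First slide

- Markdown for the structure
- inline LaTeX still works: $e^{i\pi} + 1 = 0$

# Second slide

\begin{itemize}
\item plain LaTeX environments pass straight through
\end{itemize}
EOF

pandoc -t beamer slides.md -o slides.pdf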

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX -------------------------------------------

function s:MDSettings()
    inoremap <buffer> <Leader>n \note[item]{}<Esc>i
    noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
    noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
    noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR>

    " adjust syntax highlighting for LaTeX parts
    "   inline formulas:
    syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
    "   environments:
    syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
    "   commands:
    syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction

autocmd BufRead,BufNewFile *.md setfiletype markdown
autocmd FileType markdown :call <SID>MDSettings()

That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

on July 24, 2014 09:38 AM
Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images.  You can view my slides here (PDF), or you can read on below.  Enjoy!


Q: Why should I care about randomness? 

A: Because entropy is important!

  • Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
    • SSL keys
    • SSH keys
    • GPG keys
    • /etc/shadow salts
    • TCP sequence numbers
    • UUIDs
    • dm-crypt keys
    • eCryptfs keys
  • Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above

Q: Where does entropy come from?

A: Hardware, typically.

  • Keyboards
  • Mouses
  • Interrupt requests
  • HDD seek timing
  • Network activity
  • Microphones
  • Web cams
  • Touch interfaces
  • WiFi/RF
  • TPM chips
  • RdRand
  • Entropy Keys
  • Pricey IBM crypto cards
  • Expensive RSA cards
  • USB lava lamps
  • Geiger Counters
  • Seismographs
  • Light/temperature sensors
  • And so on

Q: But what about virtual machines, in the cloud, where we have (almost) none of those things?

A: Pseudo random number generators are our only viable alternative.

  • In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool (quick illustration after this list)
    • Basically, endless streams of pseudo random bytes
  • Some utilities and most programming languages implement their own PRNGs
    • But they usually seed from /dev/random or /dev/urandom
  • Sometimes, virtio-rng is available, for hosts to feed guests entropy
    • But not always
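
A quick illustration of those interfaces (my own sketch, not from the slides): pull some bytes straight from the kernel PRNG and peek at the kernel's conservative entropy estimate.

# grab 64 pseudo random bytes and print them as hex
head -c 64 /dev/urandom | od -An -tx1

# the kernel's estimate of available entropy, in bits
cat /proc/sys/kernel/random/entropy_avail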

Q: Are Linux PRNGs secure enough?

A: Yes, if they are properly seeded.

  • See random(4)
  • When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
  • This reduces the actual amount of noise in the entropy pool below the estimate
  • In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
  • See /etc/init.d/urandom
...
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1

...

Q: And what exactly is a random seed?

A: Basically, it’s a small catalyst that primes the PRNG pump.

  • Let’s pretend the digits of Pi are our random number generator
  • The random seed would be a starting point, or “initialization vector”
  • e.g. Pick a number between 1 and 20
    • say, 18
  • Now start reading random numbers

  • Not bad...but if you always pick ‘18’...

XKCD on random numbers

RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.

Q: So my OS generates an initial seed at first boot?

A: Yep, but computers are predictable, especially VMs.

  • Computers are inherently deterministic
    • And thus, bad at generating randomness
  • Real hardware can provide quality entropy
  • But virtual machines are basically clones of one another
    • ie, The Cloud
    • No keyboard or mouse
    • IRQ based hardware is emulated
    • Block devices are virtual and cached by hypervisor
    • RTC is shared
    • The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool

Dilbert on random numbers


http://j.mp/1dHAK4V


Q: Surely you're just being paranoid about this, right?

A: I’m afraid not...

Analysis of the LRNG (2006)

  • Little prior documentation on Linux’s random number generator
  • Random bits are a limited resource
  • Very little entropy in embedded environments
  • OpenWRT was the case study
  • OS start up consists of a sequence of routine, predictable processes
  • Very little demonstrable entropy shortly after boot
  • http://j.mp/McV2gT

Black Hat (2009)

  • iSec Partners designed a simple algorithm to attack cloud instance SSH keys
  • Picked up by Forbes
  • http://j.mp/1hcJMPu

Factorable.net (2012)

  • Minding Your P’s and Q’s: Detection of Widespread Weak Keys in Network Devices
  • Comprehensive, Internet wide scan of public SSH host keys and TLS certificates
  • Insecure or poorly seeded RNGs in widespread use
    • 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
    • They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
    • They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
  • http://j.mp/1iPATZx

Dual_EC_DRBG Backdoor (2013)

  • Dual Elliptic Curve Deterministic Random Bit Generator
  • Ratified NIST, ANSI, and ISO standard
  • Possible backdoor discovered in 2007
  • Bruce Schneier noted that it was “rather obvious”
  • Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
  • http://j.mp/1bJEjrB

Q: Ruh roh...so what can we do about it?

A: For starters, do a better job seeding our PRNGs.

  • Securely
  • With high quality, unpredictable data
  • More sources are better
  • As early as possible
  • And certainly before generating
  • SSH host keys
  • SSL certificates
  • Or any other critical system DNA
  • /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded

Q: But how do we ensure that in cloud guests?

A: Run Ubuntu!


Sorry, shameless plug...

Q: And what is Ubuntu's solution?

A: Meet pollinate.

  • pollinate is a new security feature that seeds the PRNG.
  • Introduced in Ubuntu 14.04 LTS cloud images
  • Upstart job
  • It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
  • It’s GPLv3 free software
  • Simple shell script wrapper around curl (conceptual sketch after this list)
  • Fetches random seeds
  • From 1 or more entropy servers in a pool
  • Writes them into /dev/urandom
  • https://launchpad.net/pollinate
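
Conceptually (my own simplified sketch, not the packaged script, and it skips the challenge/response step), the client boils down to something like:

# fetch a seed from an entropy server and mix it into the kernel pool;
# writes to /dev/urandom are hashed into the pool, see random(4)
SERVER=https://entropy.ubuntu.com/   # or any pollen instance you trust
curl -fsS "$SERVER" > /dev/urandom   # assumes the seed comes back in the response body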

Q: What about the back end?

A: Introducing pollen.

  • pollen is an entropy-as-a-service implementation
  • Works over HTTP and/or HTTPS
  • Supports a challenge/response mechanism
  • Provides 512 bit (64 byte) random seeds
  • It’s AGPL free software
  • Implemented in golang
  • Less than 50 lines of code
  • Fast, efficient, scalable
  • Returns the (optional) challenge sha512sum
  • And 64 bytes of entropy
  • https://launchpad.net/pollen

Q: Golang, did you say?  That sounds cool!

A: Indeed. Around 50 lines of code, cool!

pollen.go

Q: Is there a public entropy service available?

A: Hello, entropy.ubuntu.com.

  • Highly available pollen cluster
  • TLS/SSL encryption
  • Multiple physical servers
  • Behind a reverse proxy
  • Deployed and scaled with Juju
  • Multiple sources of hardware entropy
  • High network traffic is always stirring the pot
  • AGPL, so source code always available
  • Supported by Canonical
  • Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys

Q: But what if I don't necessarily trust Canonical?

A: Then use a different entropy service :-)

  • Deploy your own pollen
    • bzr branch lp:pollen
    • sudo apt-get install pollen
    • juju deploy pollen
  • Add your preferred server(s) to your $POOL
    • In /etc/default/pollinate
    • In your cloud-init user data
      • In progress
  • In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge

Q: So does this increase the overall entropy on a system?

A: No, no, no, no, no!

  • pollinate seeds your PRNG, securely and properly and as early as possible
  • This improves the quality of all random numbers generated thereafter
  • pollen provides random seeds over HTTP and/or HTTPS connections
  • This information can be fed into your PRNG
  • The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail
  • Note that neither pollen nor pollinate directly affects this estimate!!!

Q: Why the challenge/response in the protocol?

A: Think of it like the Heisenberg Uncertainty Principle.

  • The pollinate challenge (an HTTP POST submission) affects the pollen server’s PRNG state machine
  • pollinate can verify the response and ensure that the pollen server at least “did some work”
  • From the perspective of the pollen server administrator, all communications are “stirring the pot”
  • Numerous concurrent connections ensure a computationally complex and impossible-to-reproduce entropy state

Q: What if pollinate gets crappy or compromised or no random seeds?

A: Functionally, it’s no better or worse than it was without pollinate in the mix.

  • In fact, you can `dd if=/dev/zero of=/dev/random` if you like, without harming your entropy quality
    • All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
    • Of course it doesn’t help, but it doesn’t hurt either
  • Your overall security is back to the same level it was at when your cloud or virtual machine booted in an only slightly random initial state
  • Note the permissions on /dev/*random
    • crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
    • crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
  • It's a bummer of course, but there's no new compromise

Q: What about SSL compromises, or CA Man-in-the-Middle attacks?

A: We are mitigating that by bundling the public certificates in the client.


  • The pollinate package ships the public certificate of entropy.ubuntu.com
    • /etc/pollinate/entropy.ubuntu.com.pem
    • And curl uses this certificate exclusively by default
  • If this really is your concern (and perhaps it should be!)
    • Add more URLs to the $POOL variable in /etc/default/pollinate (example after this list)
    • Put one of those behind your firewall
    • You simply need to ensure that at least one of those is outside of the control of your attackers
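
For instance (a sketch only; the second URL is a placeholder for an in-house pollen instance behind your firewall), the pool line could look like:

# /etc/default/pollinate
POOL="https://entropy.ubuntu.com/ https://entropy.internal.example.com/"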

Q: What information gets logged by the pollen server?

A: The usual web server debug info.

  • The current timestamp
  • The incoming client IP/port
    • At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
  • The browser user-agent string
  • Basically, the exact same information that Chrome/Firefox/Safari sends
  • You can override if you like in /etc/default/pollinate
  • The challenge/response, and the generated seed are never logged!
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]

Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]

Q: Have the code or design been audited?

A: Yes, but more feedback is welcome!

  • All of the source is available
  • Service design and hardware specs are available
  • The Ubuntu Security team has reviewed the design and implementation
  • All feedback has been incorporated
  • At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
    • All feedback has been incorporated

Q: Where can I find more information?

A: Read Up!


Stay safe out there!
:-Dustin
on July 24, 2014 02:15 AM

July 23, 2014

I don’t usually post sales links, but this sale by InformIT involves my two Ubuntu books along with several others that I know my friends in the open source world would be interested in.

Save 40% on recommend titles in the InformIT OpenSource Resource Center. The sale ends August 8th.

on July 23, 2014 05:40 PM

It seems a fairly common, straight forward question.  You’ve probably been asked it before. We all have reasons why we hack, why we code, why we write or draw. If you ask somebody this question, you’ll hear things like “scratching an itch” or “making something beautiful” or “learning something new”.  These are all excellent reasons for creating or improving something.  But contributing isn’t just about creating, it’s about giving that creation away. Usually giving it away for free, with no or very few strings attached.  When I ask “Why do you contribute to open source”, I’m asking why you give it away.

This question is harder to answer, and the answers are often far more complex than the ones given for why people simply create something. What makes it worthwhile to spend your time, effort, and often money working on something, and then turn around and give it away? People often have different intentions or goals in mind when they contribute, from benevolent giving to a community they care about to personal pride in knowing that something they did is being used in something important or by somebody important. But when you strip away the details of the situation, these all hinge on one thing: Recognition.

If you read books or articles about community, one consistent theme you will find in almost all of them is the importance of recognizing the contributions that people make. In fact, if you look at a wide variety of successful communities, you will find that one common thing they all offer in exchange for contribution is recognition. It is the fuel that communities run on. It’s what connects the contributor to their goal, both selfish and selfless. In fact, with open source, the only way a contribution can actually be stolen is by not allowing that recognition to happen. Even the most permissive licenses require attribution, something that tells everybody who made it.

Now let’s flip that question around: why do people contribute to your project? If their contribution hinges on recognition, are you prepared to give it? I don’t mean your intent; I’ll assume that you want to recognize contributions. I mean: do you have the processes and people in place to give it?

We’ve gotten very good at building tools to make contribution easier, faster, and more efficient, often by removing the human bottlenecks from the process. But human recognition is still what matters most. Silently merging someone’s patch or branch, even if their name is in the commit log, isn’t the same as thanking them for it yourself or posting about their contribution on social media. Letting them know you appreciate their work is important; letting other people know you appreciate it is even more important.

If you are the owner or a leader of a project with a community, you need to be aware of how recognition is flowing out just as much as how contributions are flowing in. Too often communities are successful almost by accident, because the people in them are naturally good at making sure contributions are recognized and that people know about them. But it’s just as possible for communities to fail because the personalities involved didn’t have this natural tendency, not through any lack of appreciation for the contributions, just a quirk of personality. It doesn’t have to be this way: if we are aware of the importance of recognition in a community, we can be deliberate about making sure it flows freely in exchange for contributions.

on July 23, 2014 12:00 PM

[tech] Going solar

Andrew Pollock

With electricity prices in Australia seeming to be only going up, and solar being surprisingly cheap, I decided it was a no-brainer to invest in a solar installation to reduce my ongoing electricity bills. It also paves the way for getting an electric car in the future. I'm also a greenie, so having some renewable energy happening gives me the warm and fuzzies.

So today I got solar installed. I've gone for a 2 kW system, consisting of eight 250-watt Seraphim panels (I'm not entirely sure which model) and an Aurora UNO-2.0-I-OUTD inverter.

It was totally a case of decision fatigue when it came to shopping around. Everyone claims the particular panels they want to sell are the best. It's pretty much impossible to make a decent assessment of their claims. In the end, I went with the Seraphim panels because they scored well on the PHOTON tests. That said, I've had other solar companies tell me the PHOTON tests aren't indicative of Australian conditions. It's hard to know who to believe. In the end, I chose Seraphim because of the PHOTON test results, and because they're apparently one of the few panels that pass the Thresher test, which tests for durability.

The harder choice was the inverter. I'm told that yield varies wildly by inverter, and narrowed it down to Aurora or SunnyBoy. Jason's got a SunnyBoy, and the appeal with it was that it supported Bluetooth for data gathering, although I don't much care for the aesthetics of it. Then I learned that there was a WiFi card coming out soon for the Aurora inverter, and that struck me as better than Bluetooth, so I went with the Aurora inverter. I discovered at the eleventh hour that the model of Aurora inverter that was going to be supplied wasn't supported by the WiFi card, but was able to switch models to the one that was. I'm glad I did, because the newer model looks really nice on the wall.

The whole system was up and running just in time to catch the setting sun, so I'm looking forward to seeing it in action tomorrow.

Apparently the next step is that Energex has to come out to replace my analog power meter with a digital one.

I'm grateful that I was able to get Body Corporate approval to use some of the roof. Being on the top floor helped make the installation more feasible too, I think.

on July 23, 2014 05:36 AM

The problem: Some time ago, I had a server “in the wild” from which I wanted some data backed up to my rsync.net account. I didn’t want to put sensitive credentials on this server in case it got compromised.

The awesome admins at rsync.net pointed out their subuid feature. For no extra charge, they’ll give you another uid, which can have its own ssh keys, whose home directory is symbolically linked under your main uid’s home directory. So the server can rsync backups to the subuid, and if it is compromised, attackers cannot get at any info which didn’t originate from that server anyway.
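
As an illustration (hostname, subuid, key path and source directory are all made up here), the job on the server might look something like:

# push backups to the rsync.net subuid over SSH, using a key that lives only
# on this server; that key can only touch data this server already owns
rsync -az -e "ssh -i /root/.ssh/backup_key" /srv/data/ subuid5678@usw-s001.rsync.net:backups/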

Very nice.


on July 23, 2014 04:02 AM

July 22, 2014

Those of you who have followed this blog for some time know that the preferred language is English (a small number of posts in the early stages are an exception). Things are changing, though.

It’s not difficult to understand: if you visit it.deshack.net you can see this website in Italian. I’ve been thinking about making a big change to this little place on the web for a while, as I want it to become more than a simple blog. I am working on a new theme for business websites, but I’ll let you know when it’s time. In the meantime, don’t be surprised if you see some small changes here.

Note

The main language will remain English. You will find all the Italian content on it.deshack.net, as mentioned before. Old posts will be translated only if someone asks.

Now it’s time for me to ask you something: do you think this is an interesting change? Let me know with a comment!

on July 22, 2014 05:21 PM

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140722 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to v3.16-rc6 and officially uploaded to the archive. We (as in apw) have also completed a herculean config review for Utopic and applied the appropriate changes. Please test and let us know your results.
—–
Important upcoming dates:
Thurs Jul 24 – 14.04.1 (~2 days away)
Thurs Aug 07 – 12.04.5 (~2 weeks away)
Thurs Aug 21 – Utopic Feature Freeze (~4 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Jul. 22):

  • Lucid – Released
  • Precise – Released
  • Saucy – Released
  • Trusty – Released

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    14.04.1 cycle: 29-Jun through 07-Aug
    ====================================================================
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]

    cycle: 08-Aug through 29-Aug
    ====================================================================
    08-Aug Last day for kernel commits for this cycle
    10-Aug – 16-Aug Kernel prep week.
    17-Aug – 23-Aug Bug verification & Regression testing.
    24-Aug – 29-Aug Regression testing & Release to -updates.

    [1] This will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.

    [2] This will be the lts-backport-trusty kernel as the default in the precise point
    release iso.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

on July 22, 2014 05:12 PM

Box for Qt

Lubuntu Blog

Box's evolution continues. With the Qt development, the main theme for Lubuntu must grow a bit more to cover more apps, devices and, of course, environments. Now it's Qt, the sub-system for the next Lubuntu desktop, which will also allow its use for KDE5 and Plasma Next. For now it's just a project, but the Dolphin file manager looks fine! Note: this is under heavy development, no
on July 22, 2014 04:11 PM

The Community Team

Rick Spencer



So, given Jono’s departure a few weeks back, I bet a lot of folks have been wondering about the Canonical Community Team. For a little background, the community team reports into the Ubuntu Engineering division of Canonical, which means that they all report into me. We have not been idle, and this post is to discuss a bit about the Community Team going forward.

What has Stayed the Same?

First, we have made some changes to the structure of the community team itself. However, one thing did not change. I kept the community team reporting directly into me, VP of Engineering, Ubuntu. I decided to do this so that there is a direct line to me for any community concerns that have been raised to anyone on the community team.

I had a call with the Community Council a couple of weeks ago to discuss the community team and get feedback about how it is functioning and how things could be improved going forward. I laid out the following for the team.

First, there were three key things that I wanted the Community Team to continue to focus on:
  • Continue to create and run innovative programs to facilitate ever more community contributions and to grow the community.
  • Continue to provide good advice to me and the rest of Canonical regarding how to be the best community members we can be, given our privileged positions of being paid to work within that community.
  • Continue to assist with outward communication from Canonical to the community regarding plans, project status, and changes to those.
The Community Council was very engaged in discussing how this all works and should work in the future, as well as other goals and responsibilities for the community team.

What Has Changed?

In setting up the team, I had some realizations. First, there was no longer just one “Community Manager”. When the project was young and Canonical was small, we had only one, and the team slowly grew. However, the Team is now four people dedicated to just the Community Team, and there are others who spend almost all of their time working on Community Team projects.

Secondly, while individuals on the team had been hired to have specific roles in the community, every one of them had branched out to tackle new challenges as needed.

Thirdly, there is no longer just one “Community Spokesperson”. Everyone in Ubuntu Engineering can and should speak to/for Canonical and to/for the Ubuntu Community in the right contexts.
So, we made some small but, I think, important changes to the Community Team.

First, we created the role of Community Team Manager. Notice the important inclusion of the word “Team”. This person’s job is not to “manage the community”, but rather to organize and lead the rest of the community team members. This includes things like project planning, HR responsibilities, strategic planning and everything else entailed in being a good line manager. After a rather competitive interview process, with some strong candidates, one person clearly rose to the top as the best candidate. So, I would like to formally introduce David Planella (lp, g+) as the Community Team Manager!

Second, I changed the other job titles from their rather specific titles to just “Community Manager” in order to reflect the reality that everyone on the community team is responsible for the whole community. So that means Michael Hall (lp, g+), Daniel Holbach (lp, g+), and Nicholas Skaggs (lp, g+) are all now “Community Managers”.

What's Next?

This is a very strong team, and a really good group of people. I know each of them personally, and have a lot of confidence in each of them. Combined as a team, they are amazing. I am excited to see what comes next.

In light of these changes, the most common question I get is, “Who do I talk to if I have a question or concern?” The answer to that is “anyone.” It’s understandable if you feel most comfortable talking to someone on the community team, so please feel free to find David, Michael, Daniel, or Nicholas online and ask your question. There are, of course, other stalwarts like Alan Pope (lp, g+) and Oliver Grawert (lp, g+) who seem to be always online :) By which I mean to say that while the Community Managers are here to serve the Ubuntu Community, I hope that anyone in Ubuntu Engineering considers their role in the Ubuntu Community to include working with anyone else in the Ubuntu Community :)

Want to talk directly to the community team today? Easy: join their Ubuntu on Air Q&A session at 15:00 UTC :)

Finally, please note that I love to be "interrupted" by questions from community members :) The best way to get in touch with me is on freenode, where I go by rickspencer3. Otherwise, I am also on g+, and of course there is this blog :)
on July 22, 2014 12:56 PM

Yesterday’s autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

Cleanup of CLI options, and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) now went away. Now there is only -B/--no-built-binaries left, which disables building/using binaries for the subsequent unbuilt tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you most probably never need this).

The --help output now is a lot easier to read, both due to above cleanup, and also because it now shows several paragraphs for each group of related options, and sorts them in descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python’s argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:

$ cat adt_sid
--output-dir=/tmp/out
-s
---
schroot
sid

$ adt-run libpng @adt_sid

Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:

Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]

This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests; this release now also brings it to deb packages.
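
For comparison, the traditional equivalent would have been a named test plus a wrapper script (illustrative only; the test name is arbitrary):

# debian/tests/control
Tests: run-upstream
Depends: @, xvfb, [...]

# debian/tests/run-upstream
#!/bin/sh
set -e
exec xvfb-run -a src/tests/run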

Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you just want to re-run it with additional dependencies or changed restrictions, you can now avoid having to rebuild the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e.g. add the autopilot -v option to autopilot_module.

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in “unpack into temp dir” mode anyway).

Finally, I made the adb setup script more robust and also faster.

As usual, every change in control formats, CLI, etc. has been documented in the manpages and the various READMEs. Enjoy!

on July 22, 2014 06:16 AM

I picked up Zoe from Sarah this morning and dropped her at Kindergarten. Traffic seemed particularly bad this morning, or I'm just out of practice.

I spent the day powering through the last two parts of the registration block of my real estate licence training. I've got one more piece of assessment to do, and then it should be done. The rest is all dead-tree written stuff that I have to mail off to get marked.

Zoe's doing tennis this term as her extra-curricular activity, and it's on a Tuesday afternoon after Kindergarten at the tennis court next door.

I'm not sure what proportion of the class is continuing on from previous terms, and so how far behind the eight ball Zoe will be, but she seemed to do okay today, and she seemed to enjoy it. Megan's in the class too, and that didn't seem to result in too much cross-distraction.

After that, we came home and just pottered around for a bit and then Zoe watched some TV until Sarah came to pick her up.

on July 22, 2014 01:23 AM