August 28, 2015

At 100 pages this is the biggest issue EVAR!

This month:
* Our Great Ancestor: Warty Warthog
* Command & Conquer
* How-To: Python, LibreOffice, Website with Infrastructure, and Programming COBOL
* Graphics: Inkscape
* Survey Results
* Chrome Cult
* Linux Labs: How I Learned To Love Ubuntu
* Site Review
* A Quick Look At: Linux in Industry, and the French Translation Team
* Ubuntu Phones
* [NEW!] Linux Loopback
* Ubuntu Games: The Current State of Linux Gaming
plus: News, Q&A, and soooo much more.

I’m also trying new avenues to promote FCM, so please take the time to upvote my Reddit post to help bring FCM awareness to the masses: https://www.reddit.com/r/Ubuntu/comments/3iqy13/full_circle_magazine_releases_issue_100/

http://fullcirclemagazine.org/issue-100/

 

on August 28, 2015 05:21 PM

Go enjoy Python3

Dimitri John Ledkov

Given a string, get a truncated string of length up to 12.

The task is ambiguous, as it doesn't say whether or not the 12 should include the terminating null character. Nonetheless, let's see how one would achieve this in various languages.
Let's start with python3

import sys
print(sys.argv[1][:12])

Simple enough: in essence, given the first argument, print it up to length 12. As an added bonus, this also deals with Unicode correctly; that is, if the passed arg is 車賈滑豈更串句龜龜契金喇車賈滑豈更串句龜龜契金喇, it will correctly print 車賈滑豈更串句龜龜契金喇. (Note these are just random Unicode strings to me, no idea what they stand for.)

In C things are slightly more verbose, but in essence, I am going to use the strncpy function:

#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]) {
    /* strncpy() does not null-terminate when the source is 12 bytes or longer,
       so reserve one extra byte and terminate explicitly. */
    char res[13];
    strncpy(res, argv[1], 12);
    res[12] = '\0';
    printf("%s\n", res);
    return 0;
}
This treats things as a byte array instead of Unicode, thus for the Unicode test it will end up printing just 車賈滑豈. But it is still simple enough.
Finally, we have Go:
package main

import "os"
import "fmt"
import "math"

func main() {
	fmt.Printf("%s\n", os.Args[1][:int(math.Min(12, float64(len(os.Args[1]))))])
}
This similarly treats the argument as a byte array, and one needs to convert the argument to a []rune to get Unicode string handling. But there are quite a few caveats. One cannot take out-of-bounds slices. Thus a naïve os.Args[1][:12] can result in a runtime panic that slice bounds are out of range. Or, if the string is known at compile time, a compile-time error. Hence one needs to calculate the length and do a min comparison. And there lies the next caveat: math.Min() is only defined for the float64 type, and slice indexes can only be integers, and thus we end up writing ]))))])...
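
For completeness, a rune-based variant (just a sketch) would look roughly like this; it trades the one-liner for an explicit bounds check, but truncates at character boundaries instead of byte boundaries:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Work on runes so truncation happens at character boundaries,
	// not byte boundaries.
	r := []rune(os.Args[1])
	n := 12
	if len(r) < n {
		// Clamp manually to avoid an out-of-range slice and the panic it causes.
		n = len(r)
	}
	fmt.Printf("%s\n", string(r[:n]))
}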

12 points for python3, 8 points for C, and Go receives nul points Eurovision style.

EDIT: Andreas Røssland and James Hunt are full of win, both suggesting fmt.Printf("%.12s\n", os.Args[1]) for Go. I like that a lot, as it gives simplicity & readability without compromising the default safety against out-of-bounds access. Hence the scores are now: 14 points for Go, 12 points for python3 and 8 points for C.

EDIT2: Keith Thompson pointed out a much better C implementation - http://pastebin.com/5i7rFmMQ - in essence it uses strncat(), which has much better null-termination semantics. And Ben posted a C implementation which handles wide characters: http://www.decadent.org.uk/ben/blog/truncating-a-string-in-c.html. I regret to inform you that this blog post got syndicated onto Hacker News and has now become the top viewed post on my blog of all time, overnight. In retrospect, I regret awarding points at the end of the blog post, as that was merely an expression of opinion and a highly subjective measure. But this problem statement did originate from me reviewing Go code that did an "if/then/else" comparison and got it wrong when truncating a string, and I thought surely one can just do [:12], which led me down the rabbit hole of discovering a lot about Go: its compile-time and runtime out-of-bounds access safeguards, lack of a universal Min() function, runes vs strings handling and so on. I'm only a beginner Go programmer and I am very sorry for wasting everyone's time on this. I guess people didn't have much to do on a Throwback Thursday.

The postings on this site are my own and don't necessarily represent Intel’s positions, strategies, or opinions.
on August 28, 2015 09:48 AM

Note: I’m sorry, this post is a bit of a mess.

I wrote a post 2 days ago, outlining an idea for a non-windowing display server — a layer that wayland compositors (or other programs) could be built upon. It got quite a bit more attention than I expected, and there were many responses to the idea.

Before I go on, I wish to address a few things that weren’t clear in the original post:

The first being that I am not an Ubuntu developer, and am in no way associated with Canonical. I am only an Ubuntu member :) Even though I don’t use Ubuntu personally, I wish to improve the user experience of those who do.

Second is a point that I did not address clearly in the original post: One of the main reasons for this idea is to enable users to modify the video resolution, gamma ramp, orientation, brightness, etc. DRM provides an API for doing these operations, however, AFAIK, you cannot run modesetting operations on a virtual terminal that is already running an application that has called video modesetting operations. In other words, you cannot run a DRM-based application on an already-running wayland server in order to run a modesetting operation. So, AFAIK, the only way to enable an application to do this is to write a sort of “proxy” server that handles requests, and then runs the video modesetting operations.

Since I am currently confusing myself re-reading this, I’ll try to provide a diagram in order to explain what I mean.

If you want to change the gamma ramp, for example, this is impossible:

[diagram: drm_client_wayland]

So with the display server acting as a proxy of sorts, it becomes possible:

[diagram: drm_client_display_server]

This is also why I believe that having a server over a shared library is crucial. A shared library would allow for abstraction over multiple backends, however, it doesn’t allow communication with more than one application. A wayland compositor can access all of the functions, yes, but wayland clients cannot.

The third clarification is that this is not only meant for wayland. Though this is the main “client” I have in mind for this server, it isn’t restricted to only wayland. The idea is that it could be used by anything, for example, as one response pointed out, xen virtualization. Or, in my case, I actually want to write clients that use this server directly, without even using a windowing server like wayland (yes, I actually have a good reason for wanting this XD ). In other words, though I believe that the group that would use this the most would be wayland users (hence why I wrote the original post tailored towards this), it isn’t only meant for wayland.

There were a few responses saying that wayland intentionally doesn’t support this, not because of the reason I originally suspected (it being “only” a windowing protocol), but because one of wayland’s main goals is to let the compositor have full control over the display, and make sure that there are no flickers or tearing etc., which changing the video resolution (or some other modesetting operations) would undoubtedly cause. I understand and respect this, however, I still want to be able to change the resolution or gamma ramp (etc.) myself, and suffer the consequences of the momentary flickering or whatever else. Again though, I respect wayland’s decision in this aspect, so my proposal, instead, is this: to make this an optional backend for wayland compositors. Instead of my original proposal, which was to build wayland compositors on top of this (in order to help simplify the stack), have this as an option, so that if users wish to have the video modesetting (etc.) capabilities, they can use this backend instead.

A pretty large concern that many people (including myself) have is performance. Having an extra server on the stack would definitely have an impact on performance, but the question is how much.

So with this being said, going forwards, I am currently working on implementing a proof-of-concept prototype in order to have a better sense of what it entails, especially in regards to performance. The prototype will be anything but production-ready, but hopefully will at least work … maybe XD .


on August 28, 2015 01:22 AM

August 27, 2015

Recently there has been a flurry of concerns relating to the IP policy at Canonical. I have not wanted to throw my hat into the ring, but I figured I would share a few simple thoughts.

Firstly, the caveat. I am not a lawyer. Far from it. So, take all of this with a pinch of salt.

The core issue here seems to be whether the act of compiling binaries provides copyright over those binaries. Some believe it does, some believe it doesn’t. My opinion: I just don’t know.

The issue here though is with intent.

In Canonical’s defense, and specifically Mark Shuttleworth’s defense, they set out with a promise at the inception of the Ubuntu project that Ubuntu would always be free. The promise was that there would not be a hampered community edition and full-flavor enterprise edition. There will be one Ubuntu, available freely to all.

Canonical, and Mark Shuttleworth as a primary investor, have stuck to their word. They have not gone down the road of the community and enterprise editions, of per-seat licensing, or some other compromise in software freedom. Canonical has entered multiple markets where having separate enterprise and community editions could have made life easier from a business perspective, but they haven’t. I think we sometimes forget this.

Now, from a revenue side, this has caused challenges. Canonical has invested a lot of money in engineering/design/marketing and some companies have used Ubuntu without contributing even nominally to its development. Thus, Canonical has at times struggled to find the right balance between a free product for the Open Source community and revenue. We have seen efforts such as training services, Ubuntu One etc, some of which have failed, some have succeeded.

Again though, Canonical has made their own life more complex with this commitment to freedom. When I was at Canonical I saw Mark very specifically reject notions of compromising on these ethics.

Now, I get the notional concept of this IP issue from Canonical’s perspective. Canonical invests in staff and infrastructure to build binaries that are part of a free platform and that other free platforms can use. If someone else takes those binaries and builds a commercial product from them, I can understand Canonical being a bit miffed about that and asking the company to pay it forward and cover some of the costs.

But here is the rub. While I understand this, it goes against the grain of the Free Software movement and the culture of Open Source collaboration.

Putting the legal question of copyrightable binaries aside for one second, the current Canonical IP policy is just culturally awkward. I think most of us expect that Free Software code will result in Free Software binaries and to make claim that those binaries are limited or restricted in some way seems unusual and the antithesis of the wider movement. It feels frankly like an attempt to find a loophole in a collaborative culture where the connective tissue is freedom.

Thus, I see this whole thing from both angles. Firstly, Canonical is trying to find the right balance of revenue and software freedom, but I also sympathize with the critics that this IP approach feels like a pretty weak way to accomplish that balance.

So, I ask my humble readers this question: if Canonical reverts this IP policy and binaries are free to all, what do you feel is the best way for Canonical to derive revenue from their products and services while also committing to software freedom? Thoughts and ideas welcome!

on August 27, 2015 11:59 PM

"I am Groot."
– Groot

The first beta of the Wily Werewolf (to become 15.10) has now been released!

This beta features images for Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu MATE, Xubuntu and the Ubuntu Cloud images.

Pre-releases of the Wily Werewolf are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this release ready.

Beta 1 includes a number of software updates that are ready for wider testing. This is quite an early set of images, so you should expect some bugs.

While these Beta 1 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Wily Werewolf. In particular, once newer daily images are available, system installation bugs identified in the Beta 1 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 15.10 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.

Kubuntu

Kubuntu uses KDE software and now features the new Plasma 5 desktop.

The Kubuntu 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/kubuntu/releases/wily/beta-1/

More information about Kubuntu 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/Kubuntu

Lubuntu

Lubuntu is a flavour of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

The Lubuntu 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/lubuntu/releases/wily/beta-1/

More information about Lubuntu 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/Lubuntu

Ubuntu GNOME

Ubuntu GNOME is a flavour of Ubuntu featuring the GNOME3 desktop environment.

The Ubuntu GNOME 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/ubuntu-gnome/releases/wily/beta-1/

More information about Ubuntu GNOME 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/UbuntuGNOME

Ubuntu Kylin

Ubuntu Kylin is a flavour of Ubuntu that is more suitable for Chinese users.

The Ubuntu Kylin 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/ubuntukylin/releases/wily/beta-1/

More information about Ubuntu Kylin 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/UbuntuKylin

Ubuntu MATE

Ubuntu MATE is a flavour of Ubuntu featuring the MATE desktop environment for people who just want to get stuff done.

The Ubuntu MATE 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/ubuntu-mate/releases/wily/beta-1/

More information about Ubuntu MATE 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/UbuntuMATE

Xubuntu

Xubuntu is a flavour of Ubuntu shipping with the Xfce desktop environment.

The Xubuntu 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/xubuntu/releases/wily/beta-1/

More information about Xubuntu 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/Xubuntu

Ubuntu Cloud

Ubuntu Cloud images can be run on Amazon EC2, OpenStack, SmartOS and many other clouds.

The Ubuntu Cloud 15.10 Beta 1 images can be downloaded from:

http://cloud-images.ubuntu.com/releases/wily/beta-1/

Regular daily images for Ubuntu can be found at:

http://cdimage.ubuntu.com

If you’re interested in following the changes as we further develop Wily, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a week) carrying announcements of approved specifications, policy changes, beta releases and other interesting events.

http://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce

A big thank you to the developers and testers for their efforts to pull together this Beta release!

Originally posted to the ubuntu-devel-announce mailing list on Thu Aug 27 14:27:17 UTC 2015 by Martin Wimpress on behalf of Ubuntu Release Team

on August 27, 2015 10:02 PM
The first Beta of Wily (to become 15.10) has now been released!

The Beta-1 images can be downloaded from: http://cdimage.ubuntu.com/kubuntu/releases/wily/beta-1/

More information on Kubuntu Beta-1 can be found here: https://wiki.kubuntu.org/WilyWerewolf/Beta1/Kubuntu
on August 27, 2015 09:21 PM

Jon recently published a blog post stating that you’re free to create Ubuntu derivatives as long as you remove trademarks. I do not necessarily agree with this statement, primarily because of this clause in the IP rights policy:

Copyright

The disk, CD, installer and system images, together with Ubuntu packages and binary files, are in many cases copyright of Canonical (which copyright may be distinct from the copyright in the individual components therein) and can only be used in accordance with the copyright licences therein and this IPRights Policy.

From what I understand, Canonical is asserting copyright over various binaries that are shipped on the ISO, and they’re totally in the clear to do so for any packages that end up on the ISO that are permissively licensed (X11, for example), because permissive licenses, unlike copyleft licenses, do not prohibit additional restrictions on top of the software. The GPL has this explicit statement:

4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.

Whereas licenses such as the X11 license explicitly allow sublicensing:

… including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software …

Depending on the jurisdiction you live in, Canonical *can* claim copyrights over the binaries that are produced in the Ubuntu archive. This is something that multiple other parties such as the SF Conservancy, FSF as well as Bradley Kuhn have agreed on.

So once again, all of this is very much dependent on where you live and where your ISOs are hosted. So if you’re distributing an Ubuntu derivative, I’d very much recommend talking to a professional lawyer who’d best be able to advise you about how the policy affects you in your jurisdiction. It may very well be that you require a license, or it may be that you don’t. I’m not a lawyer and, AFAIK, neither is Jon.

Addendum/Afterthought :

Taken a bit more extreme, one could even argue that in order to be GPL compliant, derivatives should provide sources to all the packages that land on the ISO, and just passing off this responsibility to Canonical is a potential GPL violation.


on August 27, 2015 06:47 PM

“Ubuntu is entirely committed to the principles of free software development; we encourage people to use free and open source software, improve it and pass it on.” is what used to be printed on the front page of ubuntu.com. This is still true but recently has come under attack when the project’s main sponsor, Canonical, put up an IP policy which broke the GPL and free software licences generally by claiming packages need to be recompiled. Rather than apologising for this in the modern sense of the word by saying sorry, various staff members have apologised in an older sense of the word, meaning to excuse. But everything in Ubuntu is free to share, copy and modify (or just free to share and copy in the case of restricted/multiverse). The archive admins will only let in packages which comply with this, and anyone saying otherwise is incorrect.

In this twitter post Michael Hall says “If a derivative distro uses PPAs it needs an additional license.” But he doesn’t say what there is that needs an additional licence; the packages already have copyright licences, all of them free software.

It should be very obvious that Canonical doesn’t control the world and a licence is only needed if there is some law that allows them to restrict what others want to do. There’s been a few claims on what that law might be but nothing that makes sense when you look at it. It’s worth examining their claims because people will fall for them and that will destroy Ubuntu as a community project. Community projects depend on everyone having the freedom to do whatever they want with the code else nobody will give their time to a project that someone else will then control.

In this blog post Dustin Kirkland again doesn’t say what needs a licence but says one is needed based on Geographical Indication. It’s hard to say if he’s being serious. A geographical indication (GI) is a sign used on products that have a specific geographical origin and possess qualities or a reputation that are due to that origin and are then assessed before being registered. There is no Geographical Indication registration in Ubuntu and it’s completely irrelevant to everything. So let’s move on.

A more dangerous claim you can see on this reddit post where Michael Hall claims “for permissively licensed code where you did not build the binary, there is no pre-existing right to redistribution of that binary”. This is incorrect: everything in Ubuntu has a free software licence with an explicit right to redistribution. (Or a few bits are public domain, where no licence is needed at all.) Let’s take libX11 as a random example; it gets shipped with a copyright file containing this licence:

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”),  to deal in the Software without restriction

so we do have permission. Shame on those who say otherwise. This applies to the source of course, and so it applies to any derived work such as the binaries, which is why it’s shipped with the binaries. It even says you can’t remove the licence:
“The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software.”
So it’s free software and the licence requires it to remain free software.  It’s not copyleft, so if you combine it with another work which is not free software then the result is proprietary, but we don’t do that in Ubuntu.  The copyright owner could put extra restrictions on but nobody else can because it’s a free world and you can’t make me do stuff just because you say so, you have to have some legal way to restrict me first.
One of the items allowed by this X11 licence is the ability to “sublicense” which is just putting another licence on it, but you can’t remove the original licence as it says in the part of the licence I quoted above.  Once I have a copy of the work I can copy it all I want under the X11 licence and ignore your sublicence.
This is even true of works under the public domain or a WTFPL style licence, once I’ve got a copy of the work it’s still public domain so I can still copy, share and modify it freely.  You can’t claim it’s your copyright because, well, it’s not.

In Matthew Garrett’s recent blog post he reports that “Canonical assert that the act of compilation creates copyright over the binaries”. Fortunately this is untrue and can be ignored. Copyright requires some creative input; it’s not enough to run a work through a computer program. In the very unlikely case a court did decide that compiling a programme added some copyright, then it would not decide that copyright was owned by the owners of the computer it ran on, but by the copyright owners of the compiler, which is the Free Software Foundation, and the copyright would be GPL.

In conclusion there is nothing which restricts people making derivatives of Ubuntu except the trademark, and removing branding is easy. (Even that is unnecessary unless you’re trading, which most derivatives aren’t, but it’s a sign of good faith to remove it anyway.)

Which is why Mark Shuttleworth says “you are fully entitled and encouraged to redistribute .debs and .iso’s”. Lovely.

 

on August 27, 2015 02:50 PM

The Xubuntu team is pleased to announce the immediate release of Xubuntu 15.10 Beta 1. This is the first beta towards the final release in October.

The first beta release also marks the end of the period to land new features, in the form of the Ubuntu Feature Freeze. This means any new updates to packages should be bug fixes only; the Xubuntu team is committed to fixing as many of the bugs as possible before the final release.

The Beta 1 release is available for download by torrents and direct downloads from
http://cdimages.ubuntu.com/xubuntu/releases/wily/beta-1/

Highlights and known issues

New features and enhancements

  • LibreOffice Calc and Writer are now included. These applications replace Gnumeric and Abiword respectively.
  • A new theme for LibreOffice, libreoffice-style-elementary, is also included and is the default for Wily Werewolf.

Known Issues

Some issues were found during testing of the image; in addition, some bugs related to Xubuntu have been noted during the development cycle. Full details of all of these can be found in the release notes at https://wiki.ubuntu.com/WilyWerewolf/Beta1/Xubuntu

on August 27, 2015 02:39 PM

Hello,

Ubuntu GNOME Team is glad to announce the release of Beta 1 of Ubuntu GNOME Wily Werewolf (15.10).

What’s new and how to get it?

Please do read the release notes:
https://wiki.ubuntu.com/WilyWerewolf/Beta1/UbuntuGNOME

As always, thanks a million to each and everyone who has helped, supported and contributed to make this yet another successful milestone!

We have great testers and without their endless support, we don’t think we can ever make it. Please keep up the great work!

Thank you!

on August 27, 2015 02:31 PM

S08E25 – Jurassic Shark - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Twenty-five of Season Eight of the Ubuntu Podcast! Mark Johnson is back with Laura Cowen, Martin Wimpress, and Alan Pope!

In this week’s show:

We look at what’s been going on in the news:

We also take a look at what’s been going on in the community:

There are even events:

That’s all for this week, please send your comments and suggestions to: show@ubuntupodcast.org
Join us on IRC in #ubuntu-podcast on Freenode
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on August 27, 2015 09:41 AM

August 26, 2015

For the past couple of weeks I’ve been playing with a variety of boards, and a single problem kept raising its head over and over again: I needed to build test images quickly in order to check whether or not these boards had the features that I wanted.

This led me to investigating tools around building images for these boards. And the tools I came across for each of these boards were abysmal to say the least. All of them were either very board specific or were not versatile enough for my needs. Linaro’s HWPacks came very, very close to what I needed, but still had some of the following limitations:

  • HWPacks are inflexible in terms of partitioning layout; the entire partitioning layout is internal to the tool, and you could only specify one of three variations of the partition layout, and not control anything else, such as start sectors of the partitions.
  • HWPacks are inflexible in terms of bootloader flashing; as far as I can tell, there was no way to specify the start sector, the byte size and other options that some of these boards were passing to dd to flash the bootloader to the image.
  • HWPacks, as far as I could tell, could not generate config files that would be used by u-boot at boot.
  • HWPacks only support Apt.

So with those 4 problems to solve, I set out writing my own replacement for Linaro’s HWPacks, and lo and behold, you can find it here. (I’m quite terrible at coming up with awesome names for my projects, so I chose the most simple and descriptive name I could think of ;)

Here’s a sample config for the ODROID C1, a neat little board from HardKernel.

The rootfs section

You can specify a rootfs for your board in this section; it will take a URL to the rootfs tar and optionally an md5sum for the tar.

The firmware section

We currently have 2 firmware backends for installing the firmware (things like the kernel, and other board specific packages). One is the tar backend, which, like the rootfs section, takes a URL to the firmware tar and optionally an md5sum; the other is the Apt backend. I only have time to maintain these 2 backends, so I’d absolutely love it if someone could write more backends such as yum or pacman and send me a pull request.

The tar backend will copy everything from the boot/* folder inside the tar onto the first partition, and anything inside the firmware/* and modules/* folder into the rootfs’s /lib folder. This is a bit implicit and I’m trying to figure out a way to make this better.

The apt backend can take multiple apt repos to be added to the rootfs and a list of packages to install afterwards.

The bootloader section

The bootloader has a :config section which will take an ERB file to be rendered and installed into both the rootfs and the bootfs (if you have one).

Here’s a quote of the sample ERB file for the ODROID C1:

This allows me to dynamically render boot files depending on what kernel was installed on the image and what the UUID of the rootfs is. You can in fact access more variables as described here.

Moving on to the :uboot section of the bootloader, you can specify as many stages as you want to flash onto the image. Each stage will take a :file to flash and optionally :dd_opts, which are options that you might want to pass to dd when writing the bootloader. The stages are flashed in the sequence that is declared in config.yml and the files are searched for in the rootfs first, failing which they’re searched for in the bootfs partition, if you have one.

The login section

The login section is quite self-explanatory and takes a user, a password for the user and a list of groups the user should be added to on the target image.

The login section is optional and can be skipped if your rootfs already has a pre-configured user.
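
Putting all of the sections above together, a board config might be laid out roughly like the sketch below. To be clear, this is not the actual sample config from the repository; the key names and values are only illustrative guesses based on the descriptions above:

rootfs:
  url: http://example.com/odroid-c1/rootfs.tar.gz   # illustrative URL
  md5sum: 0123456789abcdef0123456789abcdef          # optional

firmware:
  backend: apt                  # or a tar backend pointing at a firmware tarball
  repos:
    - ppa:example/odroid        # illustrative repo
  packages:
    - linux-image-odroid        # illustrative package name

bootloader:
  config: boot.ini.erb          # ERB template rendered with kernel/UUID variables
  uboot:
    stages:                     # stages are flashed in the order declared
      - file: bl1.bin
        dd_opts: "bs=1 count=442"
      - file: u-boot.bin
        dd_opts: "bs=512 seek=64"

login:
  user: odroid
  password: odroid
  groups: [sudo, adm, video]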

At the moment I have configs for the ODROID C1, the Cubox-I (thanks to Solid Run for sending me a free extra board! :) and the Raspberry Pi 2.

If you have questions send me an email or leave them in the comments below, and I’ll try to answer them ASAP :).

If you end up writing a config for your board, please send me a PR with the config, that’d be most awesome.

PS: Some of the most awesome people I know are meeting up at Randa next month to work on bringing Touch to KDE. It’d be supremely generous of you if you could donate towards the effort.


on August 26, 2015 03:34 PM
[Updates (1) and (2) at the bottom of the post]

It's 01:24am, on Tuesday, August 25, 2015.  I am, again, awake in the middle of the night, due to another false alarm from Google's spitefully sentient, irascibly ignorant Nest Protect "smart" smoke alarm system.

Exactly how I feel right now.  Except I'm in my pajamas.
Warning: You'll find very little profanity on this blog.  However, the filter is off for this post.  Apologies in advance.

ARRRRRRRRRRRRRRRRRGGGGGGGGGHHHHHHHHHHH!!!!!!!!!!!
Oh.
My.
God.
FOR FUCK'S SAKE.

"Heads up, there's smoke in the kids room," she says.  Not once, but 3 times in a 30 minute period, between 12am and 1am, last night.


That's my alarm clock.  Right now.  I'm serious.
"Heads up, there's smoke in the guest bedroom," she says again tonight a few minutes ago, at 12:59am.

There was in fact never any smoke to clear.
Is it possible for anything to wake you up more seriously and violently in a cold panic than a smoke alarm detecting something amiss in your 2 year old's bedroom?

Here's what happens (each time)...

Every Nest Protect unit in the house announces, in unison, "Heads up, there's smoke in the kids' room."  Then both my phone and my wife's phone buzz on our night stands, with the urgent incoming message from the Nest app.  Another few seconds pass, and another set of alarms arrives, this time delivered by email, in case you missed the first two.

The first and second time it happens, you jump up immediately.  You run into their room and make sure everyone is okay -- both the infant in the crib and toddler who's into everything.  You walk the whole house, checking the oven, the stove, the toaster, the computer equipment.  You open the door and check around outside.  When everything is okay, you're left with a tingling in the back of your mind, wondering what went wrong.  When you're a computer engineer by trade, you're trying to debug the hardware and/or software bug causing the false positive.  Then you set about trying to calm your family and get them back into bed.  And at some point later, you calm your own nerves and try to get some sleep.  It's a work night after all.

But the third, fourth, and fifth time it happens?  From 3 different units?

Well, it never ceases to scare the ever living shit out of you, waking up out of deep sleep, your mind racing, assessing the threat.

But then, reality kind of sets in.  It's just the stupid Nest Protect fucking it all up again.

Roll over, go back to bed, and pray that the full alarm doesn't sound this time, waking up both kids and setting us up for a really bad night and next few days at school.

It's not over yet, though.  You then wait for the same series of messages announcing the all clear -- first the bitch over the loudspeaker, followed by the Android app notification, then the email -- each with the same message:  "Caution, the smoke is clearing..."

THERE WAS NEVER ANY FUCKING SMOKE, YOU STUPID CYBORG. 

20 years later, and the smartest company in the world
creates a smoke detector that broadcasts the IoT equivalent
of PC LOAD LETTER to your smart home, mobile app, and email.
But not this time.  I'm not rolling over.  I'm here, typing with every ounce of anger this Thinkpad can muster. I'm mashing these keys in the guest bedroom that's supposedly on fire.  I can most assuredly tell you that it's a comfy 72 F, that the air is as clean as a summer breeze.

I'm writing this, hoping that someone, somewhere hears how disturbingly defective, and dangerously disingenuous this product actually is.

It has one job to do.  Detect and report smoke.  And it's unable to do that effectively.  If it can't reliably detect normality, what confidence should I have that it'll actually detect an emergency if that dreaded day ever comes?

The sad, sobering reality is: zero.  I have zero confidence whatsoever in the Nest Protect.

What's worse is that I'm embarrassed to say that I've been duped into buying 7 (yes, seven) of these broken pieces of shit, at $99 apiece.  I'm a pretty savvy technical buyer, and admittedly a pretty magnanimous early adopter.  But while I'm accepting of beta versions of gadgets and gizmos, I am entirely unforgiving on the safety and livelihood of my family and guests.

Michael Larabel of Phoronix recounts his similar experience here.  He destroyed one with a sledgehammer, which might provide me with some catharsis when (not if, but when) this happens again.

Michael Larabel of Phoronix destroyed his malfunctioning Nest Protect
with a 20 lb sledgehammer, to silence the false alarm in the middle of the night
There's a sad, long thread on Nest's customer support forum, calling for a better "silence" feature.  I'm sorry, that's just wrong.  The solution is not a better way to "silence" false positives.  Root out the false positives to begin with.  Or recall the hardware.  Tut, tut, tut.

You can't be serious...
This is from me to Google and Nest on behalf of thousands of trusting families out there:  You have the opportunity, and ultimately the obligation.  Please make this right.  Whatever that means, you owe the world that.
  • Ship working firmware.
  • Recall faulty hardware.
  • Refund the product.
Okay, the empassioned rant is over.  Time for data.  Here is the detailed, distressing timeline.
  • January 2015: I installed 5 Nest Protects: one in each of two kids' rooms, the master bedroom, the hallway, and the kitchen/living room
  • February 2015: While on a business trip to South Africa, I received notification via email and the Nest App that there was a smoke emergency at my home, half a world away, with my family in bed for the night.  My wife called me immediately -- in the middle of the night in Texas.  My heart raced.  She assured me it was a false alarm, and that she had two screaming kids awake from the noise.  I filed a support ticket with Nest (ref:_00D40Mlt9._50040jgU8y:ref) and tried to assure my wife that it was just a glitch and that I'd fix it when I got home.

  • May 23, 2015: We thought it was funny enough to post to Facebook, "When Nest mistakes a diaper change for a fire, that's one impressive poop, kiddo!"  Not so funny now.


  • August 9, 2015: I installed 2 more Nest Protects, in the guest bedroom and my office
  • August 21, 2015, 11:26am: While on a flight home from another business trip, I received another set of daytime warnings about smoke in the house.  Another false alarm.
  • August 24, 2015, 12am: While asleep, I receive another 3 false alarms.
  • August 25, 2015, 1am: Again, asleep, another false alarm.  Different room, different unit.  I'm fucking done with these.
I'm counting on you Google/Nest.  Please make it right.

Burning up but not on fire,
Dustin

Update #1: I was contacted directly by email and over Twitter by Nest's "Executive Relations", who offered to replace all 7 of my "v1" Nest Protects with 7 new "v2" Nest Protects, at no charge.  The new "v2" Protect reportedly has an improved design with a better photoelectric detector that reduces false positives.  I was initially inclined to try the new "v2" Protects; however, neither the mounting bracket nor the wiring harness are compatible from v1 to v2.  So I would have to replace all of the brackets and redo all of the wiring myself.  I asked, but Nest would not cover the cost of a professional (re-)installation.  At this point, I expressed my disappointment in this alternative, and I was offered a full refund, in 4-6 weeks time, after I return the 7 units.  I've accepted this solution and will replace the Nest Protects with a simpler, more reliable traditional smoke detector.
Update #2: I suppose I should mention that I generally like my Nest Thermostat and (3) Dropcams.  This blog post is really only complaining about the Titanic disaster that is the Nest Protect.
on August 26, 2015 02:06 PM

In addition to using developer documentation (see A compact style for jQuery API documentation), people who work with communities need to use community and communication related websites. The bigger the community, the more tools it needs.

In a large community like Ubuntu, the amount of maintenance is big and the variety of platforms is huge. On top of these, many of the websites aren’t directly maintained for the community (which has both good and bad sides). For these reasons, it’s sometimes hard and/or slow to get updates landed for the CSS files for the websites.

While workarounds aren’t ideal, at least we can fight the problematic styles with modern technology. That said, I’ve created a gist for a Stylish style that provides some minor improvements for some ubuntu.com websites.

Currently, the style brings the following improvements:

  • The last line of the chat is completely shown in Ubuntu Etherpad pads
  • Images and code blocks aren’t overlapping the content section in Planet Ubuntu, avoiding horizontal scrollbars
  • In the Ubuntu wiki, list items do not have a large bottom padding, making the lists more readable
  • Also in the wiki, tables are always full width but not too wide, keeping them aligned nicely
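
To give a flavour of what these workarounds look like, here is a purely illustrative rule for the Planet Ubuntu image issue; the real rules live in the gist linked above, and the selectors here are only guesses:

/* Illustrative only: keep wide images and code blocks inside the content column */
.post img { max-width: 100%; }
.post pre { max-width: 100%; overflow-x: auto; }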

If you are constantly hitting other annoying styling issues on the Ubuntu websites, leave me a comment and I’ll see whether I can update the gist with a workaround. However, please report the bugs and issues for concerned maintaining parties as well, so we can stop using these workarounds as soon as possible. Thank you!

on August 26, 2015 01:41 PM
Can you believe Linux is celebrating 24 years already? It was on this day, August 25, back in 1991 when a young Linus Torvalds made his now-legendary announcement on the comp.os.minix newsgroup:

Hello everybody out there using minix -

I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).

I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-)

Linus

PS. Yes – it's free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.

Quite an understated beginning if I ever heard one!

There's some debate in the Linux community as to whether we should be celebrating Linux's birthday today or on October 5 when the first public release was made, but Linus says he is O.K. with you celebrating either one, or both! So as we say happy birthday, let's take a quick look back at the years that have passed and how far we have come.

Via OpenSource.
on August 26, 2015 12:48 PM

This is a mini case study, or rather a report from me, on how difficult it can be to run multiple services from the same server. Especially when they listen on similar ports for different aspects. In this post, I examine the headaches of making two things work on the same server: GitLab (via their Omnibus .deb packages), and Landscape (Canonical’s systems management tool).

I am not an expert on either of the software I listed, but what I do know I will state here.

The Software

Landscape

Many of you have probably heard of Landscape, Canonical’s systems management tool for the Ubuntu operating system. Some of you probably know about how we can deploy Landscape standalone for our own personal use with 10 Virtual and 10 Physical machines managed by Landscape, via Juju, or manually.

Most of my systems/servers are Ubuntu, and I have enough that management by one individual is a headache. In the workplace, we have an entire infrastructure set up for a specific set of applications, all on an Ubuntu base, and a similar headache in managing them all one at a time. For me, discovering Landscape Dedicated Server, the set-it-up-yourself version, makes management FAR easier. Landscape has a dependency on Apache.

GitLab

GitLab is almost like GitHub in a sense. It provides a web interface for working with code, via the Git version control system. GitHub and GitLab are both very useful, but for those of us wanting the same interface within a single organization, or for personal use, and not trusting cloud hosts like GitHub or GitLab’s cloud, we can run it via their Omnibus package, which is GitLab pre-packaged for different distributions (Ubuntu included!).

It includes its own copy of nginx for serving content, and uses Unicorn for the Ruby components. It listens on both port 80 and 8080 initially, per the GitLab configuration file, which rewrites and modifies all the other configurations for GitLab, including both of those servers.

The tricky parts

But then, I ran into a dilemma on my own personal setup of it: What happens if you need Landscape and multiple other sites run from the same server, some parts with SSL, some without? Throw into the mix that I am not an Apache person, and part of the dilemma appears.

1: Port 8080.

There’s a conflict between these two pieces of software. Part of Landscape (I believe the appserver part) and part of GitLab (its Unicorn server, which handles the Ruby-to-nginx interface) both try to bind to port 8080.

2: Conflicting Web Servers on Same Web Ports

Landscape relies on Apache. GitLab relies on its own-shipped nginx. Both are set by default to listen on port 80. Landscape’s Apache config also listens on HTTPS.

These configurations, out of the box by default, have a very evil problem: both try to bind to port 80, so they don’t work together on the same server.

My solution

Firstly, some information. The nginx bundled as part of GitLab is not easily configured for additional sites. It’s not very friendly to be a ‘reverse proxy’ handler. Secondly, I am not an Apache person. Sure, you may be able to get Apache to work as the ‘reverse proxy’, but it is unwieldy for me to do that, as I’m an nginx guy.

These steps also needed to be done with Landscape turned off. (That’s as easy as running sudo lsctl stop.)

1: Solve the Port 8080 conflict

Given that Landscape is something by Canonical, I chose to not mess with it. Instead, we can mess with GitLab to make it bind Unicorn to a different port.

What we have to do with GitLab is tell its Unicorn to listen on a different IP/Port combination. These two lines in the default configuration file control it (the file is located at /etc/gitlab/gitlab.rb in the Omnibus packages):

# unicorn['listen'] = '127.0.0.1'
# unicorn['port'] = 8080

These are commented out by default. The default binding is to bind to 127.0.0.1:8080. We can easily change GitLab’s configuration though, by editing the file, uncommenting both lines, and pointing the port at something other than 8080. We have to uncomment both because otherwise it tries to bind to the specified port, but also to *:8080 (which breaks Landscape’s services). After making those changes, we now run sudo gitlab-ctl reconfigure and it redoes its configurations and makes everything adapt to those changes we just made.
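
For illustration, the edited lines could end up looking something like this (8081 here is just an example of a free port other than 8080; use whatever suits your setup):

unicorn['listen'] = '127.0.0.1'
unicorn['port'] = 8081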

2: Solve the web server problem

As I said above, I’m an nginx guy. I also discovered revising the GitLab nginx server to do this is a painful thing, so I did an ingenious thing.

First up: Apache.

I set the Apache bindports to be something else. In this case, I revised /etc/apache2/ports.conf to be the following:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf

Listen 10080

<IfModule ssl_module>
	Listen 10443
</IfModule>

<IfModule mod_gnutls.c>
	Listen 10443
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

Now, I went into the sites-enabled configuration for Landscape, and also changed the bindports accordingly – the HTTP listener on Port 80 now listens on 10080, and the SSL listener on Port 443 now listens on 10443 instead.

Second: GitLab.

This one’s easier, since we simply edit /etc/gitlab/gitlab.rb, and modify the following lines:

#nginx['listen_addresses'] = ['127.0.0.1']
#nginx['listen_port'] = 80

First, we uncomment the lines. And then, we change the 'listen_port' item to be whatever we want. I chose 20080. Then sudo gitlab-ctl reconfigure will apply those changes.
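
After the edit, that part of /etc/gitlab/gitlab.rb looks roughly like this (with my choice of 20080):

nginx['listen_addresses'] = ['127.0.0.1']
nginx['listen_port'] = 20080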

Finally, a reverse proxy server to handle everything.

Behold, we introduce a third web server: nginx, 1.8.0, from the NGINX Stable PPA.

This works by default because we already changed all the important bindhosts for services. Now the headache: we have to configure this nginx to do what we want.

Here’s a caveat: I prefer to run things behind HTTPS, with SSL. To do this, and to achieve it with multiple domains, I have a few wildcard certs. You’ll have to modify the configurations that I specify to set them up to use YOUR SSL certs. Otherwise, though, the configurations will be identical.

I prefer to use different site configuration files for each site, so we’ll do that. Also note that you will need to put in real values where I say DOMAIN.TLD and such, same for SSL certs and keys.

First, the catch-all for catching other domains NOT hosted on the server, placed in /etc/nginx/sites-available/catchall:

server {
    listen 80 default_server;

    server_name _;

    return 406; # HTTP 406 is "Not Acceptable". 404 is "Not Found", 410 is "Gone", I chose 406.
}

Second, a snippet file with the configuration to be imported in all the later configs, with reverse proxy configurations and proxy-related settings and headers, put into /etc/nginx/snippets/proxy.settings.snippet:


proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_max_temp_file_size 0;

proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;

proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;

Third, the reverse-proxy configuration for Landscape, which is fairly annoying and took me multiple tries to get working right, placed in /etc/nginx/sites-available/landscape_reverseproxy. Don’t forget that Landscape needs SSL for parts of it, so you can’t skip SSL here:


server {
    listen 443 ssl;

    server_name landscape.DOMAIN.TLD;

    ssl_certificate PATH_TO_SSL_CERTIFICATE; ##### PUT REAL VALUES HERE!
    ssl_certificate_key PATH_TO_SSL_CERTIFICATE_KEY; ##### PUT REAL VALUES HERE

    # These are courtesy of https://cipherli.st, minus a few things.
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    include /etc/nginx/snippets/proxy.settings.snippet;

    location / {
        proxy_pass https://127.0.0.1:10443/;
    }

    location /message-system {
        proxy_pass https://127.0.0.1:10443/;
    }
}

server {
    listen 80;
    server_name landscape.DOMAIN.TLD;

    include /etc/nginx/snippets/proxy.settings.snippet;

    location / {
        return 301 https://landscape.DOMAIN.TLD$request_uri;
    }

    location /ping {
        proxy_pass http://127.0.0.1:10080/;
    }
}

Fourth, the reverse-proxy configuration for GitLab, which was not as hard to get working. Remember, I put this behind SSL, so I have SSL configurations here. I’m including comments for what to put if you want to NOT have SSL:

# If you don't want to have the SSL listener, you don't need this first server block
server {
    listen 80;
    server_name gitlab.DOMAIN.TLD;

    # We just send all HTTP traffic over to HTTPS here.
    return 302 https://gitlab.DOMAIN.TLD$request_uri;
}

server {
    listen 443 ssl;
    # If you want to have this listen on HTTP instead of HTTPS,
    # uncomment the below line, and comment out the other listen line.
    #listen 80;
    server_name gitlab.DOMAIN.TLD;

    # If you're not using HTTPS, remove from here to the line saying
    # "Stop SSL Remove" below
    ssl_certificate /etc/ssl/hellnet.io/hellnet.io.chained.pem;
    ssl_certificate_key /etc/ssl/hellnet.io/hellnet.io.key;

    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off; # Requires nginx >= 1.5.9
    # Stop SSL Remove

    include /etc/nginx/snippets/proxy.settings.snippet;

    location / {
        proxy_pass http://127.0.0.1:20080/;
    }
}

System specifications considerations

Landscape is not light on resources. It takes about a gig of RAM to run safely, from what I’ve observed, but 2GB is recommended.

GitLab recommends AT LEAST 2GB of RAM. It uses at least that, so you should have 3GB for this at the minimum.

Running both demands just over 3GB of RAM. You can run it on a 4GB box, but it’s better to have double that space just in case, especially if Landscape and GitLab both get heavy use. I run it on an 8GB converted desktop, which now serves as a Linux server.

on August 26, 2015 12:12 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 79.50 work hours have been dispatched among 7 paid contributors. Their reports are available:

Evolution of the situation

August has seen a small decrease in terms of sponsored hours (71.50 hours per month) because two sponsors did not pay their renewal invoices on time. That said, they reconfirmed their willingness to support us and things should be fixed after the summer. And we should be able to reach our first milestone of funding the equivalent of a half-time position, in particular since a new platinum sponsor might join the project.

DebConf 15 happened this month and Debian LTS was featured in a talk and in a work session. Have a look at the video recordings:

In terms of security updates waiting to be handled, the situation is better than last month: the dla-needed.txt file lists 20 packages awaiting an update (4 less than last month), the list of open vulnerabilities in Squeeze shows about 22 affected packages in total (11 less than last month). The new LTS frontdesk ensures regular triage of CVE reports and the difference between both counts dropped significantly. That’s good!

Thanks to our sponsors

Thanks to Sig-I/O, a new bronze sponsor, which joins our 35 other sponsors.


on August 26, 2015 09:14 AM

By now, you are probably more than a little tired of hearing people tell you how easy it is to do things like build a website or add ecommerce to an existing site. But when do you need a professional?

Does It Affect the Customer Experience?

If the thing you want to do will have an adverse effect on the client experience if it goes horribly wrong, then you will want to bring in a licensed professional. The last thing you want to do is inadvertently do something that will increase customer confusion.

Avoid changing major design elements of your site just because you are bored. If you are not a designer, you may be changing something that is crucial to navigation or discoverability. It is like knocking out a wall without determining if it is a load-bearing wall. If your site enjoys high levels of customer experience, leave changes to a pro.

Does It Affect Security?

The only thing more sacrosanct than customer experience is customer security. At this point in time, it is safe to say that no company ought to be left as the sole proprietor of consumer security. At the very least, there needs to be third-party security auditing to be sure things are as secure as you think they are.

That is the type of thing that is outsourced to IT services from Firewall Technical, and other such companies. Not every company is big enough to justify having its own IT department. But if you handle customer data, you are required to perform due diligence. In some instances, that means outsourcing security matters to a professional.

Is It Going to Void Your Warranty?

There are plenty of changes you can make to your tech and web presence that are inward facing. If you have the time and skills to take on those projects, knock yourself out. But even those projects should be shifted to a professional if there is the danger of voiding your warranty if something goes awry. Even if nothing goes wrong, some upgrades will void your warranty just because you did them.

You don’t know how? Watch a couple of YouTube videos, and have at it. But when it is time to upgrade those slow, unreliable, spinning hard drives to SSDs, check your nerve, and your warranty. While one may be sufficient, the other may not be.

Some people feel ashamed to call for help when it is something they should be able to do themselves. But the real shame is letting pride be the cause of your downfall when help was only a phone call away.

The post You Might Need a Pro for These Tech Upgrades appeared first on deshack.

on August 26, 2015 06:27 AM

For the TL;DR folk who are concerned with the title: It’s not an alternative to wayland or X11. It’s layer that wayland compositors (or other) can use.

As a quick foreword: I’m still a newbie in this field. While I try my best to avoid inaccuracies, there might be a few things I state here that are wrong; feel free to correct me!

Wayland is mainly a windowing protocol. It allows clients to draw windows (or, as the wayland documentation puts it, “surfaces”), and receive input from those surfaces. A wayland server (or “compositor”) has the task of drawing these surfaces, and providing the input to the clients. That is the specification.

However, where does a compositor draw these surfaces to? How does the compositor receive input? It has to provide many backends for various methods of drawing the composited surface. For example, the weston compositor has support for drawing the composited surface using 7 different backends (DRM, Linux Framebuffer, Headless [a fake rendering device], RDP, Raspberry Pi, Wayland, and X11). The amount of work put into making these backends work must be incredible, which is exactly where the problem lies: it’s arguably too much work for a developer to put in if they want to make a new compositor.

That’s not the only issue though. Another big problem is that there is then no standard way to configure the display. Say you wanted a wayland compositor to change the video resolution to 800×600. The only way to do that is to use a compositor-specific extension to the protocol, since the protocol, AFAIK, has no method for changing the video resolution — and rightfully so. Wayland is a windowing protocol, not a display protocol.

My idea is to create a display server that doesn’t handle windowing. It handles display-related things, such as drawing pixels on the screen, changing video mode, etc… Wayland compositors and other programs that require direct access to the screen could then use this server and trust that the server will take care of everything display-related for them.

I believe that this would allow for much simpler code, and add a good deal more power and flexibility.

To give a more graphic description (forgive my horrible diagramming skills):

Current Stack:

wayland_current

Proposed Stack:

 

wayland_new

I didn’t talk about the input server, but it’s the same idea as the display server: have a server dedicated to providing input. Of course, if the display server uses something like SDL as the backend, it may also have to provide the input server, since the SDL library, AFAIK, doesn’t allow a program to access the input of another program.

This is an idea I have toyed around with for some time now (ever since I tried writing my own wayland compositor, in fact! XD), so I’m curious as to what people think of it. I would be more than happy to work with others to implement this.


on August 26, 2015 05:42 AM

August 25, 2015

Packaging

During Akademy I had the great advantage of being in the same room as our (Kubuntu) top packagers (Riddell and Scarlett), who helped me learn to package and make patches for errors on the CI/QA machine. Since then I’ve also had the help of Philip (yofel) and Clive (clivejo) in the #kubuntu-devel IRC room. I’ve packaged digikam and recently kdenlive (both need testing in my PPA :) ), as well as getting a new Kubuntu Settings package out there too (ppa), which overlays the slideshow in Muon Discover to highlight some top KDE applications.

Artwork

I also worked with Andrew from the VDG on a Breeze High Contrast color scheme which made it in for Plasma 5.4 before the freeze!

  • commit: https://quickgit.kde.org/?p=breeze.git&a=commit&h=3ebb6ed33fb6522b0f5ca855a9fbd2b79c165e65

 

I can’t thank the Ubuntu Community enough for funding my trip to Akademy this year! THANK YOU!

on August 25, 2015 11:39 PM

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150825 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kt-meeting.txt


Status: CVE’s

The current CVE status can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kernel-cves.html


Status: Wily Development Kernel

We have rebased our Wily master-next branch to the latest upstream
v4.2-rc8 and uploaded to our ~canonical-kernel-team PPA. The fglrx DKMS
package is still failing to build with this latest kernel. We are
actively investigating to get this resolved.
—–
Important upcoming dates:

  • https://wiki.ubuntu.com/WilyWerewolf/ReleaseSchedule
    Thurs Aug 27 – Beta 1 (~2 days away)
    Thurs Sep 24 – Final Beta (~4 weeks away)
    Thurs Oct 8 – Kernel Freeze (~6 weeks away)
    Thurs Oct 15 – Final Freeze (~7 weeks away)
    Thurs Oct 22 – 15.10 Release (~8 weeks away)


Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/Utopic/Vivid

Status for the main kernels, until today:

  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • lts-Utopic – Verification & Testing
  • Vivid – Verification & Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html
    For SRUs, SRU report is a good source of information:
  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 16-Aug through 05-Sep
    ====================================================================
    14-Aug Last day for kernel commits for this cycle
    15-Aug – 22-Aug Kernel prep week.
    23-Aug – 29-Aug Bug verification & Regression testing.
    30-Aug – 05-Sep Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

on August 25, 2015 07:53 PM

Visiting FrOSCon …

Sujeevan Vijayakumaran

Last weekend, on the 22nd and 23rd of August, FrOSCon took place in St. Augustin (near Bonn) in Germany. It is one of the bigger open source conferences, and it was my first visit. Short summary: it was great! There were many talks; too bad many of them ran at the same time, but luckily all talks were recorded.

Personally I gave two talks: one about Snappy Ubuntu Core and one about Ubuntu Phone. You can watch both talks here and here; both are in German. Both talks had many attendees. (Here is a small photo!)

On Saturday I didn't attend any more talks. In the evening, after the talks, there was a free barbecue for everybody! Also, entrance to the conference was completely free this year, which I strongly support.

On Sunday I went to the talk by Benjamin Mako Hill about „Access Without Empowerment“, which was the only English talk I attended. I also attended a few more talks; if you are interested in watching the other talks, you can have a look here.

The rest of the time I mostly spent talking to people at the Ubuntu booth, showing and presenting my Ubuntu Phones. Besides that we had a small Taskwarrior meetup with Dirk Deimeke, Wim Schürmann and Lynoure Braakman, which was quite fun and interesting ;).

I really like visiting different open source conferences, mostly to learn new stuff and talk to old and new friends. This time I've met many „old“ friends and also made some new ones. Surprisingly, I had the chance to meet and talk to Niklas Wenzel from the Ubuntu community, who is involved in the development of different apps and features of Ubuntu Phone (and he's way younger than I would have expected), and also Christian Dywan from Canonical.

I'm really looking forward to the next conferences, which will be Ubucon in Berlin and OpenRheinRuhr in Oberhausen later this year!

on August 25, 2015 06:50 PM

Lubuntu 15.10 beta 1

Lubuntu Blog

Beta 1 is now available for testing, please help test it. New to testing? Head over to the wiki for all the information and background you need, along with contact points.


Also, there's a new Facebook group named LubuntuQA for testing new Lubuntu ISOs, as well as bug triage. You can find it here.

And last, but not least, a new ISO made by Julien Lavergne with the LXQt desktop integrated, just for testing the Lubuntu Next evolution, is available here.
on August 25, 2015 05:07 PM

Welcome to the Ubuntu Weekly Newsletter. This is issue #431 for the week August 17 – 23, 2015, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Chris Guiver
  • Jim Connett
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on August 25, 2015 12:53 AM

August 24, 2015



A week ago a dozen cool guys, who happen to be GStreamer developers, met at Montpellier for the GStreamer Summer Hackfest 2015. We got together to work for 3 days, over the weekend, without a fixed agenda. The hacking venue coworkin' Montpellier was provided by Edward Hervey (bilboed) and most meals were provided by GStreamer.

With the opportunity to work in the same room and enjoy the lovely city of Montpellier, developers sped up patch reviews and fixes by being able to easily discuss them with colleagues through the high-bandwidth, low-latency face-to-face protocol. They also took the chance to discuss major features and try to settle problems that had been waiting for design decisions for a long time in the community. This is a non-exhaustive list of work done at the event:

  • Performance improvement for Caps negotiations: Caps negotiation is part of the GStreamer applications startup and was vastly optimized. Initial tests show it is taking 49.6% less time to happen.


  • nvenc element: A new nvenc element for recent NVIDIA GPUs was released. It currently implements h264.


  • 1.6 release: At the hackfest a few blocker issues were revisited to get the project ready for releasing version 1.6. This will be a stable release. The release candidate, version 1.5.90, came out right after the hackfest.


  • decodebin3 design proposal/discussion: A new version of the playback stack was proposed and discussed. It should maintain the same features as the current version, but cover use cases needed by targets with restricted resources, such as embedded devices (TVs and mobile, for example), while providing a stream selection API for applications to use. This is a very important feature for Tizen to support more hardware-enabled scenarios in its devices.


  • Moving to Phabricator: The community started experimenting with the recently created Phabricator instance for GStreamer's bug and code tracking. Settings and scripts still need tweaking before a full transition from Bugzilla can be made.


  • Improvements on GtkGLSink: The sink had flickering and scaling noise among some other problems. Most are now fixed.


  • libav decoders direct rendering: Direct rendering allows decoders to write their output directly to the screen, increasing performance by reducing the number of memory copies done. The libav video decoders had their direct rendering redone for the new libav API, and it is now enabled again.


  • Others: improvements to RTP payloaders and depayloaders of different formats, discussions about how to provide more documentation, bug fixes and more.

Without any major core design decision pending, this hackfest allowed the attendees to work on the different areas they wanted to focus on, and it was very productive on many fronts. With the GStreamer Conference around the corner, the organizing committee also discussed which talks should be accepted and other organizational details.


A huge gratitude note to our host, Edward Hervey (shown below). The venue was very comfortable, the fridge always stocked and the city a lot of fun!


Lively discussion about GST Streams and decodebin3


If you missed the notes from the previous hackfest, read them here.
on August 24, 2015 04:03 PM

Publishing Vanilla

Canonical Design Team

We’ve got a new CSS framework at Canonical, named Vanilla. My colleague Ant has a great write-up introducing Vanilla. Essentially it’s a CSS microframework powered by Sass. The build process consists of two parts: an open source build and a private build.

Open Source Build

While there are inevitably components that need to be kept private (keys, tokens, etc.), being Canonical we want to keep much of the build, in addition to the code, in the open. We wanted the build to be as automated and as close to CI/CD principles as possible. Here’s what happens:

Committing to our github repository kicks off a travis build that runs gulp tests, which include sass-lint. We also use david-dm.org to make sure our npm dependencies are up to date. All of these have nice badges we can link to right from our github page, so the first thing people see is the health of our project. I really like this: it keeps us honest, and informs the community.
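
Roughly speaking, the public part of the build boils down to something like the following (the gulp task name here is illustrative shorthand, not necessarily the exact task used in the Vanilla repository):

npm install      # install the dev dependencies declared in package.json
gulp test        # run the gulp checks, which include sass-lint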

Not everything can be done with travis, however, as publishing Vanilla to npm and updating our project page and demo site require some private credentials. For the confidential build, we use Jenkins (formerly Hudson), a Java-based build management system.

Private Build with Jenkins

Our Jenkins build does a few things:

  1. Increment the package.json version number
  2. npm publish (package)
  3. Build Sass with npm install
  4. Upload css to our assets server
  5. Update Sassdoc
  6. Update demo site with new CSS

Robin put this functionality together in a neat bash script: publish.sh.

We use this script in a Jenkins build that we kick off with a few parameters (point, minor and major) to indicate how the version should be bumped in package.json. This gives our devs push-button releases on the fly, with the same build, from bugfixes all the way up to stable releases (1.0.0).
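
For illustration, a parameterised job along these lines could be sketched as the following shell steps; this is not the contents of publish.sh, and the gulp task names and upload target are placeholders:

set -e
BUMP=${1:-point}              # point | minor | major, passed in by Jenkins

case "$BUMP" in               # 1. bump the version in package.json
  point) npm version patch ;;
  minor) npm version minor ;;
  major) npm version major ;;
esac

npm publish                   # 2. publish the new version to npm
npm install && gulp sass      # 3. build the CSS from Sass (task name assumed)
scp build/*.css assets@example.com:/srv/assets/vanilla/   # 4. hypothetical upload target
gulp sassdoc                  # 5. regenerate the Sassdoc (task name assumed)
gulp deploy-demo              # 6. update the demo site (task name assumed)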

After less than 30 seconds, our demo site, which showcases framework elements and their usage, is updated. This demo is styled with the latest version of Vanilla, and also serves as documentation and a test of the CSS. We take advantage of github’s html publishing feature, Github Pages. Anyone can grab – or even hotlink – the files on our release page.

The Future

It’d be nice for the regression test (which we currently just eyeball) to be automated, perhaps with a visual diff tool such as PhantomCSS or a bespoke solution with Selenium.

Wrap-up

Vanilla is ready to hack on, go get it here and tell us what you think! (And yes, you can get it in colours other than Ubuntu Orange)

on August 24, 2015 01:35 PM
Ubuntu 15.10 is coming up soon, and what better way to celebrate a new release than with beautiful new content to go with it? The Ubuntu Free Culture Showcase is a way to celebrate the Free Culture movement, where talented artists across the globe create media and release it under licenses that encourage sharing and adaptation. We're looking for content which shows off the skill and talent of these amazing artists and will greet Ubuntu 15.10 users. We announced the showcase last week, and now we are accepting submissions at the following groups: For more information, please visit the Ubuntu Free Culture Showcase page on the Ubuntu wiki.
on August 24, 2015 03:40 AM

A couple of weeks ago, Hulu made some changes to their video playback system to incorporate Adobe Flash DRM technology. Unfortunately, Adobe stopped supporting Flash on Linux several years ago, and Adobe’s DRM requires HAL, which was likewise obsoleted about 4 years ago and dropped from Ubuntu in 13.10. The net result is that Hulu no longer functions on Ubuntu.

While Hulu began detecting Linux systems and displaying a link to Adobe’s support page when playback failed, and the Adobe site correctly identifies the lack of HAL support as the problem, the instructions given no longer function because HAL is no longer provided by Ubuntu.

Fortunately, Michael Blennerhassett has maintained a Personal Package Archive which rebuilds HAL so that it can be installed on Ubuntu. Adding this PPA and then installing the “hal” package will allow you to play Hulu content once again.

To do this, first open a Terminal window by searching for it in the Dash or by pressing Ctrl+Alt+T.

Next, type the following command at the command line and press Enter:

sudo add-apt-repository ppa:mjblenner/ppa-hal

You will be prompted for your password and then you will see a message from the PPA maintainer. Press Enter, and the PPA will be added to Ubuntu’s list of software sources. Next, have Ubuntu refresh its list of available software, which will now include this PPA, by typing the following and pressing Enter:

sudo apt update

Once this update finishes, you can then install HAL support on your computer by searching for “hal” in the Ubuntu Software Center and installing the “Hardware Abstraction Layer” software, or by typing the following command and pressing Enter:

sudo apt install hal

and confirming the installation when prompted by pressing Enter.

book cover

I explain more about how to install software on the command line in Chapter 5 and how to use PPAs in Chapter 6 of my upcoming book, Beginning Ubuntu for Windows and Mac Users, coming this October from Apress. This book was designed to help Windows and Mac users quickly and easily become productive on Ubuntu so they can get work done immediately, while providing a foundation for further learning and exploring once they are comfortable.

on August 24, 2015 02:13 AM

August 23, 2015

Viagra is a very specialized drug, and its use should not be taken lightly. Not taking Viagra in a responsible way can seriously damage your health, especially if you have problems such as a heart condition, or suffer from high blood pressure. The problem with Viagra is that it is now being sold under many different names on the Internet. This is called generic medication and is often produced in places like India or China. Is it safe? No, it isn’t always safe. Maria, who works for London escorts, says that her father bought some. He was embarrassed about his medical condition, and did not want to speak to his doctor. But as so many London escorts know, this is not a drug to be played around with at all.

Maria has worked for charlotte action escorts services for about two years. During that time she has always known that her father has suffered from a heart condition. The condition reduces his circulation quite severely, and makes it difficult for him to maintain a good erection. Most London escorts know that this can happen to men who have circulatory problems, and Viagra is one of the many solutions available. But, you should never take any drug without having spoken to your doctor first.

Maria’s dad took Viagra which he had bought off the Internet and ended up having a heart attack. She says it is complicated, but the Viagra contraindicated with the medication that he was already on. That means that it caused a problem, and the two drugs mixed together caused her father to have a heart attack. Maria had to take two weeks off from London escorts services, and go to look after her mom whilst her dad was in hospital. It was a worrying time for my mom, she says, so I simply had to take time off from London escorts. It was the only way to cope.

In the end, Maria’s dad recovered and Maria was able to return to her job at London escorts services. It was scary, she says, and it taught me a valuable lesson. You should never take drugs without knowing what they can do, and I am sure that many London escorts appreciate that Viagra should not be played around with, just like other medications. The fact is, says Maria, my father could have died. Of course, my mom and I would both have been devastated.

The Internet is full of sexual performance enhancing drugs and you should at all times be careful.

There are some safe alternatives out there such as the amino acids, and herbal alternatives. However, London escorts would like you all to know that the herb Ginseng can be dangerous as well. It can “knock out” some heart medications and raise your blood pressure. This is another online sexual enhancement drug which London escorts warn you to stay away from at all times. If, you do have a concern about your performance, it is always best to see your local GP.

on August 23, 2015 01:00 PM

August 21, 2015

Clock App Update: August 2015

Nekhelesh Ramananthan

We have been working on a new clock app update with lots of goodies :-) I thought I would summarize the release briefly. Huge props to Bartosz Kosiorek for helping out with this release and coordinating with the Canonical designers on the stopwatch & timer designs.

General Improvements

We focused on many parts of the clock app for this release ranging from the world-clock feature, to the alarms and stopwatch.

  • Transitioned to the new 15.04 SDK ListItems which effectively results in a lot of custom code being removed and maintaining consistency with other apps. LP: #1426550
  • User added world cities previously were not translated if the user changed the phone language. This has been fixed. LP: #1477492
  • New navigation structure due to the introduction of Stopwatch
  • Replaced a few hard coded icons with system icons. LP: #1476580
  • Fixed not being able to add world cities with apostrophe in their names (database limitation). LP: #1473074

Stopwatch

This, along with Timer, is the single most requested feature since the clock app reboot, and I am thrilled to see it finally land in this update. It sports a couple of usability tweaks, like preventing the screen from dimming while the stopwatch is running, and keeping the stopwatch running in the background regardless of whether the clock app is closed or the phone is switched off. The UI is clean and simple. Expect some more changes to this in the future. We reused a lot of code from Michael Zanetti's Stopwatch App.

stopwatch-image

Alarms

In this area, we have fixed a good number of small bugs that overall improve the alarms experience. The highlight of course is the support for custom alarm sounds. Yes! You can now import music using content hub and set that as your alarm sound to wake you in the morning.

custom-sound-image

Other bugs fixed include,

  • Changed the default alarm sound to something a bit stronger. LP: #1354370
  • Fixed the confirmation behaviour being confusing in the alarm page header. LP: #1408015
  • Made the alarm times shown in the alarm page bigger and bolder. LP: #1365428
  • Adding/Deleting alarms will move the alarm list items up/down using a nice smooth animation
  • Alternate between alarm frequency and alarm ETA every 5 seconds using a fade animation. LP: #1466000
  • Fixed the alarm time being incorrectly set if the current time is a multiple of 5. LP: #1484926

This pretty much sums up the upcoming release. We will wait a few days to ensure it is fully translated and then tested by QA before releasing the update next week.

on August 21, 2015 10:39 PM

DebConf15

Rhonda D'Vine

I tried to start to write this blog entry like I usually do: Type along what goes through my mind and see where I'm heading. This won't work out right now for various reasons, mostly because there is so much going on that I don't have the time to finish that in a reasonable time and I want to publish this today still. So please excuse me for being way more brief than I usually am, and hopefully I'll find the time to expand some things when asked or come back to that later.

Part of the reason of me being short on time is different stuff going on in my private life which requires additional attention. A small part of this is also something that I hinted in a former blog entry: I switched my job in June. I really was looking forward to this. I made them aware of what the name Rhonda means to me and it's definitely extremely nice to be addressed with female pronouns at work. And also I'm back in a system administration job which means there is an interest overlap with my work on Debian, so a win-win situation on sooo many levels!

I've been at DebConf15 for almost two weeks now. On my way here I was complimented on my outfit by a security guard at the Vienna airport, which surprised me but definitely made my day. I was wearing one of these baggy hippie pants (which was sent to me by a fine lady I met at MiniDebConf Bucharest) but pulled up the leg parts to the knees so it could be perceived as a skirt instead. Since I came here I have been pretty busy with taking care of DCschedule bot adjustments (like changing the topic and tweeting from @DebConf at the start of the talks), and helping out with the video team when I noticed there was a lack of people (which is a hint that you might want to help with the video team in the future too; it's important for remote people, but also for yourself, because you can't attend multiple sessions at the same time).

And I have to repeat myself: this is the place I feel at home amongst my extended family, even though it still is sometimes hard for me to speak up in certain groups. I believe, though, that it's more an issue of certain individuals taking up a lot of space in discussions without giving (more shy) people in the group the space to also join in. I guess it might be time for a session on dominant talking patterns for next year and how to work against them. I absolutely enjoyed such a session during last year's FemCamp in Vienna, which set the tone for the rest of the conference, and it was simply great.

And then there was the DebConf Poetry Night. I'm kinda disappointed with the outcome this year. It wasn't able to attract as many people as anticipated, which I to some degree put down to me not making people aware of it well enough, and to it overlapping with a really great band playing at the same time. And even though the place where we did it sounded like a good idea at first, it didn't have enough light for someone to read from a book (though that was solved with smartphone lights). I know that most people did enjoy it, so it was good to do it, but I'm still a fair bit disappointed with the outcome and will try to improve on that for next year. :)

With all this going on there unfortunately wasn't as much time as I would have liked to spend with people I haven't seen for a long time, or new people I haven't met yet. Given that this year's DebConf set a new high in attendance (526 people being here at certain times during the two weeks, and just today someone new arrived too, so that doesn't even have to be the final number), it is a bit painful to have picked up so many tasks and thus lost some chances to socialize as much as I would have liked to.

So, if you are still here and have the feeling we should have talked more, please look for me. As Bdale pointed out correctly in the New to DebConf BoF (paraphrased): when you see us DebConf old-timers speaking to someone else and you feel like you don't want to disturb, please do disturb and speak to us. I always enjoy getting to know new people. This, for me, is always one of the important aspects of DebConf.

Also, I am very, very happy to have received feedback from different people about both my tweets and my blog; thank you a lot for that. It is really motivating to keep going.

So, lets enjoy the last few hours of DebConf!

Another last side note: while my old name in the Debian LDAP did surface as some wrongly displayed names on the DebConf website, like for speakers or volunteers, it was clear to me that having it exposed through SSO.debian.org isn't really something I appreciate. So I took the chance and spoke to Luca from the DSA team right here today, and ... got it fixed. I love it! Next step is getting my gpg key exchanged; RT ticket is coming up. :)


on August 21, 2015 09:00 PM

S08E24 – Epic Movie - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Twenty-four of Season Eight of the Ubuntu Podcast! Laura Cowen, Martin Wimpress, and Alan Pope are back with Stuart Langridge!

In this week’s show:

  • We chat about why Laura doesn’t like webapps on the Ubuntu Phone and we get a more qualified view from app developer Stuart.
  • We go over your feedback, including Ubuntu Phone notes from Pete Cliff.
  • We have a command line love, Comcast from Jorge Castro.
  • We chat about getting a Picade, playing with Jasper, and controlling a Nexus 6 with Pebble Time whilst listening to podcasts on the move.

PiCade

That’s all for this week, please send your comments and suggestions to: show@ubuntupodcast.org
Join us on IRC in #ubuntu-podcast on Freenode
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on August 21, 2015 05:35 PM
Akademy is long over, you say? Yes, but I've been traveling almost constantly since flying home, since my husband is on the home leg of his long hike of the Pacific Crest Trail, which he is chronicling at http://bobofwashington.blogspot.com. While driving about the state to meet him, I've not been online much, therefore unable to create blogposts from my thoughts and impressions written during and right after Akademy. Fortunately, I did scrawl some thoughts which I'll post over the next week or so.

Please visit https://akademy.kde.org/2015 for more information about Akademy. Click the photo for a larger version and add names if someone is left unlabeled.

First: A Coruña, where Akademy 2015 met, is beautiful! Galicia, the region of Spain it lies in, is not only beautiful but also serves delicious food, especially if you love fresh seafood.

The local team, working in conjunction with the e.V. Board and the amazing Kenny Duffus and Kenny Coyle, created a wonderful atmosphere in which to absorb, think, and work. One of the best bits this year was the Rialta, where most of us lived during Akademy. Scarlett and I flew in early, to get over our jetlag and have a day to see the city.


The journey from Seattle began very early Monday morning, and Scarlett set out even earlier the previous day via Amtrak train to Seattle. Our connections and flights were very long, but uneventful. We caught the airport bus and then the city bus 24 and walked to the Rialta, arriving about dinner-time Tuesday. Although we tried to avoid sleeping early, it was impossible.

Waking the next morning at 4am with no-one about, and no coffee available was a bit painful! Breakfast was not served until 8am, and we were *not* late! Rialta breakfasts are adequate; the coffee less so. I found that adding a bit of cocoa made it more drinkable, but some days bought cafe con leche from the bar instead. That small bar was also the source of cervesa (beer) and a few whiskys as well.

One of the beautiful things about the Rialta was their free buses for residents. Some were called Touristic, and followed a long loop throughout the city. You could get off at any of the stops and get back on later after sight-seeing, eating or shopping. So we rode a loop to figure out what we wanted to see, which was part of the sea-side and the old town. Scarlett and I both took lots of photos of the beautiful bay and some of the port. After visiting Picasso's art college, we headed into the old city. On the way in, we saw an archaeological dig of a Roman site, I guess one of many. This one was behind the Military Museum. As we walked further into the city, we heard music from Game of Thrones, and saw a giant round tent covered in medieval scenes. As we walked around the square trying to figure out what was happening, we saw lancers on large horses, dancing about waiting to enter the ring!


Some of the Akademy attendees were inside the tent watching the jousts, we later found out. I stopped in to the tourist info office to find out why the tent was there, and found out there was a week-long celebration all through the old city. It was delightful to turn the corner and see a herd of geese, or medieval handicrafts, or.... beer! A small cold beer from a beer barrel with a medieval monk serving us was most welcome as we wandered close to Domus. The Rialta bus was a great way "home."

A day of play left us ready to work as the rest of the attendees began to arrive.
Oh by the way: give big! Randa Meetings will soon be happening, and we need your help!


on August 21, 2015 05:00 AM

A few weeks ago I discovered wallabag, a free and open source project born as an alternative to “read later” products such as Instapaper, Pocket and others. I don't know about you, but it often happens to me that I find interesting URLs on the world wide web and sometimes don't have the time to read the full story or article; I usually leave it open in a browser tab, but pending reads easily get lost when I “save” them that way.

So I wanted to give wallabag a try, since it lets me run my own server of news and articles from the web to read later; it also has add-ons for Firefox and Chrome, and apps for Android, iOS, Windows Phone and Firefox OS (downloads)…

…and since I'm currently learning a bit about containers, and Docker specifically, I decided to create my own container to practise and learn a bit more. So in this post I'll also talk a little about Docker 😀

The first thing I did was read the installation manual for Ubuntu, in order to turn that series of steps into a Dockerfile so I could build my own image/container.

FROM ubuntu:latest
MAINTAINER Hollman Enciso <hollman.enciso en gmail>
RUN apt-get update && apt-get -y dist-upgrade

#Install the necessary packages
RUN apt-get -y install apache2 php5 php5-gd php5-imap php5-mcrypt php5-memcached php5-mysql mysql-client php5-curl php5-tidy php5-sqlite curl git sqlite3

#This will install the required dependency Twig via Composer
RUN curl -sS https://getcomposer.org/installer | php
RUN  mv composer.phar /usr/local/bin/composer
#RUN cd /var/www/html/ && /usr/local/bin/composer install
RUN rm -rf /var/www/html/*

#cloning the project
RUN git clone https://github.com/wallabag/wallabag.git /var/www/html/
RUN chown -R www-data: /var/www/html/

#setting the document root volume
VOLUME ["/var/www/html/"]

#Set some apache variables
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2

RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf

#Expose default apache port
EXPOSE 80

#run apache in the foreground so the container keeps running
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

Then we build the image:

hollman@prime ~/Docker/wallabag $ docker build -t wallabag/v1 .

and when it finishes we can see that it is ready 😀

hollman@prime ~ $ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
wallabag/v1 latest 35316456d694 6 minutes ago 461.5 MB

Then we launch it and check that it is running:

hollman@prime ~/Docker $ docker run -d -p 80:80 hollman/wallabag:latest
16b44a184bd58ae181a36d38c50ccff6d408b54a74b543bef81d2231bb0175ca
hollman@prime ~/Docker $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
16b44a184bd5 hollman/wallabag:latest “/usr/sbin/apache2 – 6 seconds ago Up 5 seconds 0.0.0.0:80->80/tcp serene_poitras

And we open our browser to complete the installation:

Wallabag_installation

In this step we just have to enter the installation details, such as the database engine to use, which can be SQLite or MySQL, and finally the admin account details.

At this point you can use the same container with sqlite3 or, if you prefer, use another container such as MySQL for the database.

We enter the details and finish the installation.

In my case I only installed and configured the Firefox add-on to add my pending reads so that they are saved in my wallabag; I add them simply by clicking the icon.

wallabag_firefox_addon

Afterwards, from home or from anywhere else where I have put the server (public or private), I can keep reading, tag, share, rate, delete, or keep things as a repository of content and material I can use later.

index_wallabag

Finally, in case anyone wants to try it, I have uploaded it to my Docker Hub. It isn't finished yet; the idea is that when you launch the container it will create the database and, if you wish, deploy a MySQL container, so that it is completely “out of the box”. For those who already have Docker on their machines, a docker pull hollman/wallabag will be enough to download the image.
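
Putting the commands from this post together, trying it out boils down to something like this (the image name and port mapping are the ones shown above):

docker pull hollman/wallabag                      # download the image from Docker Hub
docker run -d -p 80:80 hollman/wallabag:latest    # run it, exposing wallabag on port 80
# then point a browser at http://localhost/ to finish the installation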

on August 21, 2015 01:06 AM

August 20, 2015

Recently I started writing a column for Forbes.

My latest column covers the rise of the maker movement and in it I interviewed Jamie Hyneman from Mythbusters and Dale Dougherty from Make Magazine.

Go and give it a read right here.

on August 20, 2015 04:48 PM

It is this time of the year again. In but a few weeks some 50 KDE contributors are going to take over the village of Randa in the Swiss Alps to work on making KDE software yet more awesome.

So if you would kindly click on this fancy image here to donate a penny or two, I think you will make people around the world eternally grateful:

Fundraiser-Banner-2015

Not convinced yet? Oh my.

The KDE Sprints in Randa are an annual event where different parts of the KDE community meet in the village of Randa in Switzerland to focus their minds and spirit on making KDE software better, faster, more robust, more secure, and of course better looking as well.

Sprints are a big part of KDE development, they enable contributors to meet in person and focus the entire thrust of their team on pushing their project and the KDE community as a whole forward. KDE software is primarily built by a community of volunteers and as such they require support to finance these sprints to leap forward in development and bring innovation to software.

If you have not yet perused the Randa 2015 page, you definitely should. You will probably find that the list of main projects for this year not only sound very interesting, but will in all likelihood be relevant to you. If you own a smartphone or tablet you can benefit from KDEConnect which makes your mobile device talk to your computer (by means of magic no less). Or perhaps you’d rather have the opportunity to run Plasma on your mobile device? General investments in touch-support and enablement are going to go a long way to achieve that. Do you like taking beautiful photographs? Improvements to digiKam will make it even easier to manage and organize your exploits.
These are but a few things the KDE contributors are going to focus on in Randa. All in all there should be something for everyone to get behind and support.

KDE is a diverse community with activities in many different areas in and around software development. It stands as a beacon of light in a world where everyone tries to gobble up as much information about their users as possible, lock users’ data in proprietary formats from which it cannot ever be retrieved again, or quite simply spy on people.

Be a benefactor of freedom. Support Randa 2015.

on August 20, 2015 11:42 AM
Ubuntu Feature Freeze is always such an exciting time.  New stable (2.0.7) and development (2.1.0) versions of MenuLibre are now available.  Several bugs have been fixed and the new development release begins to show a modern spin on the UI. What’s New? MenuLibre 2.0.7 is a bugfix release and 2.1.0 builds on top of that … Continue reading MenuLibre 2.0.7 and 2.1.0 Released
on August 20, 2015 10:51 AM
The first release in the Catfish 1.3 development cycle is now out!  This development cycle is intended to further refine the user interface and improve search speed and responsiveness.  1.3.0 is all about style. What’s New? The toolbar has been replaced with a Gtk HeaderBar.  This better utilizes screen space and makes Catfish more consistent … Continue reading Catfish 1.3.0 Released
on August 20, 2015 04:12 AM

On an Ubuntu phone, apps are1 isolated from one another; each app has its own little folder where its files go, and no other app can intrude. This, obviously, requires some way to exchange files between apps, because frankly there are times when my epub ebook is in my file downloader app and I need it in my ebook reader app. And so on.

To deal with this, Ubuntu provides the Content Hub: a way for an app to say “I need a photo” and all the other apps on your phone which have photos to say “I have photos! Ask me! Me!”.

This is, at a high level, the right thing to do. If my app wants to use a picture of you as an avatar, it should not be able to snarf your whole photo gallery and do what it wants with it. More troubling yet, adding some new social network app should not give it access to your whole address book so that it can hassle your friends to join, or worse just snaffle that information and store it away on its own server for future reference. So when some new app needs a photo of you to be an avatar, it asks the content hub; you, the punter, choose an app to provide that photo, and then a photo from within that app, and our avatar demander gets that photo, and none of the pictures of your kids or your holiday or whatever you take photos of. This is, big picture2 a good idea.

Sadly, the content hub is spectacularly under-documented, so actually using it in your Ubuntu apps is jolly hard work. However, with an assist3 from Michael Zanetti, I now understand how to offer files you have to others via the content hub. So I come to explain this to you.

First, you need permission to access the content hub at all. So, in your appname.apparmor file4, add content_exchange_source.5 This tells Ubuntu that you’re prepared to provide files for others (you are a “source” of data). You then need to, also in manifest.json, configure what you’re allowed to do with the content hub; add a hooks.content-hub key which names a file (myappname.content-hub or whatever you prefer). That file that you just named needs to also be json, and looks something like {"source": ["all"]}, which dictates which sorts of files you want to be a source for.6 Once you’ve done all this, you’re allowed to use the content hub. So now we explore how.

In your QML app, you need to add a ContentPeerPicker. This is a normal QML Item; specifically, showing it to the user is your responsibility. So you might want to drop it in a Dialog, or a Page, or you might just put it at top level with visible: false and then show it when appropriate (such as when your user taps a file or image or whatever that they want to open in another app).

Your ContentPeerPicker should look, at minimum, like this:

// Assumes "import Ubuntu.Content 1.1" (or the version your SDK ships) at the
// top of the file, and a ContentItem { id: exportItem } declared elsewhere.
ContentPeerPicker {
    id: cpp                                  // referenced below to hide the picker
    handler: ContentHandler.Destination      // we are a source; pick a destination app
    contentType: ContentType.All             // we can provide any type of file
    onPeerSelected: {
        var transfer = peer.request();       // open a transfer to the chosen app
        var items = new Array();
        exportItem.url = /* whatever the URL of the file you want to share is */;
        items.push(exportItem);
        transfer.items = items;              // a list of ContentItems to hand over
        transfer.state = ContentTransfer.Charged;  // "charged" starts the transfer
        cpp.visible = false;
    }
    onCancelPressed: cpp.visible = false;
}

The important parts here are handler: ContentHandler.Destination (which means “I am a source for files which need to be opened in some other app”), and contentType: ContentType.All (which means “I am a source for all types of file”).7 After that8 show it to the user somehow and connect to its onPeerSelected method. When the user picks some other app to export to from this new Item, onPeerSelected will be called; when the callback onPeerSelected is called, the peer property is valid. Get a transfer object to this peer: var transfer = peer.request();, and then you need to fill in transfer.items. This is a JavaScript list of ContentItems; specifically, define ContentItem { id: exportItem } in your app, and then make a “list” of one item with var items = new Array(); exportItem.url = PATH_TO_FILE_YOU_ARE_EXPORTING; items.push(exportItem); transfer.items = items;.9 After that, set transfer.state = ContentTransfer.Charged and your transfer begins; you can hide the ContentPeerPicker by setting cpp.visible=false at this point.

And that’s how to export files over the Content Hub so that your app can make files available to others. There’s a second half of this (other apps export the files; your app wants to retrieve them, so let’s say they’re an app which needs a photo, and you’re an app with photos), which I’ll come to in a future blog post.

As you can see from the large number of footnotes10 there are a number of caveats with this whole process, in particular that a bunch of it isn’t documented. It will, I’m sure, over time, get better. Meanwhile, the above gives you the basics. Have fun.

  1. correctly
  2. ha!
  3. a bit more than that, if I’m honest
  4. or whatever you called it; hooks.$APPNAME.apparmor in manifest.json
  5. This is more confusing than it should be. If you’re using Ubuntu SDK as your editor, then clicking the big “+” button will load a list of possible apparmor permissions. Don’t double-click a permission; this will just show you what it means in code terms, rather irrelevantly. Instead, choose your permission (content_exchange_source in this case) and then say Add
  6. you can also do {"source": ["pictures"]}. There may be other things you can write in there instead of "all" or "pictures", but the documentation is surlily silent on such things.
  7. You can see all the possible content types in the Ubuntu SDK ContentType documentation (https://developer.ubuntu.com/api/apps/qml/sdk-15.04/Ubuntu.Content.ContentType/), with misleading typos and all
  8. as mzanetti excellently described it
  9. You can transfer more than one item, here.
  10. not this one, though
on August 20, 2015 01:19 AM

Support Randa 2015

Valorie Zimmerman



Weeeee! KDE is sponsoring Randa Meetings again, this time with touch. And you can help making KDE technologies even better! This exciting story in the Dot this week, https://dot.kde.org/2015/08/16/you-can-help-making-kde-technologies-even-better caught not only my attention, but my pocketbook as well.

Yes, I donated, although I'm not going this time. Why? Because it is important, because I want Plasma Mobile to succeed, because I want my friend Scarlett* to have a great time, and because I want ALL the devels attending to have enough to eat! Just kidding, they can live on Swiss chocolate and cheese. No, really: the funds are needed for KDE software development.

So dig deep, my friends, and help out. https://www.kde.org/fundraisers/kdesprints2015/

*(And somebody hire Scarlett to make KDE software!)
on August 20, 2015 12:36 AM