July 29, 2015

I wanted to share a unique opportunity to get involved with Ubuntu and testing. Last cycle, as part of a datacenter shuffle, the automated installer testing that was occurring for Ubuntu flavors stopped running. The images were being tested automatically via a series of autopilot tests, written originally by the community (thanks Dan et al.!). These tests are vital in helping reduce the burden of manual testing required for images, by running through the base manual test cases for each image automatically each day.

When it was noticed that the tests hadn't run this cycle, wxl from Lubuntu duly filed an RT ticket to find out what had happened. Unfortunately, it seems the CI team within Canonical can no longer run these tests. The good news, however, is that we as a community can run them ourselves instead.

To start exploring the idea of self-hosting and running the tests, I initially asked Daniel Chapman to take a look. Given the impending landing of Dekko in the default Ubuntu image, Daniel certainly has his hands full. As such, Daniel Kessel has offered to help out and has begun some initial investigations into the tests and server needs. A big thanks to Daniel and Daniel!

But they need your help! The autopilot tests for ubiquity have a few bugs that need solving. A server running Jenkins needs to be set up, installed, and maintained. Finally, we need to think about reporting the results to places like the isotracker. For more information, you can read about how to run the tests locally to get a better idea of how they work.

The needed skillsets are diverse. Are you interested in helping make flavors better? Do you have some technical skills in writing tests, the web, Python, or running a Jenkins server? Or perhaps you are willing to learn? If so, please get in touch!

on July 29, 2015 08:26 PM

Users of some email clients, particularly Gmail, have long had a problem filtering mail from Launchpad effectively.  We put lots of useful information into our message headers so that heavy users of Launchpad can automatically filter email into different folders.  Unfortunately, Gmail and some other clients do not support filtering mail on arbitrary headers, only on message bodies and on certain pre-defined headers such as Subject.  Figuring out what to do about this has been tricky.  Space in the Subject line is at a premium: many clients only show a certain number of characters at the start, so inserting filtering tags there would crowd out other useful information.  In general we also want to avoid burdening one group of users with workarounds for the benefit of another group, because that doesn't scale very well.  So we had to approach this with some care.

As of our most recent code update, you’ll find a new setting on your “Change your personal details” page:

Screenshot of email configuration options

If you check “Include filtering information in email footers”, Launchpad will duplicate some information from message headers into the signature part (below the dash-dash-space line) of message bodies: any “X-Launchpad-Something: value” header will turn into a “Launchpad-Something: value” line in the footer.  Since it’s below the signature marker, it should be relatively unobtrusive, but is still searchable.  You can search or filter for these in Gmail by putting the key/value pair in double quotes, like this:

Screenshot of Gmail filter dialog with "Has new words" set to "Launchpad-Notification-Type: code-review"
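For example, with the option enabled, the end of a merge proposal notification would gain lines like these below the signature marker (the exact set of lines varies by message type; the second line here is illustrative):

```
-- 
Launchpad-Notification-Type: code-review
Launchpad-Message-Rationale: Reviewer
```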

At the moment this only works for emails related to Bazaar branches, Git repositories, merge proposals, and build failures.  We intend to extend this to a few other categories soon, particularly bug mail and package upload notifications.  If you particularly need this feature to work for some other category of email sent by Launchpad, please file a bug to let us know.

on July 29, 2015 04:43 PM
This article is a translation of Alan Pope's post, available here in English.

I like playing games on my phone and tablet and wanted to add some more games to Ubuntu. With a little work it's easy to 'port' games over to Ubuntu Phone. I put the word 'port' in quotes because in some cases it's very little effort, so calling it a 'port' might make it sound like more work than it really is.

Update: A few people asked me why anyone would want to do this rather than simply bookmark the game in the browser. My apologies if I didn't make that clear. The big advantage is that the game is cached offline, which helps in many situations, for example when travelling or with poor Internet access. Of course not every game can be fully offline; this tutorial won't be much help for online games such as Clash of Clans. It will be useful for many other titles though. This method also makes use of Ubuntu's application confinement, so the app/game has no access outside its own data directory.

I spent a few evenings and weekends on this with sturmflut, who also wrote up his experience in the article Panda Madness.

We had a lot of fun porting some games, and I want to share what we did to make the job easier for other developers. I created a basic template on Github which can be used as a starting point, but I want to explain the process and the problems we ran into, so that others can port more apps and games.

If you have any questions, leave me a comment, or if you prefer, you can also contact me privately.

Proof of concept


To prove that we could easily port existing games, I licensed a couple of games from Code Canyon, a marketplace where developers can sell their games and other developers can learn from them. I started with a little game called Don't Crash, an HTML5 game created with Construct 2. I could have licensed more games, and there are other game marketplaces too, but this is just a good example to show the process.


Note: Scirra's Construct 2 is a Windows-only tool; it is a popular, powerful and fast way to develop cross-platform HTML5 apps and games. It is used by many indie developers to create games that run in desktop browsers and on mobile devices. Construct 3 is under development; it will be more compatible and will also be available for Linux.

Before licensing Don't Crash I checked that it worked well on Ubuntu Phone using the demo available on Code Canyon. After verifying that it worked, I paid and received the files with the Construct 2 'source'.

If you are a developer porting your own games, you can skip this step, because you already have the code to port.

Porting to Ubuntu

The bare minimum needed to port a game is a few text files plus the directory containing the game code. Sometimes a couple of tweaks are needed for permissions and to lock the rotation, but broadly speaking it Just Works (TM).

I am using an Ubuntu computer for all the packaging and testing, but for this game I needed a Windows computer to export it from Construct 2. Requirements may vary, but if you don't have Ubuntu you can install it in a virtual machine such as VMware or VirtualBox, and you only need to add the SDK as detailed at developer.ubuntu.com.

This is the entire content of the directory, with the game in the www/ folder:

alan@deep-thought:~/phablet/code/popey/licensed/html5_dontcrash⟫ ls -l
total 52
-rw-rw-r-- 1 alan alan   171 Jul 25 00:51 app.desktop
-rw-rw-r-- 1 alan alan   167 Jun  9 17:19 app.json
-rw-rw-r-- 1 alan alan 32826 May 19 19:01 icon.png
-rw-rw-r-- 1 alan alan   366 Jul 25 00:51 manifest.json
drwxrwxr-x 4 alan alan  4096 Jul 24 23:55 www


Creating the metadata

Manifest

This contains the basic details about the application, such as its name, description, author, email and a few other things. Here is mine (manifest.json) from the latest version of Don't Crash. The fields are self-explanatory, so replace each of them with the details of your own application.

{
    "description":  "Don't Crash!",
    "framework":    "ubuntu-sdk-14.10-html",
    "hooks": {
        "dontcrash": {
            "apparmor": "app.json",
            "desktop":  "app.desktop"
        }
    },
    "maintainer":   "Alan Pope ",
    "name":         "dontcrash.popey",
    "title":        "Don't Crash!",
    "version":      "0.22"
}


Note: "popey" is my developer namespace in the store; you have to replace it with the namespace you use on your developer portal page.
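A malformed manifest.json is an easy way to make the build fail, so it's worth sanity-checking the JSON from the shell before building. A minimal sketch (the heredoc recreates a manifest like the one above; the maintainer value is a placeholder, and python3 is assumed to be installed):

```shell
# Write a minimal manifest.json (field values are placeholders for your app)
cat > manifest.json <<'EOF'
{
    "description": "Don't Crash!",
    "framework":   "ubuntu-sdk-14.10-html",
    "hooks": {
        "dontcrash": {
            "apparmor": "app.json",
            "desktop":  "app.desktop"
        }
    },
    "maintainer":  "Your Name <you@example.com>",
    "name":        "dontcrash.popey",
    "title":       "Don't Crash!",
    "version":     "0.22"
}
EOF

# Fail loudly on any JSON syntax error before we ever run "click build"
python3 -m json.tool manifest.json > /dev/null && echo "manifest.json is valid JSON"
```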


Security profile

The app.json file details which permissions the application needs in order to run:

{
    "template": "ubuntu-webapp",
    "policy_groups": [
        "networking",
        "audio",
        "video",
        "webview"
    ],
    "policy_version": 1.2
}


Desktop file

This defines how the application is launched, which icon is used and a few other details:

[Desktop Entry]
Name=Don't Crash
Comment=Avoid the other cars
Exec=webapp-container $@ www/index.html
Terminal=false
Type=Application
X-Ubuntu-Touch=true
Icon=./icon.png


Again, change the Name and Comment fields, and we're practically done.
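A quick way to catch missing keys before building is to grep the .desktop file from the shell. A sketch (the heredoc just recreates the file shown above; the list of keys checked is my own sanity list, not an official requirement):

```shell
# Recreate the app.desktop shown above
cat > app.desktop <<'EOF'
[Desktop Entry]
Name=Don't Crash
Comment=Avoid the other cars
Exec=webapp-container $@ www/index.html
Terminal=false
Type=Application
X-Ubuntu-Touch=true
Icon=./icon.png
EOF

# Report any of the expected keys that are absent
for key in Name Exec Type Icon X-Ubuntu-Touch; do
    grep -q "^${key}=" app.desktop || echo "missing: ${key}"
done
echo "desktop file checked"
```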

Building the click package

With those files created and an icon.png, we build the .click package that we will upload to the store. This is the entire process:

alan@deep-thought:~/phablet/code/popey/licensed⟫ click build html5_dontcrash/
Now executing: click-review ./dontcrash.popey_0.22_all.click
./dontcrash.popey_0.22_all.click: pass
Successfully built package in './dontcrash.popey_0.22_all.click'.



On my laptop it builds in barely a second.

Note the output of the command: it runs .click package sanity checks at build time, making sure there are no errors that would get it rejected from the store.

Testing on an Ubuntu device

Testing the .click package on a phone is very easy. Copy the .click file from the Ubuntu PC over USB, using adb, and install it:

adb push dontcrash.popey_0.22_all.click /tmp
adb shell
pkcon install-local --allow-untrusted /tmp/dontcrash.popey_0.22_all.click


Go to the apps scope and pull down to refresh, then tap the icon and try the game.

Success! :)


Tweaking the application

At this point I spotted some possible improvements for some of the games, which I'll outline here:

Loading files locally

Construct 2 warns that exported games won't work until you upload them, via a JavaScript popup ("When running on the file:/// protocol, browsers block many features from working for security reasons"). I deleted those lines of JavaScript and checked that index.html and the game run fine in our browser.

Device orientation

With the recent Ubuntu OTA update, device orientation is always enabled, which means some games can rotate and become unplayable. We can lock games to portrait or landscape mode via the .desktop file (created earlier) by simply adding this line:

X-Ubuntu-Supported-Orientations=portrait

Obviously, change "portrait" to "landscape" if your game uses landscape mode. For Don't Crash I didn't do this, because the developer had rotation detection in code which tells the player to rotate the device to the required position.

Twitter links

Some games had embedded Twitter links through which players could post their score. Unfortunately the mobile web version of Twitter doesn't support that, so a link containing "Check out my score in Don't Crash" wouldn't work. For now, I removed the Twitter links.

Cookies

Our browser doesn't support local cookies, which some games use. For Heroine Dusk I switched the game from cookies to Local Storage.

Publishing to the store

Publishing .click packages to the Ubuntu store is quick and easy. Simply go to http://myapps.developer.ubuntu.com/dev/click-apps/ , log in, click on "New Application" and follow the steps to upload the click package.


That's all! I'll keep publishing some more games to the store. Improvements to the Github template are welcome.

Original article by Alan Pope. Translated by Marcos Costales.
on July 29, 2015 04:07 PM

The Age of Foundations

Thierry Carrez

At OSCON last week, Google announced the creation around Kubernetes of the Cloud-Native Computing Foundation. The next day, Jim Zemlin dedicated his keynote to the (recently-renamed) Open Container Initiative, confirming the Linux Foundation's recent shift towards providing Foundations-as-a-Service. Foundations ended up being the talk of the show, with some questioning the need for Foundations for everything, and others discussing the rise of Foundations as tactical weapons.

Back to the basics

The main goal of open source foundations is to provide a neutral, level and open collaboration ground around one or several open source projects. That is what we call the upstream support goal. Projects are initially created by individuals or companies that own the original trademark and have power to change the governance model. That creates a tilted playing field: not all players are equal, and some of them can even change the rules in the middle of the game. As projects become more popular, that initial parentage becomes a blocker for other contributors or companies to participate. If your goal is to maximize adoption, contribution and mindshare, transferring the ownership of the project and its governance to a more neutral body is the natural next step. It removes barriers to contribution and truly enables open innovation.

Now, those foundations need basic funding, and a common way to achieve that is to accept corporate members. That leads to the secondary goal of open source foundations: serve as a marketing and business development engine for companies around a common goal. That is what we call the downstream support goal. Foundations work to build and promote a sane ecosystem around the open source project, by organizing local and global events or supporting initiatives to make it more usable: interoperability, training, certification, trademark licenses...

Not all Foundations are the same

At this point it's important to see that a foundation is not a label, the name doesn't come with any guarantee. All those foundations are actually very different, and you need to read the fine print to understand their goals or assess exactly how open they are.

On the upstream side, few of them actually let their open source project be completely run by their individual contributors, with elected leadership (one contributor = one vote, and anyone may contribute). That form of governance is the only one that ensures that a project is really open to individual contributors, and the only one that prevents forks due to contributors and project owners not having aligned goals. If you restrict leadership positions to appointed seats by corporate backers, you've created a closed pay-to-play collaboration, not an open collaboration ground. On the downstream side, not all of them accept individual members or give representation to smaller companies, beyond their founding members. Those details matter.

When we set up the OpenStack Foundation, we worked hard to make sure we created a solid, independent, open and meritocratic upstream side. That, in turn, enabled a pretty successful downstream side, set up to be inclusive of the diversity in our ecosystem.

The future

I see the "Foundation" approach to open source as the only viable solution past a given size and momentum around a project. It's certainly preferable to "open but actually owned by one specific party" (which sooner or later leads to forking). Open source is now the default development model in the industry, so we'll certainly see even more foundations in the future, not fewer.

As this approach gets more prevalent, I expect a rise in more tactical foundations that primarily exist as a trade association to push a specific vision for the industry. At OSCON during those two presentations around container-driven foundations, it was actually interesting to notice not the common points, but the differences. The message was subtly different (pods vs. containers), and the companies backing them were subtly different too. I expect differential analysis of Foundations to become a thing.

My hope is that as the "Foundation" model of open source becomes ubiquitous, we make sure to distinguish those which are primarily built to sustain the needs or strategy of a dozen large corporations from those which are primarily built to enable open collaboration around an open source project. The downstream goal should stay a secondary goal, and new foundations need to make sure they first get the upstream side right.

In conclusion, we should certainly welcome more Foundations being created to sustain more successful open source projects in the future. But we also need to pause and read the fine print: assess how open they are, discover who ends up owning their upstream open source project, and determine their primary reason for existing.

on July 29, 2015 01:30 PM

I don't really know what to say as of late. I've been around but I've been hiding in the background. When you end up having to read appellate court decisions, Inspector General audit reports, GAO audit reports, and ponder if your job will be funded into the new fiscal year...life gets weird. This is the closest illustration I can find of what I do at work:

With all the storm and stress that some persons seem to be trying to raise in the *buntu community I feel it appropriate to truly step away formally for a while. I'm still working on the cross-training matter relative to job functions at work. I'm still occasionally working on backports for pumpa and dianara. I am just going to be off the cadence for a while.

I'm wandering. With luck I may return.

on July 29, 2015 12:00 AM

July 28, 2015

Lubuntu 15.10 alpha 2

Lubuntu Blog


Hi, testing of the second Alpha of Lubuntu 15.10, codename Wily Werewolf is now taking place. Please do help test. Details of how to test can be found at the Testing wiki. Feedback appreciated.
on July 28, 2015 08:46 PM

Charms as Babies: Introduction

José Antonio Rey

Hello everyone, and welcome to my new blog post series: Charms as Babies. It’s been a long time since I’ve written something about charms.

My purpose with this series is to introduce you to Juju, Juju Charms, and their development and maintenance flow. By the end of the series you should be able to develop your own Juju Charm and know how to take care of it. You may even become a Juju Charmer!

In the next couple of days I'm going to be posting short pieces on how to develop and take care of your own charm, or maybe even adopt one. Later today I will post an introduction to Juju and Charms. Make sure to keep an eye on my blog and Planet Ubuntu for the upcoming posts!

Oh! We are also having a “What is Juju?” session at UbuConLA, this time given by my fellow charm contributor Sebastián Ferrari. If you are not coming to the conference, make sure to tune in to the livestream at summit.ubuntu.com.

This post is going to be used as an archive. Each chapter will be linked here. You can see all the chapters posted so far below.


on July 28, 2015 07:32 PM

Welcome to the first chapter of the Charms as Babies series. This chapter is dedicated to explaining what the Cloud, Juju and Juju Charms are.

Have you heard about the Cloud? No, not the ones in the sky. THE Cloud. Let’s insert an XKCD comic for reference.

Interesting, huh? Yes, the Cloud is a huge group of servers. Anyone can rent part of a server for a period of time, usually billed by the hour.

So, let’s take an example. I am Mr. VP from Blogs Company. I host… well, a blog. Instead of renting hosting like I usually would, I decide to host my blog in the Cloud. So I go, let’s say, to Amazon Web Services. I tell Amazon I want an Ubuntu 14.04 server, with this much RAM and this much disk space. Amazon provides it to me for a price and bills me per hour. It’s not renting me a username inside a shared server; instead it launches a virtual machine (or VM) with the specs I requested and gives the entire VM to me (including root access).

How is this different from common hosting, you ask? Sure, hosting is a practical solution when you just want things done, maybe with FTP access to upload your site and that’s it. However, you can use the Cloud for whatever you want (within what’s legally accepted, of course). I may host my blog, but someone on the other side of the world may host complicated data analysis applications, or maybe a bug tracker. The Cloud also gives you the possibility of renting a VM by the hour. If I want to launch a server to try out how to install MyProgram, I launch a VM, install MyProgram, have fun with it, and destroy it right afterwards. That way someone else can rent the capacity after I’m done with it, and I’m billed only for the hours I used. Everyone ends up happy! There are several other benefits that come with the Cloud, but we will get to them later.

As I mentioned, you can do a lot of stuff with the Cloud. However, for most things you need to know how to use a server, install software, compile code and more. Why not simplify things and make it a lot easier? Why not run a couple of commands and have whatever you want in a matter of minutes, without all the hassle? Well, that’s the basic idea behind Juju. With Juju you can execute simple commands such as `juju deploy wordpress` and you will have a WordPress instance set up for you, in the cloud of your choice (some restrictions apply), within minutes. Isn’t that amazing? All of this work is contained in what we call a Juju Charm. Charms are a set of scripts that help us automate the orchestration we see in the cloud. When I execute the command above, scripts are run on the machine to install and configure everything. And someone else has written that charm to make your life easier.
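As a rough sketch of that flow (assuming Juju 1.x, which was current at the time, is installed and an environment such as AWS is already configured; these commands need real cloud credentials to run):

```
# Spin up the control machine in your configured cloud
juju bootstrap

# Deploy the WordPress charm and a MySQL database for it
juju deploy wordpress
juju deploy mysql

# Relate the two services so WordPress knows about its database
juju add-relation wordpress mysql

# Expose WordPress to the outside world
juju expose wordpress

# Watch the machines and services come up
juju status
```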

I’m not going to dig deeper into how to use Juju, but I want to highlight that even though things seem automated, there is a hero behind that automation who did the hardest part, so that you can execute a command and get what you want.

What? YOU want to become the hero now?! Sure! In the next chapters we’ll see how you can become one of the heroes in the Juju ecosystem. That’s all I have for this chapter, but if you have any questions about Juju and Charms, make sure to leave a comment below. Or drop by our IRC channel, #juju on irc.freenode.net. Look, there’s even a link that will take you to the channel in your web browser!


on July 28, 2015 07:30 PM
tl;dr:  Your Ubuntu-based container is not a copyright violation.  Nothing to see here.  Carry on.
I am speaking for my employer, Canonical, when I say you are not violating our policies if you use Ubuntu with Docker in sensible, secure ways.  Some have claimed otherwise, but that’s simply sensationalist and untrue.

Canonical publishes Ubuntu images for Docker specifically so that they will be useful to people. You are encouraged to use them! We see no conflict between our policies and the common sense use of Docker.

Going further, we distribute Ubuntu in many different signed formats -- ISOs, root tarballs, VMDKs, AMIs, IMGs, Docker images, among others.  We take great pride in this work, and provide them to the world at large, on ubuntu.com, in public clouds like AWS, GCE, and Azure, as well as in OpenStack and on DockerHub.  These images, and their signatures, are mirrored by hundreds of organizations all around the world. We would not publish Ubuntu in the DockerHub if we didn’t hope it would be useful to people using the DockerHub. We’re delighted for you to use them in your public clouds, private clouds, and bare metal deployments.

Any Docker user will recognize these, as the majority of all Dockerfiles start with these two words....

FROM ubuntu

In fact, we gave away hundreds of these t-shirts at DockerCon.


We explicitly encourage distribution and redistribution of Ubuntu images and packages! We also embrace a very wide range of community remixes and modifications. We go further than any other commercially supported Linux vendor to support developers and community members scratching their itches. There are dozens of such derivatives and many more commercial initiatives based on Ubuntu - we are definitely not trying to create friction for people who want to get stuff done with Ubuntu.

Our policy exists to ensure that when you receive something that claims to be Ubuntu, you can trust that it will work to the same standard, regardless of where you got it from. And people everywhere tell us they appreciate that - when they get Ubuntu on a cloud or as a VM, it works, and they can trust it.  That concept is actually hundreds of years old, and we’ll talk more about that in a minute....


So, what do I mean by “sensible use” of Docker? In short - secure use of Docker. If you are using a Docker container then you are effectively giving the producer of that container ‘root’ on your host. We can safely assume that people sharing an Ubuntu docker based container know and trust one another, and their use of Ubuntu is explicitly covered as personal use in our policy. If you trust someone to give you a Docker container and have root on your system, then you can handle the risk that they inadvertently or deliberately compromise the integrity or reliability of your system.

Our policy distinguishes between personal use, which we can generalise to any group of collaborators who share root passwords, and third party redistribution, which is what people do when they exchange OS images with strangers.

Third party redistribution is more complicated because, when things go wrong, there’s a real question as to who is responsible for it. Here’s a real example: a school district buys laptops for all their students with free software. A local supplier takes their preferred Linux distribution and modifies parts of it (like the kernel) to work on their hardware, and sells them all the PCs. A month later, a distro kernel update breaks all the school laptops. In this case, the Linux distro who was not involved gets all the bad headlines, and the free software advocates who promoted the whole idea end up with egg on their faces.

We’ve seen such cases in real hardware, and in public clouds and other, similar environments.

So we simply say, if you’re going to redistribute Ubuntu to third parties who are trusting both you and Ubuntu to get it right, come and talk to Canonical and we’ll work out how to ensure everybody gets what they want and need.

Here’s a real exercise I hope you’ll try...

  1. Head over to your local purveyor of fine wines and liquors.
  2. Pick up a nice bottle of Champagne, Single Malt Scotch Whisky, Kentucky Straight Bourbon Whiskey, or my favorite -- a rare bottle of Lambic Oude Gueze.
  3. Carefully check the label, looking for a seal of Appellation d'origine contrôlée.
  4. In doing so, that bottle should earn your confidence that it was produced according to strict quality, format, and geographic standards.
  5. Before you pop the cork, check the seal, to ensure it hasn’t been opened or tampered with.  Now, drink it however you like.
  6. Pour that Champagne over orange juice (if you must).  Toss a couple ice cubes in your Scotch (if that’s really how you like it).  Pour that Bourbon over a Coke (if that’s what you want).
  7. Enjoy however you like -- straight up or mixed to taste -- with your own guests in the privacy of your home.  Just please don’t pour those concoctions back into the bottle, shove a cork in, put them back on the shelf at your local liquor store and try to pass them off as Champagne/Scotch/Bourbon.


Rather, if that’s really what you want to do -- distribute a modified version of Ubuntu -- simply contact us and ask us first (thanks for sharing that link, mjg59).  We have some amazing tools that can help you either avoid that situation entirely, or at least let’s do everyone a service and let us help you do it well.

Believe it or not, we’re really quite reasonable people!  Canonical has a lengthy, public track record, donating infrastructure and resources to many derivative Ubuntu distributions.  Moreover, we’ve successfully contracted mutually beneficial distribution agreements with numerous organizations and enterprises. The result is happy users and happy companies.

FROM ubuntu,
Dustin

The one and only Champagne region of France

on July 28, 2015 05:50 PM

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150728 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kt-meeting.txt


Status: CVEs

The current CVE status can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kernel-cves.html


Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/Utopic/Vivid

Status for the main kernels, until today:

  • Precise – Kernel Prep
  • Trusty – Kernel Prep
  • lts-Utopic – Kernel Prep
  • Vivid – Kernel Prep
    Current opened tracking bugs details:
  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html
    For SRUs, SRU report is a good source of information:
  • http://kernel.ubuntu.com/sru/sru-report.html
    Schedule:

    cycle: 26-Jul through 15-Aug
    ====================================================================
    24-Jul Last day for kernel commits for this cycle
    26-Jul – 01-Aug Kernel prep week.
    02-Aug – 08-Aug Bug verification & Regression testing.
    09-Aug – 15-Aug Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

on July 28, 2015 05:22 PM

Hi,

We have recently released Mir 0.14 to Wily (0.14.0+15.10.20150723.1-0ubuntu1). It’ll soon be released to Vivid+ as well. We have lots of goodies in 0.14:

It was a comprehensive release where we broke every single ABI under the sun including the client ABI. That required us to release many dependent projects.

Here is a list of 0.14 content highlights:

  • Preparation work for new buffer semantics
  • MirEvent-2.0 related changes and unifications
  • New SurfaceInputDispatcher to replace the android InputDispatcher
  • Preparation work for mir-on-X11: splitting of mesa platform in common
    and KMS parts
  • g++-5.0 compilation
  • Thread sanitizer issues
  • Numerous bugs addressed

For a detailed list, please see the changelog.

We will soon start the release process for Mir 0.15. Expect to have:

  • Application-not-responding (ANR) handling
  • ANR optimizations
  • Raw input events
  • Experimental mir support on X11 (Mir server runs as an X client in a window)
  • Latency reduction optimizations using “predictive bypass”
  • Client API for specifying input region shape
  • Support for relative pointer motion events
  • More window management support
  • More new buffer semantics
  • libinput platform

Do not hesitate to visit us on freenode #ubuntu-mir IRC channel.

on July 28, 2015 04:09 PM

For a short while, the Plasma Mobile forums were hosted outside of the official KDE Forums. In our quest to put everything under KDE governance, we have now moved the Plasma Mobile forums under KDE’s forums as well. Enjoy the new Plasma Mobile forums.

As a few users had already registered on the “old” forums, this means a smallish interruption as the threads could not be quickly moved to the new forums. We’re sorry for that inconvenience and would like to ask everyone to move to the new forums.

Thanks for your patience and sorry again for the hassle involved with that.

on July 28, 2015 02:59 PM

Article also available in Spanish at http://thinkonbytes.blogspot.co.uk/2015/07/migrar-facilmente-juegos-moviles-en.html thanks to Marcos Costales.

I really like playing games on my phone & tablet and wanted some more games to play on Ubuntu. With a little work it turns out it’s really pretty easy to ‘port’ games over to Ubuntu phone. I put the word ‘port’ in quotes simply because in some cases it’s not a tremendous amount of effort, so calling it a ‘port’ might make people think it’s more work than it is.

Update: A few people have asked why someone would want to even do this, and why not just bookmark a game in the browser. Sorry if that’s not clear. With this method the game is entirely cached offline on the customer’s phone. Having fully offline games is desirable in many situations, including when travelling or in a location with spotty Internet access. Not all games are fully offline of course; this method wouldn’t help with a large on-line multi-player game like Clash of Clans, for example. It would be great for many other titles though. This method also makes use of application confinement on Ubuntu, so the app/game cannot access anything outside of the game data directory.

I worked with sturmflut from the Ubuntu Insiders on this over a few evenings and weekends. He wrote it up in his post Panda Madness.

We had some fun porting a few games and I wanted to share what we did so others can do the same. We created a simple template on github which can be used as a starting point, but I wanted to explain the process and the issues I had, so others can port apps/games.

If you have any questions feel free to leave me a comment, or if you’d rather talk privately you can get in contact in other ways.

Proof of concept

To prove that we could easily port existing games, we licensed a couple of games from Code Canyon. This is a marketplace where developers can license their games for other developers to learn from, build upon, or redistribute as-is. I started with a little game called Don’t Crash, which is an HTML5 game written using Construct 2. I could have licensed other games, and other marketplaces are also available, but this seemed like a good low-cost way for me to test out this process.

Screenshot from 2015-07-28 13-06-19

Side note: Construct 2 by Scirra is a popular, powerful, point-and-click Windows-only tool for developing cross-platform HTML5 apps and games. It’s used by a lot of indie game developers to create games for desktop browsers and mobile devices alike. In development is Construct 3 which aims to be backwards compatible, and available on Linux too.

Before I licensed Don’t Crash I checked it worked satisfactorily on Ubuntu phone using the live preview feature on Code Canyon. I was happy it worked, so I paid and received a download containing the ‘source’ Construct 2 files.

device-2015-07-28-130757

If you’re a developer with your own game, then you can of course skip the above step, because you’ve already got the code to port.

Porting to Ubuntu

The absolute minimum needed to port a game is a few text files and the directory containing the game code. Sometimes a couple of tweaks are needed for things like permissions and lock rotation, but mostly it Just Works(TM).

I’m using an Ubuntu machine for all the packaging and testing, but in this instance I needed a Windows machine to export the game runtime using Construct 2. Your requirements may vary; if you don’t have a Windows machine, you could run one in a VM such as VMware or VirtualBox. On the Ubuntu side, add the SDK tools as detailed at developer.ubuntu.com.

This is the entire contents of the directory, with the game itself in the www/ folder.

alan@deep-thought:~/phablet/code/popey/licensed/html5_dontcrash⟫ ls -l
total 52
-rw-rw-r-- 1 alan alan   171 Jul 25 00:51 app.desktop
-rw-rw-r-- 1 alan alan   167 Jun  9 17:19 app.json
-rw-rw-r-- 1 alan alan 32826 May 19 19:01 icon.png
-rw-rw-r-- 1 alan alan   366 Jul 25 00:51 manifest.json
drwxrwxr-x 4 alan alan  4096 Jul 24 23:55 www

Creating the metadata

Manifest

This contains the basic details about your app like name, description, author, contact email and so on. Here’s mine (called manifest.json) from the latest version of Don’t Crash. Most of it should be fairly self-explanatory. You can simply replace each of the fields with your app details.

{
    "description":  "Don't Crash!",
    "framework":    "ubuntu-sdk-14.10-html",
    "hooks": {
        "dontcrash": {
            "apparmor": "app.json",
            "desktop":  "app.desktop"
        }
    },
    "maintainer":   "Alan Pope ",
    "name":         "dontcrash.popey",
    "title":        "Don't Crash!",
    "version":      "0.22"
}

Note: “popey” is my developer namespace in the store, you’ll need to specify your namespace which you configure in your account page on the developer portal.

Screenshot from 2015-07-28 13-11-17

Security profile

Named app.json, this details what permissions my app needs in order to run:-

{
    "template": "ubuntu-webapp",
    "policy_groups": [
        "networking",
        "audio",
        "video",
        "webview"
    ],
    "policy_version": 1.2
}

Desktop file

This defines how the app is launched, what the icon filename is, and some other details:-

[Desktop Entry]
Name=Don't Crash
Comment=Avoid the other cars
Exec=webapp-container $@ www/index.html
Terminal=false
Type=Application
X-Ubuntu-Touch=true
Icon=./icon.png

Again, change the Name and Comment fields, and you’re mostly done here.

Building a click package

With those files created, and an icon.png thrown in, I can now build my click package for uploading to the store. Here’s that process in its entirety:-

alan@deep-thought:~/phablet/code/popey/licensed⟫ click build html5_dontcrash/
Now executing: click-review ./dontcrash.popey_0.22_all.click
./dontcrash.popey_0.22_all.click: pass
Successfully built package in './dontcrash.popey_0.22_all.click'.

Which on my laptop took about a second.

Note the “pass” is output from the click-review tool, which sanity-checks click packages immediately after building to make sure there are no errors likely to cause rejection from the store.

Testing on an Ubuntu device

Testing the click package on a device is pretty easy. It’s just a case of copying the click package over from my Ubuntu machine via a USB cable using adb, then installing it.

adb push dontcrash.popey_0.22_all.click /tmp
adb shell
pkcon install-local --allow-untrusted /tmp/dontcrash.popey_0.22_all.click

Switch to the app scope and pull down to refresh, tap the icon and play the game.

device-2015-07-28-130907

Success! :)

device-2015-07-28-130522

Tweaking the app

At this point for some of the games I noticed some issues which I’ll highlight here in case others also have them:-

Local loading of files

Construct 2 moans “Exported games won’t work until you upload them. (When running on the file:/// protocol, browsers block many features from working for security reasons.)” in a JavaScript popup, and the game doesn’t start. I just removed the chunk of JS which does the check from index.html, and the game works fine in our browser.
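If you’re porting several games, removing that check can be scripted. Here is a rough Python sketch; the exact markup varies between Construct 2 exports, so the pattern below is an assumption, and you should always eyeball the result before shipping:

```python
import re

def strip_protocol_check(html):
    # Drop any inline <script> block containing Construct 2's
    # file:/// warning text. The pattern is a guess at the export
    # format -- diff the output against the original by hand.
    pattern = re.compile(
        r"<script\b[^>]*>[^<]*Exported games won't work.*?</script>",
        re.DOTALL)
    return pattern.sub("", html)

# Hypothetical usage:
# html = open("www/index.html").read()
# open("www/index.html", "w").write(strip_protocol_check(html))
```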

Device orientation

With the most recent Over The Air (OTA) update of Ubuntu we enabled device orientation everywhere which means some games can rotate and become unplayable. We can lock games to be portrait or landscape in the desktop file (created above) by simply adding this line:-

X-Ubuntu-Supported-Orientations=portrait

Obviously changing “portrait” to “landscape” if your game is horizontally played. For Don’t Crash I didn’t do this because the developer had coded orientation detection in the game, and tells the player to rotate the device when it’s the wrong way round.

Twitter links

Some games we ported have Twitter links in the game so players can tweet their score. Unfortunately the mobile web version of Twitter doesn’t support intents, so you can’t have a link with content like “Check out my score in Don’t Crash” embedded in it. So I just removed the Twitter links for now.
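For reference, a share link of that kind is just a URL with the message percent-encoded into a text query parameter. A quick sketch in Python (the helper name is mine, and as noted above, the 2015-era mobile web Twitter ignored the parameter, which is why the links came out):

```python
from urllib.parse import quote

def tweet_intent_url(message):
    # Twitter's web intent endpoint pre-fills a tweet from the
    # percent-encoded "text" query parameter.
    return "https://twitter.com/intent/tweet?text=" + quote(message, safe="")
```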

Cookies

Our browser doesn’t support locally served cookies, which some games use. For Heroine Dusk I ported from cookies to Local Storage, which worked fine.

Uploading to the store

Uploading click packages to the Ubuntu store is fast and easy. Simply visit myapps.developer.ubuntu.com/dev/click-apps/, sign up/in, click “New Application” and follow the upload steps.

Screenshot from 2015-07-28 13-10-31

That’s it! I look forward to seeing some more games in the store soon. Patches also welcome to the template on github.

on July 28, 2015 12:36 PM

Akademy A Coruña Photos

Jonathan Riddell

Saturday lunch time

Jens describes Skittles and Doritos in his talk “Smart Tech and Sensible Tech”

The wonderful organising team!

Elite Kubuntu developer Scarlett wins an Akademy Award!

Sebas shows off the Plasma Mobile phone with a look that suggests he wants world domination by next year

Group photo

The hacking area

GCompris demos, ooh la la

The opening ceremony to remember absent friends

on July 28, 2015 10:48 AM

July 27, 2015

Welcome to the Ubuntu Weekly Newsletter. This is issue #427, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Chris Guiver
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on July 27, 2015 11:09 PM

3D printing Poe

Kees Cook

I helped print this statue of Edgar Allan Poe through “We the Builders”, who coordinate large-scale crowd-sourced 3D print jobs:

Poe's Face

You can see one of my parts here on top, with “-Kees” on the piece with the funky hair strand:

Poe's Hair

The MakerWare I run on Ubuntu works well. I wish they were correctly signing their repositories. Even if I use non-SSL to fetch their key, as their Ubuntu/Debian instructions recommend, it still doesn’t match the packages:

W: GPG error: http://downloads.makerbot.com trusty Release: The following signatures were invalid: BADSIG 3D019B838FB1487F MakerBot Industries dev team <dev@makerbot.com>

And it’s not just my APT configuration:

$ wget http://downloads.makerbot.com/makerware/ubuntu/dists/trusty/Release.gpg
$ wget http://downloads.makerbot.com/makerware/ubuntu/dists/trusty/Release
$ gpg --verify Release.gpg Release
gpg: Signature made Wed 11 Mar 2015 12:43:07 PM PDT using RSA key ID 8FB1487F
gpg: requesting key 8FB1487F from hkp server pgp.mit.edu
gpg: key 8FB1487F: public key "MakerBot Industries LLC (Software development team) <dev@makerbot.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
gpg: BAD signature from "MakerBot Industries LLC (Software development team) <dev@makerbot.com>"
$ grep ^Date Release
Date: Tue, 09 Jun 2015 19:41:02 UTC

Looks like they’re updating their Release file without updating the signature file. (The signature is from March, but the Release file is from June. Oops!)

© 2015, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on July 27, 2015 11:08 PM

Many users or members of the community will know this already, but just for the sake of clarity: Xubuntu 15.10 Wily Werewolf will be the first release to ship LibreOffice by default. This has been a long-standing and often-repeated request, so we decided to try and evaluate this on our way to the next LTS release, which will be 16.04.

Consequently, we started evaluating LibreOffice’s integration in our Desktop Environment. While there is already a package which provides integration with the Gtk+ theme (libreoffice-gtk), we found that none of the existing icon themes really worked too well with Xubuntu’s default theme elementary-xfce. Ubuntu’s human theme – the closest match, being based on elementary – was abandoned a long while ago and looks a bit dated (and frankly too orange for Xubuntu). Other options like Tango work quite well with our color scheme, but still aren’t a perfect match.

So our artwork team went ahead and started working on the most prominent icons in LibreOffice Writer and Calc – the two applications shipped by default. The majority of those icons have been either ported or re-done with elementary in mind. Furthermore, over the course of the last months several icons specific to LibreOffice Impress have been tackled. The preliminary results are available for testing already, and the final result will be shipped with the release of Xubuntu 15.10. We’re also working with the LibreOffice team to get the new icon theme integrated upstream, so more distributions and users can benefit from our work.

Contribute

If you want to contribute, feel free to get in touch with Simon, our Artwork Lead, or clone the repository and submit your merge requests directly on GitHub (link below).

Screenshots

LibreOffice Writer

LibreOffice Calc

LibreOffice Impress

Project Links

on July 27, 2015 09:25 PM

Multi-homed MythTV backend

Mario Limonciello

This is the first of my non-OpenWRT-related posts.

I have a peculiar situation.  I have two houses that I split my life between.  I don't want to pay for cable in both houses, but I also want access to TV shows whose newer episodes you can only get on cable and not via Netflix/Hulu/Amazon.

So to solve this, I have set up MythTV in both houses.  Both setups run Mythbuntu 12.04.1 (www.mythbuntu.org).  In the house without cable I have a Silicondust HD Homerun.


The HD Homerun is set to record OTA content exclusively.  I get all the basic TV networks this way in that house.

In the second house I have a Silicondust HD Homerun Prime.  I'm fortunate that in that house I have Comcast, which is one of the more friendly companies with regard to copy protection.  I'm able to record pretty much everything except the premium networks like HBO and Showtime.
 

In the second house I duplicate all the OTA recording schedules but also schedule things on the other stations I care about.

Secondary House

Now to get the recordings from the second house to the first house, I had to set up a collection of scripts that run in staggered cron jobs.  The first one is at the secondary house.  I first determined which channel IDs I needed to export from.  This can be found by looking through mythweb or by examining the database with a tool like phpmyadmin.  Look for 'chanid' in the 'channels' table.

Once I had those channels figured out I created this script.  On each daily run it will find all the recordings whose starttime happened to be "today".  It exports all relevant SQL data about each recording into a flat SQL file that can be imported at the primary house.

Next it creates a list of recordings that will need to be synced over to the primary house.  This list is based upon the last successful sync date from the primary house.  It only syncs all recordings between the lastrun date and "tomorrow".  This sets it up so that if for some reason the primary backend doesn't sync a day it will still work.  Also it's important to sync only a handful of days because the autoexpire will be happening at different rates on the two backends.
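That catch-up window (every day from the last successful sync up to and including tomorrow) is the heart of the script below. In Python the same logic looks roughly like this; the function name is mine, not part of the actual setup:

```python
from datetime import date, timedelta

def dates_to_sync(lastrun, today):
    # Sync every day from the last successful run up to and including
    # "tomorrow", so a day the primary backend missed is picked up on
    # the next run rather than lost.
    tomorrow = today + timedelta(days=1)
    out = []
    d = lastrun
    while d <= tomorrow:
        out.append(d.strftime("%Y%m%d"))
        d += timedelta(days=1)
    return out
```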

I store all this content in the directory /var/lib/mythtv/exported-sql

sudo mkdir -p /var/lib/mythtv/exported-sql 
sudo chown mythtv:mythtv /var/lib/mythtv/exported-sql

/home/mythtv/export.sh 

#!/bin/sh

#export SQL
chanid="2657 2659 2622"
DIRECTORY=/var/lib/mythtv/exported-sql/
BIGGEST_DATE=`find $DIRECTORY -maxdepth 1 -name '*.sql' | sort -r | head -1 | sed 's,.*exported-,,; s,.sql,,'`
for chan in $chanid;
do
[ -n "$where" ] && where="$where or"
[ -n "$BIGGEST_DATE" ] && date=" and starttime > '$BIGGEST_DATE 00:00:00'"
where="$where (chanid='$chan'$date)"
done
CONFIG=$HOME/.mythtv/config.xml
DATE=`date '+%F'`
if [ "$DATE" = "$BIGGEST_DATE" ]; then
echo "Already ran today, not running SQL generation again"
else
db=`xpath  -q -e 'string(//DatabaseName)' $CONFIG 2>/dev/null`
user=`xpath  -q -e 'string(//UserName)' $CONFIG 2>/dev/null`
pass=`xpath  -q -e 'string(//Password)' $CONFIG 2>/dev/null`
host=`xpath  -q -e 'string(//Host)' $CONFIG 2>/dev/null`
fname=/var/lib/mythtv/exported-sql/exported-$DATE.sql
mysqldump -h$host -u$user -p$pass $db recorded recordedseek recordedrating recordedprogram recordedmarkup recordedcredits --where="$where" --no-create-db --no-create-info > $fname
fi


#generate a recordings list
lastrun=/home/mythtv/lastrun
tomorrow=$(date --date="tomorrow" '+%Y%m%d')
if [ -f $lastrun ]; then
        tmp=$(cat $lastrun)
else
        tmp=$tomorrow
fi
while [ "$tmp" != "$tomorrow" ]
do
        test_dates="$test_dates $tmp"
        tmp=$(date --date="1 day $tmp" '+%Y%m%d')
done
test_dates="$test_dates $tomorrow"
for date in $test_dates;
do
        for chan in $chanid;
        do
                from="$from /var/lib/mythtv/recordings/${chan}_${date}*"
        done
done
ls $from 2>/dev/null | tee /var/lib/mythtv/exported-sql/recordings-list
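The WHERE clause that the shell loop above builds up for mysqldump can be easier to follow in Python; a hypothetical equivalent:

```python
def build_where(chanids, since=None):
    # One OR-ed clause per channel; when a previous export exists,
    # restrict each clause to recordings newer than that date, just as
    # the shell loop does with $BIGGEST_DATE.
    clauses = []
    for chan in chanids:
        date_filter = " and starttime > '%s 00:00:00'" % since if since else ""
        clauses.append("(chanid='%s'%s)" % (chan, date_filter))
    return " or ".join(clauses)
```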

Next I set up rsync to export my /var/lib/mythtv directory (read only) and my last successful run marker (write only).

/etc/rsyncd.conf

max connections = 2
log file = /var/log/rsync.log
timeout = 300

[mythtv]
comment = mythtv
path = /var/lib/mythtv
read only = yes
list = yes
uid = nobody
gid = nogroup
auth users = rsync
secrets file = /etc/rsyncd.secrets

[lastrun]
comment = last rsync run
path = /home/mythtv/
write only = yes
read only = no
list = no
uid = mythtv
gid = mythtv
auth users = rsync
secrets file = /etc/rsyncd.secrets

Last thing to do at the secondary house is to set up the crontab to run at night.  I set it for 11:05 PM every day.  It should only take 10-15 seconds to run.

5 23 * * * /home/mythtv/export.sh

Because of the way this all works, I decided to leave my secondary house backend on all the time.

Primary House

Now in my primary house I need to sync recordings, SQL data, and then update the last successful run at the secondary house.

/home/mythtv/import.sh

#!/bin/sh

domain=rsync@address
sql_directory=/home/mythtv/sql
recordings_directory=/var/lib/mythtv/recordings
password_file=/home/mythtv/password-file
lastrun=/home/mythtv/lastrun

sync_recordings()
{
today=$(date '+%Y%m%d')
from=$(cat $sql_directory/recordings-list | sed "s,/var/lib/mythtv,$domain::mythtv,")
RET=30
while [ $RET -eq 30 ]; do
#rsync -avz --partial --timeout 120 --progress $from --password-file=password-file $recordings_directory
rsync -av --partial --timeout 120 --progress $from --password-file=$password_file $recordings_directory
RET=$?
done
echo "rsync return code: $RET"
[ $RET -ne 0 ] && exit 1
echo "$today" > $lastrun
rsync -avz --password-file=$password_file $lastrun $domain::lastrun/
}

sync_sql()
{
rsync -avz $domain::mythtv/exported-sql/* --password-file=$password_file $sql_directory
}

insert_sql()
{
CONFIG=$HOME/.mythtv/config.xml
db=`xpath  -q -e 'string(//DatabaseName)' $CONFIG 2>/dev/null`
user=`xpath  -q -e 'string(//UserName)' $CONFIG 2>/dev/null`
pass=`xpath  -q -e 'string(//Password)' $CONFIG 2>/dev/null`
host=`xpath  -q -e 'string(//Host)' $CONFIG 2>/dev/null`
old_host=supermario
new_host=kingkoopa
for fname in $(find $sql_directory -maxdepth 1 -name '*.sql');
do
if [ ! -f ${fname}.imported ]; then
cat $fname | sed "s,${old_host},${new_host},g; s,INSERT INTO, INSERT IGNORE INTO,g" > ${fname}.imported
mysql --host=$host --user=$user -p$pass $db < ${fname}.imported
fi
done
}

suspend()
{
sudo /usr/local/bin/setwakeup.sh $(date --date='18:00' '+%s')
sudo pm-suspend
}

mythfilldatabase
sync_sql
sync_recordings
insert_sql
#suspend

I'm careful about the order I do things.  The SQL has to get inserted last in case for some reason the recordings fail to sync or don't all sync while I'm watching.

I currently don't suspend afterwards due to some instability on my system, but I have been experimenting with that too.  If S3 is reliable you can configure the setup to suspend after the sync is done and wake up next time you need to use it or record from it.

I set the cronjob to run at midnight every day on the primary backend.

0 0 * * * /home/mythtv/import.sh

on July 27, 2015 06:10 PM

desktop Unix has not arrived

Walter Lapchynski

Recently, the arrival of desktop Linux (and, no, I refuse to say GNU/Linux as much as I refuse to say GNU/X/OpenBox/LXDE instead of Lubuntu) was announced. I think this came as no surprise to those of us that already use it on the desktop. Sure, people may complain about this and that, but, honestly, Linux gives you choices. You can find your perfect desktop on Linux quite easily in this day and age of package managers. With repositories full of pre-compiled binaries, changing even components of a desktop environment is fairly seamless and just a few clicks away. Gone are the days of having to roll your own kernel from scratch (though, in typical fashion, you have the choice to do so). Heck, you rarely have a need to touch the command line (though it's often quicker). With Chromebooks and about half of the mobile devices out there running Linux under the hood, user-focused Linux is certainly here.

However, if we revisit that blasted GNU acronym, we should be reminded that GNU's Not Unix, or rather that Linux's Not Unix (ooh, we could fix the naming controversy by calling it LNU… but then someone would want to call it G/LNU and that would be hard to pronounce). See, Linux owes its design and heritage (though not its code) to Unix. In fact, Unix is still plugging along, with some notable examples being the ever-secure OpenBSD, the distro that runs on pretty much everything, NetBSD (yes, that does include Amiga and VAX!), the almighty FreeBSD which is widely used by web servers, OS X, and plenty of embedded devices, and the version from whence Linux was born: MINIX.

So one might wonder how desktop Unix is doing. OS X certainly has made it but the FreeBSD core, Darwin, is quite different. Not to mention that it's a walled garden. It's not easy to change your window manager and using the command line is a frustrating experience, especially if you don't want to use the GUI. I guess that's not a very "user-focused" concern, but I think it reflects just how much OS X deviates from anything we think of as Unix-like. Ignoring that, it might be good to look at FreeBSD, as it is, if I may conjecture, the most widely used Unix out there.

Well, to give one a sense of things, I recently had to do an update on a co-worker's development machine, which was running FreeBSD and had an X setup. According to the FreeBSD documentation, assuming one doesn't need the "easy button" when it comes to installing, there's no need for PC-BSD. In this case, everything was installed, so no big deal, right? For that matter, all that needed to be done was some security updates. Those friendly folks at FreeBSD offer a script in periodic daily that emails root about packages that need to be patched for security issues. Admittedly, the thing hadn't been updated in a while, but having been doing regular maintenance on other FreeBSD servers, I figured it would be fairly straightforward.

All of these machines use ports, which is basically a package management system that uses source code and compiles with each fresh install. There's also packages, but we use ports. Adds a little extra flexibility and is no more difficult, really. The package management system takes care of everything, including dependencies, as any good package manager would. The portmaster tool makes this task quite simple, needing little more than the package name to be installed.

Unfortunately, I faced several gotchas. One, in fact, came when having to update pkg, which is usually used for binary package management. It was super weird because I couldn't install it with portmaster, even if I specified its full location (which usually fixes any ambiguity). I looked at /usr/ports/UPDATING, which is the canonical place to look for gotchas, but didn't find anything. On a whim, I chose to include the package version to upgrade from, and that worked. I'd never had that happen before, and never found any reference to others having it.

The next one was fairly easily solved, assuming you know where to look. gettext gave me an error on its install. I found the solution in /usr/ports/UPDATING: the port had been split, so I needed to remove the port and then reinstall it. Simple fix, but grepping through a text file is not something one should typically be expected to do to install a freaking package. Also, the error didn't suggest where to look for the problem. You had to put two and two together.

So then I ran into some issues with GTK. First, I was having issues with gtk-update-icon-cache. I found something related, albeit old, in /usr/ports/UPDATING, so I decided to give it a go. I understood what I was doing and that doing it wasn't going to hurt anything if it didn't fix my problem. It referenced using pkg_delete -f gtk-2.\*. Now there are several problems here, the most obvious being the fact that there are three different package management tools: pkg, pkgng, and portmaster. Sometimes you might use a pkgng command even though you otherwise use portmaster. The problem is that that's not all described in these terse notes. Secondly, GTK2 is in a port called gtk2, so the search string was wrong, too. So pkg delete -f gtk2-2.\* did the trick.

So then there were complaints that cairo was not installed with X11 support. I confirmed it was installed. I checked the Makefile and there was a line that suggested that X11 was a default, but after digging some more, I discovered that it was merely a default option. Indeed, pkg info showed me "X11: off." I decided to run make config and there I could see it was not checked. So I checked it, did a make deinstall and a make reinstall, and it was recompiled with X11 support.

However, I was still having issues with GTK. It failed to compile because of some test that required working with a PNG. It complained that it didn't recognize the format, even though file had no problem recognizing it. Google produced no truly applicable results, but there was some mention among MacPorts/Homebrew folks of glib2 and gdk-pixbuf2. Checking the config of the latter, I discovered it was lacking PNG support, so I checked it, reinstalled the two, and finally got everything done.

One might be inclined to believe that since this is all on the command line, such problems are to be expected. I would, in fact, suggest otherwise. As a general rule, graphical tools are front ends to command line interfaces. Ubuntu's Synaptic Package Manager is a great example of this: it's just apt under the hood. If the backend does not have a predictable, reliable interface, one where you don't have to interpret, research, and intuit in order to figure out problems, then the frontend will suffer the same problems. That being said, any potential graphical interface to the FreeBSD package management system(s) is bound to be equally flawed.

So desktop Unix is not here yet. Frankly, I'm not convinced it ever will be. Unix is a tool that was originally targeted at institutional use and to this day is employed largely by system administrators and hardware hackers. It tends to be fairly conservative (I like to call it "stable to a fault", which is to say you don't get the latest and greatest gewgaws). It's oriented towards the command line and towards users who are familiar enough with compiling to be able to deal with issues. Granted, it's a lot easier than grabbing a tarball and figuring out all your dependencies. But desktop Linux it's not.

To be clear, this is not meant to be a bash on Unix (actually, bash is not a standard part of the install!). BSD is excellent where stability and security are required at the expense of everything else. Having the ability to sort of create your own system from scratch the easy way sure is nice. A few make configs and you can have your system locked down to do only what you want. That includes your kernel, too. I'm sure this is part of the reason why OS X uses it (well, that and its precursor, NeXTSTEP, was derived from BSD).

I'm just pointing out that despite being out for much longer than Ubuntu, BSD, and Unix in general, have not been able to really capture the attention of desktop users while still retaining the flexibility inherent in a standard system. In that sense, it's actually pretty amazing what Linux has been able to accomplish. Call me biased, but I give special credit to Ubuntu as it has moved beyond the neckbeards and actually drawn the attention of your average Joe, not just through its wonderful user interface, but through business partnerships and outreach. That is the herald of a great desktop operating system.

on July 27, 2015 06:55 AM

July 26, 2015

In my "Why Smart Phones Aren't" series (1), I had expressed my hope that I would actually see a phone that is truly smart in my lifetime.

I challenged people to re-think what a phone should be and recommended that as a prime directive a phone should be "Respectful to its owner first."

Seems that Joe Liau and I are not the only two voices in the forest here. Bunnie Huang has weighed in with an excellent video along a similar theme.

Though the video is not about Ubuntu Phone (2), it should be. The Ubuntu Phone has begun to change the world, but we still have a ways to go. Perhaps spreading the idea that current market-leading phones are a "waste of life" will help.

Let's continue to disrupt an industry that has needed a good shake for at least a decade. Spreading this information helps.

---

(1): http://randall.executiv.es/missing-the-point

(2): http://www.ubuntu.com/phone

on July 26, 2015 06:48 PM

Akademy Day 2

Aaron Honeycutt

A beautiful morning in Spain!

The second day at Akademy started off with 10 or so hours of sleep, which was much needed for basic functions (really happy I don’t have to drive here).  The hotel (Rialta) had a great breakfast with coffee, OJ, bread with meats and cheeses, yogurt, cereal: all the basics that make up a great day!

To the talks!

The first talk was from Lydia Pintscher with “Evolving KDE”, in which she went over what is planned in the next stage of Plasma 5 and what she wants to see planned as President of the e.V.


Next up, Dan Leinir Turthra Jensen talked about the work going on with Plasma Mobile, from porting to devices to running Android applications on it and more.

Kubuntu Team


Harald and Rohan gave a talk about their CI (Continuous Integration) work.

Lunch and Group photo


Then we had the awesome and big group photo with everyone currently at Akademy, both the attendees and the volunteers who help make the whole thing run. Shortly after the group photo was lunch, again provided by the school’s cafe and paid for by Blue Systems, which was great: pasta, yogurt, fruits, and pudding!

Right after Lunch everyone was right back to hacking some Open Source goodness!


Kubuntu Podcast reports in!


After lunch was over, Ovidiu-Florin and I did an interview with Matthias Kirschner for the Kubuntu Podcast, which has a new episode coming out next month on August 5. We hope to include this and future interviews from Akademy in that episode, or the next one if they don’t make it.

on July 26, 2015 03:42 PM

Yesterday we revealed the project we’ve been working on for the last few months: Plasma Mobile, and images of it on Kubuntu.

KDE has been trying for years to get Plasma working on different form factors with mixed success, so when I first started on this I was pretty intimidated.  But we looked around for how to build this and it turns out there is software for it just lying around on the internet ready to be put together.  Incredible.

It got very stressful when we couldn’t get anything showing on the screen for a few weeks but the incredible Martin G got Wayland working with it in KWin, so now KDE has not just the first open mobile project but also one of the first systems running with Wayland.

And with Shashlik in the pipeline we are due to be able to run Android applications on it too giving us one of the largest application ecosystems out there.

The question is: will there be traction from the community? You can join us in the usual Plasma ways, #plasma on Freenode and the plasma-devel mailing list, and #kubuntu-devel to chat about making images for other devices. I’m very excited to see what will happen in the next year.

Plasma Mobile announcement.

Video

on July 26, 2015 02:00 PM

Akademy Day 1

Aaron Honeycutt

Before I start this blog post, I would like to thank the wonderful Ubuntu community for sponsoring my trip to the hub of KDE development, Akademy!

Akademy 2015

My trip to Akademy 2015 in La Coruña, Spain started at 4:45 pm on Friday in Miami with a flight to Lisbon. I was served a decent dinner and later breakfast. I did not get much sleep on the first leg of the trip, but on the second one, from Lisbon to La Coruña, I got about 1 hour of additional sleep, finally arriving at 11:30 AM or so local time. I also saw the entertainment system reboot and show me that it was running Linux!

I finally had the awesome experience of meeting some of the people I have been working with for 2+ years over IRC, Hangouts and phone calls. Today was filled with many great talks, including ones from our own Riddell and Clark on the new Plasma phone and the continued work on the CI end of Kubuntu, respectively. On the first day we also had the announcement of Plasma Mobile, being worked on by Blue Systems and the larger KDE community as well. I’ll have some more pictures of that in their own blog post and album on imgur later on. Blue Systems has been kind enough to sponsor lunch for this weekend and next weekend. So here I type this blog post with under 2 hours of sleep in 36+ hours of uptime, lol.

on July 26, 2015 09:12 AM

July 25, 2015

Cool Conference Ideas

Randall Ross

I just returned from a large, well managed conference in Portland (you know the one), and this was one of the ideas that stood out as excellent, at least in my opinion: Sticker Table!


People leave stickers, take stickers, and see stickers. It's a great way to give your project more visibility and it's also a great way to see what other projects are around, and possibly even at the show/conference.

Have you seen anything at recent shows that made you say, "Wow! Great idea." Please share in the comments or shoot an email to randall at ubuntu dot com.

on July 25, 2015 08:50 PM
uNAV is a turn-by-turn GPS navigation app for the Ubuntu Touch OS. It is 100% GPL, and powered by OpenStreetMap and OSRM.

“I could show you a few screenshots, and I could tell you how it’s working. Or, I could show you me driving a random route [with it].”

on July 25, 2015 12:31 PM
uNAV is a turn-by-turn GPS navigation app for Ubuntu Phone, 100% GPL, powered by OpenStreetMap and OSRM.

I could show you a few screenshots and I could tell you how it works...
Or I could show you me driving a random route :)) Click to watch it, and use fullscreen for the details:

Driving with Ubuntu Phone

Install uNAV on your phone from the Ubuntu Store (you have to update to OTA-5!).

I want to thank:
David Planella, who helped me a lot with the development of this application |o/
Sergi Quiles Pérez, who helped me a lot with ideas, feedback and testing of this last version.
Carla Sella, Ilonka O., Morgane & Jonathan Wiedemann, Nathan Haines and Paco Molinero for the voices.
José Fernández Calero for the video.
And all of you who helped me in one way or another over these months :) Thanks!
on July 25, 2015 10:22 AM

Plasma Mobile Launched

Kubuntu Wire

If you are interested in a free OS for your phone, please visit our Home Page.

Watch the Video Demonstration.


Links to applications source.

on July 25, 2015 09:57 AM

The Kubuntu team is proud to announce the reference images for Plasma Mobile.

Plasma Mobile was announced today at KDE's Akademy conference.

Our images can be installed on a Nexus 5 phone.

More information on Plasma Mobile's website.

on July 25, 2015 09:47 AM

Embracing Mobile

Sebastian Kügler

At Blue Systems, we have been working on making Plasma shine for a while now. We’ve contributed much to the KDE Frameworks 5 and Plasma 5 projects and helped with the transition to Qt 5. Much of this work has involved porting, stabilizing and improving existing code. With the new architecture in place, we’ve also worked on new topics, such as Plasma on non-desktop (and non-laptop) devices.

Plasma Mobile on an LG Nexus 5

This work is coming to fruition now, and we feel that it has reached a point where we want to present it to a more general public. Today we unveil the Plasma Mobile project. Its aim is to offer a Free (as in Freedom), user-friendly, privacy-enabling and customizable platform for mobile devices. Plasma Mobile runs on top of Linux, uses Wayland for rendering graphics and offers a device-specific user interface using the KDE Frameworks and Plasma library and tooling. Plasma Mobile is under development and not yet usable by end users. Missing functionality and stability problems are normal in this phase of development and will be ironed out. Plasma Mobile provides basic functionality and an opportunity for developers to jump in now and shape the mobile platform, and how we use our mobile devices.

As is necessary with development on mobile devices, we’ve not stopped at providing source code that “can be made to work”; rather, we’re doing a reference implementation of Plasma Mobile that can be used by those who would like to build a product based on Plasma Mobile on their platform. The reference implementation is based on Kubuntu, which we chose because there is a lot of expertise in our team with Kubuntu, and at Blue Systems we already have continuous builds and package creation in place. Much of the last year was spent getting the hardware to work and getting our code to boot on a phone. With pride, we’re now announcing the general availability of this project for public contribution. In order to make clear that this is not an in-house project, we have moved the project assets to KDE infrastructure and put them under Free software licenses (GPL and LGPL, according to KDE’s licensing policies). Plasma Mobile’s reference implementation runs on an LG Nexus 5 smartphone, using an Android kernel and an Ubuntu user space, and provides an integrated Plasma user interface on top of all that. We also have an x86 version, running on an ExoPC, which can be useful for testing.

Plasma Mobile uses the Wayland display protocol to render the user interface. KWin, Plasma’s window manager and compositor plays a central role. For apps that do not support Wayland, we provide X11 support through the XWayland compatibility layer.

Plasma Mobile is a truly converged user interface. More than 90% of its code is shared with the traditional desktop user interface. The mobile workspace is implemented in the form of a shell or workspace suitable for mobile phones. The shell provides an app launcher, a quick settings panel and a task switcher. Other functionality, such as a dialer, settings, etc. is implemented using specialized components that can be mixed and matched to create a specific user experience or to provide additional functionality — some of them already known from Plasma Desktop.

Architecture diagram of Plasma Mobile

Plasma Mobile is developed in a public and open development process. Contributions are welcome and encouraged throughout the process. We do not want to create another walled garden, but an inclusive platform for creation of mobile device user experiences. We do not want to create releases behind closed doors and throw them over the wall once in a while, but create a leveled playing field for contributors to work together and share their work. Plasma Mobile’s code is available on git.kde.org, and its development is discussed on the plasma-devel mailinglist. In the course of Akademy, we have a number of sessions planned to flesh out more and more detailed plans for further development.

With the basic workspace and OS integration work done, we have laid a good base for further development, and for others to get their code to run on Plasma Mobile. More work which is already in our pipeline includes support for running Android applications, which potentially brings a great number of mature apps to Plasma Mobile, better integration with other Plasma Devices, such as your desktop or laptop through KDE Connect, an improved SDK making it very easy to get a full-fledged development environment set up in minutes, and of course more applications.

on July 25, 2015 09:18 AM

Ambient capabilities

Serge Hallyn

There are several problems with posix capabilities. The first is the name: capabilities are something entirely different, so now we have to distinguish between “classical” and “posix” capabilities. Next, capabilities come from a defunct posix draft. That’s a serious downside for some people.

But another complaint has come up several times since file capabilities were implemented in Linux: people wanted an easy way for a program, once it has capabilities, to keep them. Capabilities are re-calculated every time the task executes a new file, taking the executable file’s capabilities into account. If a file has no capabilities, then (outside of the special exception for root when SECBIT_NOROOT is off) the resulting privilege set will be empty. And for shellscripts, file capabilities are always empty.

Fundamental to posix capabilities is the concept that part of your authority stems from who you are, and part stems from the programs you run. In a world of trojan horses and signed binaries this may seem sensible, but in the real world it is not always desirable. In particular, consider a case where a program wants to run as non-root user, but with a few capabilities – perhaps only cap_net_admin. If there is a very small set of files which the program may want to execute with privilege, and none are scripts, then cap_net_admin could be added to the inheritable file privileges for each of those programs. Then only processes with cap_net_admin in their inheritable process capabilities will be able to run those programs with privilege. But what if the program wants to run *anything*, including scripts and without having to predict what will be executed? This currently is not possible.

Christopher Lameter has been facing this problem for some time, and requested an enhancement of posix capabilities to allow him to solve it. Not only did he raise the problem and provide a good, real use case, he also sent several patches for discussion. In the end, a concept of “ambient capabilities” was agreed to and implemented (final patch by Andy Lutomirski). It’s currently available in -mm.

Here is how it works:

(Note – for more background on posix capabilities as implemented in linux, please see this Linux Symposium paper. For an example of how to use file capabilities to run as non-root before ambient capabilities, see this Linux Journal article. The ambient capability set has gotten several LWN mentions as well.)

Tasks have a new capability set, pA, the ambient set. As Andy Lutomirski put it, “pA does what most people expect pI to do.” Bits can only be set in pA if they are in pP or pI, and they are dropped from pA if they are dropped from pP or pI. When a new file is executed, all bits in pA are enabled in pP. Note though that executing any file which has file capabilities, or using the SECBIT_KEEPCAPS prctl option (followed by setresuid), will clear pA after the next exec.

So once a program moves CAP_NET_ADMIN into its pA, it can proceed to fork+exec a shellscript doing some /sbin/ip processing without losing CAP_NET_ADMIN.

How to use it (example):

Below is a test program, originally by Christopher, which I slightly modified. Write it to a file ‘ambient.c’ and build it using:

$ gcc -o ambient ambient.c -lcap-ng

Then assign it a set of file capabilities, for instance:

$ sudo setcap cap_net_raw,cap_net_admin,cap_sys_nice,cap_setpcap+p ambient

I was lazy and didn’t add interpretation of capabilities to ambient.c, so you’ll need to check /usr/include/linux/capability.h for the integers representing each capability. Run a shell with ambient capabilities by running, for instance:

$ ./ambient -c 13,12,23,8 /bin/bash

In this shell, check your capabilities:

$ grep Cap /proc/self/status
CapInh: 0000000000803100
CapPrm: 0000000000803100
CapEff: 0000000000803100
CapBnd: 0000003fffffffff
CapAmb: 0000000000803100

You can see that you have the requested ambient capabilities. If you run a new shell there, it retains those capabilities:

$ bash -c "grep Cap /proc/self/status"
CapInh: 0000000000803100
CapPrm: 0000000000803100
CapEff: 0000000000803100
CapBnd: 0000003fffffffff
CapAmb: 0000000000803100

What if we drop all but cap_net_admin from our inheritable set? We can test that using the ‘capsh’ program shipped with libcap:

$ capsh --caps=cap_net_admin=pi -- -c "grep Cap /proc/self/status"
CapInh: 0000000000001000
CapPrm: 0000000000001000
CapEff: 0000000000001000
CapBnd: 0000003fffffffff
CapAmb: 0000000000001000

As you can see, the other capabilities were dropped from our ambient, and hence from our effective set.

================================================================================
ambient.c source
================================================================================
/*
* Test program for the ambient capabilities. This program spawns a shell
* that allows running processes with a defined set of capabilities.
*
* (C) 2015 Christoph Lameter
* (C) 2015 Serge Hallyn
* Released under: GPL v3 or later.
*
*
* Compile using:
*
* gcc -o ambient_test ambient_test.o -lcap-ng
*
* This program must have the following capabilities to run properly:
* Permissions for CAP_NET_RAW, CAP_NET_ADMIN, CAP_SYS_NICE
*
* A command to equip the binary with the right caps is:
*
* setcap cap_net_raw,cap_net_admin,cap_sys_nice+p ambient_test
*
*
* To get a shell with additional caps that can be inherited by other processes:
*
* ./ambient_test /bin/bash
*
*
* Verifying that it works:
*
* From the bash spawed by ambient_test run
*
* cat /proc/$$/status
*
* and have a look at the capabilities.
*/

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/capability.h>
#include <cap-ng.h>

/*
* Definitions from the kernel header files. These are going to be removed
* when the /usr/include files have these defined.
*/
#define PR_CAP_AMBIENT 47
#define PR_CAP_AMBIENT_IS_SET 1
#define PR_CAP_AMBIENT_RAISE 2
#define PR_CAP_AMBIENT_LOWER 3
#define PR_CAP_AMBIENT_CLEAR_ALL 4

static void set_ambient_cap(int cap)
{
int rc;

capng_get_caps_process();
rc = capng_update(CAPNG_ADD, CAPNG_INHERITABLE, cap);
if (rc) {
printf("Cannot add inheritable cap\n");
exit(2);
}
capng_apply(CAPNG_SELECT_CAPS);

/* Note the two 0s at the end. Kernel checks for these */
if (prctl(PR_CAP_AMBIENT, PR_CAP_AMBIENT_RAISE, cap, 0, 0)) {
perror("Cannot set cap");
exit(1);
}
}

void usage(const char *me) {
printf("Usage: %s [-c caps] new-program new-args\n", me);
exit(1);
}

int default_caplist[] = {CAP_NET_RAW, CAP_NET_ADMIN, CAP_SYS_NICE, -1};

int *get_caplist(const char *arg) {
int i = 1;
int *list = NULL;
char *dup = strdup(arg), *tok;

for (tok = strtok(dup, ","); tok; tok = strtok(NULL, ",")) {
list = realloc(list, (i + 1) * sizeof(int));
if (!list) {
perror(“out of memory”);
exit(1);
}
list[i-1] = atoi(tok);
list[i] = -1;
i++;
}
return list;
}

int main(int argc, char **argv)
{
int rc, i, gotcaps = 0;
int *caplist = NULL;
int index = 1; // argv index for cmd to start

if (argc < 2)
usage(argv[0]);

if (strcmp(argv[1], "-c") == 0) {
if (argc <= 3) {
usage(argv[0]);
}
caplist = get_caplist(argv[2]);
index = 3;
}

if (!caplist) {
caplist = (int *)default_caplist;
}

for (i = 0; caplist[i] != -1; i++) {
printf("adding %d to ambient list\n", caplist[i]);
set_ambient_cap(caplist[i]);
}

printf("Ambient_test forking shell\n");
if (execv(argv[index], argv + index))
perror("Cannot exec");

return 0;
}
================================================================================


on July 25, 2015 03:16 AM

These days, the desktop OSes grabbing headlines have, for the most part, left the traditional desktop behind in favor of what’s often referred to as a “shell.” Typically, such an arrangement offers a search-based interface. In the Linux world, the GNOME project and Ubuntu’s Unity desktop interfaces both take this approach.

This is not a sea change that’s limited to Linux, however. For example, the upheaval of the desktop is also happening in Windows land. Windows 8 departed from the traditional desktop UI, and Windows 10 looks like it will continue that rethinking of the desktop, albeit with a few familiar elements retained. Whether it’s driven by, in Ubuntu’s case, a vision of “convergence” between desktop and mobile or perhaps just the need for something new (which seems to be the case for GNOME 3.x), developers would have you believe that these mobile-friendly, search-based desktops are the future of, well, everything.

 

Source: http://arstechnica.com/gadgets/2015/07/rare-breed-linux-mint-17-2-offers-desktop-familiarity-and-responds-to-user-wants/

Submitted by: Scott Gilbertson

on July 25, 2015 02:12 AM

OSCON 2015

Elizabeth K. Joseph

Following the Community Leadership Summit (CLS), which I wrote about here, I spent a couple of days at OSCON.

Monday kicked off by attending Jono Bacon’s Community Leadership workshop. I attended one of these a couple of years ago, so it was really interesting to see how his advice has evolved alongside changes in tooling and the progress that communities in tech and beyond have made. I took a lot of notes, but everything I wanted to say here has been summarized by others in a series of great posts on opensource.com:

…hopefully no one else went to Powell’s to pick up the recommended books; I cleared them out of a couple of titles.

That afternoon Jono joined David Planella of the Community Team at Canonical and Michael Hall, Laura Czajkowski and I of the Ubuntu Community Council to look through our CLS notes and come up with some talking points to discuss with the rest of the Ubuntu community regarding everything from in person events (stronger centralized support of regional Ubucons needed?) to learning what inspires people about the active Ubuntu phone community and how we can make them feel more included in the broader community (and helping them become leaders!). There was also some interesting discussion around the Open Source projects managed by Canonical and expectations for community members with regard to where they can get involved. There are some projects where part time, community contributors are wanted and welcome, and others where it’s simply not realistic due to a variety of factors, from the desire for in-person collaboration (a lot of design and UI stuff) to the new projects with an exceptionally fast pace of development that makes it harder for part time contributors (right now I’m thinking anything related to Snappy). There are improvements that Canonical can make so that even these projects are more welcoming, but adjusting expectations about where contributions are most needed and wanted would be valuable to me. I’m looking forward to discussing these topics and more with the broader Ubuntu community.


Laura, David, Michael, Lyz

Monday night we invited members of the Oregon LoCo out and had an Out of Towners Dinner at Altabira City Tavern, the restaurant on top of the Hotel Eastlund where several of us were staying. Unfortunately the local Kubuntu folks had already cleared out of town for Akademy in Spain, but we were able to meet up with long-time Ubuntu member Dan Trevino, who used to be part of the Florida LoCo with Michael, and who I last saw at Google I/O last year. I enjoyed great food and company.

I wasn’t speaking at OSCON this year, so I attended with an Expo pass and after an amazing breakfast at Mother’s Bistro in downtown Portland with Laura, David and Michael (…and another quick stop at Powell’s), I spent Tuesday afternoon hanging out with various friends who were also attending OSCON. When 5PM rolled around the actual expo hall itself opened, and surprised me with how massive and expensive some of the company booths had become. My last OSCON was in 2013 and I don’t remember the expo hall being quite so extravagant. We’ve sure come a long way.

Still, my favorite part of the expo hall is always the non-profit/open source project/organization area where the more grass-roots tables are. I was able to chat with several people who are really passionate about what they do. As a former Linux Users Group organizer and someone who still does a lot of open source work for free as a hobby, these are my people.

Wednesday was my last morning at OSCON. I did another walk around the expo hall and chatted with several people. I also went by the HP booth and got a picture of myself… with myself. I remain very happy that HP continues to support my career in a way that allows me to work on really interesting open source infrastructure stuff and to travel the world to tell people about it.

My flight took me home Wednesday afternoon and with that my OSCON adventure for 2015 came to a close!

More OSCON and general Portland photos here: https://www.flickr.com/photos/pleia2/sets/72157656192137302

on July 25, 2015 12:27 AM

July 24, 2015

Converting old guidelines to vanilla

Canonical Design Team

How the previous guidelines worked

Guidelines is essentially a framework built by the Canonical web design team. The framework has an array of tools to make it easy to create Ubuntu-themed sites. The guidelines were a collaboration between developers and designers and followed a consistent look, which meant in-house teams and community websites could have a consistent brand feel.

It worked in one direction: a large framework of modules, helpers and components that built the Ubuntu style for all our sites. This structure required a lot of overrides and workarounds for different projects, which added to the bloat the guidelines had accumulated. The Canonical and Cloud sites needed a large set of overrides to imprint their own visual requirements, creating a lot of duplication and overhead for each site.

There was no build system, nor a way to update to the latest version unless you used the hosted pre-compiled guidelines or pulled from our Bazaar repository. Not having any form of build step meant relying on a local Sass compiler or setting up a watcher for each project. We also had no viable way to check for linting errors or enforce a concrete coding standard.

The framework itself was a CSS framework ported into Sass, not utilising placeholders or mixins correctly and carrying a bloated number of variables. Changing one colour, or the size of an element, wasn’t as easy as passing set values to a mixin or changing a single variable.

Unlike Vanilla today, where all preprocessor styles are created via mixins, responsive changes were made in a large media query at the end of each document, and this again was repeated for our Canonical and Cloud styles.

Removing Ubuntu and Canonical from the theme

Our first task in building Vanilla was to identify all elements that were Ubuntu-centric: anything with a unique class, colour or style. Once these were identified, the team systematically worked through each section of the guidelines, removing the classes or variables and creating new versions. With this stage complete, the team was able to look at refactoring and updating the code.

Clean-up and making it generic

We decided when starting this project to update how we write each new module/element. Linting was a big factor, and by using a build system like Gulp we finally had the ability to adhere to a coding standard. This meant a lot of modules/elements had to be rewritten and improved upon: trimming down the Sass nesting, applying new techniques such as flexbox and cleaning up duplicated styles.

But the main goal was to make it generic, extendable and easy. Not the simplest of tasks: this meant removing any custom modules or specific styles/classes, but also building the framework to change via a variable update or a value change within a mixin. We wanted a Vanilla theme to inherit another developer’s styles and have them cascade throughout the whole framework with ease. Setting the brand colour, for example, affects the whole framework and changes multiple modules/elements. But you are not restricted, which was a bottleneck we had with the old guidelines.

Using Sass mixins

Mixins are a powerful part of Sass which we weren’t utilising. In guidelines they were used to create preprocessor polyfills, something which was annoying; Gulp now removes that need. We used mixins to modularise the entire framework, giving flexibility over which parts of the framework a project requires.

The ability to easily turn a section of Vanilla on or off felt very powerful, but it was also required: we wanted developers to choose what was needed for their project, the opposite of guidelines, where you received the entire framework. In Vanilla, each element or module is encapsulated within a mixin, and some take values which affect them. For example, the buttons mixin:

@mixin vf-button($button-color, $button-bg, $border-color) {
  @extend %button-pattern;
  color: $button-color;
  background: $button-bg;
    
  @if $border-color != null {
    border: 1px solid $border-color;
  }
    
  &:hover {
    background: darken($button-bg, 6.2%);
      
    @if $button-bg == $transparent {
      text-decoration: underline;
    }
  }
}



The above code shows how this mixin isn’t attached to fixed styles or colours. When building a new Vanilla theme, a few variable changes will style any button to the project’s requirements. This is something we have replicated throughout the project, and it creates a far more modular framework.
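As a sketch of how a theme might consume this mixin (the variable names below are illustrative, not Vanilla’s actual API, and Vanilla’s globals such as $transparent are assumed to be in scope):

```scss
// Illustrative theme fragment: restyle a button by passing
// project-specific values to the vf-button mixin.
$accent: #990000;   // hypothetical brand colour
$accent-text: #fff; // hypothetical text colour

.button--custom {
  // white text on a dark red background, no border
  @include vf-button($accent-text, $accent, null);
}
```

The hover darkening and border logic come along for free from the mixin itself.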

Creating new themes

As mentioned earlier, a few changes can set up a whole new theme in Vanilla, using it as a base and then adding or extending styles. Changing the branding or a font family just requires overwriting the default value: e.g. $brand-colour: $orange !default; is set in the global variables document. Amending this in another document, setting $brand-colour: #990000;, will change any element affected by the brand colour, creating the beginning of a new theme.

We can also take this per module mixin: include the module in a new class or element and then extend or add to it. This means themes are not constricted to just using what is there, which gives more freedom. This method is particularly useful for the web team as we build themes for Ubuntu, Canonical and Cloud products.

An example of a live theme we have created is the Ubuntu Vanilla theme. This is an extension of the Vanilla framework, set up to override any required variables to give it the Ubuntu brand. Diving into theme.scss shows all the elements used from Vanilla as well as Ubuntu-specific modules. These are used exclusively for the Ubuntu brand but are structured in the same manner as the Vanilla framework. This reduces the complexity of maintaining these themes, and developers can easily pick up what has been built or use it as a reference for building their own themes.

on July 24, 2015 11:02 AM

July 23, 2015

This is a follow-up to the End of Life warning sent earlier this month to confirm that as of today (July 23, 2015), Ubuntu 14.10 is no longer supported. No more package updates will be accepted to 14.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The original End of Life warning follows, with upgrade instructions:

Ubuntu announced its 14.10 (Utopic Unicorn) release almost 9 months ago, on October 23, 2014. As a non-LTS release, 14.10 has a 9-month support cycle and, as such, the support period is now nearing its end and Ubuntu 14.10 will reach end of life on Thursday, July 23rd. At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 14.10.

The supported upgrade path from Ubuntu 14.10 is via Ubuntu 15.04. Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/VividUpgrades

Ubuntu 15.04 continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Thu Jul 23 21:49:45 UTC 2015 by Adam Conrad

on July 23, 2015 11:11 PM

S08E20 – Who’s Your Caddy? - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Twenty of Season Eight of the Ubuntu Podcast! Mark Johnson, Laura Cowen, and Martin Wimpress are together with guest presenter Joe Ressington and speaking to your brain.

In this week’s show:

That’s all for this week, please send your comments and suggestions to: show@ubuntupodcast.org
Join us on IRC in #ubuntu-podcast on Freenode
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

on July 23, 2015 10:22 PM
In System Settings under "Cellular": the new Wi-Fi hotspot feature

Enabling the hotspot feature

Hotspot feature settings


We have a brand new Wi-Fi hotspot (internet tethering) feature that's about to land on Ubuntu Phone with OTA-6.
I know a lot of people who have been waiting for this feature eagerly.
So let's see how easy it is to help out with testing it :-).
You can test this feature on both Ubuntu 15.04 (Vivid Vervet) and Ubuntu 15.10 (Wily Werewolf) based phone images.

First you need to enable "Developer mode" on your Ubuntu Phone. To do this, go to System Settings, "About this phone", swipe down right to the bottom and tap on "Developer mode"; on the Developer mode page, turn on the "Developer mode" switch:



Now connect the phone to your Ubuntu desktop PC with a USB cable and in a terminal run:

citrain device-upgrade <silo #> <pin/password on device>

so for testing this feature the command will be:

citrain device-upgrade 46 0000

where 0000 is your device's pin or password and 46 is the silo number.

If you don't have the phablet-tools-citrain package installed, you'll need to install it first:

sudo apt install phablet-tools-citrain 


Now to start the hotspot: 

  1. Ensure Wi-Fi is enabled.
  2. Go to System Settings -> Mobile/Cellular
  3. Tap “Wi-Fi hotspot”
  4. Set up your hotspot
  5. Enable it.
If you hit an issue, here's how to report it:

- A client can't see the hotspot or the hotspot does not work:
  * File against: https://bugs.launchpad.net/ubuntu/+source/indicator-network/+filebug
  * Please attach /var/log/syslog as well as ~/.cache/upstart/indicator-network.log

- There's a problem with the System Settings UI:
  * File against: https://bugs.launchpad.net/ubuntu/+source/ubuntu-system-settings/+filebug
  * Please attach the log file, which you'll find here: ~/.cache/upstart/application-legacy-ubuntu-system-settings-.log
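If you end up filing several reports, it can be handy to grab all of the log files mentioned above in one go. This small sketch is not part of the post: `collect_logs` is a hypothetical helper, and only the log paths themselves are taken from the instructions above. You would run it on the phone itself, e.g. over phablet-shell.

```shell
#!/bin/sh
# Hypothetical helper: bundle the log files mentioned above into a
# single tarball that can be attached to a bug report. Any log that
# is missing or unreadable is simply skipped.
collect_logs() {
    out="$1"
    tmp=$(mktemp -d)
    for f in /var/log/syslog \
             "$HOME/.cache/upstart/indicator-network.log" \
             "$HOME/.cache/upstart/application-legacy-ubuntu-system-settings-.log"
    do
        if [ -r "$f" ]; then
            cp "$f" "$tmp/"
        fi
    done
    # Archive whatever logs were found and clean up the temp directory
    tar -czf "$out" -C "$tmp" .
    rm -rf "$tmp"
}

collect_logs hotspot-logs.tar.gz
```

Then attach the resulting hotspot-logs.tar.gz to the Launchpad bug.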



Enjoy testing :-D.
on July 23, 2015 07:48 PM

I'm loving Akademy!

Valorie Zimmerman

And it hasn't even started. Scarlett and I flew to A Coruña arriving Tuesday, and spent yesterday seeing the town. Today is all about preparing for the e.V. AGM and the Akademy talks and BoFs following. 


Wish you were here!

on July 23, 2015 05:20 PM

One month ago I wrote my first review about Meizu MX4 and I was disappointed about the lack of optimization that the phone received and some of the problems as well.

Now with the OTA-5 the phone is working as it should have from the very beginning. It’s a real shame that Meizu started selling them before this update, because a lot of reviews currently online are bad due to problems fixed in this update. It’s like a whole new phone!

Meizu MX4 specific improvements

Battery

One of the things that I most appreciated about the BQ Aquaris E4.5 was the battery life, which the MX4 was not on par with, until two days ago that is.

After the update I did a full recharge and I didn't have to charge again for 36 hours. Maybe for some of you that is not a lot, but for me it's much longer than I was used to from Android. I always had 4G turned on, watched at least 30 minutes of videos on YouTube, received and replied to more than 500 messages, received and replied to emails, surfed the web, made a couple of calls, and more.

Battery life could be a killer feature of Ubuntu, and there is still a lot to improve.

Optimization

With this update the phone doesn't lag and doesn't get too hot. There's also an increase in icons per row (following a change to the grid units), which is much better!

Oh yeah, and the LED for notifications works! Yes!

[Image: the new grid units]

General improvements

The speed improvement is terrific. With every update a lot of things change and you can spend hours finding all of them. If you’re passionate about technology you definitely have to buy an Ubuntu Phone (the MX4 or the Aquaris depending on your price range).

Some of the most interesting changes I found are:

- Unity rotation: finally it isn't weird to use the phone in landscape mode. There are still some bugs, though, and the Dash doesn't rotate, which I hope they fix soon!
- New icons: they look great, awesome job Design Team! They also look much clearer and render better at the MX4's resolution. Kudos!
- Changed reviews: time to update some old feedback I left when the apps were still in development.
- New Tab in the Browser: it has been improved with some of the contributions I worked on in the last few months. I love the Browser and I love all the updates it is getting, on the desktop as well.

Now I’ve very happy with the phone and I still miss nothing from Android. Yes of it isn’t for everyone (yet) but the number of improvements it has every month is astonishing and I think it will become available to the masses very soon.

But it is still missing some apps, a gap that can be filled when some big companies come and join this trip!

Thanks to Aaron Honeycutt for helping me write this article.

If you like my work and want to support me, just send me a "Thank you!" by email or offer me a beer :-)

Ciao,
R.

on July 23, 2015 03:00 PM

Announcing UbuContest 2015

Ubuntu App Developer Blog

Have you read the news already? Canonical, the Ubucon Germany 2015 team, and the UbuContest 2015 team are happy to announce the first UbuContest! Contestants from all over the world have until September 18, 2015 to build and publish their apps and scopes using the Ubuntu SDK and Ubuntu platform. The competition has already started, so register your competition entry today! You don't have to create a new project; submit what you have and improve it over the next two months.

But we know it's not all about shiny new apps and scopes! A great platform also needs content, great design, testing, documentation, bug management, developer support, interesting blog posts, technology demonstrations and all of the other incredible things our community does every day. So we give you, our community members, the opportunity to nominate other community members for prizes!

We are proud to present five dedicated categories:

  1. Best Team Entry: A team of up to three developers may register up to two apps/scopes they are developing. The jury will assign points in categories including "Creativity", "Functionality", "Design", "Technical Level" and "Convergence". The top three entries with the most points win.

  2. Best Individual Entry: A lone developer may register up to two apps/scopes he or she is developing. The rest of the rules are identical to the "Best Team Entry" category.

  3. Outstanding Technical Contribution: Members of the general public may nominate candidates who, in their opinion, have done something "exceptional" on a technical level. The nominated candidate with the most jury votes wins.

  4. Outstanding Non-Technical Contribution: Members of the general public may nominate candidates who, in their opinion, have done something exceptional, but non-technical, to bring the Ubuntu platform forward. So, for example, you can nominate a friend who has reported and commented on all those phone-related bugs on Launchpad. Or nominate a member of your local community who did translations for Core Apps. Or nominate someone who has contributed documentation, written awesome blog articles, etc. The nominated candidate with the most jury votes wins.

  5. Convergence Hero: The "Best Team Entry" or "Best Individual Entry" contribution with the highest number of "Convergence" points wins. The winner in this category will probably surprise us in ways we have yet to imagine.

Our community judging panel members Laura Cowen, Carla Sella, Simos Xenitellis, Sujeevan Vijayakumaran and Michael Zanetti will select the winners in each category. Successful winners will be awarded items from a huge pile of prizes, including travel subsidies for the first-placed winners to attend Ubucon Germany 2015 in Berlin, four Ubuntu Phones sponsored by bq and Meizu, t-shirts, and bundles of items from the official Ubuntu Shop.

We wish all the contestants good luck!

Go to ubucontest.eu or ubucon.de/2015/contest for more information, including how to register and nominate folks. You can also follow us on Twitter @ubucontest, or contact us via e-mail at contest@ubucon.de.

 
on July 23, 2015 01:04 PM
Hi there everyone,

Success is on the cards for Africa. 15 of the 18 countries have joined the group, and it looks like we will soon be helping one country form a new LoCo there.

As you can see from https://launchpad.net/~ubuntu-africa/+members, this group is growing in leaps and bounds. Our first brainstorming meeting will be on the 29th of this month at 20:30 Africa time (UTC+2). I am hoping that a Council member or two can attend. And we have 3 membership board members, so all is looking good. Everyone is welcome to join us at our first meeting. One of the Tunisia guys has been improving our wiki page as well:
https://wiki.ubuntu.com/AfricanTeams

Keep well everyone.
on July 23, 2015 12:36 PM