October 13, 2019

Remember Marta, our volunteer from the registration booth? She took care of the translation of the article written by Fátima Caçador for SAPO Tek:

Ubucon Europe: What is the Ubuntu community doing in Sintra? Sharing technical knowledge and tightening connections

News about the new Ubuntu release, explorations of its several platforms and plenty of “how to” sessions rule the four-day agenda, where open source and open technologies are in the air.

The Olga Cadaval Cultural Centre in Sintra is the main stage of a busy agenda filled with talks and more technical sessions, but at Ubucon Europe there is also room for networking and cultural visits: a curious fusion between spaces full of history, like the Pena Palace or the Quinta da Regaleira, and one of the youngest “players” in the world of software.

For four days, the international Ubuntu community gathers in Sintra for an event open to everyone, where open source principles and open technology dominate. The Ubucon Europe conference began Thursday, October 10th, and runs until Sunday, October 13th, keeping an open-doors policy for everyone who wants to participate.

After all, what is the importance of Ubucon? The number of participants, which should be around 150, doesn’t tell the whole story of what you can learn during these days, as SAPO TEK had the opportunity to check this morning.

Organised by the Ubuntu Portugal community, together with the National Association for Open Software, the Ubuntu Europe Federation and the Sintra Municipality, the conference brings to Portugal some of the biggest open source specialists and shows that Ubuntu is indeed alive, even if not yet known by most people, and still far from the “world domination” aspired to by some.

15 years of Ubuntu

This year marks Ubuntu’s 15th birthday, after its creation in 2004 by the South African Mark Shuttleworth, who gathered a team of Debian developers and founded Canonical with the purpose of developing an easy-to-use Linux distribution. He called it Ubuntu, a word that comes from the Zulu and Xhosa languages meaning “I am because we are”, which shows its social dimension.

The millionaire Mark Shuttleworth declared at the time: “my motivation and goal is to find a way of creating a global operating system for desktops which is free in every way, but also sustainable and with a quality comparable to any other that money can buy”.

And in the last 15 years Ubuntu hasn’t stopped growing, following trends and moving from the desktop and servers to the cloud, the IoT and even phones. Canonical ended up withdrawing from this last one, leaving the development in UBports’ hands.

“Ubuntu has never been better”, states Tiago Carrondo, head of the Ubuntu Portugal community, explaining that cloud usage is growing every month and the same is happening on the desktop. “The community has proved to be alive and participative”, and Ubucon is an example of that capacity to deliver and to be involved in projects.

A new version of Ubuntu is going to be launched this week (October 17th), and in April next year it is the turn of Ubuntu 20.04, the new LTS version, which is generating expectations and is the focus of several talks during Ubucon.

An operating system not just for ‘geeks’

But is this a subject just for some “geeks” who don’t mind getting their hands dirty and messing with code to adapt the operating system to their needs? Gustavo Homem, CTO of Ângulo Sólido, assures that Ubuntu is increasingly being used by companies, and that in the cloud (Azure, AWS and DigitalOcean) it is among the most used operating systems, highlighting its ease of use, flexibility and security.

Ângulo Sólido uses Ubuntu internally and with its clients, from desktops to routers and cloud solutions, and during Ubucon it presented the more and the less expected uses for Ubuntu, including some hacks with mixing desks.

It is in the cloud that Ubuntu has grown the most, thanks to the freedom of the operating system; on desktops and laptops it depends on manufacturers’ willingness to sell devices with a pre-installed operating system, or without any, leaving room for Ubuntu.

However, even though it is easy to use, increasingly prepared to connect to all kinds of peripherals and supports most of the software on the market, Ubuntu is far from being recognised by the majority of computer users, so its use is reserved to a restricted group of people with more technical training and knowledge.

On mobile phones, where in 2014 there was a movement to create an operating system that could be an alternative to Android and iOS, the abandonment of the project by Canonical didn’t help create a mass movement involving manufacturers. The UBports community continues developing the concept and the code, and during Ubucon it showed some news and developments with Fairphone and Pine64, but it is still far from becoming a solid operating system that you can fully trust, as Jan Sprinz admitted.

In the audience of the talk which SAPO TEK attended, there were many users of Ubuntu Touch, the mobile operating system, but with doubts and concerns, such as the availability of the most used apps. Nevertheless, the operating system is cherished, and there was even someone comparing it to a pet, which may wreck the living room and chew the shoes, but the owner never stops loving it.

How do you do an Ubucon?

“We wanted to make a memorable Ubucon”, explains Tiago Carrondo, the face of the organisation, who during the last few months dedicated much of his time to preparing all the logistics, as part of a very small but very committed team, as he told SAPO TEK.

The European event is now in its 4th edition. It arose spontaneously inside the community, and after Germany (Essen), France (Paris) and Spain (Xixón), Portugal is the 4th country hosting the community, with the purpose of “having an Ubucon without rain”. From here, the community moves in 2020 to a new location, which should be revealed this week but is still a well-kept secret.

Characterising Ubuntu Portugal as a community of people, Tiago Carrondo explains that companies are “friends”, appearing as associates and sponsors of the event, which also has connections with educational institutions.

The centre of the organisation and the purpose of Ubucon are the people, so there is a very big social component, allowing volunteers who work on Ubuntu projects throughout the year to meet face to face and share experiences and knowledge. For that reason, the schedule was designed to start a little later than usual, around 10 am, and to finish early, with a long break for lunch.

The conference ends tomorrow, but those who want to attend the last presentations at the Olga Cadaval Cultural Centre in Sintra can still do so, either by registering or by simply showing up at the venue, because the organisation’s policy is open doors and respect for privacy.

Those who didn’t have the chance to attend will be able to watch everything on video over the next few weeks. Tiago Carrondo explains that they didn’t want to stream the event, but everything is being recorded, will be edited and will be available soon.

on October 13, 2019 09:06 PM

October 11, 2019

Adoption of edge computing is taking hold as organisations realise the need for highly distributed applications, services and data at the extremes of a network. Whereas data historically travelled back to a centralised location, data processing can now occur locally, allowing for real-time analytics, improved connectivity and reduced latency, and ushering in the ability to harness newer technologies that thrive in the micro data centre environment.

In an earlier post, we discussed the importance of choosing the right primitives for edge computing services. When looking at use-cases calling for ultra-low latency compute, Kubernetes and containers running on bare metal are ideal for edge deployments because they offer direct access to the kernel, workload portability, easy upgrades and a wide selection of possible CNI choices.

While offering clear advantages, setting up Kubernetes for edge workload development can be a difficult task – time and effort better spent on actual development. The steps below walk you through an end-to-end deployment of a sample edge application. The application runs on top of Kubernetes with advanced latency budget optimization. The deployed architecture includes Ubuntu 18.04 as the host operating system, Kubernetes v1.15.3 (MicroK8s) on bare metal, the MetalLB load balancer and CoreDNS to serve external requests.

Let’s roll

Summary of steps:

  1. Install MicroK8s
  2. Add MetalLB
  3. Add a simple service – CoreDNS

Step 1: Install MicroK8s

Let’s start with the development workstation Kubernetes deployment using MicroK8s by pulling the latest stable edition of Kubernetes.

$ sudo snap install microk8s --classic
microk8s v1.15.3 from Canonical✓ installed
$ snap list microk8s
Name      Version  Rev  Tracking  Publisher   Notes
microk8s  v1.15.3  826  stable    canonical✓  classic

Step 2: Add MetalLB

As I’m deploying Kubernetes on the bare metal node, I chose to utilise MetalLB, as I won’t be able to rely on the cloud to provide LBaaS service. MetalLB is a fascinating project supporting both L2 and BGP modes of operation, and depending on your use case, it might just be the thing for your bare metal development needs. 

$ microk8s.kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
namespace/metallb-system created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
daemonset.apps/speaker created
deployment.apps/controller created

Once installed, make sure to update the iptables configuration to allow IP forwarding, and configure MetalLB with a networking mode and the address pool you want to use for load balancing. The config file needs to be created manually; see Listing 1 below for reference.

$ sudo iptables -P FORWARD ACCEPT

Listing 1 : MetalLB configuration (metallb-config.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - <your address range>

Step 3: Add a simple service

Now that you have your config file ready, you can continue with the CoreDNS sample workload configuration. Especially for edge use cases, you usually want fine-grained control over how your application is exposed to the rest of the world. This includes ports as well as the actual IP address you would like to request from your load balancer. For the purpose of this exercise, I use a .35 IP address from the subnet and create a Kubernetes service using this IP.

Listing 2: CoreDNS external service definition (coredns-service.yaml)

apiVersion: v1
kind: Service
metadata:
  name: coredns
spec:
  ports:
  - name: coredns
    port: 53
    protocol: UDP
    targetPort: 53
  selector:
    app: coredns
  type: LoadBalancer
  loadBalancerIP: <your requested address>

For the workload configuration itself, I use a simple DNS cache configuration with logging and forwarding to Google’s open resolver service.

Listing 3: CoreDNS ConfigMap (coredns-configmap.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
data:
  Corefile: |
    .:53 {
        log
        cache
        forward . 8.8.8.8
    }

Finally, here is the description of our Kubernetes deployment, calling for 3 workload replicas, the latest CoreDNS image and the configuration defined in our ConfigMap.

Listing 4: CoreDNS Deployment definition  (coredns-deployment.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns-deployment
  labels:
    app: coredns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: coredns
  template:
    metadata:
      labels:
        app: coredns
    spec:
      containers:
      - name: coredns
        image: coredns/coredns:latest
        imagePullPolicy: IfNotPresent
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile


With all the service components defined, prepared and configured, you’re ready to start the actual deployment and verify the status of Kubernetes pods and services.

$ microk8s.kubectl apply -f metallb-config.yaml 
configmap/config created
$ microk8s.kubectl apply -f coredns-service.yaml
service/coredns created
$ microk8s.kubectl apply -f coredns-config.yaml
configmap/coredns created
$ microk8s.kubectl apply -f coredns-deployment.yaml
deployment.apps/coredns-deployment created
$ microk8s.kubectl get po,svc --all-namespaces
NAMESPACE        NAME                                   READY   STATUS    RESTARTS   AGE
default          pod/coredns-deployment-9f8664bfb-kgn7b 1/1     Running   0          10s
default          pod/coredns-deployment-9f8664bfb-lcrfc 1/1     Running   0          10s
default          pod/coredns-deployment-9f8664bfb-n4ht6 1/1     Running   0          10s
metallb-system   pod/controller-7cc9c87cfb-bsrwx        1/1     Running   0          4h8m
metallb-system   pod/speaker-s9zz7                      1/1     Running   0          4h8m

NAMESPACE        NAME                 TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
default          service/coredns      LoadBalancer                              53:31338/UDP   34m
default          service/kubernetes   ClusterIP                                 443/TCP        4h29m

Once all the containers are fully operational, you can evaluate how your new end-to-end service is performing. As you can see, the very first request takes around 50 ms to get answered (which aligns with the usual latency between my ISP’s access network and Google’s DNS infrastructure); subsequent requests, however, show a significant latency reduction, as expected from a local DNS caching instance.

$ host -a www.ubuntu.com
Trying "www.ubuntu.com"
Using domain server:
Received 288 bytes from in 50 ms
$ host -a www.ubuntu.com
Trying "www.ubuntu.com"
Using domain server:
Received 288 bytes from in 0 ms
$ host -a www.ubuntu.com
Trying "www.ubuntu.com"
Using domain server:
Received 288 bytes from in 1 ms

CoreDNS is an example of a simple use case for distributed edge computing, showing how network distance and latency can be optimised for a better user experience by changing service proximity. The same rules apply to exciting services such as AR/VR, GPGPU-based AI inference and content distribution networks.

The choice of proper technological primitives, the flexibility to manage your infrastructure to meet service requirements, and processes to manage distributed edge resources at scale will become critical factors for edge cloud adoption. This is where MicroK8s comes in: to reduce the complexity and cost of development and deployment without sacrificing quality.

End Note

So you’ve just on-boarded an edge application – now what? Take MicroK8s for a spin with your use case(s), or just try to break stuff. If you’d like to contribute or request features and enhancements, please shout out on our GitHub, Slack (#microk8s) or the Kubernetes forum.

on October 11, 2019 10:14 PM

We are happy to let everyone know that the Community DevRoom will be held this year at the FOSDEM Conference. FOSDEM is the premier free and open source software event in Europe, taking place in Brussels from 1-2 February 2020 at the Université libre de Bruxelles. You can learn more about the conference at https://fosdem.org.

== tl;dr ==

  • Community DevRoom takes place on Sunday, 2nd February 2020
  • Submit your papers via the conference abstract submission system, Pentabarf, at https://penta.fosdem.org/submission/FOSDEM20
  • Indicate if your session will run for 30 or 45 minutes, including Q&A. If you can do either 30 or 45 minutes, please let us know!
  • Submission deadline is 27 November 2019 and accepted speakers will be notified by 11 December 2019
  • If you need to get in touch with the organizers or program committee of the Community DevRoom, email us at community-devroom@lists.fosdem.org


The Community DevRoom will take place on Sunday 2nd February 2020.

Our goals in running this DevRoom are to:

* Connect folks interested in nurturing their communities with one another so they can share knowledge during and long after FOSDEM

* Educate those who are primarily software developers on community-oriented topics that are vital in the process of software development, e.g. effective collaboration

* Provide concrete advice on dealing with squishy human problems

* Unpack preconceived ideas of what community is and the role it plays in human society, free software, and a corporate-dominated world in 2020

We seek proposals on all aspects of creating and nurturing communities for free software projects.

Here are some topics we are interested in hearing more about this year:


1) Is there any real role for community in corporate software projects?

Can you create a healthy and active community while still meeting the needs of your employer? How can you maintain an open dialog with your users and/or contributors when you have the need to keep company business confidential? Is it even possible to build an authentic community around a company-based open source project? Have we completely lost sight of the ideals of community and simply transformed that word to mean “interested sales prospects?”


2) Creating Sustainable Communities

With the increased focus on the impact of short-term and self-interested thinking on both our planet and our free software projects, we would like to explore ways to create authentic, valuable, and lasting community in a way that best respects our world and each other.  We would like to hear from folks about how to support community building in person in sustainable ways, how to build community effectively online in the YouTube/Instagram era, and how to encourage corporations to participate in community processes in a way that does not simply extract value from contributors. If you have recommendations or case studies on how to make this happen, we very much want to hear from you.


We are particularly interested to hear about academic research into FOSS Sustainability and/or commercial endeavors set up to address this topic.


3) Bringing free software to the GitHub generation

Those of us who have been in the free and open source software world for a long time remember when the coolest thing you could do was move from CVS to SVN, Slack ended in “ware”, IRC was where you talked to your friends instead of IRL (except now no one talks in IRL anyway, just texts), and Twitter was something that birds did. Here we are in 2020, and clearly things have changed.

How can we bring more young participants into free software communities? How do we teach the importance of free software values in an era where freely-available code is ubiquitous? Will the ethical underpinnings of free software attract millennials and Gen Z to participate in our communities, when our free software tends to require lots of free time?

We promise we are not cranky old fuddy duddies. Seriously. It’s important to us that the valuable experiences we had in our younger days working in the free software community are available to everyone. And we want to know how to get there.


4) Applying the principles of building free software communities to other endeavors

What can the lessons about decentralization, open access, open licensing, and community engagement teach us as we address the great issues of our day? We have left this topic loosely defined because we would like people to bring whatever truth they have to the question. Great talks in this category could be anything from “why to never start a business in Silicon Valley” to “working from home is great and keeps CO2 out of the air.” Let your imagination take you far – we are excited to hear from you.


5)  How can free software protect the vulnerable

At a time when some of the best accessibility features are built as proprietary products, and when surveillance and predictive policing lead to the persecution of dissidents and the imprisonment of those deemed guilty before being proven innocent, how can we use free software to protect the vulnerable? What sort of lobbying efforts would be required to ensure that free software – and therefore fully auditable code – becomes a civic requirement? How do we, as individuals and as actors at our employers, campaign for the protection of vulnerable people – and other living things – as part of our mission of software freedom?


6) Conflict resolution

How do we continue working well together when there are conflicts? Is there a difference in how types of conflicts best get resolved, e.g. “this code is terrible” vs. “we should have a contributor agreement”? We are especially interested in how-tos and success stories from projects that have weathered conflict.

We are almost at 2020 and this issue still comes up semi-daily. Let’s share our collective wisdom on how to make conflict less painful and more productive.


Again, these are just suggestions. We welcome proposals on any aspect of community building!

We are looking for talk submissions between 30 and 45 minutes in length, including time for Q&A. In general, we are hoping to accept as many talks as possible so we would really appreciate it if you could make all of your remarks in 30 minutes – our DevRoom is only a single day –  but if you need longer just let us know.


Beyond giving us your speaker bio and paper abstract, make sure to let us know anything else you’d like to as part of your submission. Some folks like to share their Twitter handles, others like to make sure we can take a look at their GitHub activity history – whatever works for you. We especially welcome videos of you speaking elsewhere, or even just a list of talks you have done previously. First time speakers are, of course, welcome!



  1. CFP opens 11 October 2019
  2. Proposals due in Pentabarf 27 November 2019
  3. Speakers notified by 11 December 2019
  4. DevRoom takes place 2 February 2020 at FOSDEM

Community DevRoom Mailing List: community-devroom@lists.fosdem.org


on October 11, 2019 10:26 AM

October 10, 2019

S12E27 – Exile

Ubuntu Podcast from the UK LoCo

This week we’ve been playing LEGO Worlds and tinkering with Thinkpads. We round up the news and goings on from the Ubuntu community, introduce a new segment, share some events and discuss our news picks from the tech world.

It’s Season 12 Episode 27 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on October 10, 2019 02:00 PM

We have recently announced that we are transitioning the Chromium deb package to the snap in Ubuntu 19.10. Such a transition is not trivial, and there have been many constructive discussions around it, so here we are summarising why we are doing this, how, and the timeline.


Chromium is a very popular web browser, the fully open source counterpart to Google Chrome. On Ubuntu, Chromium is not the default browser, and the package resides in the ‘universe’ section of the archive. Universe contains community-maintained software packages. Despite that, the Ubuntu Desktop Team is committed to packaging and maintaining Chromium because a significant number of users rely on it. 

Maintaining a single release of Chromium is a significant time investment for the Ubuntu Desktop Team working with the Ubuntu Security team to deliver updates to each stable release. As the teams support numerous stable releases of Ubuntu, the amount of work is compounded.

Comparing this workload to other Linux distributions which have a single supported rolling release misses the nuance of supporting multiple Long Term Support (LTS) and non-LTS releases.

Google releases a new major version of Chromium every six weeks, with typically several minor versions to address security vulnerabilities in between. Every new stable version has to be built for each supported Ubuntu release − 16.04, 18.04, 19.04 and the upcoming 19.10 − and for all supported architectures (amd64, i386, armhf, arm64).

Additionally, ensuring Chromium even builds (let alone runs) on older releases such as 16.04 can be challenging, as the upstream project often uses new compiler features that are not available on older releases. 

In contrast, a snap needs to be built only once per architecture, and will run on all systems that support snapd. This covers all supported Ubuntu releases including 14.04 with Extended Security Maintenance (ESM), as well as other distributions like Debian, Fedora, Mint, and Manjaro.

While this change in packaging for Chromium can allow us to focus developer resources elsewhere, there are additional benefits that packaging as a snap can deliver. Channels in the Snap Store enable publishing multiple versions of Chromium easily under one name. Users can switch between channels to test different versions of the browser. The Snap Store delivers snaps automatically in the background, so users can be confident they’re running up to date software without having to manually manage their updates. We can also publish specific fixes quickly via branches in the Snap Store enabling a fast user & developer turnaround of bug reports. Finally the Chromium snap is strictly confined, which provides additional security assurances for users.

In summary: there are several factors that make Chromium a good candidate to be transitioned to a snap:

  • It’s not the default browser in Ubuntu so has lower impact by virtue of having a smaller user-base
  • Snaps are explicitly designed to support a high frequency of stable updates
  • The upstream project has three release channels (stable, beta, dev) that map nicely to snapd’s default channels (stable, beta, edge). This enables users to easily switch release of Chromium, or indeed have multiple versions installed in parallel
  • Having the application strictly confined is an added security layer on top of the browser’s already-robust sandboxing mechanism
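To make the channel mapping concrete, switching a machine between upstream Chromium releases is a single command per channel (shown here purely as an illustration; the channel names follow snapd’s defaults):

snap info chromium
snap refresh chromium --channel=beta
snap refresh chromium --channel=stable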


The first release of the Chromium snap happened two years ago, and we’ve come a long way since then. The snap currently has more than 200k users across Ubuntu and more than 30 other Linux distributions. The current version has a few minor issues that we’re working hard to address, but we felt it’s solid and mature enough for a transition. We feel confident that it is time to start transitioning users of the development release (19.10) of Ubuntu to it. We are eager to collect feedback on what works and what doesn’t ahead of the next Long Term Support release of Ubuntu, 20.04.

In 19.10, the chromium-browser deb package (and related packages) have been made a transitional package that contains only wrapper scripts and a desktop file for backwards compatibility. When upgrading or installing the deb package on 19.10, the snap will be downloaded from the Snap Store and installed. 

Special care has been taken to not break existing workflows and to make the transition as seamless as possible:

  • When running the snap for the first time, an existing Chromium user profile in $HOME/.config/chromium will be imported (provided there is enough disk space)
  • The chromium-browser and chromedriver executables in /usr/bin/ are wrappers that call into the respective snap executables
  • chromedriver has been patched so that existing selenium scripts should keep working without modifications
  • If the user has set Chromium as the default browser, the chromium-browser wrapper will take care of updating it to the Chromium snap
  • Similarly, existing pinned entries in desktop launchers will be updated to point to the snap version (implemented for GNOME Shell and Unity only for now, contributions welcome for other desktop environments)
  • The apport hook has been updated to include relevant information about the snap package and its dependencies


If you’re experimenting with Ubuntu 19.10 then you can try Chromium as a snap and test the transition from the deb package right now. However, you don’t need to wait until the release on the 17th of October to start using the snap and sharing your feedback. Simply run the following commands to be up and running:

snap install chromium
snap run chromium

Once 19.10 is released, we will carefully consider extending the transition to other stable releases, starting with 19.04. This won’t happen until all the important known issues are addressed, of course.

Now is the perfect time to put the snap to the test and report issues and regressions you encounter.

We appreciate all the feedback and commentary we’ve been sent over the last few months as we announced this project. We honestly believe delivering applications as snaps provides significant advantages both to developers and users. We know there may be some rough edges as we work towards the future and will continue to listen to our users as we chart this new journey.

on October 10, 2019 09:12 AM

October 09, 2019

When I purchased my Raspberry Pi 4 I kind of expected it to operate under similar conditions as all the former Pis I owned …

So I created an Ubuntu Core image for it (you can find info about this at Support for Raspberry Pi 4 on the snapcraft forum)

Running LXD on this image off a USB 3.1 SSD to build snap packages (it is faster than the Ubuntu Launchpad builders used for build.snapcraft.io, so a pretty good device for local development), I quickly noticed the device throttles a lot once it gets a little warmer, so I decided I needed a fan.

I ordered this particular set on Amazon and dug up a circuit to be able to run the fan at 5V without putting too much load on the GPIO managing the fan state … luckily my “old parts box” still had a spare BC547 transistor and a 1k resistor that I could use, so I created the following addon board:

Finished addon board (with a picture of how it gets attached)

So now I had an addon board that can cool the CPU, but the fan needs some controlling software. This is easily done with a small shell script that echoes 0 or 1 into /sys/class/gpio/gpio14/value … the script can be found on my GitHub account as fancontrol.sh
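The actual script lives on GitHub, but as a rough sketch of the kind of logic it implements (GPIO 14 and the 50-degree threshold come from this post; the poll interval and the parameterised file paths are mine, added so the logic can be exercised without real hardware), it boils down to something like:

```shell
#!/bin/sh
# Illustrative sketch only -- the real fancontrol.sh is on GitHub.
# Read the SoC temperature and switch the fan GPIO above a threshold.
TEMP_FILE=${TEMP_FILE:-/sys/class/thermal/thermal_zone0/temp}  # millidegrees C
GPIO_VALUE=${GPIO_VALUE:-/sys/class/gpio/gpio14/value}         # 1 = fan on
THRESHOLD=${THRESHOLD:-50000}                                  # 50 degrees C

check_fan() {
    temp=$(cat "$TEMP_FILE")
    if [ "$temp" -gt "$THRESHOLD" ]; then
        echo 1 > "$GPIO_VALUE"   # turn the fan on
    else
        echo 0 > "$GPIO_VALUE"   # turn the fan off
    fi
}

# As a daemon, the script would poll in a loop, e.g.:
#   while true; do check_fan; sleep 5; done
```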

Since we run Ubuntu Core, we of course want to run the whole thing as a snap package, so let’s quickly create a snapcraft.yaml file for it:

name: pi-fancontrol
base: core18
version: '0.1'
summary: Control a raspberry pi fan attached to GPIO 14
description: |
  Control a fan attached to a GPIO via NPN transistor
  (defaults to GPIO 14 (pin 8))

grade: stable
confinement: strict

architectures:
  - build-on: armhf
    run-on: armhf
  - build-on: arm64
    run-on: arm64

apps:
  pi-fancontrol:
    command: fancontrol.sh
    daemon: simple
    plugs:
      - gpio
      - hardware-observe

parts:
  fancontrol:
    plugin: nil
    source: .
    override-build: |
      cp -av fancontrol.sh $SNAPCRAFT_PART_INSTALL/

The image is based on core18, so we add a base: core18 entry. The snap is very specific to the Raspberry Pi, so we also add an architectures: block that makes it build and run only on arm images. Then we need a very simple apps: entry that spawns the script as a daemon, allows it to read temperature info via the hardware-observe interface, and allows it to write to whichever gpio interface the snap is connected to, echoing the 0/1 values into the GPIO’s sysfs node. Add a simple fancontrol part that just copies the script into the snap package, and off we go!

The whole code for the pi-fancontrol snap can be found on github and there is indeed a ready made snap for you to use in the snap store at https://snapcraft.io/pi-fancontrol

You can easily install it with:

snap install pi-fancontrol
snap connect pi-fancontrol:gpio pi4-devel:bcm-gpio-14
snap connect pi-fancontrol:hardware-observe

… and your fan should start to fire up every time your CPU temperature goes above 50 degrees….

on October 09, 2019 03:10 PM

October 08, 2019

Full Circle Weekly News #148

Full Circle Magazine

Huawei Linux Laptops Running Deepin Linux Now Available
Linux 5.3 Kernel Bundles New Navi Graphics Support
Announcing the new IBM LinuxONE III with Ubuntu
GhostBSD 19.09 Now Available


Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust



on October 08, 2019 05:23 PM

October 07, 2019

Welcome to the Ubuntu Weekly Newsletter, Issue 599 for the week of September 29 – October 5, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on October 07, 2019 10:05 PM

In recent years, developers have become an increasingly important audience for organizations to build relationships with. Not only are developers actively building technology, but they are also often helping to shape decisions inside of businesses that cover product, awareness, and beyond.

As such, Developer Relations has become an increasing focus for many organizations. How though, do you build real relationships with developers?

Mary Thengvall has been actively involved in Developer Relations for a number of years in her experience at O’Reilly, Chef, Sparkpost, and as an independent consultant. She is the author of The Business Value Of Developer Relations and maintains DevRel Weekly.

In this episode of Conversations With Bacon, we unpick what Developer Relations is, Mary’s ascent in the industry, and how this work can and should be integrated into a business. Mary also shares her perspectives on what success looks like, how technical DevRel people should be, where this work should ideally report, and much more.

A really fascinating discussion and well worth a listen!


   Listen on Google Play Music

The post Mary Thengvall on Developer Relations, Reporting, and Growth appeared first on Jono Bacon.

on October 07, 2019 09:07 PM

October 06, 2019

Welcome to Sintra at UbuconEU19 (Pre Ubucon activities)

If you registered to make any or all the visits please check the details below:

Before reading further:

  • Some visits were changed, edited, or removed; this is the final plan;
  • When you arrive you should always look for someone (Jaime Pereira) with an Ubucon Europe 2019’s poster;
  • You will receive an ID at the beginning of each visit, at the end don’t forget to return the ID to Jaime;
  • Water, some food and comfortable shoes are almost mandatory;
  • For any issue, please contact Jaime:

October 7th – National Palace of Sintra
Meeting spot is at 9:00 AM; we will be waiting for you at the main door of the National Palace, also known as the “Palace of the Village” (above the staircase).

October 7th – Quinta da Regaleira
Meeting spot is at 2:45 PM; we will be waiting for you at the main gate of the “Quinta da Regaleira”.

October 8th – Park and National Palace of Pena
Meeting spot is at 9:00 AM; we should take bus 434 in front of Sintra train station.

October 8th – Countess of Edla Chalet
After visiting the “Pena Palace” and continuing in the Park, we will visit the “Countess of Edla’s Chalet”.
If you only registered to visit the Chalet:
Meeting spot is at 11:30 AM in front of the chalet.

October 9th – Monserrate Park and Palace
Meeting spot is at 9:15 AM; we should take bus 435 in front of Sintra train station.

Thank you and see you there.

on October 06, 2019 01:54 PM

October 03, 2019

S12E26 – Interstate ’76

Ubuntu Podcast from the UK LoCo

This week we’ve been tourists in our home town, we review the Dell Precision 3540 Developer Edition laptop, bring you some command line love and go over all your wonderful feedback.

It’s Season 12 Episode 26 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on October 03, 2019 02:00 PM

Situational Clarification

Stephen Michael Kellat

Yes, that was me reappearing after an absence of just over five years on IRC. Yes, I did in fact utter an old-time TV catch phrase that should be recognizable.

No, I’m certainly not answering the phone now. That is to say, I am not answering the phone on behalf of others now. It is not as if it is a calamity but a change had to come about. Goodbyes were said but were quieter than what others might engage in.

Change begins. It seems a bit late to contribute to the Eoan cycle. As to the F cycle, I am back in a position I haven’t been in for a few years. This could be interesting, I suppose.

I do have some film editing I eventually have to get finished too, I suppose…

on October 03, 2019 03:47 AM

October 02, 2019

This is a small post about how to use inventory plugins in Ansible. If you are looking for the script way, I recommend reading this article: http://gloriasilveira.com/setting-up-ansible-for-aws-with-dynamic-inventory-ec2.html
It explains this really well, or you can watch this video: https://www.youtube.com/watch?v=LnbqO1kTPqE&t=6s
But if you’re looking to use inventory plugins, this article can help you.
First of all, why should I use inventory plugins if all over the internet everyone is using the Python scripts?
Well, Ansible recommends it:
Inventory plugins take advantage of the most recent updates to Ansible’s core code. We recommend plugins over scripts for dynamic inventory. You can write your own plugin to connect to additional dynamic inventory sources. https://docs.ansible.com/ansible/latest/user_guide/intro_dynamic_inventory.html
The actual Ansible guide is quite good, but there was a step that got me confused; probably my English isn’t so good and I didn’t understand it.
We need to enable the plugin. We have two ways of doing this: we could edit the ansible.cfg file located in /etc/ansible/ansible.cfg, or the one in the local folder where you’re working.
Ansible Documentation
According to Ansible, you need to enable the plugins like in the following line, but I was killing myself and couldn’t make it work, so what’s the correct way of doing it?
enable_plugins = host_list, script, auto, yaml, ini, toml
The way it works for me
I’m working with the AWS dynamic inventory. According to the documentation, the file name has to end in aws_ec2.(yml|yaml).
So I need to add aws_ec2 to the enabled plugins:
enable_plugins = aws_ec2, host_list, yaml, ini, script
After that, following the documentation is quite easy.
File name: demo.aws_ec2.yml
# Minimal example using environment vars or instance role credentials
# Fetch all hosts in us-east-1, the hostname is the public DNS if it exists, otherwise the private IP address
plugin: aws_ec2
regions:
  - us-east-1
If you need to run it:
ansible-inventory -i demo.aws_ec2.yml --graph
If you need to use it in a playbook, passing a private key as a parameter:
ansible-playbook -i demo.aws_ec2.yml playbook.yaml --private-key KEY
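For reference, the aws_ec2 plugin supports a few more keys than the minimal example; the following variant (my own illustration, not from the documentation step above) groups hosts by their tags and filters to running instances:

```yaml
plugin: aws_ec2
regions:
  - us-east-1
# build groups from each instance's tags, e.g. tag_Environment_prod
keyed_groups:
  - prefix: tag
    key: tags
# only include instances that are currently running
filters:
  instance-state-name: running
```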
on October 02, 2019 11:43 PM

October 01, 2019

Full Circle Weekly News #147

Full Circle Magazine

Thousands Of Linux Servers Infected By Lilu (Lilocked) Ransomware
KaOS 2019.09 Linux Distro Released with KDE Plasma 5.16.5 and Linux Kernel 5.2
Manjaro Is Taking the Next Step

Manjaro 18.1.0 – Juhraya finally released!

LXLE 18.04.3 Linux OS Released

Ubuntu’s Snapcraft Updated to 3.8

Canonical Fixes Linux 4.15 Kernel Regression in Ubuntu 18.04 LTS and 16.04 LTS

Ubuntu 19.10 Promises More Boot Speed Improvements


Ubuntu “Complete” sound: Canonical


Theme Music: From The Dust – Stardust



on October 01, 2019 05:23 PM

September 30, 2019

Welcome to the Ubuntu Weekly Newsletter, Issue 598 for the week of September 22 – 28, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on September 30, 2019 09:56 PM

Akademy! 2019 Edition

Scarlett Gately Moore

KDE Akademy 2019


I am happy to report yet another successful KDE Akademy! This will make my 5th Akademy 🙂 This year Akademy was held in beautiful Milan, Italy. As usual we had so many great talks, and you can read all about them here:


My trip was shortened again due to flight availability, but I still got in some great BoF sessions. We were able to achieve some tasks and goals with the Fundraising Working Group. I hung out with the Neon team for a bit, and it was decided that I will continue the Debian merge and keep the delta between Debian and Neon as minimal as possible. This helps all deb-based distributions in the end. I was also happy to see snaps are coming along nicely! There was a great BoF on user support, where we discussed trying to get users connected with the people that can answer questions. I believe we landed on Discourse; we are now at the technical stage of making that happen.

The core of what makes Akademy so important is the networking of course. I was able to see many old friends and meet many new ones. I was so happy to see so many new faces this year! With each year our bunch has become more and more diverse, which is always a good thing. Face to face collaboration is very important in an environment where we mostly see text all day.

Until next year! Happy hacking and see you all around in the interwebs.


P.S. Stay tuned and I will have another post with everything I have been up to in the last year.


on September 30, 2019 04:12 PM

It’s been a busy month on a personal level so there’s a bunch of my Debian projects that have been stagnant this month, I hope to fix that over October/November.

Upload sponsoring: This month, when sponsoring package uploads for Debian, I prioritised Python team uploads above mentors.debian.net uploads (where I usually spend my reviewing attention). The Python 2 deprecation is turning out to be a lot of work so I think the Python team can do with a lot more support from everyone at this point.

DebConf: I resigned from the DebConf Committee; I might consider joining again if a position opens up in the future. I’m not going to DC20, so it seems like a good time to cut back a bit and focus more on my technical projects. I’ll still be involved in the DebConf team. Over the next DebConf cycle I’ll still be involved in bursaries, and I want to cover a whole bunch of documentation and policy improvements that are sorely needed. I also want to finish up the ToeTally integration with Voctomix for the video team and hopefully try it out at a MiniDebConf within the next year.

Debian Live: calamares-settings-debian has been updated for bullseye, although as of this time we don’t have new images available with that yet. I started looking into the vmdebootstrap deprecation; it’s going to be more work than I originally thought, so there’s a good possibility we might be switching to FAI for generating live images. I have a script called debmower that works ok and creates good images, but it’s a somewhat hacky shell script, and if I ever had the time to rewrite it in Python I might propose that too. Unfortunately, finding the time to maintain more things is hard, so I think FAI is the way to go. Isabelle Simpkins created testing artwork so that Debian testing images are easier to differentiate from the last stable release. These will be replaced in Debian as soon as the next release artwork is available.

Activity log:

2019-09-09: Upload package gdisk (1.0.4-2) to debian unstable (Adopting package, closes #939421).

2019-09-09: Upload package calamares (3.2.13-1) to debian unstable.

2019-09-09: Upload package gnome-shell-extension-dash-to-panel (23-1) to debian unstable.

2019-09-09: Upload package toot (0.23.1) to debian unstable.

2019-09-09: File upstream bug for toot crash when launching in tui mode (Toot #124).

2019-09-10: Upload package bluefish (2.2.10-2) to debian unstable (Adopting package, Closes: #922891, #936220).

2019-09-10: Seek feedback on bugs #844449, #852733.

2019-09-11: File removal of pythonqt from debian unstable (BTS: #940025).

2019-09-11: Orphan package golang-gopkg-flosch-pongo2.v3 (BTS: #940030).

2019-09-16: Upload package python3-aniso8601 (8.0.0-1) to debian unstable.

2019-09-16: Upload package gnome-shell-extension-remove-dropdown-arrows (12-1) to debian unstable.

2019-09-16: Upload package bluefish (2.10-3) to debian unstable.

2019-09-16: Upload package gnome-shell-extension-move-clock (1.01-2) to debian unstable.

2019-09-16: Upload package tanglet (1.5.4-2) to debian unstable.

2019-09-16: Upload package gdisk (1.0.4-3) to debian unstable.

2019-09-16: Upload package tetzle (2.1.4+dfsg1-3) to debian unstable.

2019-09-16: Upload package bcachefs-tools (0.1+git20190829.aa2a42b-1~exp1) to debian unstable.

2019-09-16: Review package python-flask-jwt-extended (3.21.0-1) (needs some work) (mentors.debian.net request).

2019-09-16: Sponsor package flask-jwt-simple (0.0.3-1) for debian unstable (mentors.debian.net request, RFS: #940102).

2019-09-16: Sponsor package python3-fastentrypoints (0.12-1) for debian experimental (mentors.debian.net request, RFS: #934054).

2019-09-16: Sponsor package python3-netsnmpagent (0.6.0-1) for debian experimental (mentors.debian.net request, RFS: #934056).

2019-09-16: Review package pydevd (1.6.1+git20190712.1267523+dfsg) (mentors.debian.net request), recommend that another reviewer give it a second pass.

2019-09-16: Sponsor package python3-aiosqlite (0.10.0-1) for debian unstable (mentors.debian.net request, RFS: #927702).

2019-09-16: Upload package python3-flask-silk (0.2-14) to debian unstable.

2019-09-16: Sponsor package membernator (1.0.1-1) for debian unstable (Python team request).

2019-09-16: Sponsor package cosmiq (1.6.0-1) for debian unstable (mentors.debian.net request).

2019-09-16: Sponsor package micropython (1.11-1) for debian unstable (mentors.debian.net request, RFS: #939189).

2019-09-16: Sponsor package oomd (0.1.0-1) for debian unstable (mentors.debian.net request, RFS: #939096).

2019-09-16: Sponsor package python3-enc (0.4.0-5) for debian unstable (Python team request).

2019-09-16: Review package pcapy () (needs some more work) (Python team request).

2019-09-16: Review package impacket () (needs some more work) (Python team request).

2019-09-16: Sponsor package python-guizero (1.0.0+dfgs1-1) (Python team request).

2019-09-17: Sponsor package sentry-python (0.9.5-2) for debian unstable (Python team request).

2019-09-17: Sponsor package supysonic (0.4.1-1) for debian unstable (Python team request).

2019-09-17: Sponsor package python3-aiohttp-wsgi (0.8.2-2) for debian unstable (Python team request).

2019-09-17: Sponsor package python3-onedrivesdk (1.1.8-1) for debian experimental (Python team request).

2019-09-17: Review package python3-ptvsd (4.3.0+dfsg-1) (needs some more work) (Python team request).

2019-09-17: Review package python3-flask-jwt-extended (3.21.0-1) (needs some more work) (Python team request).

2019-09-17: Review package python3-pydevd (1.7.1+dfsg-1) (needs some more work) (Python team request).

2019-09-17: Sponsor package python3-bidict (0.18.2-1) for debian unstable (Python team request).

2019-09-18: Upload package python3-enc (0.4.0-4) to debian unstable.

2019-09-18: Sponsor package python3-pydevd (1.7.1+dfsg1) for debian unstable (Python team request).

2019-09-18: Sponsor package python-aiohttp (3.6.0-1) for debian unstable (Python team request).

2019-09-18: Review package py-postgresql (1.2.1+git20180803.ef7b9a9-1) (needs some more work) (Python team request).

2019-09-18: Review package irker (2.18+dfsg-4) (needs some more work) (Python team request).

2019-09-18: Sponsor package py-postgresql (1.2.1+git20180803.ef7b9a9-1) for debian unstable (Python team request).

2019-09-18: Upload package irker (2.18+dfsg-4) to debian unstable (team upload / Python team sponsor request).

2019-09-18: Sponsor package sphinx-autodoc-typehints (1.8.0-1) for debian unstable (Python team request).

2019-09-18: Sponsor package python3-sentry-sdk (0.12.0-1) for debian unstable (Python team request).

2019-09-19: Review package vonsh (1.0) (needs some more work) (mentors.debian.net request).

2019-09-19: Upload package live-tasks (11.0.1) to debian unstable (Closes: #932780, #936953, #934522).

2019-09-19: Upload package python3-flask-autoindex (0.6.2-2) to debian unstable (Closes: #936523).

2019-09-19: Upload package python3-flask-autoindex (0.6.2-3) to debian unstable (Re-opens: #936523).

2019-09-20: Upload package gamemode (1.5~git20190812-107d469-1~exp1) to debian experimental.

2019-09-20: Upload package gnome-shell-extension-remove-dropdown-arrows (13-1) to debian unstable.

2019-09-20: Sponsor package django-sortedm2m (2.0.0dfsg.1-1) for debian experimental (Python team request).

2019-09-20: Sponsor package python3-anosql (1.0.1-1) for debian unstable (Python team request).

2019-09-23: Upload package gnome-shell-extension-disconnect-wifi (21-1~exp1) to debian experimental.

2019-09-23: Upload package toot (0.24.0-1) to debian unstable.

2019-09-23: Upload package gamemode (1.5~git20190812-107d469-1~exp2) to debian experimental.

2019-09-23: Review package python3-pympler () (needs some more work) (Python team request).

2019-09-23: Close previously fixed bug #914044 in tuxpaint.

2019-09-23: Upload package kpmcore (4.0.0-1~exp1) to debian experimental.

2019-09-23: Upload package kpmcore (4.0.0-1~exp2) to debian experimental.

2019-09-25: Sponsor package assaultcube-data for debian unstable (mentors.debian.net request).

2019-09-25: Sponsor package assaultcube for debian unstable (mentors.debian.net request).

2019-09-25: Review package cpupower-gui (0.7.0-1) (needs some more work) (mentors.debian.net request).

2019-09-25: Sponsor package pympler (0.7+dfsg1-1~exp1) for debian experimental (Python team request).

2019-09-25: Sponsor package sentry-python (0.12.2-1) for debian unstable (Python team request).

2019-09-25: Sponsor package python-aiohttp (3.6.1-1) for debian unstable (Python team request).

2019-09-25: Upload package calamares-settings-debian (11.0.1-1) to debian unstable.

2019-09-25: Merge MR#2 for live-wrapper (Debian BTS: #866183).

2019-09-25: File bug #941131 against qa.debian.org (“Make outstanding MRs more visible in DDPO pages”).

2019-09-25: Sponsor package color-theme-modern (0.0.2+4.g42a7926-1) for debian unstable (RFS: #905246) (mentors.debian.net request).

2019-09-26: Sponsor package python3-flask-jwt-extended for debian unstable (RFS:#940075) (mentors.debian.net request).

2019-09-26: Upload package tuxpaint (0.9.24~git20190922-f7d30d-1~exp1) to debian experimental.

2019-09-26: Review package python3-in-toto (0.4.0-1) (needs some more work) (mentors.debian.net request).

2019-09-30: Forward Calamares bug #941301 “write two random seeds to locations for urandom init script and systemd-random-seed service” to upstream bug #1252.

2019-09-30: Sponsor package color-theme-modern (0.0.2+4.g42a7926-1) for debian unstable (RFS: #905246) (mentors.debian.net request).

on September 30, 2019 01:27 PM

September 28, 2019

Are you using Kubuntu 19.04 Disco Dingo, our current Stable release? Or are you already running our development builds of the upcoming 19.10 Eoan Ermine?

We currently have Plasma 5.16.90 (Plasma 5.17 Beta)  available in our Beta PPA for Kubuntu 19.04 and 19.10.

This is a Beta Plasma release, so testers should be aware that bugs and issues may exist.

If you are prepared to test, then…

Add the PPA and then upgrade

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt update && sudo apt full-upgrade -y

Then reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.

IMPORTANT: This is especially required if you plan to upgrade a Disco system to Eoan when it is released, as Eoan will ship with Plasma 5.16 by default. Attempting to upgrade a system to Eoan without first purging the beta PPA will fail, due to the PPA packages being built against (and so requiring) an older version of Qt than is found in Eoan.

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on IRC [1], Telegram [2] or mailing lists [3].
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the release announcement and changelog.

[Test Case]

* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.15.5 or 5.16.5?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.

* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing. e.g. “clock combobox instead of tri-state checkbox for 12/24 hour display.”
– Test the ‘fixed’ functionality.

Testing involves some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.

Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

[1] – irc://irc.freenode.net/kubuntu-devel
[2] – https://t.me/kubuntu_support
[3] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

on September 28, 2019 06:47 AM

September 27, 2019

We are pleased to announce that the beta images for Lubuntu 19.10 have been released! While we have reached the bugfix-only stage of our development cycle, these images are not meant to be used in a production system. We highly recommend joining our development group or our forum to let us know about any issues. […]
on September 27, 2019 07:02 PM

The beta of Eoan Ermine (to become 19.10) has now been released, and is available for download at:


This milestone features images for Kubuntu and other Ubuntu flavours.

Pre-releases of the Eoan Ermine are not encouraged for:

* Anyone needing a stable system
* Anyone who is not comfortable running into occasional, even frequent breakage.

They are, however, recommended for:

* Ubuntu flavor developers
* Those who want to help in testing, reporting, and fixing bugs as we work towards getting this release ready.
* The Beta includes some software updates that are ready for broader testing. However, it is an early set of images, so you should expect some bugs.

You can:

Read more information about the Kubuntu 19.10 Beta:


Read the full text of the main Ubuntu Eoan Beta announcement:


The Ubuntu Eoan Release notes will give more details of changes to the Ubuntu base:


on September 27, 2019 10:28 AM

So, say you somehow find yourself having the following error in the roundcube logs:

[27-Sep-2019 12:44:48 +0000]: <f930f680> PHP Error: SMTP server does not support authentication (POST /?_task=mail&_unlock=loading1569588324419&_framed=1&_lang=es&_action=send)
[27-Sep-2019 12:44:48 +0000]: <f930f680> SMTP Error: Authentication failure: SMTP server does not support authentication (Code: ) in /roundcube/program/lib/Roundcube/rcube.php on line 1674 (POST /correo/?_task=mail&_unlock=loading 1569588324419&_framed=1&_lang=es&_action=send)

You have tried everything, but still can’t seem to be able to send email from roundcube, you keep getting this annoying “SMTP (250) authentication failed” notification, every time you click “Send”.

Well… Make sure that your server is connecting to the right place. It took me a while to realize that roundcube was trying to connect to localhost, but somehow the authentication mechanism stopped working (it was before upgrading).

Since I don’t really want to debug too much today (it’s Friday, after all), and because my configuration/use case is over SSL/TLS, the solution to the problem was simply:

$config['smtp_server'] = 'tls://services.host.co';
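For context, this option lives alongside Roundcube’s other SMTP settings in config/config.inc.php; a typical block looks something like this (the port and credential values here are common defaults for illustration, not taken from this post):

```php
<?php
// config/config.inc.php — SMTP settings (placeholder values except smtp_server)
$config['smtp_server'] = 'tls://services.host.co';
$config['smtp_port']   = 587;
// %u and %p expand to the current IMAP username and password
$config['smtp_user']   = '%u';
$config['smtp_pass']   = '%p';
```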

Et voilà, ma chérie!

It's alive!

on September 27, 2019 12:00 AM

September 26, 2019

The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 19.10, codenamed Eoan Ermine. While this beta is reasonably free of any showstopper CD build or installer bugs, you may find some bugs within. This image is, however, reasonably representative of what you will find when Ubuntu Studio 19.10 is released […]
on September 26, 2019 04:45 PM

September 24, 2019

Ubuntu MATE 19.10 is a significant improvement over Ubuntu MATE 18.04 and 19.04. The theme of this release is to address as many "paper-cut" issues as possible. Every new feature in Ubuntu MATE 19.10 has been added to address bugs or poor user experience. Many long standing paper-cuts are finally resolved. Make yourself a cup of tea ☕ and get a slice of cake 🍰 before reading on to find out what we've been working on for the last 25 weeks.

We are preparing Ubuntu MATE 19.10 (Eoan Ermine) for distribution on October 17th, 2019. With this Beta pre-release, you can see what we are trying out in preparation for our next (stable) version.

What works?

People tell us that Ubuntu MATE is stable. You may, or may not, agree.

Ubuntu MATE Beta Releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Ubuntu MATE Beta Releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Ubuntu MATE, MATE, and GTK+ developers.
Ubuntu MATE 19.10
Ubuntu MATE 19.10 - Harder, Better, Faster, Stronger

Ubuntu MATE 19.10 - paper 🧻 cut ️🔪 release

I have not been completely happy 😞 with the quality of recent Ubuntu MATE releases. All the important stuff works but there have been niggly issues that by themselves are not deal breakers, but in aggregate are frustrating 😠 and spoil the experience. I've been focused on resolving these issues during the 19.10 development cycle and you'll see that every new feature in Ubuntu MATE 19.10 addresses one of these paper-cuts. We've achieved this by expanding our QA team significantly and Ubuntu MATE 19.10 has been subject to weekly testing throughout this cycle. I can't thank our QA team enough for highlighting the issues that need attention.

Most of the paper-cut effort has been focused around the window manager, the panel and the indicators as these are the main touching points of the desktop environment that users interact with.

MATE Desktop 1.22.2

Upstream MATE Desktop recently released 1.22.2. All the updates are present in Ubuntu MATE 19.10 plus I've cherry 🍒 picked a good deal of fixes from MATE Desktop development snapshots. In total, 67 additional patches have been applied to the MATE Desktop packages in Ubuntu MATE 19.10 to finesse this release prior to launch day 🚀 Included in those patches are fixes for locking the screen on resume from suspend, adding a Media Information extension to the file manager, performance improvements for the window manager and cycling external displays using Super + p. All this work has also been submitted to Debian.

Since the final beta we worked on the following:

  • Fixed rendering window controls on HiDPI 🔍 displays.
  • Fixed irregular icon sizes 📏 in MATE Control Center and made them render nicely on HiDPI displays.
  • Fixed Caja 📂 extensions not loading.
  • Fixed mate-power-manager 🔌 so it uses upower-glib get_devices2().
  • Fixed Pluma 🗒 plugins not loading.
  • Fixed a crasher 💣 in MATE Dock Applet due to an Attribute error in adjust_minimise_pos().
  • Fixed a gnome-keyring timeout ⏱ in mate-session-manager.
  • Fixed Codec 🎞 updates in Software Boutique.
  • Updated Advanced MATE Menu ⚙ to use the start-here icon, so all menus are consistent.
  • Updated the Ubuntu MATE Guide ❓
  • Updated the Ubiquity Slideshow 🎭

Window Manager improvements

Marco is the Window Manager for MATE Desktop and in Ubuntu MATE 19.10 it brings a number of new features and fixes.

XPresent support is properly fixed, which means that screen tearing is now a thing of the past and frame times in games 🎮 are further improved. Invisible window corners are finally here! Invisible window corners mean that windows can be easily resized 📏 without having to precisely grab the window corners. HiDPI rendering improvements fix a number of rendering problems that were present in various themes and components; most notably, window controls are now HiDPI aware.

Before 😢
Windows Controls Before
After 😀
Windows Controls After

Alt+Tab navigation makes it possible to traverse the application switcher via keyboard and mouse. We've also cleaned up the window controls by removing the menu button. The menu is still available either by right clicking the window title bar or pressing Alt + Space.

Compiz & Compton

The main reason we've been shipping Compton and Compiz in Ubuntu MATE was to offer a solution to screen tearing or to improve game performance. Compiz has invisible window borders and also has a great screen magnifier suitable for visually impaired users. However, now that...

  • Magnus (see below) provides screen magnification
  • Marco supports invisible window borders
  • Marco has improved Alt+Tab behaviour
  • Marco is free from screen tearing
  • Marco frame performance when gaming is further improved
  • Using Compton and Compiz with MATE Desktop introduces other bugs and integration issues

...it is time to remove Compiz and Compton from the default Ubuntu MATE install. The fundamental reasons for including them no longer exist.

If you love 😍 Compiz, it can be installed by opening a terminal and running the following command:

sudo apt install compiz compiz-core compiz-mate compiz-plugins compiz-plugins-default

Only having one window manager to target means we can promptly deliver new features and minimise development effort. Which brings us to...

New Key-bindings

The key-bindings for window tiling have only worked on full keyboards ⌨️ with a 10-key pad. Few laptops 💻 have a 10-key pad and not all keyboards have a 10-key either. There are some well known key-bindings from other platforms that were not recognised in Ubuntu MATE. So, we've had a think 🤔 and come up with this:

  • Maximise Window: Super + Up
  • Restore Window: Super + Down
  • Tile Window right: Super + Right
  • Tile Window left: Super + Left
  • Center Window: Alt + Super + c
  • Tile Window to upper right corner: Alt + Super + Right
  • Tile Window to upper left corner: Alt + Super + Left
  • Tile Window to lower right corner: Shift + Alt + Super + Right
  • Tile Window to lower left corner: Shift + Alt + Super + Left
  • Shade Window: Control + Alt + s

I'm happy 😀 with these key-bindings as it is now possible to tile a window to all screen quadrants 📐 using any keyboard form factor.

We updated the application launcher key-bindings, some of these have existed in Ubuntu MATE for a while:

  • Cycle external displays: Super + p
  • Lock Screen: Super + L
  • Screenshot a rectangle: Shift + PrintScr
  • Open File Manager: Super + e
  • Open Terminal: Super + t
  • Open Control Center: Super + i
  • Open Search: Super + s
  • Open Task Manager: Control + Shift + Escape
  • Open System Information: Super + Pause

The key-bindings complement existing well established alternatives. So if Ctrl + Alt + t (Terminal) and Ctrl + Alt + L (Lock Screen) are ingrained in your muscle 💪 memory 🧠 they are still available too. You can find all the keyboard shortcuts documented in the Getting Started section of Ubuntu MATE Welcome.

Panel & Indicator improvements

This is where a good deal of effort has been invested. Let's break it down.

Brisk Menu and MATE Dock Applet

Brisk Menu is under the Solus GitHub organisation, but it's been a couple of years since it had a new release. The Solus Project gave me administrative access 🔱 to the Brisk Menu repo and I've made a new release. Thanks to the efforts of a couple of Ubuntu MATE contributors several bug 🐞 fixes have landed too, which includes resolving frequent crashers in Brisk Menu, preventing a scrollbar always appearing in the category column of the menu and silencing sounds firing as you rollover menu entries.

The previous maintainer of MATE Dock Applet announced that he no longer had the time ⌛️ to develop the project. Ubuntu MATE has taken on ownership and we've already published a couple of new releases 🤘 which include fixes for frequent crashes.

MATE Panel

MATE Panel has had a long-standing bug fixed that caused it to crash 💥 when the panel was reset or replaced. This was most noticeable when switching panel layouts via MATE Tweak and could result in the panel layout being left incomplete or entirely absent. This bug is now fixed! MATE Tweak has been updated to neatly integrate with the fixed MATE Panel behaviour so that layout switching is now 100% reliable.


A bug which resulted in oversized icons in indicators is finally resolved.

Before 💩

After 🤩

However, it turned out some of the bugs were due to the icons 🎨 themselves. Over 💯 icons have been refactored 🖌️️ to correct their resolutions or aspect ratio; as a result the panel and indicators both scale correctly.

A race condition that could result in two network status icons being displayed is fixed, and when connected via VPN, lock icons are now overlayed on the Network Indicator. The battery 🔋 indicator is improved and now has a larger charging symbol while charging.

We've added the Date/Time Indicator and integrated it with MATE Desktop. It now replaces the MATE clock applet, which corrects the placement of the clock and session indicators.

We've finally addressed a long-standing issue which has been around since Ubuntu MATE 14.10 🕸️ Some of the monochrome symbolic icons used in the indicators were also used in applications. This presented a couple of issues:

  • In some cases you couldn't easily see the icons against the window base colour.
  • The mix of monochrome and full colour icons in applications looked inconsistent.

This issue is now resolved: monochrome symbolic icons are only used for indicators, and full colour icons are used in the Control Center, Sound Preferences, Bluetooth, OSD, etc.

MATE Window Applets

MATE Window Applets have received a number of bug fixes and new features from a community contributor. Window control icons now dynamically load from the currently selected theme, rather than requiring manual user configuration, and a number of bugs (including significant memory leaks) have also been resolved.

Notification Center

Ubuntu MATE 19.10 includes a new Indicator that provides a "notification center" 🔔 We worked with the upstream developer to add new features to indicator-notifications and integrate it with MATE Notifications Daemon.

Notification Settings

We now have a notification center which also offers a "do not disturb" 🛑 feature. When do not disturb is enabled, notifications will not be displayed but will be captured in the notification center for review. It's also possible to blacklist some notifications, so they are never stored by the notification center. I've created an icon theme for the notification center so it fits the look and feel of the default Ubuntu MATE theme. Notification hints are also fixed so any notifications supplying additional media, such as sounds or icons, now work.

Personally, I love ❤️ this feature! No more will I have awkward messages from my Mum flash up while I'm presenting 😜

Evolution replaces Thunderbird

The Ubuntu MATE development team discussed the pros and cons of switching the default mail ✉️ client in Ubuntu MATE to Evolution. Here is a summary of our assessment:

  • Thunderbird does not integrate as well with the desktop.
    • For example, theme integration, font integration, compatibility with HUD (which is increasingly difficult to support in Thunderbird), notifications with action buttons, locale and spell checking.
  • Thunderbird & Lightning occupy 171MB on the ISO image, while Evolution uses 46MB.
  • Evolution integrates well with MATE Desktop given that both use GTK3.
  • Evolution includes interoperability with LibreOffice, for which Ubuntu MATE is already shipping the required components.
  • Evolution has superior integration with Google Mail and Exchange, including calendar, contacts, tasks, and memos.

Indicator Date/Time also integrates with Evolution. It is fully functional: all the features for creating new events or opening upcoming events from the indicator work, and clicking on a day in the month displays the events for the selected day.

Indicator Date/Time

For the many people who use web-mail exclusively this change will have no impact, but for those who use desktop mail we feel these productivity 📈 improvements are significant.

For those of you who love 💕 Thunderbird and wish to continue using it: we will continue to offer Thunderbird in the Software Boutique for a one-click install. Likewise, Evolution is now in the Software Boutique so can be installed/removed with one-click too.

GNOME MPV replaces VLC

We have switched from VLC to GNOME MPV, soon to be renamed Celluloid, as the default media player 🎬 The reasons for switching to GNOME MPV are similar to those for swapping out Thunderbird for Evolution: better desktop integration.


We've changed GNOME MPV's default UI to better fit in with MATE Desktop by not using client side decorations (CSD). GNOME MPV has an MPRIS implementation that completely integrates with the Sound Indicator. GNOME MPV uses less space on the ISO image compared to VLC and we'll get on to why that is important later.

GNOME MPV doesn't offer the extensive array of preferences and options to users that VLC does, and instead ships sane defaults; only surfacing options where they make sense. GNOME MPV is a GTK3 application whereas VLC uses Qt5. GNOME MPV looks right at home in Ubuntu MATE which uses GTK3 throughout. While we've done our best to coerce VLC to take hints from the GTK theme, it has never been perfect. Most importantly, GNOME MPV is an excellent media player with the same broad media format support that VLC offers. Ubuntu MATE 20.04 will ship Celluloid 🎞️, the new name for GNOME MPV. VLC will remain in the Software Boutique as a single click install for anyone who wants it.


Magnus

Most desktop environments lack a screen magnifier, which is an essential application for visually impaired 👓 computer users and also useful for accurate graphical design or detail work. One of the reasons we shipped Compiz in Ubuntu MATE was that it has an excellent screen magnifier and was our solution for people who need magnification 🔍


I collaborated with my friend Stuart Langridge to create Magnus, a very simple desktop magnifier that shows the area around the mouse pointer in a separate window, magnified two, three, four, or five times. Magnus is now shipped 🚢 by default in Ubuntu MATE 19.10 and Ubuntu Budgie 19.10, and other distros are already picking it up too 💪

Ubuntu MATE Themes

Dozens of theme related bugs have been fixed and the Ubuntu MATE themes have been added to the gtk-common-themes used by snaps, so snapped applications are themed correctly for Ubuntu MATE users now. This change is already available all the way back to Ubuntu MATE 16.04.

The most noticeable theme issues that have been resolved: expanders in tree views are now a sensible size (they were so tiny) so you can easily click them; window controls are correctly proportioned on CSD windows; and we've added a splash of Chelsea Cucumber 🥒 to the Ubuntu MATE logo on the menu. Everything the QA team highlighted has been fixed 🔨

MATE Tweak and Ubuntu MATE Welcome

MATE Tweak now preserves user preferences when switching between custom layouts thanks to a community contribution.

If you're familiar with MATE Tweak you'll know it can switch panel layouts to somewhat mimic other platforms and distros 🐧 We have now integrated a graphical layout switcher in Ubuntu MATE Welcome to better promote the feature and make it more accessible. We have actually had this feature since 18.04 but the bugs in MATE Panel I mentioned earlier meant it didn't work. With all the associated panel bugs fixed 🔧 we now have this:

Desktop Layout Switcher

NVIDIA drivers

If you've been following the news surrounding Ubuntu you'll know that Ubuntu will be shipping 🚢 the NVIDIA proprietary drivers on the ISO images. Anyone selecting the additional 3rd party hardware drivers during installation will have the drivers available even without an Internet connection.

This comes at the cost of increasing the ISO size by ~115MB, but I think this trade-off is worth it. The drivers are not active by default, just present in the apt repository provided on the ISO image to facilitate installation should they be requested. But, if your computer has an NVIDIA GPU, you can now have the drivers installed and operational immediately following install 🌟

Post-install, Ubuntu MATE users with computers that support hybrid graphics will see the MATE Optimus hybrid graphics applet displaying the NVIDIA logo.

MATE Optimus

Now that the NVIDIA 435 drivers are in Ubuntu 19.10, I have given MATE Optimus an update. MATE Optimus adds support for NVIDIA On-Demand and will now prompt users to log out when switching the GPU's profile. MATE, XFCE, Budgie, Cinnamon, GNOME, KDE and LXQt are all supported. Wrappers called offload-glx & offload-vulkan can be used to easily offload games/apps to the PRIME renderer. I'm also delighted to see Ubuntu Budgie 19.10 is shipping MATE Optimus too!

The NVIDIA drivers will now receive updates via the official Ubuntu software repository, so there's no need to add a PPA to get updates. More importantly, the NVIDIA drivers are signed (which is not supported for drivers distributed via PPA), so you can keep Secure Boot enabled.

ISO optimisations

Squeezing those ~115MB of NVIDIA drivers onto the ISO while keeping it at ~2.0GB required some optimisation. Switching to Evolution certainly helped a bit. We've also dropped Brasero from the default installed applications because optical media burning is not a widespread use case these days. Brasero is still in the Software Boutique should you need it.

The main gains came from analysing the data we have about our user distribution across countries and changing what language packs we make available on the ISO. We get the data from snap metrics and the Ubuntu Report.

We dropped Chinese, Japanese and Indic language packs from the ISO and added Russian. This dropped the ISO size considerably and the savings gained were just about equivalent to what the NVIDIA drivers require.

We are currently shipping English, Spanish, Portuguese, German, French, Italian and Russian language packs on the ISO, with each language including all regional dialect variations. Anyone in other parts of the world will get the appropriate language packs provided they have an Internet connection during the install.

Other gains were made by:

  • Changing the format of the weather station database, which saved 15MB 😱
  • Removing Qt4 components. Qt4 is being removed from Debian and Ubuntu.
  • Removing fcitx from the Live environment.
  • Removing obsolete software from the ship-live seed.
  • Removing usb-creator-gtk from the default install; GNOME Disks provides image writing capabilities.
  • Reducing the size of the Ubuntu MATE Welcome and Software Boutique snaps.
  • Using image optimisation tools on every graphic asset in the default themes, icon themes and wallpaper back-catalogue.

Had we not optimised the ISO image it would have been 2.5GB, but instead it remains just a hair over 2.0GB while now hosting the NVIDIA drivers and 7 language packs. So, while the Ubuntu MATE ISO image is larger than some, a good chunk of that size is hosting drivers and language packs that will probably never end up getting installed on your computer. The language packs and drivers are there to best service our diverse community of users from across the world 🗺️ running a variety of hardware 💻


Alongside the generic image for 64-bit Intel PCs, we're also releasing a bespoke beta image for the GPD MicroPC which includes hardware specific tweaks to get this device working "out of the box" without any faffing about. See our UMPC page for more details.

Major Applications

Accompanying MATE Desktop 1.22.2 and Linux 5.3.0 are Firefox 69.0.1, GNOME MPV 0.16, LibreOffice and Evolution 3.34.0.

Major Applications

See the Ubuntu 19.10 Release Notes for details of all the changes and improvements in Ubuntu that Ubuntu MATE benefits from.

Download Ubuntu MATE 19.10 Beta

Our download page makes it easy to acquire the most suitable build for your hardware.


Upgrading from Ubuntu MATE 19.04

  • Open the "Software & Updates" from the Control Center.
  • Select the 3rd Tab called "Updates".
  • Set the "Notify me of a new Ubuntu version" dropdown menu to "For any new version".
  • Press Alt+F2 and type update-manager -c into the command box.
  • Update Manager should open up and tell you: New distribution release '19.10' is available.
    • If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click "Upgrade" and follow the on-screen instructions.
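For those who prefer the terminal, the same preparation can be sketched as commands. This is a hedged equivalent: the Prompt=normal value in /etc/update-manager/release-upgrades is the setting that the "For any new version" dropdown toggles.

```shell
# Equivalent of the GUI steps above, assuming a default Ubuntu MATE 19.04
# install. Prompt=normal corresponds to "For any new version".
sudo sed -i 's/^Prompt=.*/Prompt=normal/' /etc/update-manager/release-upgrades

# Then check for the new release:
update-manager -c
```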

Known Issues

Here are the known issues.

Ubuntu MATE

Ubuntu family issues

This is our known list of bugs that affects all flavours.

You'll also want to check the Ubuntu MATE bug tracker to see what has already been reported. These issues will be addressed in due course.


Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on September 24, 2019 12:00 AM

September 23, 2019

I wanted to use duplicity to backup to Google Cloud Storage. I looked into it briefly and found that the boto library, originally for AWS, also supports GCS, but only using authorization tokens. I’d rather use a service account, for which authorization tokens are not available.

I looked into the options and the best information I could find was a Medium post, but it also describes using authorization tokens and creating a separate GMail/Google Apps account for the access. I’d really prefer to go with a service account to avoid having to sign up another account, and to be able to use more granular ACLs for the service account.

It turns out there’s a boto plugin for GCS with OAuth2 support, but enabling a boto plugin in duplicity isn’t straight-forward. You can point it to a “plugin directory” that causes duplicity to import any python files in the directory, but this doesn’t work if you point it directly to the gcs_oauth2_boto_plugin directory.

Install Requirements

Install the following:
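Concretely, the tools used in the rest of this post are duplicity, boto, the gcs_oauth2_boto_plugin module, and gsutil from the Google Cloud SDK. A sketch of the installs (the package names here are assumptions for a Debian-style system; adjust for your distro):

```shell
# Assumed package/module names; adjust for your distro:
sudo apt install duplicity python-boto
pip install --user gcs_oauth2_boto_plugin
# gsutil ships as part of the Google Cloud SDK: https://cloud.google.com/sdk
```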

Create your GCS Bucket

Create a GCS bucket. In my case, I set the default storage class to “nearline” because I expect backups to be infrequently accessed (I hope), and I plan to retain the data for the minimum 30 day retention. It’s also cheaper than standard storage, so a great combination for backups.

GCS Bucket Setup

Create your service account

Next up, you need to create a service account and grant it the appropriate permissions on the bucket. Go through IAM > Service Account and create a new service account. You don’t need to grant it any roles at this time, but at the end, you should select to “Create key” and download a JSON-formatted service account key.

Go back to the bucket you created, and go to the Permissions tab. Add the service account you just created as a “Storage Object Creator” and a “Storage Object Viewer”.

Create the Boto Configuration

For this, you’ll need the Google Cloud SDK tool gsutil. Run gsutil -e -o <path to your new config>, and provide the JSON file when prompted. Note that the JSON file is only referenced by the config, so if you move it somewhere else, you’ll need to update the configuration. (Or move it first, then run it.)

This will create the necessary configuration for boto to authenticate to GCS. You’ll still need to add the support for OAuth2 authentication, so first create an empty directory to serve as your plugin directory. In my case, I created a directory ~/.config/boto/plugins for all my plugins. In it, I created one file called gcs.py whose only contents is the following:

import gcs_oauth2_boto_plugin

I then added the following to the bottom of my boto configuration file:

plugin_directory = /home/matir/.config/boto/plugins

This will result in boto loading the gcs_oauth2_boto_plugin python module for OAuth2 authentication on GCS when being loaded into duplicity.
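The plugin_directory mechanism boils down to importing every Python file found in that directory. As a rough illustration (this is not boto's or duplicity's actual code), the behaviour looks like:

```python
# Illustration only: import every *.py file in a plugin directory,
# which is roughly what boto's plugin_directory option does. Our
# ~/.config/boto/plugins/gcs.py would be picked up this way.
import importlib.util
import pathlib

def load_plugins(plugin_dir):
    """Import each .py file in plugin_dir; return the module names loaded."""
    loaded = []
    for path in sorted(pathlib.Path(plugin_dir).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        loaded.append(module.__name__)
    return loaded
```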

Setup the Duplicity Command

At this point, it’s almost like running any duplicity backup. If you chose to place your boto configuration in a non-standard location, just set the environment variable BOTO_CONFIG to point to the configuration file. I run the following:

export BOTO_CONFIG=${HOME}/.config/boto/boto_backups

duplicity \
  incremental \
  --full-if-older-than 30D \
  ${HOME} \
on September 23, 2019 07:00 AM

A little while ago I announced my brand new book, ‘People Powered: How communities can supercharge your business, brand, and teams‘ published by HarperCollins Leadership, and released on the 12th November 2019.

At its core, ‘People Powered’ is a pragmatic business book designed to provide (1) an overview of the sheer potential and value of communities for an organization, (2) how to approach building a set of target personas, shaping a community journey, coordinating events, building meaningful incentives and baking all of this into a practical strategic plan, and (3) how to integrate this into a company, track the work well with a set of maturity models, and make consistent improvements.

So, here’s the deal.

Many companies run internal directed learning initiatives where they buy books for their employees and then provide webinars and additional learning/materials to support their team reading the book and soaking up the core approaches and principles in it. It is a fantastic way to not just build skills for your organization and team members, but a fun and empowering way to bring team members together, and importantly, to learn together.

Well, unsurprisingly, I want to offer this for ‘People Powered’. This is a book ripe for these kind of learning initiatives: while it provides a high-level overview of the value of community across many industries, the book delves deep into a set of pragmatic approaches, frameworks, and models for building amazing communities.

I want to provide opportunities to deliver this kind of directed learning in a cost-effective way and with hands-on guidance from myself. As such, I am excited to share my People Powered Bulk Order Packages.

How they work

In a nutshell, depending on how many copies of ‘People Powered’ you purchase, you will get a significant bulk order discount, all (hardback) copies of the book are personally signed by myself, you will get access to a comprehensive knowledge base, and I will provide direct, 1-on-1 engagement with your organization and team members.

These different perks are included at different levels, depending on the number of copies sold, and I am not charging any extra fees for the hands-on time: it is all included in the package. You only pay for the books.

These bulk orders are entirely customizable too. For example, I just signed an order with a company for 100 copies and I am providing them with an on-site fireside chat at their office in Silicon Valley, and a follow-up discussion after their team has had an opportunity to read the book.

Another company has purchased 500 copies and I am providing a series of webinars where the team can ask questions at different phases of reading ‘People Powered’. For example, the team will read chapters 1 – 3 first and then I will provide a Q&A webinar session. Then, they will read chapters 4 – 6, and I will provide another session, and so on. This provides an opportunity to not just get clarity on the material, but also for me to augment it with additional recommendations and insight.

Take a look at the bulk order packages, and feel free to get in touch if you have questions or want to discuss something more specific for your organization.

How People Powered Can Benefit Your Organization

I am really proud of ‘People Powered’. I believe it provides the most comprehensive, clear, and strategic book available for harnessing the power of great communities for any organization.

The book doesn’t just outline the potential of communities, but it provides a simple to read, yet pragmatic and focused blueprint for building a wide variety of communities, from fans, to support, to engineering, and beyond.

For a more in-depth overview of what is in the book, see the video below:

Can’t see the video? See it here.

Also, ‘People Powered’ has received enthusiastic endorsements from a number of leaders in their respective industries:

  • Nat Friedman – CEO of GitHub
  • Jamie Hyneman – Co-Creator and Host of Mythbusters
  • Jamie Smith – Former Deputy Press Secretary for President Barack Obama
  • Kevin Scott – CTO of Microsoft
  • Villi Iltchev – General Partner of August Capital
  • Uttam Tripathi – Head Of Global Programs and DevRel Ecosystem at Google
  • Gia Scinto – Head Of Talent at YCombinator
  • Jim Whitehurst – CEO of Red Hat
  • Ali Velshi – Anchor on MSNBC
  • Jim Zemlin – Executive Director of The Linux Foundation
  • Whitney Bouck – COO of HelloSign
  • Mårten Mickos – CEO of HackerOne
  • Juan Olaizola – COO of Santander España
  • Jeff Atwood – Co-Founder of Discourse and StackOverflow
  • Angela Brown – General Manager of Events at The Linux Foundation
  • Dries Buytaert – Founder of Drupal and Acquia
  • Paul Salnikow – CEO of The Executive Centre
  • Ben Uretsky – Co-Founder of DigitalOcean
  • Billy Cina – Co-Founder and CEO of MarketingEnvy
  • Maxx Bricklin – Co-Founder of BOLD Capital Partners
  • Jose Morales – Head Of Field Operations at Atlassian
  • Maria Sipka – Co-Founder of Linqia
  • Nithya Ruff – Senior Director of Open Source at Comcast
  • Michael Skok – Founding Partner of Underscore.VC
  • Giorgio Regni – CTO of Scality
  • Tracy Ragan – CEO of DeployHub
  • Paul Bunje – Co-Founder of Conservation X Labs
  • Ryan Bethencourt – CEO of Wild Earth and Partner at Babel Ventures

Of course, if you have any questions about how we can shape your specific internal education initiative, get in touch. Thanks!

The post Announcing ‘People Powered’ Bulk Packages (w/ included 1-on-1 Webinars and Directed Learning) appeared first on Jono Bacon.

on September 23, 2019 03:00 AM

September 22, 2019

We released Storm 0.21 on Friday (the release announcement seems to be stuck in moderation, but you can look at the NEWS file directly). For me, the biggest part of this release was adding Python 3 support.

Storm is a really nice and lightweight ORM (object-relational mapper) for Python, developed by Canonical. We use it for some major products (Launchpad and Landscape are the ones I know of), and it’s also free software and used by some other folks as well. Other popular ORMs for Python include SQLObject, SQLAlchemy and the Django ORM; we use those in various places too depending on the context, but personally I’ve always preferred Storm for the readability of code that uses it and for how easy it is to debug and extend it.

It’s been a problem for a while that Storm only worked with Python 2. It’s one of a handful of major blockers to getting Launchpad running on Python 3, which we definitely want to do; stoq ended up with a local fork of Storm to cope with this; and it was recently removed from Debian for this and other reasons. None of that was great. So, with significant assistance from a large patch contributed by Thiago Bellini, and with patient code review from Simon Poirier and some of my other colleagues, we finally managed to get that sorted out in this release.

In many ways, Storm was in fairly good shape already for a project that hadn’t yet been ported to Python 3: while its internal idea of which strings were bytes and which text required quite a bit of untangling in the way that Python 2 code usually does, its normal class used for text database columns was already Unicode which only accepted text input (unicode in Python 2), so it could have been a lot worse; this also means that applications that use Storm tend to get at least this part right even in Python 2. Aside from the bytes/text thing, many of the required changes were just the usual largely-mechanical ones that anyone who’s done 2-to-3 porting will be familiar with. But there were some areas that required non-trivial thought, and I’d like to talk about some of those here.

Exception types

Concrete database implementations such as psycopg2 raise implementation-specific exception types. The inheritance hierarchy for these is defined by the Python Database API (DB-API), but the actual exception classes aren’t in a common place; rather, you might get an instance of psycopg2.errors.IntegrityError when using PostgreSQL but an instance of sqlite3.IntegrityError when using SQLite. To make things easier for applications that don’t have a strict requirement for a particular database backend, Storm arranged to inject its own virtual exception types as additional base classes of these concrete exceptions by patching their __bases__ attribute, so for example, you could import IntegrityError from storm.exceptions and catch that rather than having to catch each backend-specific possibility.

Although this was always a bit of a cheat, it worked well in practice for a while, but the first sign of trouble even before porting to Python 3 was with psycopg2 2.5. This release started implementing its DB-API exception types in a C extension, which meant that it was no longer possible to patch __bases__. To get around that, a few years ago I landed a patch to Storm to use abc.ABCMeta.register instead to register the DB-API exceptions as virtual subclasses of Storm’s exceptions, which solved the problem for Python 2. However, even at the time I landed that, I knew that it would be a porting obstacle due to Python issue 12029; Django ran into that as well.

In the end, I opted to refactor how Storm handles exceptions: it now wraps cursor and connection objects in such a way as to catch DB-API exceptions raised by their methods and properties and re-raise them using wrapper exception types that inherit from both the appropriate subclass of StormError and the original DB-API exception type, and with some care I even managed to avoid this being painfully repetitive. Out-of-tree database backends will need to make some minor adjustments (removing install_exceptions, adding an _exception_module property to their Database subclass, adjusting the raw_connect method of their Database subclass to do exception wrapping, and possibly implementing _make_combined_exception_type and/or _wrap_exception if they need to add extra attributes to the wrapper exceptions). Applications that follow the usual Storm idiom of catching StormError or any of its subclasses should continue to work without needing any changes.
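A toy sketch of the wrapping idea, with names simplified (this is not Storm's actual implementation): build a combined type that inherits from both the library-level error and the backend's DB-API exception, so an except clause on either side catches it.

```python
# Simplified illustration of the combined-exception approach described
# above. BackendIntegrityError stands in for a DB-API exception such as
# psycopg2's IntegrityError; StormError stands in for Storm's base class.
class StormError(Exception):
    pass

class BackendIntegrityError(Exception):
    pass

_combined_types = {}

def make_combined_exception_type(base, dbapi_type):
    """Create (and cache) a wrapper type inheriting from both classes."""
    key = (base, dbapi_type)
    if key not in _combined_types:
        _combined_types[key] = type(dbapi_type.__name__, (base, dbapi_type), {})
    return _combined_types[key]

def wrap_exception(exc):
    """Return a re-raiseable wrapper carrying the original exception's args."""
    return make_combined_exception_type(StormError, type(exc))(*exc.args)
```

A wrapped BackendIntegrityError is then catchable both as StormError and as the original backend type, which is the property the cursor/connection wrappers rely on.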

SQLObject compatibility

Storm includes some API compatibility with SQLObject; this was from before my time, but I believe it was mainly because Launchpad and possibly Landscape previously used SQLObject and this made the port to Storm very much easier. It still works fine for the parts of Launchpad that haven’t been ported to Storm, but I wouldn’t be surprised if there were newer features of SQLObject that it doesn’t support.

The main question here was what to do with StringCol and its associated AutoUnicodeVariable. I opted to make these explicitly only accept text on Python 3, since the main reason for them to accept bytes was to allow using them with Python 2 native strings (i.e. str), and on Python 3 str is already text so there’s much less need for the porting affordance in that case.

Since releasing 0.21 I realised that the StringCol implementation in SQLObject itself in fact accepts both bytes and text even on Python 3, so it’s possible that we’ll need to change this in the future, although we haven’t yet found any real code using Storm’s SQLObject compatibility layer that might rely on this. Still, it’s much easier for Storm to start out on the stricter side and perhaps become more lenient than it is to go the other way round.
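As an illustration of the stricter choice (again, not Storm's real code), a text-only column attribute on Python 3 might behave like this:

```python
# Hypothetical sketch of the "text only" policy described above for
# StringCol on Python 3: str is accepted, bytes is rejected.
class TextOnlyColumn:
    def __set_name__(self, owner, name):
        self._name = "_" + name

    def __set__(self, obj, value):
        if not isinstance(value, str):
            raise TypeError("expected str, got %s" % type(value).__name__)
        setattr(obj, self._name, value)

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        return getattr(obj, self._name)

class Person:
    # Illustrative model; Storm columns also carry database metadata.
    name = TextOnlyColumn()
```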


Storm had some fairly complicated use of inspect.getargspec on Python 2 as part of its test mocking arrangements. This didn’t work in Python 3 due to some subtleties relating to bound methods. I switched to the modern inspect.signature API in Python 3 to fix this, which in any case is rather simpler with the exception of a wrinkle in how method descriptors work.

(It’s possible that these mocking arrangements could be simplified nowadays by using some more off-the-shelf mocking library; I haven’t looked into that in any detail.)

What’s next?

I’m working on getting Storm back into Debian now, which will be with Python 3 support only since Debian is in the process of gradually removing Python 2 module support. Other than that I don’t really have any particular plans for Storm at the moment (although of course I’m not the only person with an interest in it), aside from ideally avoiding leaving six years between releases again. I expect we can go back into bug-fixing mode there for a while.

From the Launchpad side, I’ve recently made progress on one of the other major Python 3 blockers (porting Bazaar code hosting to Breezy, coming soon). There are still some other significant blockers, the largest being migrating to Mailman 3, subvertpy fixes so that we can port code importing to Breezy as well, and porting the lazr.restful stack; but we may soon be able to reach the point where it’s possible to start running interesting subsets of the test suite using Python 3 and categorising the failures, at which point we’ll be able to get a much better idea of how far we still have to go. Porting a project with the best part of a million lines of code and around three hundred dependencies is always going to take a while, but I’m happy to be making progress there, both due to Python 2’s impending end of upstream support and so that eventually we can start using new language facilities.

on September 22, 2019 07:56 AM

September 20, 2019

For the past few weeks I’ve been using a nexus 4 running ubuntu touch as, mostly, my daily driver. I’ve enjoyed it quite a bit. In part that’s just the awesome size of the nexus 4. In part, it’s the ubuntu touch interface itself. If you haven’t tried it, you really should. (Sailfish ambiances are so much prettier, but ubuntu touch is much nicer to use – the quick switch to switch between two apps, for instance. Would that I could have both.). And in part it’s just the fact that it really feels like – is – a regular ubuntu system.

There have been a few problems. The biggest has been to do with email. I need a phone to do only a few things well – texts, calls, and imap email. The only imap mailer available by default, dekko2, looks very nice and is promising, but was simply not yet reliable for me. It would simply stop getting updates for hours, with no warning, for instance. So I’ve taken to using an ubuntu-push notification system for email notifications, and mutt and offlineimap in a libertine container for reading and sending. The notification system is based on https://forums.ubports.com/topic/3126/facebook-messenger-push-notifications/2 and the python mailbox library. It runs on my mail server, checks for new mail, and, if there is any, sends a push notification to my phone. The code I’m using is here on launchpad and here on github. It can certainly stand to be made a bit smarter (the seenmsgs list should be pruned, for instance, and maildir and mh folder support should be trivial to add for those cool cats who use those). Using this service instead of having the phone try to check for emails not only ends up being very reliable, but also saves a lot of battery life.
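The actual code is linked above; as a rough sketch of the server-side idea, the check amounts to scanning a mail spool for messages we haven't seen before (here against an mbox file, with the actual push call left as a hypothetical hook):

```python
# Rough sketch of the server-side check: scan an mbox spool for messages we
# have not seen before. A real version would hand each new subject to a
# hypothetical notify() hook that calls the ubuntu-push service.
import mailbox

def unseen_subjects(mbox_path, seen_ids):
    """Return subjects of new messages, adding their IDs to seen_ids."""
    subjects = []
    for key, msg in mailbox.mbox(mbox_path).items():
        msg_id = msg.get("Message-Id", str(key))
        if msg_id not in seen_ids:
            seen_ids.add(msg_id)
            subjects.append(msg.get("Subject", "(no subject)"))
    return subjects
```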

All in all this could definitely work as my permanent new phone! Now if I could just get my hands on a pinephone or librem 5. The nexus 4 hardware is great, but it would be awesome being able to run an up-to-date, upstream kernel. More than that – now that my experiment has succeeded, I probably need to stop, because running the ancient kernel simply is not as safe as I’d like. But I digress.

A huge thanks to Mark and the original touch team for creating it, and to the ubports team for keeping it going.

Nice job, everyone!

on September 20, 2019 01:31 PM

September 19, 2019

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, 212.5 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Adrian Bunk got 8h assigned (plus 18 extra hours from July), but did nothing, thus he is carrying over 26h to September.
  • Ben Hutchings did 20 hours (out of 20 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 31 hours (out of 21.75h assigned plus 14.5 extra hours from July), thus he is carrying over 5.25h to September.
  • Hugo Lefeuvre did 30.5 hours (out of 21.75 hours allocated, plus 8.75 extra hours from July).
  • Jonas Meurer did 0.5 hours (out of 10, thus carrying 9.5h to September).
  • Markus Koschany did 21.75 hours (out of 21.75 hours allocated).
  • Mike Gabriel did 24 hours (out of 21.75 hours allocated plus 10 extra hours from July, thus carrying over 7.75h to September).
  • Ola Lundqvist got 8h assigned (plus 8 extra hours from August), but did nothing and gave back 8h, thus he is carrying over 8h to September.
  • Roberto C. Sanchez did 8 hours (out of 8 hours allocated).
  • Sylvain Beucler did 21.75 hours (out of 21.75 hours allocated).
  • Thorsten Alteholz did 21.75 hours (out of 21.75 hours allocated).

Evolution of the situation

August was more or less a normal month, though still a bit affected by summer in the area where most contributors live: one contributor is still taking a break (so we only had 13, not 14), two contributors were distracted by summer events and another one is still in training.

It’s been a while since we last welcomed a new LTS sponsor. Nothing worrisome at this point as few sponsors are stopping, but after 5 years some have moved on, so it would be nice to keep finding new ones as well. We are still at 215 hours sponsored per month.

The security tracker currently lists 42 packages with a known CVE and the dla-needed.txt file has 39 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


on September 19, 2019 10:04 AM

Ubuntu on the new LinuxONE III

Elizabeth K. Joseph

A few months ago I visited the IBM offices in Poughkeepsie to sync up with colleagues, record an episode of Terminal Talk, and let’s be honest, visit some mainframes. A lot of assembly still happens in Poughkeepsie, and they have a big client center with mainframes on display, including several inside a datacenter that they give tours of. I was able to see a z14 in operation, as well as several IBM LinuxONE machines. Getting to tour datacenters is a lot of fun, and even though I wouldn’t have meaningful technical interactions with them, there’s something about seeing these massive machines that I work with every day in person that brings me a lot of joy.

Now I have to go back! On September 12th, the newest mainframe was announced, the IBM z15 and accompanying Linux version, the IBM LinuxONE III. To celebrate, I joined my colleagues in the IBM Silicon Valley lab for a launch event watch party and, of course, cake.

I wrote a more in-depth article about the hardware of this machine for work here: Inside the LinuxONE III. The key thing about it is that we’ve gone from two versions of the LinuxONE (Rockhopper II and Emperor II), to just one, but one that fits inside a 19” rack space like the Rockhopper II did and is expandable to up to four frames.

The processors run at 5.2 GHz, and in a fully decked-out configuration one of these 4-frame systems can have up to 190 processors and 40 TB of RAM. It’s a massively powerful machine. Add in the on-chip crypto that we’ve come to know and love on the mainframe, and you have a really impressive data processing powerhouse.

Now, I was brought on to the Z Ecosystem team because of my background with Linux, both in the Ubuntu community and broader experience with distributed systems, including OpenStack and Apache Mesos. That’s because these mainframes don’t just run z/OS. The LinuxONE series of machines, the first of which was released in 2015, are exclusively Linux. Last week I wrote an article over on OpenSource.com about How Linux came to the mainframe, where I talk about how this came to be. This morning the second part of that article was published, Linux on the mainframe: Then and now, where I explore the formal entrance of major distributions into supporting the mainframe architecture. Ubuntu joined that fold with an announcement in 2016 that Ubuntu 16.04 had support for the mainframe (s390x architecture). Today, Ubuntu boasts the most s390x packages of all the officially supported distributions.

All recent releases of Ubuntu have supported s390x, so while they recommend the LTS releases, you can happily use Ubuntu 19.04 today to get the latest packages, and there are even more improvements in store for Ubuntu 19.10, coming out next month. When I chatted with Frank Heimes, who runs the Ubuntu on Big Iron blog (which you should totally check out!), he highlighted the following for me with regard to Ubuntu support:

  • Special emphasis is put on the kernel, KVM, hardware counters and security, allowing one to make use of the z15 and LinuxONE III faster: the enlarged number of processors with new CPU capabilities and facilities, larger caches, and increased memory and I/O throughput
  • Support for hardware cryptography, which he talks about in this blog post and the associated whitepaper: Hardware cryptography with Ubuntu Server on IBM Z and LinuxONE
  • Support for deployments on LPAR, z/VM, KVM, LXD, Docker and Kubernetes (CDK), with installation media available as ISO, cloud or container images.

It was also interesting for me to learn that their MAAS KVM product has been built for s390x, which I’ll point you to the Ubuntu on Big Iron blog for again, for one of Frank’s posts this month on the topic: MAAS KVM on s390x: Cross-LPAR walk-through. There have also been collaborations in the works to create proof of concepts around security, including Digital Asset Custody Services (DACS), which you can explore in more detail in this article from August: Digital Asset Custody Services (DACS) aims to disrupt the digital assets market with a secured custody platform.

For Ubuntu, s390x isn’t just another checkbox architecture that’s being supported. Just like the other officially supported distributions, there are whole teams within Canonical who are spending time making thoughtful and innovative solutions that specifically target the power of the mainframe. The following is their Design Philosophy for Ubuntu Server on IBM Z and LinuxONE, via Frank’s Ubuntu Server for IBM Z and LinuxONE slide deck (4.2M PDF):

  • Expand Ubuntu’s ease of use to the s390x architecture (IBM Z and LinuxONE)
  • Unlock new workloads, especially in the Open Source, Cloud and Container space
  • Consequently tap into new client bases
  • Exploit new features and components faster – in two ways:
    • hardware: zEC12/zBC12 and newer
    • software: latest kernels, compilers and optimized libraries
  • Provide parity with other architectures:
    • Release parity
    • Feature parity
  • Uniform user experience
  • Close potential gaps
  • Open source – is collective power in action
  • Upstream work and code only – no forks
  • Offer a radically new pricing approach (drawer-based pricing) but also an entry-level pricing based on the number of IFLs (up to 4 IFLs)
Of course we don’t have mainframes in our garages (even as an IBM employee, I’ve asked!). So as developers, our access is somewhat limited. However, that doesn’t mean you can’t build your Ubuntu .deb or snap for s390x! As I wrote about back in June, you can build your PPA for s390x with the click of a simple checkbox in the Launchpad UI for PPAs.

    Similarly, you can also build snaps for the s390x architecture. These build systems reside on a mainframe that Canonical hosts in their datacenter, so you don’t even need access to a mainframe yourself to build for it.

    But if you want to be extra sure your application runs on s390x, IBM has made a LinuxONE Community Cloud which gives users a VM running on a mainframe in New York for 120 days! You can try out your application on one of those, and then be confident it works when you submit it to the PPA or snap build system. Unfortunately the only options right now for OS are SLES and RHEL, but Ubuntu support is in the works. Beyond this cloud, we’re also working to get an open source developer cloud launched, but in the meantime you can reach out to me directly (lyz@ibm.com) if you’re interested in some longer-lived VMs for your open source project, or generally want to talk about how you can get more VMs for testing, CI systems, and more.

    If you had asked me a year ago to talk about mainframes, I would not have had much to say, but I’m really excited to be part of this story now. The machines themselves are impressive, the efforts that distributions like Ubuntu are putting into them are quite exceptional, and it’s really fun learning about a new architecture. And speaking of other architectures, s390x isn’t the only architecture Canonical works with IBM to support. If you visit the Ubuntu on IBM partner page (which is worth checking out anyway), you’ll see there’s a lot of work being put in around POWER too.

    on September 19, 2019 09:00 AM

    September 11, 2019

    Both this blog post and the paper it describes are collaborative work led by Charles Kiene with Jialun “Aaron” Jiang.

    Introducing new technology into a work place is often disruptive, but what if your work was also completely mediated by technology? This is exactly the case for the teams of volunteer moderators who work to regulate content and protect online communities from harm. What happens when the social media platforms these communities rely on change completely? How do moderation teams overcome the challenges caused by new technological environments? How do they do so while managing a “brand new” community with tens of thousands of users?

    For a new study that will be published in CSCW in November, we interviewed 14 moderators of 8 “subreddit” communities from the social media aggregation and discussion platform Reddit to answer these questions. We chose these communities because each community had recently adopted the real-time chat platform Discord to support real-time chat in their community. This expansion into Discord introduced a range of challenges—especially for the moderation teams of large communities.

    We found that moderation teams of large communities improvised their own creative solutions to challenges they faced by building bots on top of Discord’s API. This was not too shocking given that APIs and bots are frequently cited as tools that allow innovation and experimentation when scaling up digital work. What did surprise us, however, was how important moderators’ past experiences were in guiding the way they used bots. In the largest communities that faced the biggest challenges, moderators relied on bots to reproduce the tools they had used on Reddit. The moderators would often go so far as to give their bots the names of moderator tools available on Reddit. Our findings suggest that support for user-driven innovation is important not only in that it allows users to explore new technological possibilities but also in that it allows users to mine their past experiences to introduce old systems into new environments.

    What Challenges Emerged in Discord?

    Discord’s text channels allow for more natural, in-the-moment conversations than Reddit. But this social aspect also made moderation work much more difficult. One moderator explained:

    “It’s kind of rough because if you miss it, it’s really hard to go back to something that happened eight hours ago and the conversation moved on and be like ‘hey, don’t do that.’ ”

    Moderators we spoke to found that the work of managing their communities was made even more difficult by their community’s size:

    “On the day to day of running 65,000 people, it’s literally like running a small city…We have people that are actively online and chatting that are larger than a city…So it’s like, that’s a lot to actually keep track of and run and manage.”

    The moderators of large communities repeatedly told us that the tools provided to moderators on Discord were insufficient. For example, they pointed out that tools like Discord’s Audit Log were inadequate for keeping track of the tens of thousands of members of their communities. Discord also lacks automated moderation tools like Reddit’s Automoderator and Modmail, leaving moderators on Discord with few tools to scale their work and manage communications with community members.

    How Did Moderation Teams Overcome These Challenges?

    The moderation teams we talked with adapted to these challenges through innovative uses of Discord’s API toolkit. Like many social media platforms, Discord offers a public API where users can develop apps that interact with the platform through a Discord “bot.” We found that these bots play a critical role in helping moderation teams manage Discord communities with large populations.

    Guided by their experience with tools like Automoderator on Reddit, moderators working on Discord built bots with similar functionality to solve the problems associated with scaled content and Discord’s fast-paced chat affordances. These bots would search for regular expressions and URLs that go against the community’s rules:

    “It makes it so that rather than having to watch every single channel all of the time for this sort of thing or rely on users to tell us when someone is basically running amuck, posting derogatory terms and terrible things that Discord wouldn’t catch itself…so it makes it that we don’t have to watch every channel.”

    Bots were also used to replace Discord’s Audit Log feature with what moderators often referred to as “Mod logs”, another term borrowed from Reddit. Moderators send a bot commands like “!warn username” to record that a member of their community has been warned for breaking a rule; the bot automatically stores this information in a private text channel in Discord. This information helps organize records about community members, and it can be instantly recalled with another command to the bot to help inform future moderation actions.
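As a toy illustration of that pattern (not any real bot’s code; all names here are made up), the command-to-log flow looks something like:

```python
# Toy sketch of a "!warn" command handler: mod_log stands in for the private
# text channel where a real Discord bot would post moderation records.
from datetime import datetime, timezone

mod_log = []

def handle_message(text, moderator):
    """Parse a '!warn <username>' command and record it in the mod log."""
    if not text.startswith("!warn "):
        return None  # not a moderation command; a real bot would ignore it
    username = text.split(" ", 1)[1]
    entry = {"action": "warn", "user": username, "by": moderator,
             "at": datetime.now(timezone.utc).isoformat()}
    mod_log.append(entry)
    return entry

def history(username):
    """Recall past actions for a user, to inform future moderation decisions."""
    return [e for e in mod_log if e["user"] == username]
```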

    Finally, moderators also used Discord’s API to develop bots that functioned virtually identically to Reddit’s Modmail tool. Moderators are limited in their availability to answer questions from members of their community, but tools like “Modmail” help moderation teams manage this problem by mediating communication with community members through a bot:

    “So instead of having somebody DM a moderator specifically and then having to talk…indirectly with the team, a [text] channel is made for that specific question and everybody can see that and comment on that. And then whoever’s online responds to the community member through the bot, but everybody else is able to see what is being responded.”

    The tools created with Discord’s API — customizable automated content moderation, Mod logs, and a Modmail system — all resembled moderation tools on Reddit. They even bear their names! Over and over, we found that moderation teams essentially created and used bots to transform aspects of Discord, like text channels into Mod logs and Mod Mail, to resemble the same tools they were using to moderate their communities on Reddit. 

    What Does This Mean for Online Communities?

    We think that the experience of the moderators we interviewed points to a potentially important, overlooked source of value for groups navigating technological change: the potent combination of users’ past experience with their ability to redesign and reconfigure their technological environments. Our work suggests the value of innovation platforms like APIs and bots is not only that they allow the discovery of “new” things. These systems’ value also flows from the fact that they allow the re-creation of the things that communities already know can solve their problems and that they already know how to use.

    For more details, check out the full 23 page paper. The work will be presented in Austin, Texas at the ACM Conference on Computer-supported Cooperative Work and Social Computing (CSCW’19) in November 2019. The work was supported by the National Science Foundation (awards IIS-1617129 and IIS-1617468). If you have questions or comments about this study, contact Charles Kiene at ckiene [at] uw [dot] edu.

    on September 11, 2019 03:04 AM

    September 10, 2019

    Early boot requires loading and decompressing the kernel and initramfs from the boot storage device. The speed of this depends on several factors: the speed of loading an image from the boot device, the CPU and memory/cache speed for decompression, and the compression type.

    Generally speaking, the smallest (best) compression takes longer to decompress due to the extra complexity in the compression algorithm. Thus we have a trade-off between load time and decompression time.

    For slow rotational media (such as a 5400 RPM HDD) with a slow CPU, the loading time can be the dominant factor. For faster devices (such as an SSD) with a slow CPU, decompression time may be the dominant factor. For devices with fast 7200-10000 RPM HDDs and fast CPUs, the time to seek to the data starts to dominate the load time, so load times for different compressed kernel sizes differ only slightly.

    The Ubuntu kernel team ran several experiments benchmarking several x86 configurations using the x86 TSC (Time Stamp Counter) to measure kernel load and decompression time for 6 different compression types: BZIP2, GZIP, LZ4, LZMA, LZO and XZ.  BZIP2, LZMA and XZ are slow to decompress, so they were ruled out very quickly from further tests.

    In compressed size, GZIP produces the smallest compressed kernel, followed by LZO (~16% larger) and LZ4 (~25% larger).  In decompression time, LZ4 is over 7 times faster than GZIP, with LZO ~1.25 times faster than GZIP on x86.

    In absolute wall-clock times, the following kernel load and decompress results were observed:

    Lenovo x220 laptop, 5400 RPM HDD:
      LZ4 best, 0.24s faster than the GZIP total time of 1.57s

    Lenovo x220 laptop, SSD:
      LZ4 best, 0.29s faster than the GZIP total time of 0.87s

    Xeon 8 thread desktop with 7200 RPM HDD:
      LZ4 best, 0.05s faster than the GZIP total time of 0.32s

    VM on a Xeon 8 thread desktop host with SSD RAID ZFD backing store:
      LZ4 best, 0.05s faster than the GZIP total time of 0.24s

    Even with slow spinning media and a slow CPU, the longer load time of the LZ4 kernel is overcome by the far faster decompression time. As media gets faster, the load time difference between GZIP, LZ4 and LZO diminishes and the decompression time becomes the dominant speed factor with LZ4 the clear winner.
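The trade-off can be made concrete with a back-of-the-envelope model. The sizes and throughput figures below are purely illustrative, not the measured numbers (see the linked analysis for those):

```python
# Back-of-the-envelope model: total time = load time + decompression time.
# All numbers here are illustrative, not the kernel team's measurements.
def total_time(size_mb, read_mb_per_s, decompress_mb_per_s):
    return size_mb / read_mb_per_s + size_mb / decompress_mb_per_s

# GZIP: smaller image, slow decompression; LZ4: ~25% larger, much faster.
gzip_total = total_time(8.0, 100.0, 60.0)   # slow HDD, slow decompressor
lz4_total = total_time(10.0, 100.0, 450.0)  # larger image, fast decompressor

# Even though LZ4 loads more data, its decompression speed wins overall.
assert lz4_total < gzip_total
```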

    For Ubuntu 19.10 Eoan Ermine, LZ4 will be the default compression for x86, ppc64el and s390 kernels, and for the initramfs too.

    Analysis: https://kernel.ubuntu.com/~cking/boot-speed-eoan-5.3/kernel-compression-method.txt
    Data: https://kernel.ubuntu.com/~cking/boot-speed-eoan-5.3/boot-speed-compression-5.3-rc4.ods

    on September 10, 2019 09:49 AM

    September 06, 2019

    LXD supports proxy devices, which are a way to proxy connections between the host and containers. This includes TCP, UDP and Unix socket connections, in any combination between each other, in any direction. For example, when someone connects to your host on port 80 (http), that connection can be proxied to a container using a proxy device. In that way, you can isolate your Web server in a LXD container. By using a TCP proxy device, you do not need to resort to iptables.

    There are 3×3=9 combinations for connections between TCP, UDP and Unix sockets, as follows. Yes, you can proxy, for example, a TCP connection to a Unix socket!

    1. TCP to TCP, for example, to expose a container’s service to the Internet.
    2. TCP to UDP
    3. TCP to Unix socket
    4. UDP to UDP
    5. UDP to TCP
    6. UDP to Unix socket
    7. Unix socket to Unix socket, for example, to share the host’s X11 socket to a container. Or, to make available a host’s Unix socket into the container.
    8. Unix socket to TCP
    9. Unix socket to UDP

    Earlier I wrote that you can make a connection in any direction. For example, you can expose the host’s Unix socket for X11 into the container so that the container can run X11 applications and have them appear on the host’s X11 server. Or, in the other way round, you can make available LXD’s Unix socket at the host to a container so that you can manage LXD from inside a container.
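Conceptually, a TCP-to-TCP proxy device just accepts connections on one address and shuttles bytes in both directions to another. This is not how LXD implements it; the following is only a minimal userspace sketch of the idea:

```python
# Minimal sketch of what a TCP-to-TCP proxy does: accept a connection on a
# listening socket and shuttle bytes in both directions to connect_addr.
# This only illustrates the concept; it is not LXD's implementation.
import socket
import threading

def _pipe(src, dst):
    """Copy bytes from src to dst until src closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def proxy_once(server, connect_addr):
    """Accept one connection on a listening socket and proxy it."""
    client, _ = server.accept()
    upstream = socket.create_connection(connect_addr)
    threading.Thread(target=_pipe, args=(client, upstream), daemon=True).start()
    _pipe(upstream, client)  # runs until the upstream side closes

# Usage, like "listen=tcp:0.0.0.0:8080" forwarding to port 80:
#   server = socket.socket()
#   server.bind(("0.0.0.0", 8080)); server.listen(1)
#   proxy_once(server, ("127.0.0.1", 80))
```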

    Note that LXD 3.0.x only supports TCP to TCP proxy devices. Support for UDP and Unix sockets was added in later versions.

    Launching a container and setting up a Web server

    Let’s launch a container, install a Web server, and then expose the Web server to the local network (or the Internet, if you are using a VPS/Internet server).

    First, launch the container.

    $ lxc launch ubuntu:18.04 mycontainer
    Creating mycontainer
    Starting mycontainer

    We get a shell into the container (for example, with lxc exec mycontainer -- sudo --user ubuntu --login), update the package list and install nginx. Finally, we verify that nginx is running.

    ubuntu@mycontainer:~$ sudo apt update
    ubuntu@mycontainer:~$ sudo apt install -y nginx
    ubuntu@mycontainer:~$ curl http://localhost
     Welcome to nginx! 

    Exposing the Web server of a container to the Internet

    We log out to the host and verify that there is no Web server already running on port 80. If port 80 is not available on your host, change it to something else, like 8000. Finally, we create the TCP to TCP LXD Proxy Device.

    ubuntu@mycontainer:~$ logout
    $ lxc config device add mycontainer myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
    Device myport80 added to mycontainer

    The command that creates the proxy device is made of the following components.

    1. lxc config device add, we configure to have a device added,
    2. mycontainer, to the container mycontainer,
    3. myport80, with name myport80,
    4. proxy, a proxy device, we are adding a LXD Proxy Device.
    5. listen=tcp:0.0.0.0:80, we listen (on the host by default) on all network interfaces on TCP port 80.
    6. connect=tcp:127.0.0.1:80, we connect (to the container by default) to the existing TCP port 80 on localhost, which is our nginx.

    Note that previously you would specify hostnames when creating LXD Proxy Devices. This is no longer supported (it had security implications), so you get an error if you specify a hostname such as localhost. This post was primarily written because the top Google result on proxy devices is an old read-only Reddit post that suggests using localhost.

    Let’s test that the Web server in the container is accessible on the host. We can use localhost (or 127.0.0.1) on the host to access the website of the container. We can also use the public IP address of the host (in this case, the LAN IP address) to access the container.

    $ curl http://localhost
     Welcome to nginx! 
    $ curl http://<the-LAN-IP-of-the-host>
     Welcome to nginx! 

    Other features of the proxy devices

    By default, a proxy device exposes an existing service in the container to the host. If we need to expose an existing service on the host to a container, we would add the parameter bind=container to the proxy device command.

    You can expose a single web server to a port on the host. But how do you expose many web servers in containers through the host? You can use a reverse proxy in front of the containers. To retain the remote IP address of the clients visiting the Web servers, you can add proxy_protocol=true to enable support for the PROXY protocol. Note that you also need to enable the PROXY protocol on the reverse proxy.

    on September 06, 2019 10:08 PM

    September 05, 2019

    This week I went to Parliament Square in Edinburgh, where the highest court of the land, the Court of Session, sits.  The court room viewing gallery was full: concerned citizens there to watch, and journalists enjoying the newly allowed ability to post live from the courtroom.  They were waiting for Joanna Cherry, Jo Maugham and the Scottish Government to give legal challenge to the UK Government not to shut down parliament.  The UK government filed their papers late and didn’t bother completing them, missing out the important signed statement from the Prime Minister saying why he had ordered parliament to be shut.  A UK government who claims to care about Scotland but ignores its people, government and courts is not one who can argue it is working for democracy or the union it wants to keep.

    Outside, I spoke to the assembled vigil gathered there in support under the statue of Charles II. I said how democracy can’t be shut down, but it does need the people to pay constant attention and play their part.

    Charles II was a King of Scots who led Scots armies that were defeated twice by the English Commonwealth army, busy invading neighbouring countries, claiming London and its English parliament gave them power over us all.  So I went to London to check it out.

    In London, that parliament is falling down.  Scaffolding covers it in an attempt to patch it up.  The protesters outside held a rally where politicians from the debates inside wandered out to give updates as they frantically tried to stop an unelected Prime Minister from taking away our freedoms and citizenship.  Comedian Mitch Benn compèred the rally, saying he wanted everyone to show their English flags with pride as the People’s Vote campaign tried to reclaim them from the racists; it worked with the crowd and shows how our politics is changing.

    Inside the Westminster Parliament compound, past the armed guards and threatening signs of criminal repercussions, the statue of Cromwell stands proud; he invaded Scotland and murdered many Irish, a curious character to celebrate.

    The compound is a bubble: the noise of the protesters outside, wanting to keep their freedoms, was drowned out as we watched a government lose its majority and the confidence on their faces, familiar from years of self-entitlement, vanish.

    Pete Wishart, centre front, is an SNP MP who runs the All Party Intellectual Property group; he invited us in for the launch of OpenUK, a new industry body for companies who want to engage with government on open source solutions.  Too often government puts out tenders for jobs and won’t talk to providers of open source solutions because we’re too small and the names are obscure.  Too often, when governments do implement open source and free software setups, they get shut down because someone with more money comes along and offers their setup and some jobs.  I’ve seen that in Nigeria, I’ve seen it happen in Scotland, I’ve seen it happen in Germany.  The power and financial structures that proprietary software creates allow for the corruption of the best solutions to a problem.

    The Scottish independence supporter Pete spoke of the need for Britain to have the best Intellectual Property rules in the world, to a group who want to change how intellectual property influences us, while democracy falls down around us.

    The protesters marched over the river closing down central London in the name of freedom but in the bubble of Westminster we sit sipping wine looking on.

    The winners of the UK Open Source Awards were celebrated and photos taken: (previously) unsung heroes working to keep the free operating system running, opening up how plant phenomics works, improving healthcare in ways that cannot be done when closed.

    Getting government engagement with free software is crucial to improving how our society works, but the politicians are far too easily swayed by big branding and big-name budgets rather than making sure barriers are reduced to be invisible.

    The crumbling of one democracy alongside a celebration and opening of a project to bring business to those who still have little interest in it.  How to get government to prefer openness over barriers?  This place will need to be rebuilt before that can happen.

    Onwards to Milan for KDE Akademy.


    on September 05, 2019 04:38 PM

    One last post from Summer Camp this year (it’s been a busy month!) – this one about the “Data Duplication Village” at DEF CON. In addition to talks, the Data Duplication Village offers an opportunity to get your hands on the highest quality hacker bits – that is, copies of somewhere between 15 and 18TB of data spread across 3 6TB hard drives.

    I’d been curious about the DDV for a couple of years, but never participated before. I decided to change that when I saw 6TB Ironwolf NAS drives on sale a few weeks before DEF CON. I wasn’t quite sure what to expect, as the description provided by the DDV is a little bit sparse:

    6TB drive 1-3: All past convention videos that DT can find - essentially a clone of infocon.org - building on last year’s collection and re-squished with brand new codecs for your size constraining pleasures.

    6TB drive 2-3: freerainbowtables hash tables (lanman, mysqlsha1, NTLM) and word lists (1-2)

    6TB drive 3-3: freerainbowtables GSM A5/1, md5 hash tables, and software (2-2)

    Drive 1-3 seems pretty straightforward, but I spent a lot of time debating if the other two were worth getting. (And, to be honest, I think they’re cool to have, but not sure if I’ll really make good use of them.)

    I want to thank the operators of the DDV for their efforts, and also my wife for dropping off and picking up my drives while I was otherwise occupied (work obligations).

    It’s worth noting that, as far as I can tell, all of the contents of the drives here are available as a torrent, so you can always get the data that way. On the other hand, torrenting 15.07 TiB (16189363384 KiB to be precise) might not be your cup of tea, especially if you have a mere 75 Mbps internet connection like mine.
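    For a sense of scale, here is a back-of-the-envelope calculation in Python, using the size quoted above and assuming my 75 Mbps link could be saturated the whole time (it couldn't):

```python
# Rough download-time estimate for the DDV data set over a home connection.
# The size comes from the article; 75 Mbps sustained is an idealized assumption.
size_kib = 16189363384            # 15.07 TiB, as reported
size_bits = size_kib * 1024 * 8   # KiB -> bytes -> bits
rate_bps = 75_000_000             # 75 Mbps

seconds = size_bits / rate_bps
days = seconds / 86400
print(f"{days:.1f} days")         # prints "20.5 days"
```

    Roughly three weeks of continuous, ideal-case downloading – which is exactly why handing over physical drives is attractive.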

    If you want a detailed list of the contents of each drive (along with sha256sums), I’ve posted them to GitHub. If you choose to participate next year, note that your drives must be 7200 RPM SATA drives (apparently several people had to be turned away due to 5400 RPM drives, which slow down the entire cloning process).

    Drive 1

    Drive 1 really does seem to be a copy of infocon.org: it’s got dozens of conferences archived on it, adding up to a total of 132,253 files. Just to give you a taste, here’s a high-level index:

    ./cons/ACK Security Conference
    ./cons/Android Security Symposium
    ./cons/Black Alps
    ./cons/Black Hat
    ./cons/Blue Hat
    ./cons/CODE BLUE
    ./cons/Chaos Computer Club - Camp
    ./cons/Chaos Computer Club - Congress
    ./cons/Chaos Computer Club - CryptoCon
    ./cons/Chaos Computer Club - Easterhegg
    ./cons/Chaos Computer Club - SigInt
    ./cons/DEF CON
    ./cons/Electromagnetic Field
    ./cons/Hack In Paris
    ./cons/Hack In The Box
    ./cons/Hack In The Random
    ./cons/Hacker Hotel
    ./cons/Hackers 2 Hackers Conference
    ./cons/Hackers At Large
    ./cons/Hacking At Random
    ./cons/Hackito Ergo Sum
    ./cons/Hacks In Taiwan
    ./cons/Hash Days
    ./cons/IEEE Security and Privacy
    ./cons/Louisville Metro InfoSec
    ./cons/MISP Summit
    ./cons/Nuit Du Hack
    ./cons/O'Reilly Security
    ./cons/Observe Hack Make
    ./cons/Pacific Hackers
    ./cons/Positive Hack Days
    ./cons/Privacy Camp
    ./cons/Real World Crypto
    ./cons/Rooted CON
    ./cons/Security BSides
    ./cons/Security Fest
    ./cons/Security Onion
    ./cons/Security PWNing
    ./cons/THREAT CON
    ./cons/Texas Cyber Summit
    ./cons/USENIX ATC
    ./cons/USENIX Enigma
    ./cons/USENIX Security
    ./cons/USENIX WOOT
    ./cons/Virus Bulletin
    ./cons/What The Hack
    ./cons/Wild West Hackin Fest
    ./cons/You Shot The Sheriff
    ./cons/Zero Day Con
    ./cons/r00tz Asylum
    ./cons/t2 infosec
    ./documentaries/Hacker Movies
    ./documentaries/Hacking Documentaries
    ./documentaries/Pirate Documentary
    ./documentaries/Tech Documentary
    ./rainbow tables
    ./rainbow tables/## READ ME RAINBOW TABLES ##.txt
    ./rainbow tables/rainbow table software
    ./skills/Lock Picking

    Drive 2

    Drive 2 contains the promised rainbow tables (lanman, ntlm, and mysqlsha1) as well as a bunch of wordlists. I actually wonder how a 128GB wordlist would compare to applying rules to something like rockyou – bigger is not always better, and often, you want high yield unless you’re trying to crack something obscure.
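    To illustrate the yield argument, here is a toy sketch in Python (the mangling rules are illustrative, not taken from any real rule set): a handful of rules multiplies a small, high-quality base list into many candidates, which is often more effective per gigabyte than one enormous raw wordlist:

```python
# Toy rule engine: each base word expands into several candidates.
# The rules below are hypothetical examples, not a real hashcat rule file.
def apply_rules(word):
    yield word                  # as-is
    yield word.capitalize()     # e.g. "Password"
    yield word + "1"            # e.g. "password1"
    yield word + "!"            # e.g. "password!"

base_words = ["password", "dragon", "monkey"]
candidates = [c for w in base_words for c in apply_rules(w)]
print(len(candidates))  # 3 base words x 4 rules = 12 candidates
```

    A real cracking run uses thousands of rules against millions of base words, so the candidate space dwarfs even a 128GB static list while staying biased toward likely passwords.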

    ./mysqlsha1/rainbow table software
    ./ntlm/rainbow table software
    ./rainbow table software
    ./rainbow table software/Free Rainbow Tables » Distributed Rainbow Table Generation » LM, NTLM, MD5, SHA1, HALFLMCHALL, MSCACHE.mht
    ./rainbow table software/converti2_0.3_src.7z
    ./rainbow table software/converti2_0.3_win32_mingw.7z
    ./rainbow table software/converti2_0.3_win32_vc.7z
    ./rainbow table software/converti2_0.3_win64_mingw.7z
    ./rainbow table software/converti2_0.3_win64_vc.7z
    ./rainbow table software/rcracki_mt_0.7.0_linux_x86_64.7z
    ./rainbow table software/rcracki_mt_0.7.0_src.7z
    ./rainbow table software/rcracki_mt_0.7.0_win32_mingw.7z
    ./rainbow table software/rcracki_mt_0.7.0_win32_vc.7z
    ./rainbow table software/rti2formatspec.pdf
    ./rainbow table software/rti2rto_0.3_beta2_win32_vc.7z
    ./rainbow table software/rti2rto_0.3_beta2_win64_vc.7z
    ./rainbow table software/rti2rto_0.3_src.7z
    ./rainbow table software/rti2rto_0.3_win32_mingw.7z
    ./rainbow table software/rti2rto_0.3_win64_mingw.7z
    ./word lists
    ./word lists/SecLists-master.rar
    ./word lists/WPA-PSK WORDLIST 3 Final (13 GB).rar
    ./word lists/Word Lists archive - infocon.org.torrent
    ./word lists/crackstation-human-only.txt.rar
    ./word lists/crackstation.realuniq.rar
    ./word lists/fbnames.rar
    ./word lists/human0id word lists.rar
    ./word lists/openlibrary_wordlist.rar
    ./word lists/pwgen.rar
    ./word lists/pwned-passwords-2.0.txt.rar
    ./word lists/pwned-passwords-ordered-2.0.rar
    ./word lists/xsukax 128GB word list all 2017 Oct.7z

    Drive 3

    Drive 3 contains more rainbow tables, this time for A5/1 (GSM encryption), and extensive tables for MD5. It appears to contain the same software and wordlists as Drive 2.

    ./A51 rainbow tables - infocon.org.torrent
    ./A51/rainbow table software
    ./LANMAN rainbow tables - infocon.org.torrent
    ./MD5 rainbow tables - infocon.org.torrent
    ./MySQL SHA-1 rainbow tables - infocon.org.torrent
    ./NTLM rainbow tables - infocon.org.torrent
    ./rainbow table software
    ./rainbow table software/Free Rainbow Tables » Distributed Rainbow Table Generation » LM, NTLM, MD5, SHA1, HALFLMCHALL, MSCACHE.mht
    ./rainbow table software/converti2_0.3_src.7z
    ./rainbow table software/converti2_0.3_win32_mingw.7z
    ./rainbow table software/converti2_0.3_win32_vc.7z
    ./rainbow table software/converti2_0.3_win64_mingw.7z
    ./rainbow table software/converti2_0.3_win64_vc.7z
    ./rainbow table software/rcracki_mt_0.7.0_linux_x86_64.7z
    ./rainbow table software/rcracki_mt_0.7.0_src.7z
    ./rainbow table software/rcracki_mt_0.7.0_win32_mingw.7z
    ./rainbow table software/rcracki_mt_0.7.0_win32_vc.7z
    ./rainbow table software/rti2formatspec.pdf
    ./rainbow table software/rti2rto_0.3_beta2_win32_vc.7z
    ./rainbow table software/rti2rto_0.3_beta2_win64_vc.7z
    ./rainbow table software/rti2rto_0.3_src.7z
    ./rainbow table software/rti2rto_0.3_win32_mingw.7z
    ./rainbow table software/rti2rto_0.3_win64_mingw.7z
    ./word lists
    ./word lists/SecLists-master.rar
    ./word lists/WPA-PSK WORDLIST 3 Final (13 GB).rar
    ./word lists/Word Lists archive - infocon.org.torrent
    ./word lists/crackstation-human-only.txt.rar
    ./word lists/crackstation.realuniq.rar
    ./word lists/fbnames.rar
    ./word lists/human0id word lists.rar
    ./word lists/openlibrary_wordlist.rar
    ./word lists/pwgen.rar
    ./word lists/pwned-passwords-2.0.txt.rar
    ./word lists/pwned-passwords-ordered-2.0.rar
    ./word lists/xsukax 128GB word list all 2017 Oct.7z
    on September 05, 2019 07:00 AM

    September 04, 2019

    I recently attended GUADEC 2019 in Thessaloniki, Greece. This is the seventh GUADEC I've attended, which came as a bit of a surprise when I added it up! It was great to catch up in person (some again, and some new!) and as always the face to face communication makes future online interactions that much easier.

    Photo by Cassidy James Blaede

    This year we had seven people from the Canonical Ubuntu desktop team in attendance. Many other companies and projects had representatives (including Collabora, Elementary OS, Endless, Igalia, Purism, RedHat, SUSE and System76). I think this was the most positive GUADEC I've attended, with people from all these organizations actively leading discussions and a general consideration of each other as we try to maximise where we can collaborate.

    Of course, the community is much bigger than a group of companies. In particular it was great to meet Carlo and Frederik from the Yaru theme project. They've been doing amazing work on a new theme for Ubuntu and it will be great to see it land in a future release.

    In the annual report there was a nice surprise; I made the most merge requests this year! I think this is a reflection of the step change in productivity in GNOME since switching to GitLab. So now I have a challenge to maintain that for next year...

    If you were unable to attend you can watch all the talks on YouTube. Two talks I'd like to highlight: the first is by Britt Yazel from the Engagement team, in which he talks about Setting a Positive Voice for GNOME. He talked about how open source communities have a lot of passion - and that has good and bad points. The Internet being as it is can lead to the trolls taking over, but we can counter that by highlighting positive messages and showing the people behind GNOME. One of the examples showed how Ubuntu and GNOME have been posting positive messages on their channels about each other, which is great!

    The second talk was by Georges Basile Stavracas Neto, who talked About Maintainers and Contributors. In it he talked about the difficulties of being a maintainer and the impacts of negative feedback. It resonated with Britt's talk in that we need to highlight that maintainers are people who are doing their best! As stated in the GNOME Code of Conduct - Assume people mean well (they really do!).

    Georges and I are co-maintainers of Settings and we had a productive GUADEC and managed to go through and review all the open merge requests.

    There were a number of discussions around Snaps in GNOME. There seemed a lot more interest in Snap technology compared to last GUADEC and it was great to be able to help people better understand them. Work included discussions about portals, better methods of getting the Freedesktop and GNOME stacks snapped, Snap integration in Settings and the GNOME publisher name in the Snap Store.

    I hope to be back next year!
    on September 04, 2019 11:34 PM

    September 03, 2019

    Monitoring Dorian

    Stephen Michael Kellat

    Currently the hurricane known as Dorian is pounding the daylights out of The Bahamas. The Hurricane Watch Net is up and the Hurricane VoIP Net is up. Presently, members of the public can monitor audio from the Hurricane VoIP Net by loading the stream into a suitable streaming media player such as VLC. Updates are generally on the hour. Members of Ubuntu Hams looking to follow matters on EchoLink should utilize the *WX5FWD* and *KC4QLP-C* conferences.

    The storm is moving fairly slowly. This event is likely to continue for a while.

    on September 03, 2019 02:29 AM

    September 02, 2019

    Suspending Patreon

    Sam Hewitt

    I originally wrote a version of this post on Patreon itself but suspending my page hides my posts on there. Oops.

    There’s been a lot of change for me over the past year or two, in real life and as a member of the free software community (like my recent joining of Purism), that has shifted my focus away from why I originally launched a Patreon, so I felt it was time to deactivate my creator page.

    The support I got on Patreon for my humble projects and community participation over the many months my page was active will always be much appreciated! Having a Patreon (or some other kind of small recurring financial support service) as a free software contributor fueled not only my ability to contribute but my enthusiasm for free software. Support for small independent free software developers, designers, contributors and projects from folks in the community (not just through things like Patreon) goes a long way, and I look forward to shifting into a more supportive role myself.

    I’m going forward with gratitude to the community, so much thanks to all the folks who were my patrons. Go forth and spread the love! ❤️

    on September 02, 2019 06:00 PM

    Ah, spring time at last. The last month I caught up a bit with my Debian packaging work after the Buster freeze, release and subsequent DebConf. There’s still a bit to catch up on (mostly kpmcore and partitionmanager, which are waiting on new kdelibs, and a few bugs). Other than that I made two new videos, and I’m busy with renovations at home this week, so my home office is packed up and in the garage. I’m hoping that it will be done towards the end of next week; until then I’ll have little screen time for anything that’s not work work.

    2019-08-01: Review package hipercontracer (1.4.4-1) (mentors.debian.net request) (needs some work).

    2019-08-01: Upload package bundlewrap (3.6.2-1) to debian unstable.

    2019-08-01: Upload package gnome-shell-extension-dash-to-panel (20-1) to debian unstable.

    2019-08-01: Accept MR!2 for gamemode, for new upstream version (1.4-1).

    2019-08-02: Upload package gnome-shell-extension-workspaces-to-dock (51-1) to debian unstable.

    2019-08-02: Upload package gnome-shell-extension-hide-activities (0.00~git20131024.1.6574986-2) to debian unstable.

    2019-08-02: Upload package gnome-shell-extension-trash (0.2.0-git20161122.ad29112-2) to debian unstable.

    2019-08-04: Upload package toot (0.22.0-1) to debian unstable.

    2019-08-05: Upload package gamemode (gamemode-1.4.1+git20190722.4ecac89-1) to debian unstable.

    2019-08-05: Upload package calamares-settings-debian (10.0.24-2) to debian unstable.

    2019-08-05: Upload package python3-flask-restful (0.3.7-3) to debian unstable.

    2019-08-05: Upload package python3-aniso8601 (7.0.0-2) to debian unstable.

    2019-08-06: Upload package gamemode (1.5~git20190722.4ecac89-1) to debian unstable.

    2019-08-06: Sponsor package assaultcube for debian unstable (mentors.debian.org request).

    2019-08-06: Sponsor package assaultcube-data for debian unstable (mentors.debian.org request).

    2019-08-07: Request more info on Debian bug #825185 (“Please which tasks should be installed at a default installation of the blend”).

    2019-08-07: Close debian bug #689022 in desktop-base (“lxde: Debian wallpaper distorted on 4:3 monitor”).

    2019-08-07: Close debian bug #680583 in desktop-base (“please demote librsvg2-common to Recommends”).

    2019-08-07: Comment on debian bug #931875 in gnome-shell-extension-multi-monitors (“Error loading extension”) to temporarily avoid autorm.

    2019-08-07: File bug (multimedia-devel)

    2019-08-07: Upload package python3-grapefruit (0.1~a3+dfsg-7) to debian unstable (Closes: #926414).

    2019-08-07: Comment on debian bug #933997 in gamemode (“gamemode isn’t automatically activated for rise of the tomb raider”).

    2019-08-07: Sponsor package assaultcube-data for debian unstable (e-mail request).

    2019-08-08: Upload package calamares (3.2.12-1) to debian unstable.

    2019-08-08: Close debian bug #32673 in aalib (“open /dev/vcsa* write-only”).

    2019-08-08: Upload package tanglet (1.5.4-1) to debian unstable.

    2019-08-08: Upload package tmux-theme-jimeh (0+git20190430-1b1b809-1) to debian unstable (Closes: #933222).

    2019-08-08: Close debian bug #927219 (“amdgpu graphics fail to be configured”).

    2019-08-08: Close debian bugs #861065 and #861067 (For creating nextstep task and live media).

    2019-08-10: Sponsor package scons (3.1.1-1) for debian unstable (mentors.debian.org request) (Closes RFS: #932817).

    2019-08-10: Sponsor package fractgen (2.1.7-1) for debian unstable (mentors.debian.net request).

    2019-08-10: Sponsor package bitwise (0.33-1) for debian unstable (mentors.debian.net request). (Closes RFS: #934022).

    2019-08-10: Review package python-pyspike (0.6.0-1) (mentors.debian.net request) (needs some additional work).

    2019-08-10: Upload package connectagram (1.2.10-1) to debian unstable.

    2019-08-11: Review package bitwise (0.40-1) (mentors.debian.net request) (need some further work).

    2019-08-11: Sponsor package sane-backends (1.0.28-1~experimental1) to debian experimental (mentors.debian.net request).

    2019-08-11: Review package hcloud-python (1.4.0-1) (mentors.debian.net).

    2019-08-13: Review package bitwise (0.40-1) (e-mail request) (needs some further work).

    2019-08-15: Sponsor package bitwise (0.40-1) for debian unstable (email request).

    2019-08-19: Upload package calamares-settings-debian (10.0.20-1+deb10u1) to debian buster (CVE #2019-13179).

    2019-08-19: Upload package gnome-shell-extension-dash-to-panel (21-1) to debian unstable.

    2019-08-19: Upload package flask-restful (0.3.7-4) to debian unstable.

    2019-08-20: Upload package python3-grapefruit (0.1~a3+dfsg-8) to debian unstable (Closes: #934599).

    2019-08-20: Sponsor package runescape (0.6-1) for debian unstable (mentors.debian.net request).

    2019-08-20: Review package ukui-menu (1.1.12-1) (needs some more work) (mentors.debian.net request).

    2019-08-20: File ITP #935178 for bcachefs-tools.

    2019-08-21: Fix two typos in bcachefs-tools (Github bcachefs-tools PR: #20).

    2019-08-25: Published Debian Package of the Day video #60: 5 Fonts (highvoltage.tv / YouTube).

    2019-08-26: Upload new upstream release of speedtest-cli (2.1.2-1) to debian unstable (Closes: #934768).

    2019-08-26: Upload new package gnome-shell-extension-draw-on-your-screen to NEW for debian unstable. (ITP: #925518)

    2019-08-27: File upstream bug for btfs so that the python2 dependency can be dropped from the Debian package (BTFS: #53).

    2019-08-28: Published Debian Package Management #4: Maintainer Scripts (highvoltage.tv / YouTube).

    2019-08-28: File upstream feature request in Calamares unpackfs module to help speed up installations (Calamares: #1229).

    2019-08-28: File upstream request at smlinux/rtl8723de driver for license clarification (RTL8723DE: #49).

    on September 02, 2019 11:35 AM

    August 30, 2019

    Example of website that only supports TLS v1.0, which is rejected by the client


    TLS v1.3 is the latest standard for secure communication over the internet. It is widely supported by desktops, servers and mobile phones. Recently, Ubuntu 18.04 LTS received an OpenSSL 1.1.1 update, bringing the ability to potentially establish TLS v1.3 connections on the latest Ubuntu LTS release. The Qualys SSL Labs Pulse report shows more than 15% adoption of TLS v1.3. It really is time to migrate away from TLS v1.0 and TLS v1.1.

    As announced on the 15th of October 2018, Apple, Google, and Microsoft will disable TLS v1.0 and TLS v1.1 support by default, and thus require TLS v1.2 to be supported by all clients and servers. Similarly, Ubuntu 20.04 LTS will require TLS v1.2 as the minimum TLS version.

    To prepare for the move to TLS v1.2, it is a good idea to disable TLS v1.0 and TLS v1.1 on your local systems and start observing and reporting any websites, systems and applications that do not support TLS v1.2.
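    One way to do that observing is to refuse anything below TLS v1.2 on the client side and see whether the handshake still succeeds. A sketch using Python's standard ssl module (the host you probe is up to you):

```python
import socket
import ssl

def probe_tls12(host, port=443, timeout=5):
    """Return the negotiated protocol version (e.g. 'TLSv1.2' or
    'TLSv1.3'), or None if the server cannot negotiate TLS v1.2+
    (or is unreachable)."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS v1.0/v1.1
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()
    except (ssl.SSLError, OSError):
        return None
```

    A modern site should return 'TLSv1.2' or 'TLSv1.3'; a legacy-only site (like the TLS v1.0 example in the screenshot above) returns None, and is worth reporting to its operator.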

    How to disable TLS v1.0 and TLS v1.1 in Google Chrome on Ubuntu

    1. Create policy directory
      sudo mkdir -p /etc/opt/chrome/policies/managed
    2. Create /etc/opt/chrome/policies/managed/mintlsver.json with
      {
          "SSLVersionMin" : "tls1.2"
      }

    How to disable TLS v1.0 and TLS v1.1 in Firefox on Ubuntu

    1. Navigate to about:config in the URL bar
    2. Search for security.tls.version.min setting
    3. Set it to 3, which stands for a minimum of TLS v1.2

    How to disable TLS v1.0 and TLS v1.1 in OpenSSL

    1. Edit /etc/ssl/openssl.cnf
    2. After the oid_section stanza add
      # System default
      openssl_conf = default_conf
    3. At the end of the file add
      [default_conf]
      ssl_conf = ssl_sect

      [ssl_sect]
      system_default = system_default_sect

      [system_default_sect]
      MinProtocol = TLSv1.2
      CipherString = DEFAULT@SECLEVEL=2
    4. Save the file

    How to disable TLS v1.0 and TLS v1.1 in GnuTLS

    1. Create config directory
      sudo mkdir -p /etc/gnutls/
    2. Create /etc/gnutls/default-priorities with a priority string that drops the old protocol versions, for example
      SYSTEM=NORMAL:-VERS-TLS1.0:-VERS-TLS1.1

    After performing the above tasks, most common applications will use TLS v1.2+.

    I have set these defaults on my systems, and I occasionally hit websites that only support TLS v1.0 and I report them. Have you found any websites and systems you use that do not support TLS v1.2 yet?
    on August 30, 2019 03:42 PM

    cloud-init is a tool to help you customize cloud images. When you launch a cloud image, you can provide it with your cloud-init instructions, and the cloud image will execute them. In that way, you can start with a generic cloud image, and as soon as it has booted up, it will be configured to your liking.

    In LXD, there are two main repositories of container images,

    1. the «ubuntu:» remote, a repository with Ubuntu container images
    2. the «images:» remote, a repository with container images for many distributions.

    Until recently, only container images in the «ubuntu:» remote had support for cloud-init.

    Now, container images in the «images:» remote have both a traditional version, and a cloud-init version.

    Let’s have a look. We search for the Debian 10 container images. The format of the name of the non-cloud-init containers is debian/10. The cloud-init images have cloud appended to the name, for example, debian/10/cloud. These are the names for the default architecture, and in my case my host runs amd64. You will notice the rest of the supported architectures; these do not run (at least not out of the box) on your host because LXD’s system containers are not virtual machines (no hardware virtualization).

    $ lxc image list images:debian/10
    |              ALIAS               | FINGERPRINT  | PUBLIC |              DESCRIPTION               |  ARCH   |   SIZE   |          UPLOAD DATE          |
    | debian/10 (7 more)               | b1da98aa0523 | yes    | Debian buster amd64 (20190829_05:24)   | x86_64  | 93.21MB  | Aug 29, 2019 at 12:00am (UTC) |
    | debian/10/arm64 (3 more)         | 061bf8e54195 | yes    | Debian buster arm64 (20190829_05:24)   | aarch64 | 89.75MB  | Aug 29, 2019 at 12:00am (UTC) |
    | debian/10/armel (3 more)         | f45b56483bcc | yes    | Debian buster armel (20190829_05:53)   | armv7l  | 87.75MB  | Aug 29, 2019 at 12:00am (UTC) |
    | debian/10/armhf (3 more)         | 8b3223cb7c36 | yes    | Debian buster armhf (20190829_05:55)   | armv7l  | 88.35MB  | Aug 29, 2019 at 12:00am (UTC) |
    | debian/10/cloud (3 more)         | df912811b3c3 | yes    | Debian buster amd64 (20190829_05:24)   | x86_64  | 107.57MB | Aug 29, 2019 at 12:00am (UTC) |
    | debian/10/cloud/arm64 (1 more)   | c75bae6267e6 | yes    | Debian buster arm64 (20190829_05:29)   | aarch64 | 103.49MB | Aug 29, 2019 at 12:00am (UTC) |
    | debian/10/cloud/armel (1 more)   | a9939000f769 | yes    | Debian buster armel (20190829_06:33)   | armv7l  | 101.43MB | Aug 29, 2019 at 12:00am (UTC) |
    | debian/10/cloud/armhf (1 more)   | 8840418a2b4f | yes    | Debian buster armhf (20190829_05:53)   | armv7l  | 101.93MB | Aug 29, 2019 at 12:00am (UTC) |
    | debian/10/cloud/i386 (1 more)    | 79ebaba3b386 | yes    | Debian buster i386 (20190829_05:24)    | i686    | 108.85MB | Aug 29, 2019 at 12:00am (UTC) |
    | debian/10/cloud/ppc64el (1 more) | dcbfee6585b3 | yes    | Debian buster ppc64el (20190829_05:24) | ppc64le | 109.43MB | Aug 29, 2019 at 12:00am (UTC) |
    | debian/10/cloud/s390x (1 more)   | f2d6a7310ae1 | yes    | Debian buster s390x (20190829_05:24)   | s390x   | 101.93MB | Aug 29, 2019 at 12:00am (UTC) |
    | debian/10/i386 (3 more)          | f0bc9e2c267d | yes    | Debian buster i386 (20190829_05:24)    | i686    | 94.41MB  | Aug 29, 2019 at 12:00am (UTC) |
    | debian/10/ppc64el (3 more)       | fcf56d73d764 | yes    | Debian buster ppc64el (20190829_05:24) | ppc64le | 94.57MB  | Aug 29, 2019 at 12:00am (UTC) |
    | debian/10/s390x (3 more)         | 3481aeba0e06 | yes    | Debian buster s390x (20190829_05:24)   | s390x   | 88.02MB  | Aug 29, 2019 at 12:00am (UTC) |

    I have written a post about using cloud-init with LXD containers.

    Another use of cloud-init is to statically set the IP address of the container.


    The container images in the images: remote now have support for cloud-init. Instead of adding cloud-init support to the existing images, there are new container names with /cloud appended to them that have cloud-init support.

    on August 30, 2019 02:39 PM